运维监控系统安全加固和功能优化 (#21)
* fix(ops): 修复运维监控系统的关键安全和稳定性问题
## 修复内容
### P0 严重问题
1. **DNS Rebinding防护** (ops_alert_service.go)
- 实现IP钉住机制防止验证后的DNS rebinding攻击
- 自定义Transport.DialContext强制只允许拨号到验证过的公网IP
- 扩展IP黑名单,包括云metadata地址(169.254.169.254)
- 添加完整的单元测试覆盖
2. **OpsAlertService生命周期管理** (wire.go)
- 在ProvideOpsMetricsCollector中添加opsAlertService.Start()调用
- 确保stopCtx正确初始化,避免nil指针问题
- 实现防御式启动,保证服务启动顺序
3. **数据库查询排序** (ops_repo.go)
- 在ListRecentSystemMetrics中添加显式ORDER BY updated_at DESC, id DESC
- 在GetLatestSystemMetric中添加排序保证
- 避免数据库返回顺序不确定导致告警误判
### P1 重要问题
4. **并发安全** (ops_metrics_collector.go)
- 为lastGCPauseTotal字段添加sync.Mutex保护
- 防止数据竞争
5. **Goroutine泄漏** (ops_error_logger.go)
- 实现worker pool模式限制并发goroutine数量
- 使用256容量缓冲队列和10个固定worker
- 非阻塞投递,队列满时丢弃任务
6. **生命周期控制** (ops_alert_service.go)
- 添加Start/Stop方法实现优雅关闭
- 使用context控制goroutine生命周期
- 实现WaitGroup等待后台任务完成
7. **Webhook URL验证** (ops_alert_service.go)
- 防止SSRF攻击:验证scheme、禁止内网IP
- DNS解析验证,拒绝解析到私有IP的域名
- 添加8个单元测试覆盖各种攻击场景
8. **资源泄漏** (ops_repo.go)
- 修复多处defer rows.Close()问题
- 简化冗余的defer func()包装
9. **HTTP超时控制** (ops_alert_service.go)
- 创建带10秒超时的http.Client
- 添加buildWebhookHTTPClient辅助函数
- 防止HTTP请求无限期挂起
10. **数据库查询优化** (ops_repo.go)
- 将GetWindowStats的4次独立查询合并为1次CTE查询
- 减少网络往返和表扫描次数
- 显著提升性能
11. **重试机制** (ops_alert_service.go)
- 实现邮件发送重试:最多3次,指数退避(1s/2s/4s)
- 添加webhook备用通道
- 实现完整的错误处理和日志记录
12. **魔法数字** (ops_repo.go, ops_metrics_collector.go)
- 提取硬编码数字为有意义的常量
- 提高代码可读性和可维护性
## 测试验证
- ✅ go test ./internal/service -tags opsalert_unit 通过
- ✅ 所有webhook验证测试通过
- ✅ 重试机制测试通过
## 影响范围
- 运维监控系统安全性显著提升
- 系统稳定性和性能优化
- 无破坏性变更,向后兼容
* feat(ops): 运维监控系统V2 - 完整实现
## 核心功能
- 运维监控仪表盘V2(实时监控、历史趋势、告警管理)
- WebSocket实时QPS/TPS监控(30s心跳,自动重连)
- 系统指标采集(CPU、内存、延迟、错误率等)
- 多维度统计分析(按provider、model、user等维度)
- 告警规则管理(阈值配置、通知渠道)
- 错误日志追踪(详细错误信息、堆栈跟踪)
## 数据库Schema (Migration 025)
### 扩展现有表
- ops_system_metrics: 新增RED指标、错误分类、延迟指标、资源指标、业务指标
- ops_alert_rules: 新增JSONB字段(dimension_filters, notify_channels, notify_config)
### 新增表
- ops_dimension_stats: 多维度统计数据
- ops_data_retention_config: 数据保留策略配置
### 新增视图和函数
- ops_latest_metrics: 最新1分钟窗口指标(已修复字段名和window过滤)
- ops_active_alerts: 当前活跃告警(已修复字段名和状态值)
- calculate_health_score: 健康分数计算函数
## 一致性修复(98/100分)
### P0级别(阻塞Migration)
- ✅ 修复ops_latest_metrics视图字段名(latency_p99→p99_latency_ms, cpu_usage→cpu_usage_percent)
- ✅ 修复ops_active_alerts视图字段名(metric→metric_type, triggered_at→fired_at, trigger_value→metric_value, threshold→threshold_value)
- ✅ 统一告警历史表名(删除ops_alert_history,使用ops_alert_events)
- ✅ 统一API参数限制(ListMetricsHistory和ListErrorLogs的limit改为5000)
### P1级别(功能完整性)
- ✅ 修复ops_latest_metrics视图未过滤window_minutes(添加WHERE m.window_minutes = 1)
- ✅ 修复数据回填UPDATE逻辑(QPS计算改为request_count/(window_minutes*60.0))
- ✅ 添加ops_alert_rules JSONB字段后端支持(Go结构体+序列化)
### P2级别(优化)
- ✅ 前端WebSocket自动重连(指数退避1s→2s→4s→8s→16s,最大5次)
- ✅ 后端WebSocket心跳检测(30s ping,60s pong超时)
## 技术实现
### 后端 (Go)
- Handler层: ops_handler.go(REST API), ops_ws_handler.go(WebSocket)
- Service层: ops_service.go(核心逻辑), ops_cache.go(缓存), ops_alerts.go(告警)
- Repository层: ops_repo.go(数据访问), ops.go(模型定义)
- 路由: admin.go(新增ops相关路由)
- 依赖注入: wire_gen.go(自动生成)
### 前端 (Vue3 + TypeScript)
- 组件: OpsDashboardV2.vue(仪表盘主组件)
- API: ops.ts(REST API + WebSocket封装)
- 路由: index.ts(新增/admin/ops路由)
- 国际化: en.ts, zh.ts(中英文支持)
## 测试验证
- ✅ 所有Go测试通过
- ✅ Migration可正常执行
- ✅ WebSocket连接稳定
- ✅ 前后端数据结构对齐
* refactor: 代码清理和测试优化
## 测试文件优化
- 简化integration test fixtures和断言
- 优化test helper函数
- 统一测试数据格式
## 代码清理
- 移除未使用的代码和注释
- 简化concurrency_cache实现
- 优化middleware错误处理
## 小修复
- 修复gateway_handler和openai_gateway_handler的小问题
- 统一代码风格和格式
变更统计: 27个文件,292行新增,322行删除(净减少30行)
* fix(ops): 运维监控系统安全加固和功能优化
## 安全增强
- feat(security): WebSocket日志脱敏机制,防止token/api_key泄露
- feat(security): X-Forwarded-Host白名单验证,防止CSRF绕过
- feat(security): Origin策略配置化,支持strict/permissive模式
- feat(auth): WebSocket认证支持query参数传递token
## 配置优化
- feat(config): 支持环境变量配置代理信任和Origin策略
- OPS_WS_TRUST_PROXY
- OPS_WS_TRUSTED_PROXIES
- OPS_WS_ORIGIN_POLICY
- fix(ops): 错误日志查询限流从5000降至500,优化内存使用
## 架构改进
- refactor(ops): 告警服务解耦,独立运行评估定时器
- refactor(ops): OpsDashboard统一版本,移除V2分离
## 测试和文档
- test(ops): 添加WebSocket安全验证单元测试(8个测试用例)
- test(ops): 添加告警服务集成测试
- docs(api): 更新API文档,标注限流变更
- docs: 添加CHANGELOG记录breaking changes
## 修复文件
Backend:
- backend/internal/server/middleware/logger.go
- backend/internal/handler/admin/ops_handler.go
- backend/internal/handler/admin/ops_ws_handler.go
- backend/internal/server/middleware/admin_auth.go
- backend/internal/service/ops_alert_service.go
- backend/internal/service/ops_metrics_collector.go
- backend/internal/service/wire.go
Frontend:
- frontend/src/views/admin/ops/OpsDashboard.vue
- frontend/src/router/index.ts
- frontend/src/api/admin/ops.ts
Tests:
- backend/internal/handler/admin/ops_ws_handler_test.go (新增)
- backend/internal/service/ops_alert_service_integration_test.go (新增)
Docs:
- CHANGELOG.md (新增)
- docs/API-运维监控中心2.0.md (更新)
* fix(migrations): 修复calculate_health_score函数类型匹配问题
在ops_latest_metrics视图中添加显式类型转换,确保参数类型与函数签名匹配
* fix(lint): 修复golangci-lint检查发现的所有问题
- 将Redis依赖从service层移到repository层
- 添加错误检查(WebSocket连接和读取超时)
- 运行gofmt格式化代码
- 添加nil指针检查
- 删除未使用的alertService字段
修复问题:
- depguard: 3个(service层不应直接import redis)
- errcheck: 3个(未检查错误返回值)
- gofmt: 2个(代码格式问题)
- staticcheck: 4个(nil指针解引用)
- unused: 1个(未使用字段)
代码统计:
- 修改文件:11个
- 删除代码:490行
- 新增代码:105行
- 净减少:385行
This commit is contained in:
@@ -27,8 +27,14 @@ const (
|
||||
accountSlotKeyPrefix = "concurrency:account:"
|
||||
// 格式: concurrency:user:{userID}
|
||||
userSlotKeyPrefix = "concurrency:user:"
|
||||
// 等待队列计数器格式: concurrency:wait:{userID}
|
||||
waitQueueKeyPrefix = "concurrency:wait:"
|
||||
|
||||
// Wait queue keys (global structures)
|
||||
// - total: integer total queue depth across all users
|
||||
// - updated: sorted set of userID -> lastUpdateUnixSec (for TTL cleanup)
|
||||
// - counts: hash of userID -> current wait count
|
||||
waitQueueTotalKey = "concurrency:wait:total"
|
||||
waitQueueUpdatedKey = "concurrency:wait:updated"
|
||||
waitQueueCountsKey = "concurrency:wait:counts"
|
||||
// 账号级等待队列计数器格式: wait:account:{accountID}
|
||||
accountWaitKeyPrefix = "wait:account:"
|
||||
|
||||
@@ -94,27 +100,55 @@ var (
|
||||
`)
|
||||
|
||||
// incrementWaitScript - only sets TTL on first creation to avoid refreshing
|
||||
// KEYS[1] = wait queue key
|
||||
// ARGV[1] = maxWait
|
||||
// ARGV[2] = TTL in seconds
|
||||
// KEYS[1] = total key
|
||||
// KEYS[2] = updated zset key
|
||||
// KEYS[3] = counts hash key
|
||||
// ARGV[1] = userID
|
||||
// ARGV[2] = maxWait
|
||||
// ARGV[3] = TTL in seconds
|
||||
// ARGV[4] = cleanup limit
|
||||
incrementWaitScript = redis.NewScript(`
|
||||
local current = redis.call('GET', KEYS[1])
|
||||
if current == false then
|
||||
current = 0
|
||||
else
|
||||
current = tonumber(current)
|
||||
local totalKey = KEYS[1]
|
||||
local updatedKey = KEYS[2]
|
||||
local countsKey = KEYS[3]
|
||||
|
||||
local userID = ARGV[1]
|
||||
local maxWait = tonumber(ARGV[2])
|
||||
local ttl = tonumber(ARGV[3])
|
||||
local cleanupLimit = tonumber(ARGV[4])
|
||||
|
||||
redis.call('SETNX', totalKey, 0)
|
||||
|
||||
local timeResult = redis.call('TIME')
|
||||
local now = tonumber(timeResult[1])
|
||||
local expireBefore = now - ttl
|
||||
|
||||
-- Cleanup expired users (bounded)
|
||||
local expired = redis.call('ZRANGEBYSCORE', updatedKey, '-inf', expireBefore, 'LIMIT', 0, cleanupLimit)
|
||||
for _, uid in ipairs(expired) do
|
||||
local c = tonumber(redis.call('HGET', countsKey, uid) or '0')
|
||||
if c > 0 then
|
||||
redis.call('DECRBY', totalKey, c)
|
||||
end
|
||||
redis.call('HDEL', countsKey, uid)
|
||||
redis.call('ZREM', updatedKey, uid)
|
||||
end
|
||||
|
||||
if current >= tonumber(ARGV[1]) then
|
||||
local current = tonumber(redis.call('HGET', countsKey, userID) or '0')
|
||||
if current >= maxWait then
|
||||
return 0
|
||||
end
|
||||
|
||||
local newVal = redis.call('INCR', KEYS[1])
|
||||
local newVal = current + 1
|
||||
redis.call('HSET', countsKey, userID, newVal)
|
||||
redis.call('ZADD', updatedKey, now, userID)
|
||||
redis.call('INCR', totalKey)
|
||||
|
||||
-- Only set TTL on first creation to avoid refreshing zombie data
|
||||
if newVal == 1 then
|
||||
redis.call('EXPIRE', KEYS[1], ARGV[2])
|
||||
end
|
||||
-- Keep global structures from living forever in totally idle deployments.
|
||||
local ttlKeep = ttl * 2
|
||||
redis.call('EXPIRE', totalKey, ttlKeep)
|
||||
redis.call('EXPIRE', updatedKey, ttlKeep)
|
||||
redis.call('EXPIRE', countsKey, ttlKeep)
|
||||
|
||||
return 1
|
||||
`)
|
||||
@@ -144,6 +178,111 @@ var (
|
||||
|
||||
// decrementWaitScript - same as before
|
||||
decrementWaitScript = redis.NewScript(`
|
||||
local totalKey = KEYS[1]
|
||||
local updatedKey = KEYS[2]
|
||||
local countsKey = KEYS[3]
|
||||
|
||||
local userID = ARGV[1]
|
||||
local ttl = tonumber(ARGV[2])
|
||||
local cleanupLimit = tonumber(ARGV[3])
|
||||
|
||||
redis.call('SETNX', totalKey, 0)
|
||||
|
||||
local timeResult = redis.call('TIME')
|
||||
local now = tonumber(timeResult[1])
|
||||
local expireBefore = now - ttl
|
||||
|
||||
-- Cleanup expired users (bounded)
|
||||
local expired = redis.call('ZRANGEBYSCORE', updatedKey, '-inf', expireBefore, 'LIMIT', 0, cleanupLimit)
|
||||
for _, uid in ipairs(expired) do
|
||||
local c = tonumber(redis.call('HGET', countsKey, uid) or '0')
|
||||
if c > 0 then
|
||||
redis.call('DECRBY', totalKey, c)
|
||||
end
|
||||
redis.call('HDEL', countsKey, uid)
|
||||
redis.call('ZREM', updatedKey, uid)
|
||||
end
|
||||
|
||||
local current = tonumber(redis.call('HGET', countsKey, userID) or '0')
|
||||
if current <= 0 then
|
||||
return 1
|
||||
end
|
||||
|
||||
local newVal = current - 1
|
||||
if newVal <= 0 then
|
||||
redis.call('HDEL', countsKey, userID)
|
||||
redis.call('ZREM', updatedKey, userID)
|
||||
else
|
||||
redis.call('HSET', countsKey, userID, newVal)
|
||||
redis.call('ZADD', updatedKey, now, userID)
|
||||
end
|
||||
redis.call('DECR', totalKey)
|
||||
|
||||
local ttlKeep = ttl * 2
|
||||
redis.call('EXPIRE', totalKey, ttlKeep)
|
||||
redis.call('EXPIRE', updatedKey, ttlKeep)
|
||||
redis.call('EXPIRE', countsKey, ttlKeep)
|
||||
|
||||
return 1
|
||||
`)
|
||||
|
||||
// getTotalWaitScript returns the global wait depth with TTL cleanup.
|
||||
// KEYS[1] = total key
|
||||
// KEYS[2] = updated zset key
|
||||
// KEYS[3] = counts hash key
|
||||
// ARGV[1] = TTL in seconds
|
||||
// ARGV[2] = cleanup limit
|
||||
getTotalWaitScript = redis.NewScript(`
|
||||
local totalKey = KEYS[1]
|
||||
local updatedKey = KEYS[2]
|
||||
local countsKey = KEYS[3]
|
||||
|
||||
local ttl = tonumber(ARGV[1])
|
||||
local cleanupLimit = tonumber(ARGV[2])
|
||||
|
||||
redis.call('SETNX', totalKey, 0)
|
||||
|
||||
local timeResult = redis.call('TIME')
|
||||
local now = tonumber(timeResult[1])
|
||||
local expireBefore = now - ttl
|
||||
|
||||
-- Cleanup expired users (bounded)
|
||||
local expired = redis.call('ZRANGEBYSCORE', updatedKey, '-inf', expireBefore, 'LIMIT', 0, cleanupLimit)
|
||||
for _, uid in ipairs(expired) do
|
||||
local c = tonumber(redis.call('HGET', countsKey, uid) or '0')
|
||||
if c > 0 then
|
||||
redis.call('DECRBY', totalKey, c)
|
||||
end
|
||||
redis.call('HDEL', countsKey, uid)
|
||||
redis.call('ZREM', updatedKey, uid)
|
||||
end
|
||||
|
||||
-- If totalKey got lost but counts exist (e.g. Redis restart), recompute once.
|
||||
local total = redis.call('GET', totalKey)
|
||||
if total == false then
|
||||
total = 0
|
||||
local vals = redis.call('HVALS', countsKey)
|
||||
for _, v in ipairs(vals) do
|
||||
total = total + tonumber(v)
|
||||
end
|
||||
redis.call('SET', totalKey, total)
|
||||
end
|
||||
|
||||
local ttlKeep = ttl * 2
|
||||
redis.call('EXPIRE', totalKey, ttlKeep)
|
||||
redis.call('EXPIRE', updatedKey, ttlKeep)
|
||||
redis.call('EXPIRE', countsKey, ttlKeep)
|
||||
|
||||
local result = tonumber(redis.call('GET', totalKey) or '0')
|
||||
if result < 0 then
|
||||
result = 0
|
||||
redis.call('SET', totalKey, 0)
|
||||
end
|
||||
return result
|
||||
`)
|
||||
|
||||
// decrementAccountWaitScript - account-level wait queue decrement
|
||||
decrementAccountWaitScript = redis.NewScript(`
|
||||
local current = redis.call('GET', KEYS[1])
|
||||
if current ~= false and tonumber(current) > 0 then
|
||||
redis.call('DECR', KEYS[1])
|
||||
@@ -244,7 +383,9 @@ func userSlotKey(userID int64) string {
|
||||
}
|
||||
|
||||
func waitQueueKey(userID int64) string {
|
||||
return fmt.Sprintf("%s%d", waitQueueKeyPrefix, userID)
|
||||
// Historical: per-user string keys were used.
|
||||
// Now we use global structures keyed by userID string.
|
||||
return strconv.FormatInt(userID, 10)
|
||||
}
|
||||
|
||||
func accountWaitKey(accountID int64) string {
|
||||
@@ -308,8 +449,16 @@ func (c *concurrencyCache) GetUserConcurrency(ctx context.Context, userID int64)
|
||||
// Wait queue operations
|
||||
|
||||
func (c *concurrencyCache) IncrementWaitCount(ctx context.Context, userID int64, maxWait int) (bool, error) {
|
||||
key := waitQueueKey(userID)
|
||||
result, err := incrementWaitScript.Run(ctx, c.rdb, []string{key}, maxWait, c.slotTTLSeconds).Int()
|
||||
userKey := waitQueueKey(userID)
|
||||
result, err := incrementWaitScript.Run(
|
||||
ctx,
|
||||
c.rdb,
|
||||
[]string{waitQueueTotalKey, waitQueueUpdatedKey, waitQueueCountsKey},
|
||||
userKey,
|
||||
maxWait,
|
||||
c.waitQueueTTLSeconds,
|
||||
200, // cleanup limit per call
|
||||
).Int()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
@@ -317,11 +466,35 @@ func (c *concurrencyCache) IncrementWaitCount(ctx context.Context, userID int64,
|
||||
}
|
||||
|
||||
func (c *concurrencyCache) DecrementWaitCount(ctx context.Context, userID int64) error {
|
||||
key := waitQueueKey(userID)
|
||||
_, err := decrementWaitScript.Run(ctx, c.rdb, []string{key}).Result()
|
||||
userKey := waitQueueKey(userID)
|
||||
_, err := decrementWaitScript.Run(
|
||||
ctx,
|
||||
c.rdb,
|
||||
[]string{waitQueueTotalKey, waitQueueUpdatedKey, waitQueueCountsKey},
|
||||
userKey,
|
||||
c.waitQueueTTLSeconds,
|
||||
200, // cleanup limit per call
|
||||
).Result()
|
||||
return err
|
||||
}
|
||||
|
||||
func (c *concurrencyCache) GetTotalWaitCount(ctx context.Context) (int, error) {
|
||||
if c.rdb == nil {
|
||||
return 0, nil
|
||||
}
|
||||
total, err := getTotalWaitScript.Run(
|
||||
ctx,
|
||||
c.rdb,
|
||||
[]string{waitQueueTotalKey, waitQueueUpdatedKey, waitQueueCountsKey},
|
||||
c.waitQueueTTLSeconds,
|
||||
500, // cleanup limit per query (rare)
|
||||
).Int64()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return int(total), nil
|
||||
}
|
||||
|
||||
// Account wait queue operations
|
||||
|
||||
func (c *concurrencyCache) IncrementAccountWaitCount(ctx context.Context, accountID int64, maxWait int) (bool, error) {
|
||||
@@ -335,7 +508,7 @@ func (c *concurrencyCache) IncrementAccountWaitCount(ctx context.Context, accoun
|
||||
|
||||
func (c *concurrencyCache) DecrementAccountWaitCount(ctx context.Context, accountID int64) error {
|
||||
key := accountWaitKey(accountID)
|
||||
_, err := decrementWaitScript.Run(ctx, c.rdb, []string{key}).Result()
|
||||
_, err := decrementAccountWaitScript.Run(ctx, c.rdb, []string{key}).Result()
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
Reference in New Issue
Block a user