feat: remove openspec

yangjianbo
2025-12-30 08:42:51 +08:00
parent b63b338e95
commit 2ea4dafa08
8 changed files with 0 additions and 974 deletions


@@ -1,456 +0,0 @@
# OpenSpec Instructions
Instructions for AI coding assistants using OpenSpec for spec-driven development.
## TL;DR Quick Checklist
- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Request approval: Do not start implementation until proposal is approved
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
- Run `openspec validate --strict` to confirm the archived change passes checks
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1-2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
- Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
- Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
## Quick Start
### CLI Commands
```bash
# Essential commands
openspec list # List active changes
openspec list --specs # List specifications
openspec show [item] # Display change or spec
openspec validate [item] # Validate changes or specs
openspec archive <change-id> [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
# Project management
openspec init [path] # Initialize OpenSpec
openspec update [path] # Update instruction files
# Interactive mode
openspec show # Prompts for selection
openspec validate # Bulk validation mode
# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict
```
### Command Flags
- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
## Directory Structure
```
openspec/
├── project.md # Project conventions
├── specs/ # Current truth - what IS built
│ └── [capability]/ # Single focused capability
│ ├── spec.md # Requirements and scenarios
│ └── design.md # Technical patterns
├── changes/ # Proposals - what SHOULD change
│ ├── [change-name]/
│ │ ├── proposal.md # Why, what, impact
│ │ ├── tasks.md # Implementation checklist
│ │ ├── design.md # Technical decisions (optional; see criteria)
│ │ └── specs/ # Delta changes
│ │ └── [capability]/
│ │ └── spec.md # ADDED/MODIFIED/REMOVED
│ └── archive/ # Completed changes
```
## Creating Change Proposals
### Decision Tree
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```
### Proposal Structure
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
## Spec File Format
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
- `## ADDED Requirements` - New capabilities
- `## MODIFIED Requirements` - Changed behavior
- `## REMOVED Requirements` - Deprecated features
- `## RENAMED Requirements` - Name changes
Headers are matched after `trim(header)`, so surrounding whitespace is ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
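A correctly authored MODIFIED delta then looks like this (the requirement name and scenario content are illustrative, not from any real spec):
```markdown
## MODIFIED Requirements
### Requirement: User Authentication
The system SHALL authenticate users with email and password, and SHALL lock the account after five consecutive failed attempts.
#### Scenario: Successful login
- **WHEN** valid credentials are provided
- **THEN** a session token is returned
#### Scenario: Account lockout
- **WHEN** a fifth consecutive invalid password is submitted
- **THEN** the account is locked and login is rejected
```
Note that the full requirement block is pasted, including scenarios that did not change; the archiver replaces the whole requirement with this content.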
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
## Troubleshooting
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
### Validation Tips
```bash
# Always use strict mode for comprehensive checks
openspec validate [change] --strict
# Debug delta parsing
openspec show [change] --json | jq '.deltas'
# Check specific requirement
openspec show [spec] --json -r 1
```
## Happy Path Script
```bash
# 1) Explore current state
openspec spec list --long
openspec list
# Optional full-text search:
# rg -n "Requirement:|Scenario:" openspec/specs
# rg -n "^#|Requirement:" openspec/changes
# 2) Choose change id and scaffold
CHANGE=add-two-factor-auth
mkdir -p openspec/changes/$CHANGE/specs/auth
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
# 3) Add deltas (example)
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
## ADDED Requirements
### Requirement: Two-Factor Authentication
Users MUST provide a second factor during login.
#### Scenario: OTP required
- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is required
EOF
# 4) Validate
openspec validate $CHANGE --strict
```
## Multi-Capability Example
```
openspec/changes/add-2fa-notify/
├── proposal.md
├── tasks.md
└── specs/
├── auth/
│ └── spec.md # ADDED: Two-Factor Authentication
└── notifications/
└── spec.md # ADDED: OTP email notification
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
|------|------|-----|
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
4. Ask for clarification
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details
openspec validate --strict # Is it correct?
openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.


@@ -1,13 +0,0 @@
# Change: Add unit tests for delete paths (user/group/proxy/redeem)
## Why
The delete flows lack unit tests, so they regress easily during refactors or edge-case changes, and diagnosing problems is costly.
## What Changes
- Add service-layer unit tests for the delete flows (covering the AdminService delete entry points and error propagation from the corresponding repos)
- Cover the key branches: success / not found / admin protection / idempotent delete / underlying errors
- Add the required lightweight test doubles (repositories / cache)
## Impact
- Affected specs: testing (new)
- Affected code: backend/internal/service/admin_service.go


@@ -1,63 +0,0 @@
## ADDED Requirements
### Requirement: Delete path unit coverage
The service layer's delete flows SHALL have unit test coverage for the key branches of the user, group, proxy, and redeem-code resources, including admin protection, idempotent deletes, and error propagation at the AdminService delete entry points.
#### Scenario: User delete success
- **WHEN** deleting an existing user
- **THEN** the call succeeds and the repository delete is invoked
#### Scenario: User delete not found
- **WHEN** deleting a non-existent user
- **THEN** a not-found error is returned
#### Scenario: User delete propagates errors
- **WHEN** the repository returns an error while deleting a user
- **THEN** the error is returned upward rather than swallowed
#### Scenario: User delete rejects admin accounts
- **WHEN** deleting an admin user
- **THEN** an error rejecting the deletion is returned
#### Scenario: Group delete success
- **WHEN** deleting an existing group
- **THEN** the call succeeds and the repository cascade delete is invoked
#### Scenario: Group delete not found
- **WHEN** deleting a non-existent group
- **THEN** ErrGroupNotFound is returned
#### Scenario: Group delete propagates errors
- **WHEN** the repository returns an error while deleting a group
- **THEN** the error is returned upward rather than swallowed
#### Scenario: Proxy delete success
- **WHEN** deleting an existing proxy
- **THEN** the call succeeds and the repository delete is invoked
#### Scenario: Proxy delete is idempotent
- **WHEN** deleting a non-existent proxy
- **THEN** no error is returned and the delete flow is still invoked
#### Scenario: Proxy delete propagates errors
- **WHEN** the repository returns an error while deleting a proxy
- **THEN** the error is returned upward rather than swallowed
#### Scenario: Redeem code delete success
- **WHEN** deleting an existing redeem code
- **THEN** the call succeeds and the repository delete is invoked
#### Scenario: Redeem code delete is idempotent
- **WHEN** deleting a non-existent redeem code
- **THEN** no error is returned and the delete flow is still invoked
#### Scenario: Redeem code delete propagates errors
- **WHEN** the repository returns an error while deleting a redeem code
- **THEN** the error is returned upward rather than swallowed
#### Scenario: Batch redeem code delete success
- **WHEN** batch-deleting redeem codes and all succeed
- **THEN** the returned deleted count equals the input count and no error is returned
#### Scenario: Batch redeem code delete partial failures
- **WHEN** batch-deleting redeem codes and some fail
- **THEN** the returned deleted count is less than the input count and no error is returned


@@ -1,11 +0,0 @@
## 1. Implementation
- [x] 1.1 Prepare test doubles for the AdminService delete entry points (user/group/proxy/redeem repos and cache)
- [x] 1.2 Add AdminService.DeleteUser unit tests (success / not found / error propagation / admin protection)
- [x] 1.3 Add AdminService.DeleteGroup unit tests (success / not found / error propagation, cache invalidation where applicable)
- [x] 1.4 Add AdminService.DeleteProxy unit tests (success / idempotent delete / error propagation)
- [x] 1.5 Add AdminService.DeleteRedeemCode and BatchDeleteRedeemCodes unit tests (success / idempotent delete / error propagation / partial failures)
- [x] 1.6 Run the unit tests and record the results at the end of this tasks.md
## Test Results
- `go test -tags=unit ./internal/service/...` (workdir: `backend`)
- ok github.com/Wei-Shaw/sub2api/internal/service 0.475s


@@ -1,269 +0,0 @@
# Proposal: Migrate from GORM to Ent (preserving soft-delete semantics)
## Change ID
`migrate-orm-gorm-to-ent`
## Background
The backend (`backend/`) currently uses GORM as its ORM. The repository layer (`backend/internal/repository/*.go`) relies heavily on string SQL, `Preload`, `gorm.Expr`, `clause`, and similar mechanisms.
To prepare for the GORM-to-Ent migration, this change first moves schema management from GORM AutoMigrate to **versioned SQL migrations** (`backend/migrations/*.sql` plus a `schema_migrations` tracking table). This avoids the unauditable, non-rollbackable table changes an ORM makes implicitly, and guarantees that an empty database can be rebuilt from migrations into the schema the current code expects.
The project has established that:
- **Production depends on soft-delete semantics** (the `deleted_at` filter must apply by default).
- **Type safety / maintainability** is the priority (reduce string concatenation and runtime errors).
This change therefore migrates database access from GORM to Ent (`entgo.io/ent`), using Ent's **Interceptor + Hook + Mixin** to reproduce the current default soft-delete filtering (following the soft-delete pattern in Ent's official interceptor documentation).
Notes:
- Ent's interceptor/soft-delete approach requires enabling the relevant codegen features (e.g. `intercept`) and, per Ent's requirements, importing `ent/runtime` at the entry point to register schema hooks/interceptors (avoiding an import cycle).
- This repository's Go module lives at `backend/go.mod`, so the generated Ent code should live under `backend/ent/` (e.g. `backend/ent/schema/`) rather than the repository root.
Implementation tip:
- The actual runtime import at the entry point must match the module path. For this repository, with generated code under `backend/ent/`, the import looks like `github.com/Wei-Shaw/sub2api/ent/runtime`.
## Goals
1. Replace GORM with Ent to improve the type safety and maintainability of queries/updates.
2. **Preserve the existing soft-delete semantics**: default queries do not return soft-deleted rows; explicit bypass is supported (e.g. for back-office audit/repair tasks).
3. Replace "AutoMigrate at startup" with an auditable, controlled migration process (phase one: run `backend/migrations/*.sql` at deploy time).
4. Keep `internal/service`, handlers, and other upper layers ORM-agnostic (the repository interface remains the boundary).
## Non-Goals
- No rewrite of business logic or externally visible API behavior (beyond the necessary error-type mapping).
- No forced conversion of the existing complex statistics SQL (the trend/CTE/aggregation queries in `usage_log_repo.go`) to the Ent builder; these stay as raw/SQL-builder code, which is easier to control.
## Key Decisions (recommendations in this proposal)
### 1) `users.allowed_groups`: change from a Postgres array to a join table (recommended)
Current state: `users.allowed_groups BIGINT[]`, queried with `ANY()` / `array_remove()` (see `user_repo.go` / `group_repo.go`).
Decision: add a join table `user_allowed_groups(user_id, group_id, created_at)` with a unique constraint on `(user_id, group_id)`.
Rationale:
- Supporting the array in Ent requires a custom type and still leans heavily on raw SQL; maintainability is mediocre.
- A join table is more "Ent-friendly": queries/permissions/filters are clearer, and future extensions (grant source, notes, expiry) are easier.
Constraints and notes:
- **Do not soft-delete this join table**: unbinding/removal should be a hard delete (otherwise "re-binding" plus the unique constraint adds needless complexity). If an audit trail is needed, write to an audit log/event table instead.
- Foreign keys should use `ON DELETE CASCADE` (deleting a user/group automatically cleans up bindings, matching the current cascade-cleanup logic).
Compatibility strategy:
- Phase 1: add the table and **backfill from the old array column**; repository reads move to the new table, and writes may dual-write for a short period (optional).
- Phase 2: after a verified rollout, drop the `allowed_groups` column and the related SQL.
### 2) `account_groups`: keep the composite primary key, model as an Ent Edge Schema (recommended)
Current state: `account_groups` has a composite primary key `(account_id, group_id)` plus extra fields such as `priority/created_at` (see `account_repo.go`).
Decision: **do not change the table structure**. Model it in Ent as an Edge Schema (an M2M join entity with extra fields) and configure its identifier as the composite key (`account_id + group_id`).
Rationale:
- This is the classic "many-to-many with extra fields" case. Ent natively supports Edge Schemas, allowing CRUD, hooks, and policies on the join table while staying type-safe.
- Avoids the lock risk and rollback complexity of changing a primary key via online DDL.
- The table already has uniqueness (the composite PK), which matches the Edge Schema's composite identifier exactly.
## Design Overview
### A. Ent client and DI
- Change `ProvideDB/InitDB` to return `*ent.Client` instead of `*gorm.DB` (optionally also expose `*sql.DB` for raw statistics queries).
- Change the cleanup in `cmd/server/wire.go` from `db.DB().Close()` to `client.Close()`.
### A.1 Migration boundaries and name mapping (must be explicit)
To keep production data and query semantics unchanged, the Ent schemas must explicitly match the existing tables/columns:
- **Table names**: use the existing names (`users`, `api_keys`, `groups`, `accounts`, `account_groups`, `proxies`, `redeem_codes`, `settings`, `user_subscriptions`, `usage_logs`, etc.); do not let Ent's default naming generate new tables.
- **ID type**: existing primary keys are `BIGSERIAL`; use `int64` uniformly in Ent (avoiding the subtle issues Go's `int` can cause on 32-bit or cross-system builds).
- **Time columns**: `created_at/updated_at/deleted_at` are all `TIMESTAMPTZ`; declare the DB type explicitly in the schema so Ent does not generate `timestamp without time zone` and change behavior.
### A.2 Code generation and feature flags (must be pinned)
Pin the generation command in `backend/ent/generate.go` (example):
```go
//go:build ignore

package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate --feature intercept --feature sql/upsert ./schema
```
Notes:
- `intercept`: enables the generic interceptor utilities used for soft delete (and reusable global query policies later).
- `sql/upsert`: replaces GORM's `ON CONFLICT` (e.g. the `settings` upsert); can be left disabled if upserts are not migrated in the short term.
> The generation command and feature flags must be enforced in CI (avoiding the hidden drift of "generated locally but not in CI/production").
### B. Soft-Delete Implementation (required)
For every entity that supports soft delete:
- Add a `deleted_at` (or `delete_time`) field via a `Mixin` in the Ent schema.
- Use a **query interceptor** to append the `deleted_at IS NULL` filter to queries by default (including traversals).
- Use a **mutation hook** for two behaviors:
  - Intercept delete operations and turn them into updates that set `deleted_at = now()`.
  - Intercept update operations and append the `deleted_at IS NULL` filter by default, so soft-deleted rows cannot be updated accidentally (matching current GORM behavior).
- Provide `SkipSoftDelete(ctx)`: used explicitly by queries that must include soft-deleted data and by admin tasks that need a hard delete.
**Recommended `SkipSoftDelete` implementation**
```go
type softDeleteKey struct{}

func SkipSoftDelete(ctx context.Context) context.Context {
	return context.WithValue(ctx, softDeleteKey{}, true)
}

func shouldSkipSoftDelete(ctx context.Context) bool {
	v, _ := ctx.Value(softDeleteKey{}).(bool)
	return v
}
```
**Note**: Ent's "do not update soft-deleted rows by default" should normally be implemented with a mutation hook (not a query interceptor); otherwise `UpdateOneByID` can still update an already soft-deleted row, a behavioral difference from today.
**Behavioral compatibility contract (recommend encoding in tests)**
- `Delete(id)` on an already soft-deleted row should stay **idempotent** (return success or rows=0, but never throw `NotFound` and break existing behavior).
- Default queries (list/detail/eager loads) must never return soft-deleted rows.
- Hard delete is allowed only in explicit admin/audit scenarios, and only via an explicit `SkipSoftDelete(ctx)` or a dedicated method.
### B.1 Raw SQL and Transactional Consistency (must follow)
The project has several transactional write operations (e.g. `group_repo.DeleteCascade`), and some logic uses raw SQL (or will keep it).
Rules:
- **Raw writes inside a transaction must bind to that same transaction**: prefer Ent's `tx.ExecContext(ctx, ...)` for raw DML so it commits/rolls back together with the Ent mutations.
- Never execute writes inside a transaction through a separately injected `*sql.DB` (it bypasses the transaction and breaks atomicity).
### C. Repository Migration Strategy
Migrate the repositories with obvious CRUD/eager-loading first; keep complex statistics raw:
1. `user_repo.go` / `api_key_repo.go` / `group_repo.go` / `proxy_repo.go` / `redeem_code_repo.go` / `setting_repo.go`
2. `account_repo.go` (JSONB merge, complex filtering, and join ordering; partially kept raw)
3. `user_subscription_repo.go` (atomic increments, batch updates)
4. `usage_log_repo.go` (keep the raw SQL; move the underlying connection to `database/sql` or the Ent driver)
### D. Error Mapping
Change `repository/translatePersistenceError` from GORM errors to:
- `ent.IsNotFound(err)` → map to `service.ErrXxxNotFound`
- `ent.IsConstraintError(err)` / driver-level unique violation → map to `service.ErrXxxExists`
Also clean up every place GORM errors leak:
- `backend/internal/server/middleware/api_key_auth_google.go` - fixed: now checks `service.ErrApiKeyNotFound` (with unit test coverage)
- `backend/internal/repository/account_repo.go:50` - to migrate: checks `gorm.ErrRecordNotFound` directly
- `backend/internal/repository/redeem_code_repo.go:125` - to migrate: uses `gorm.ErrRecordNotFound`
- `backend/internal/repository/error_translate.go:16` - the core translation function; switch it to Ent errors
### E. JSONB Field Handling
The `credentials` and `extra` columns on the `accounts` table are JSONB, currently merged with PostgreSQL's `||` operator.
Ent approach:
- Define a custom `JSONMap` type for the schema
- For simple JSONB reads/writes, use Ent's `field.JSON()` type
- For JSONB merges (`COALESCE(credentials,'{}') || ?`), use raw SQL:
  - **Outside a transaction**: use `client.ExecContext(ctx, ...)` (reuses the same connection pool and observability).
  - **Inside a transaction**: use `tx.ExecContext(ctx, ...)` (preserves atomicity; never bypass the transaction).
- Alternatively read, merge, and write back in the application layer (requires a transaction for atomicity)
### F. DECIMAL/NUMERIC Columns (must be confirmed explicitly)
The current schema has several `DECIMAL/NUMERIC` columns (e.g. `users.balance`, `groups.rate_multiplier`, and the cost fields in subscriptions/statistics). GORM currently reads/writes them as `float64`.
Phase-one conclusion (compatibility first):
- Keep `float64`, and explicitly set the database type in the Ent schema to Postgres `numeric(… , …)` (so Ent does not generate `double precision`), accepting the existing precision risk (same behavior as today).
- **Precision first (optional later)**: switch to `decimal.Decimal` (or another decimal type) as the Go type to avoid cumulative rounding errors in money/rates; this ripples into `internal/service` field types and JSON serialization, so it is a much larger refactor.
## Database Migrations (recommended)
The repository already has `backend/migrations/*.sql`, and the database's evolution fits "versioned SQL migrations" better than letting the application mutate the schema at startup.
**Decision (phase one)**: keep `backend/migrations/*.sql` as the single versioned migration source. Ent is used only at runtime and never changes the schema during startup.
**Optional (later phases)**: if the team wants stronger schema diff/drift detection, Atlas can be introduced and aligned with the existing SQL migration strategy incrementally (not a phase-one prerequisite).
Important current-state caveats (must be handled first):
- Historically, "AutoMigrate at startup" was mixed with incomplete migration scripts: a fresh environment running only the SQL migrations could end up with missing tables/columns.
- Another high-risk item is the default admin/default group seed inside the SQL migrations (a fixed password/account, if present, is an obvious production security risk); it should be removed from migrations and created explicitly during the install flow.
Current strategy (the baseline this change has landed):
- `backend/internal/infrastructure/migrations_runner.go` introduces a built-in migrations runner (`schema_migrations` + `pg_advisory_lock`) that executes `backend/migrations/*.sql` in filename order and records checksums.
- Migration coverage was completed (new schema-parity / legacy data-repair migrations) so that an empty database can run the migrations and pass the current integration tests.
- The default admin/default group seeds were removed from migrations to avoid fixed-credential risk; the admin account is created explicitly by `internal/setup`.
Phase one includes at least:
- Adding the `user_allowed_groups` table and backfilling it from `users.allowed_groups`.
- (If needed) uniform indexes for all soft-delete tables: `(deleted_at)` or `(deleted_at, id)`, so the default filter does not slow down queries.
### Migration SQL Draft (PostgreSQL)
> The SQL below is meant to make the plan concrete; when landing it, split it into rollback-safe steps under `backend/migrations/*.sql` and assess the lock windows.
**(1) New join table: `user_allowed_groups`**
```sql
CREATE TABLE IF NOT EXISTS user_allowed_groups (
user_id BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
group_id BIGINT NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (user_id, group_id)
);
CREATE INDEX IF NOT EXISTS idx_user_allowed_groups_group_id
ON user_allowed_groups(group_id);
```
**(2) Backfill from `users.allowed_groups`**
```sql
INSERT INTO user_allowed_groups (user_id, group_id)
SELECT u.id, x.group_id
FROM users u
CROSS JOIN LATERAL unnest(u.allowed_groups) AS x(group_id)
WHERE u.allowed_groups IS NOT NULL
ON CONFLICT DO NOTHING;
```
**(3) Backfill validation (run once before canary/release)**
```sql
-- rows from the expanded old column (deduplicated) vs rows in the new table
WITH old_pairs AS (
SELECT DISTINCT u.id AS user_id, x.group_id
FROM users u
CROSS JOIN LATERAL unnest(u.allowed_groups) AS x(group_id)
WHERE u.allowed_groups IS NOT NULL
)
SELECT
(SELECT COUNT(*) FROM old_pairs) AS old_pair_count,
(SELECT COUNT(*) FROM user_allowed_groups) AS new_pair_count;
```
> Drop the `users.allowed_groups` column in Phase 2 only after the code has fully switched to the new table and the rollout has been verified, and do it as a standalone migration file.
### Phase 2 Cleanup Plan (run only after the rollout completes)
Preconditions (all must hold):
- The application's **read path** fetches allowed-groups entirely from `user_allowed_groups` (it no longer reads `users.allowed_groups`).
- The application's **write path** has stabilized on dual-write or writes only to `user_allowed_groups` (and no old version still writing the legacy column remains in production).
- Runtime metrics confirm no errors or permission regressions in allowed-groups functionality (at least one release cycle is recommended).
Execution steps (recommended):
1. Ship a "read only the new table, keep the legacy column" release (compatibility window) and monitor for a while.
2. Ship a "stop writing the legacy column (write only the join table)" release and monitor for a while.
3. Run a standalone migration (DDL):
   - `ALTER TABLE users DROP COLUMN allowed_groups;`
   - (Optional) drop any indexes/constraints tied to the legacy column (if any exist).
4. Ship a release that removes the legacy-column code paths (cleaning up leftover SQL such as `ANY(allowed_groups)`/`array_remove`).
Rollback strategy:
- If a regression appears during steps 1/2, roll back the application release (the DB stays backward compatible).
- Once step 3 (DROP COLUMN) runs, rollback requires manually re-adding the column and backfilling from the join table (not something to attempt during a production emergency).
Deployment strategy:
- Run the DB migration first (compatible with old code), then canary the Ent repositories.
- Keep a rollback path (feature flag or fast rollback to the previous image; DB migrations must stay backward compatible).
## Impact
- Files (expected changes): `backend/internal/infrastructure/*`, `backend/cmd/server/*`, `backend/internal/repository/*`, `backend/internal/setup/*`, `backend/internal/server/middleware/*`
- Dependencies: add `entgo.io/ent`; (optional) `ariga.io/atlas` / `ent/migrate`
## Risks and Mitigations
| Risk | Description | Mitigation |
| --- | --- | --- |
| Inconsistent soft-delete semantics | Ent does not filter soft-deleted rows automatically | Enforce the mixin+interceptor+hook; add integration tests covering "soft-deleted invisible / bypass works" |
| Schema migration risk | `allowed_groups` needs a data change (array → join table) | Two-phase migration; keep migrations backward compatible; canary release |
| Missing/drifting migration scripts | Schema historically evolved via AutoMigrate; SQL migrations may be incomplete | Complete the migrations before switching; add CI/integration checks that the scripts can rebuild the full schema |
| Statistics SQL behavior changes | Changing the connection layer may alter SQL details | Keep the original SQL in `usage_log_repo`; prefer black-box regression |
| Performance regression | The default soft-delete filter adds a condition | Index `deleted_at`; explain/load-test hot queries |
| Broken integration tests | The test harness relies on `*gorm.DB` transaction rollback | Migrate the test infrastructure first; switch to `*ent.Tx` or `*sql.Tx` |
| JSONB merges | Ent has no direct support for PostgreSQL's `\|\|` operator | Run raw SQL via `client.ExecContext/tx.ExecContext` (must use tx inside transactions) or merge in the application layer |
| Row-level locks | `clause.Locking{Strength: "UPDATE"}` needs replacing | Use Ent's `ForUpdate()` |
| Upsert semantics | Equivalent of `clause.OnConflict` | Use `OnConflict().UpdateNewValues()` or `DoNothing()` |
## Success Criteria (acceptance)
1. Existing unit/integration tests pass (including the Docker-based repository integration tests).
2. Default soft-delete filtering matches production: after any `Delete`, regular queries cannot see the row; it is visible with an explicit `SkipSoftDelete`.
3. `allowed_groups` functionality passes regression: query/bind/unbind/group-delete interplay stays consistent.
4. No behavior change on the key read/write paths (API key auth, account scheduling, subscription billing/limits); error types and HTTP status codes stay compatible.


@@ -1,28 +0,0 @@
## ADDED Requirements
### Requirement: Versioned SQL Migrations
The system MUST manage database schema changes via versioned SQL migration files under `backend/migrations/*.sql` and MUST record applied migrations in the database for auditability and idempotency.
#### Scenario: Migrations are applied idempotently
- **GIVEN** an empty PostgreSQL database
- **WHEN** the backend initializes its database connection
- **THEN** it MUST apply all SQL migrations in lexicographic filename order
- **AND** it MUST record each applied migration in `schema_migrations` with a checksum
- **AND** a subsequent initialization MUST NOT re-apply already-recorded migrations
### Requirement: Soft Delete Semantics
For entities that support soft delete, the system MUST preserve the existing semantics: soft-deleted rows are excluded from queries by default, and delete operations are idempotent.
#### Scenario: Soft-deleted rows are hidden by default
- **GIVEN** a row has `deleted_at` set
- **WHEN** the backend performs a standard "list" or "get" query
- **THEN** the row MUST NOT be returned by default
### Requirement: Allowed Groups Data Model
The system MUST migrate `users.allowed_groups` from a PostgreSQL array column to a normalized join table for type safety and maintainability.
#### Scenario: Allowed groups are represented as relationships
- **GIVEN** a user is allowed to bind a group
- **WHEN** the user/group association is stored
- **THEN** it MUST be stored as a `(user_id, group_id)` relationship row
- **AND** removing an association MUST hard-delete that relationship row


@@ -1,103 +0,0 @@
## 0. Baseline Confirmation and Preparation
- [x] 0.1 Catalog the soft-delete tables production depends on (every entity with `deleted_at`).
- [x] 0.2 Inventory all GORM usage: `Preload`, `Transaction`, `Locking`, `Expr`, `datatypes.JSONMap`, raw statistics SQL.
- [x] 0.3 Confirm the database is PostgreSQL; decide where migrations run (deploy time vs startup).
- [x] 0.3.1 **Decide the migration toolchain (phase one)**: use `backend/migrations/*.sql` as the single migration source; the built-in runner records `schema_migrations` (with checksums).
- [x] 0.3.2 **Complete migration coverage**: add schema-parity/legacy data-repair migrations so an empty database can be rebuilt with every table/column the current code needs (including the extended `settings` and `redeem_codes` columns, the `accounts` scheduling fields, `usage_logs.billing_type`, etc.).
- [x] 0.4 **Fix the existing GORM error-handling bug**: `api_key_auth_google.go` now checks the business error (`service.ErrApiKeyNotFound`), with added unit test coverage.
## 1. Introduce Ent (codegen and infrastructure)
- [x] 1.1 Add the `backend/ent/` directory (schemas, generated code, mixins); configure `entc` generation (go generate or a make target).
- [x] 1.1.1 Pin the `go:generate` command and feature flags (`intercept` + `sql/upsert`, plus `--idtype int64`).
- [x] 1.2 Implement the SoftDelete mixin (query interceptor + mutation hook + SkipSoftDelete(ctx)) so default filtering and soft-delete-on-delete semantics work.
- [x] 1.3 Rework `backend/internal/infrastructure`: provide `*ent.Client`; also provide `*sql.DB` (currently exposed via `gorm.DB.DB()`, for raw SQL).
- [x] 1.4 Rework the `backend/cmd/server/wire.go` cleanup: close the ent client.
- [x] 1.5 **Update the Wire dependency-injection config**: change every provider signature from `*gorm.DB` to `*ent.Client`.
- [x] 1.6 Import `backend/ent/runtime` (Ent-generated) at the service entry point so schema hooks/interceptors get registered (avoiding the import cycle that would leave them unregistered).
  - Example import: `github.com/Wei-Shaw/sub2api/ent/runtime`
## 2. Data Model and Migrations (backward compatibility first)
- [x] 2.1 Add the `user_allowed_groups` table: define columns, indexes, unique constraint; backfill from `users.allowed_groups`.
- [x] 2.1.1 Write the backfill validation SQL for `user_allowed_groups` (old_pairs vs new_pairs) and document the steps in the deployment docs/README.
- [x] 2.1.2 Design the Phase 2 cleanup: drop the `users.allowed_groups` column after the rollout completes (standalone migration file with an adequate rollback window).
- [x] 2.2 Keep the existing composite primary key on `account_groups`; migrate it to an Ent Edge Schema (no DB change); verify that `(account_id, group_id)` uniqueness is constrained at the DB layer (PK or unique).
- [x] 2.3 Add the necessary indexes for the soft-delete column (`deleted_at`).
- [x] 2.4 Remove `AutoMigrate` at startup in favor of executing `backend/migrations/*.sql` (aligning on the single migration source).
- [x] 2.5 Update the install/initialization flow: `internal/setup` no longer calls `repository.AutoMigrate` and runs `backend/migrations/*.sql` instead (keeping fresh installs on the same migration path as production).
## 3. Repository Migration (batched by risk)
### 3.A Low-risk repositories (migrate first to validate the Ent infrastructure)
- [x] 3.1 Migrate `setting_repo`: simple CRUD + upsert (Ent `OnConflictColumns(...).UpdateNewValues()`).
- [x] 3.2 Migrate `proxy_repo`: CRUD + soft delete + account-count statistics (statistics stay raw SQL; proxy table reads/writes move to Ent).
### 3.B Medium-risk repositories
- [x] 3.3 Migrate `api_key_repo`: eager loads (`WithUser`, `WithGroup`); translate errors into business errors.
- [x] 3.4 Migrate `redeem_code_repo`: CRUD + status updates.
- [x] 3.5 Migrate `group_repo`: transactions and cascade-delete logic (raw SQL may stay, but it must run inside the ent Tx, e.g. via `tx.ExecContext`, never bypassing the transaction).
  - Migrate the `users.allowed_groups` logic: group deletion becomes `DELETE FROM user_allowed_groups WHERE group_id = ?`.
### 3.C High-risk repositories
- [x] 3.6 Migrate `user_repo`: CRUD, pagination/filtering, atomic balance/concurrency updates (`gorm.Expr`); allowed groups move to the join-table implementation.
  - Replace the `ANY(allowed_groups)`/`array_remove` semantics with joins/filters/deletes on `user_allowed_groups`.
  - Cover `RemoveGroupFromAllowedGroups`: becomes `DELETE FROM user_allowed_groups WHERE group_id = ?` and returns rowsAffected.
- [x] 3.7 Migrate `user_subscription_repo`: batch expiry, usage increments (`gorm.Expr`), eager loads.
- [x] 3.8 Migrate `account_repo`: join-table ordering, JSONB merge (prefer raw SQL via `client.ExecContext/tx.ExecContext` for writes); verify bulk-update rowsAffected semantics stay consistent.
### 3.D Keep Raw SQL
- [x] 3.9 `usage_log_repo` keeps its original SQL; the underlying layer switches to an injected/obtained `*sql.DB` (e.g. infrastructure also provides `*sql.DB`).
  - Identify simple queries that fit the Ent builder (`Create`, `GetByID`).
  - Keep the complex CTE/aggregation SQL (trend statistics, Top N, etc.).
## 4. Error Handling and Cleanup
- [x] 4.1 Replace `repository/error_translate.go`: map with `ent.IsNotFound/IsConstraintError`, etc.
- [x] 4.2 Clean up GORM leaks:
  - [x] `middleware/api_key_auth_google.go` - fixed: the `gorm.ErrRecordNotFound` check migrated to a business-error check
  - [x] `repository/account_repo.go:50` - checked `gorm.ErrRecordNotFound` directly
  - [x] `repository/redeem_code_repo.go:125` - used `gorm.ErrRecordNotFound`
- [x] 4.3 Check the `internal/setup/` package for GORM dependencies.
- [x] 4.4 Check the `*_cache.go` files for latent GORM dependencies.
## 5. Testing and Regression
- [x] 5.1 **Migrate the test infrastructure** (high priority):
  - [x] **Align table creation with production (GORM phase)**: initialize the schema in the Postgres testcontainer by running `backend/migrations/*.sql` (no longer relying on AutoMigrate).
  - [x] Add a "schema parity / rebuildability" check: new integration tests assert that the key tables/columns exist and verify the migrations runner is idempotent.
  - [x] Add Ent transaction test helpers for the migrated repositories: bind `*sql.Tx` + the Ent driver to the same transaction for per-test rollback (see `testEntSQLTx`).
  - [x] Update `integration_harness_test.go`: switch from `*gorm.DB` to `*ent.Client`.
  - [x] Update `IntegrationDBSuite`: `testTx()` returns `*ent.Tx` or `*sql.Tx` instead of `*gorm.DB`.
  - [x] Ensure the transaction rollback mechanism works under Ent.
- [x] 5.2 Add soft-delete regression cases:
  - invisible by default after delete
  - visible with `SkipSoftDelete(ctx)`
  - repeated deletes are idempotent (must not introduce new `NotFound` behavior)
  - hard delete available (admin scenarios only)
- [ ] 5.3 Run the full unit + integration suites, focusing on:
  - API key auth
  - subscription billing/limits
  - account scheduling
  - statistics endpoints
## 6. Wrap-up (remove GORM)
- [x] 6.1 Remove the `gorm.io/*` dependencies and related code paths.
- [x] 6.2 Update README/deployment docs: migration commands, rollback strategy, developer codegen guide.
- [x] 6.3 Clean the GORM dependencies out of `go.mod`:
  - `gorm.io/gorm`
  - `gorm.io/driver/postgres`
  - `gorm.io/datatypes`
## Appendix: Effort Reference
| Component | Lines of code | GORM call sites | Complexity |
|------|---------|------------|--------|
| Repository layer total | ~13,000 | (TBD) | - |
| Raw SQL | - | (TBD) | High |
| gorm.Expr | - | (TBD) | Medium |
| Integration tests | (TBD) | - | High |
**Recommended migration order**
1. Test infrastructure (5.1) → makes the later migrations verifiable
2. Low-risk repositories (3.1-3.2) → validates the Ent infrastructure
3. Medium-risk repositories (3.3-3.5) → validates eager loading and transactions
4. High-risk repositories (3.6-3.8) → handles the complex cases
5. Error-handling cleanup (4.x) → unifies error mapping
6. Wrap-up (6.x) → removes GORM


@@ -1,31 +0,0 @@
# Project Context
## Purpose
[Describe your project's purpose and goals]
## Tech Stack
- [List your primary technologies]
- [e.g., TypeScript, React, Node.js]
## Project Conventions
### Code Style
[Describe your code style preferences, formatting rules, and naming conventions]
### Architecture Patterns
[Document your architectural decisions and patterns]
### Testing Strategy
[Explain your testing approach and requirements]
### Git Workflow
[Describe your branching strategy and commit conventions]
## Domain Context
[Add domain-specific knowledge that AI assistants need to understand]
## Important Constraints
[List any technical, business, or regulatory constraints]
## External Dependencies
[Document key external services, APIs, or systems]