erio a296425994 feat(channel-monitor): request templates with snapshot apply + headers/body override
Problem:
Upstream channels can reject monitor probes based on client fingerprint
(e.g. "only Claude Code clients allowed"). The monitor had no way to
customize the outgoing request to bypass such restrictions.

Solution:
Introduce reusable request templates that carry extra_headers plus an
optional body override; monitors reference a template and receive a
snapshot copy on apply. Template edits do NOT auto-propagate — users
must click "apply to associated monitors" to refresh snapshots, so a
bad template edit cannot instantly break all production monitors.

Data model (migration 112):
- channel_monitor_request_templates: id, name, provider, description,
  extra_headers jsonb, body_override_mode ('off'|'merge'|'replace'),
  body_override jsonb. Unique (provider, name).
- channel_monitors: +template_id (FK, ON DELETE SET NULL), +extra_headers,
  +body_override_mode, +body_override (the three runtime snapshot fields).

Checker (channel_monitor_checker.go):
- callProvider + runCheckForModel accept a CheckOptions carrying the
  snapshot fields. mergeHeaders applies user headers on top of adapter
  defaults (forbidden list: Host / Content-Length / Transfer-Encoding /
  Connection / Content-Encoding).
- buildRequestBody:
    off     -> adapter default body
    merge   -> shallow-merge over default; per-provider deny list
               (model/messages/contents) protects the challenge contract
    replace -> user body verbatim
- Replace mode skips challenge validation; instead HTTP 2xx + non-empty
  extracted response text = operational, empty = failed.
- 4 new unit tests cover all three modes + replace/empty-response case.

Admin API:
- /admin/channel-monitor-templates CRUD + /:id/apply (overwrite snapshot
  on all template_id=id monitors, returns affected count).
- channel_monitor request/response DTOs gain the 4 new fields.

Frontend:
- channelMonitorTemplate.ts API client.
- MonitorAdvancedRequestConfig.vue shared component for headers textarea
  + body mode radio + body JSON editor; used by both template and monitor
  forms.
- MonitorTemplateManagerDialog.vue: provider tabs, list/create/edit/
  delete/apply, live "associated monitors" count per row.
- MonitorFiltersBar: new "模板管理" (Template Management) button next to
  "新增监控" (New Monitor).
- MonitorFormDialog: collapsible "高级" (Advanced) section with template
  dropdown (filtered by form.provider, clears on provider change) +
  embedded AdvancedRequestConfig. Picking a template copies its fields
  into the form (snapshot semantics mirrored on the client).
- i18n zh/en entries for all new copy.

chore: bump version to 0.1.114.32
2026-04-21 14:14:49 +08:00

Database Migrations

Overview

This directory contains SQL migration files for database schema changes. The migration system uses SHA256 checksums to ensure migration immutability and consistency across environments.

Migration File Naming

Format: NNN_description.sql

  • NNN: Sequential number (e.g., 001, 002, 003)
  • description: Brief description in snake_case

Example: 017_add_gemini_tier_id.sql

_notx.sql Naming and Execution Semantics (Concurrent Indexes Only)

When a migration contains CREATE INDEX CONCURRENTLY or DROP INDEX CONCURRENTLY, it must use the _notx.sql suffix, for example:

  • 062_add_accounts_priority_indexes_notx.sql
  • 063_drop_legacy_indexes_notx.sql

Execution rules:

  1. *.sql (without _notx) runs inside a transaction.
  2. *_notx.sql runs non-transactionally and is not wrapped in BEGIN/COMMIT.
  3. *_notx.sql may contain only concurrent index statements; transaction-control statements and other DDL/DML must not be mixed in.

Idempotency requirements (mandatory):

  • Creating an index: CREATE INDEX CONCURRENTLY IF NOT EXISTS ...
  • Dropping an index: DROP INDEX CONCURRENTLY IF EXISTS ...

This guarantees that disaster-recovery replays and repeated executions do not fail because an object already exists (or does not exist).
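
Putting these rules together, a minimal _notx migration file might look like the following sketch (the file name and index are hypothetical, not taken from this repository):

```sql
-- 064_add_usage_logs_created_at_idx_notx.sql (hypothetical example)
-- Executed statement-by-statement with no surrounding transaction,
-- which is what CREATE INDEX CONCURRENTLY requires.
-- IF NOT EXISTS keeps a disaster-recovery replay from failing.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_usage_logs_created_at
    ON usage_logs (created_at);
```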

Migration File Structure

This project uses a custom migration runner (internal/repository/migrations_runner.go) that executes the full SQL file content as-is.

  • Regular migrations (*.sql): executed in a transaction.
  • Non-transactional migrations (*_notx.sql): split by statement and executed without transaction (for CONCURRENTLY).
-- Forward-only migration (recommended)
ALTER TABLE usage_logs ADD COLUMN IF NOT EXISTS example_column VARCHAR(100);

⚠️ Do not place executable "Down" SQL in the same file. The runner does not parse goose Up/Down sections and will execute all SQL statements in the file.

Important Rules

⚠️ Immutability Principle

Once a migration is applied to ANY environment (dev, staging, production), it MUST NOT be modified.

Why?

  • Each migration has a SHA256 checksum stored in the schema_migrations table
  • Modifying an applied migration causes checksum mismatch errors
  • Different environments would have inconsistent database states
  • Breaks audit trail and reproducibility

Correct Workflow

  1. Create new migration

    # Create new file with next sequential number
    touch migrations/018_your_change.sql
    
  2. Write forward-only migration SQL

    • Put only the intended schema change in the file
    • If rollback is needed, create a new migration file to revert
  3. Test locally

    # Apply migration
    make migrate-up
    
    # Verify the schema change and data integrity
    # (the runner is forward-only; to undo a change, create a new migration)
    
  4. Commit and deploy

    git add migrations/018_your_change.sql
    git commit -m "feat(db): add your change"
    

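Because the runner is forward-only, undoing a change is itself a new migration. A sketch, assuming the example_column added in the earlier snippet needs to be removed (file name hypothetical):

```sql
-- 019_remove_example_column.sql (hypothetical)
-- Reverts the earlier ALTER TABLE; IF EXISTS keeps re-execution safe.
ALTER TABLE usage_logs DROP COLUMN IF EXISTS example_column;
```
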
What NOT to Do

  • Modify an already-applied migration file
  • Delete migration files
  • Change migration file names
  • Reorder migration numbers

🔧 If You Accidentally Modified an Applied Migration

Error message:

migration 017_add_gemini_tier_id.sql checksum mismatch (db=abc123... file=def456...)

Solution:

# 1. Find the original version
git log --oneline -- migrations/017_add_gemini_tier_id.sql

# 2. Revert to the commit when it was first applied
git checkout <commit-hash> -- migrations/017_add_gemini_tier_id.sql

# 3. Create a NEW migration for your changes
touch migrations/018_your_new_change.sql

Migration System Details

  • Checksum Algorithm: SHA256 of trimmed file content
  • Tracking Table: schema_migrations (filename, checksum, applied_at)
  • Runner: internal/repository/migrations_runner.go
  • Auto-run: Migrations run automatically on service startup
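
Based on the fields listed above, the tracking table is roughly equivalent to the following sketch (the actual DDL lives with the runner and may differ):

```sql
-- Assumed shape of the tracking table, inferred from the fields above
CREATE TABLE IF NOT EXISTS schema_migrations (
    filename   TEXT PRIMARY KEY,   -- e.g. '017_add_gemini_tier_id.sql'
    checksum   TEXT NOT NULL,      -- SHA256 of trimmed file content
    applied_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```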

Best Practices

  1. Keep migrations small and focused

    • One logical change per migration
    • Easier to review and, if needed, to revert with a follow-up migration
  2. Plan rollbacks as new migrations

    • The runner is forward-only and does not parse Down sections
    • If a change must be undone, write a new migration that reverts it
  3. Let the runner manage transactions

    • Regular migrations already run inside a transaction; do not add BEGIN/COMMIT yourself
    • Put CONCURRENTLY index work in *_notx.sql files instead
  4. Add comments

    • Explain WHY the change is needed
    • Document any special considerations
  5. Test in development first

    • Apply migration locally
    • Verify data integrity
    • Check that IF EXISTS / IF NOT EXISTS guards make re-execution safe

Example Migration

-- Add tier_id field to Gemini OAuth accounts for quota tracking
UPDATE accounts
SET credentials = jsonb_set(
    credentials,
    '{tier_id}',
    '"LEGACY"',
    true
)
WHERE platform = 'gemini'
  AND type = 'oauth'
  AND credentials->>'tier_id' IS NULL;
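
A quick way to check a data migration like this one is to count the rows it should have covered; after the migration runs, this query should return 0:

```sql
-- Gemini OAuth accounts still missing tier_id (expect 0 post-migration)
SELECT COUNT(*)
FROM accounts
WHERE platform = 'gemini'
  AND type = 'oauth'
  AND credentials->>'tier_id' IS NULL;
```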

Troubleshooting

Checksum Mismatch

See "If You Accidentally Modified an Applied Migration" above.

Migration Failed

# Check migration status
psql -d sub2api -c "SELECT * FROM schema_migrations ORDER BY applied_at DESC;"

# Manually rollback if needed (use with caution)
# Better to fix the migration and create a new one

Need to Skip a Migration (Emergency Only)

-- DANGEROUS: Only use in development or with extreme caution
INSERT INTO schema_migrations (filename, checksum, applied_at)
VALUES ('NNN_migration.sql', 'calculated_checksum', NOW());

References