* fix(gemini): fix google_one OAuth config and scopes

  - Use the built-in client for the google_one type in ExchangeCode and RefreshToken
  - Add DefaultGoogleOneScopes, including the generative-language and drive.readonly scopes
  - Use the dedicated scopes for the google_one type in EffectiveOAuthConfig
  - Rename docker-compose.override.yml to .example and add it to .gitignore
  - Flesh out the docker-compose.override.yml.example sample documentation

  Problems solved:
  1. google_one OAuth authorization succeeded, but API calls returned 403 (insufficient permissions)
  2. Missing the generative-language scope required to call the Gemini API
  3. Missing the drive.readonly scope required to read the Drive storage quota

* fix(antigravity): skip all thinking blocks entirely for Claude models

  Analysis:
  - The previous code tried to keep thinking blocks that carried a signature
  - But Vertex AI signatures are integrity tokens that cannot be verified locally
  - Result: 400 errors ("Invalid signature in thinking block")

  Root cause:
  1. thinking is already disabled for non-Gemini models (isThinkingEnabled=false)
  2. Vertex AI requires (thinking, signature) pairs to be replayed verbatim or omitted entirely
  3. Vertex's cryptographic verification cannot be reproduced locally

  Fix:
  - Skip all thinking blocks for Claude models, with or without a signature
  - Keep the dummy-signature behavior for Gemini models unchanged
  - Update test cases to match the new expected behavior

  Impact:
  - Eliminates thinking-related 400 errors
  - Consistent with the existing thinking-disabled policy
  - Gemini thinking support is unaffected

  Tests:
  - ✅ TestBuildParts_ThinkingBlockWithoutSignature passes
  - ✅ TestBuildTools_CustomTypeTools passes

  Ref: Codex review suggestion

* fix(gateway): fix 400 errors on the count_tokens endpoint

  Analysis:
  - count_tokens requests containing thinking blocks returned 400
  - Cause: thinking blocks were not filtered and were forwarded to the upstream API as-is
  - The upstream API rejects invalid thinking signatures

  Root cause:
  1. /v1/messages requests filter thinking blocks via TransformClaudeToGemini
  2. count_tokens requests bypass that transformation and forward the raw request body
  3. Thinking blocks with invalid signatures therefore reached the upstream

  Fix:
  - Add a FilterThinkingBlocks utility function
  - Apply it in buildCountTokensRequest (a one-line change)
  - Behavior now matches /v1/messages

  Implementation details:
  - FilterThinkingBlocks: parse the JSON, drop thinking blocks, re-serialize
  - Fail-safe: return the original body if parsing or serialization fails
  - Performance: re-serialize only when a thinking block was actually found

  Tests:
  - ✅ 6 unit tests pass
  - ✅ Covering normal filtering, no-thinking-block, and invalid-JSON cases
  - ✅ Existing tests unaffected

  Impact:
  - Eliminates count_tokens 400 errors
  - Antigravity accounts unaffected (they still return a mocked response)
  - Applies to all account types (OAuth, API Key)

  Files changed:
  - backend/internal/service/gateway_request.go: +62 lines (new function)
  - backend/internal/service/gateway_service.go: +2 lines (apply the filter)
  - backend/internal/service/gateway_request_test.go: +62 lines (tests)

* fix(gateway): harden the thinking-block filtering logic

  Improvements based on Codex analysis and suggestions:

  Analysis:
  - New error: "signature: Field required" (signature missing)
  - Old error: "Invalid signature" (signature present but invalid)
  - Both show that thinking blocks in requests are hazardous

  Codex recommendation:
  - Keep Option A: skip all thinking blocks entirely
  - Rationale: thinking blocks should be output-only unless there is proof of a server-side origin
  - A stateless proxy cannot safely distinguish upstream-originated blocks from client-injected ones

  Changes:
  1. Harden FilterThinkingBlocks
     - Filter explicit thinking blocks: {"type":"thinking", ...}
     - Filter untyped thinking objects: {"thinking": {...}}
     - Keep thinking fields inside other block types such as tool_use
     - Fix: only update the content array when something was actually filtered
  2. Widen the filter's scope
     - Apply FilterThinkingBlocks to the main /v1/messages path as well
     - Previously only count_tokens was filtered; now both endpoints are
     - Prevents thinking-related 400 errors on every endpoint
  3. Better tests
     - New: filtering thinking blocks without a type discriminator
     - New: not filtering thinking fields inside tool_use
     - Verified with a containsThinkingBlock helper

  Tests:
  - ✅ 8 test cases pass
  - ✅ Covering the various thinking-block shapes
  - ✅ Confirming other block types are not caught by mistake

  Impact:
  - Eliminates both "signature required" and "invalid signature" errors
  - Unifies /v1/messages and count_tokens behavior
  - More robust thinking-block detection

  Ref: Codex review and code improvements

* refactor: code cleanup per Codex review

  P1 and P2 improvements from the Codex code review:

  P1 (important):
  1. Quieter logging
     - Remove the log.Printf emitted when a thinking block is skipped
     - Avoids log noise under high request rates
     - A comment notes this can be monitored via metrics instead
  2. Remove dead code
     - Delete the unused isValidThoughtSignature function (27 lines)
     - No longer needed now that thinking blocks are skipped entirely

  P2 (performance):
  3. Fast-path check
     - Add a bytes.Contains pre-check to FilterThinkingBlocks
     - If the request body does not contain the string "thinking", return immediately
     - Avoids needless JSON parsing

  Details:
  - request_transformer.go: -27 lines (deleted function), +1 line (comment)
  - gateway_request.go: +5 lines (fast path + bytes import)

  Tests:
  - ✅ TestBuildParts_ThinkingBlockWithoutSignature passes
  - ✅ TestFilterThinkingBlocks passes (8 test cases)

  Impact: less log noise, faster filtering, simpler code

  Ref: Codex code review suggestions

* fix: fix golangci-lint findings

  - Format gateway_request_test.go
  - Replace an if-else chain with a switch statement (staticcheck QF1003)

* fix(antigravity): fix thinking-signature handling and add Auto-mode fallback

  Analysis:
  1. The previous code wrongly disabled thinkingConfig for Claude via Vertex
  2. Signatures on historical thinking blocks were dropped entirely, breaking verification
  3. Dummy signatures cause 400 errors when models are mixed across a conversation

  Changes:

  request_transformer.go:
  - Remove the faulty logic (lines 38-43) that disabled thinkingConfig
  - Introduce a thoughtSignatureMode strategy (Preserve/Dummy)
  - Claude mode: pass real signatures through; filter empty/dummy ones
  - Gemini mode: use a dummy signature
  - Support signature-only thinking blocks
  - Pass tool_use signatures through as well

  antigravity_gateway_service.go:
  - New isSignatureRelatedError() to detect signature-related errors
  - New stripThinkingFromClaudeRequest() to remove thinking blocks
  - Auto mode: on a 400 containing signature keywords, automatically fall back and retry
  - The retry removes the thinking config and all thinking blocks from messages
  - At most one retry, to avoid loops

  Tests:
  - Updated and added coverage for the Claude preserve / Gemini dummy modes
  - New tool_use signature-handling test
  - All tests pass (6/6)

  Impact:
  - ✅ Claude via Vertex can use thinking normally
  - ✅ Historical signatures pass through correctly, avoiding verification failures
  - ✅ Invalid signatures are filtered automatically when models are mixed
  - ✅ Error-driven fallback self-heals signature problems
  - ✅ No effect on the plain Claude API or other channels

  Ref: Codex in-depth analysis and implementation suggestions

* fix(lint): fix gofmt formatting

* fix(antigravity): stripThinkingFromClaudeRequest missed untyped thinking blocks

  Problem:
  - Codex review noted stripThinkingFromClaudeRequest only removed blocks with type="thinking"
  - Thinking objects without a type field (e.g. {"thinking": "...", "signature": "..."}) were left in place
  - Retries therefore still contained invalid thinking blocks, and the upstream kept returning 400

  Fix:
  - Also skip blocks that have no type but do have a thinking field
  - Both shapes are now removed:
    1. {"type": "thinking", "thinking": "...", "signature": "..."}
    2. {"thinking": "...", "signature": "..."} (untyped)

  Tests: all pass
  Ref: Codex P1 review comment
# Sub2API

## Demo
Try Sub2API online: https://v2.pincc.ai/
Demo credentials (shared demo environment; not created automatically for self-hosted installs):
| Email | Password |
|---|---|
| admin@sub2api.com | admin123 |
## Overview
Sub2API is an AI API gateway platform for distributing and managing API quota from AI product subscriptions (such as a $200/month Claude Code plan). Users access upstream AI services through platform-issued API Keys, while the platform handles authentication, billing, load balancing, and request forwarding.
## Features
- Multi-Account Management - Support multiple upstream account types (OAuth, API Key)
- API Key Distribution - Generate and manage API Keys for users
- Precise Billing - Token-level usage tracking and cost calculation
- Smart Scheduling - Intelligent account selection with sticky sessions
- Concurrency Control - Per-user and per-account concurrency limits
- Rate Limiting - Configurable request and token rate limits
- Admin Dashboard - Web interface for monitoring and management
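The Concurrency Control feature above can be pictured as a per-key semaphore. A minimal in-memory sketch follows (illustrative only; Sub2API's actual limiter, which must also coordinate per-account limits and work across processes, is not shown here):

```go
package main

import (
	"fmt"
	"sync"
)

// keyLimiter caps the number of in-flight requests per API key.
type keyLimiter struct {
	mu    sync.Mutex
	limit int
	inUse map[string]int
}

func newKeyLimiter(limit int) *keyLimiter {
	return &keyLimiter{limit: limit, inUse: make(map[string]int)}
}

// Acquire reports whether the key may start another request.
func (l *keyLimiter) Acquire(key string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inUse[key] >= l.limit {
		return false // over the per-key concurrency limit
	}
	l.inUse[key]++
	return true
}

// Release marks one request for the key as finished.
func (l *keyLimiter) Release(key string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inUse[key] > 0 {
		l.inUse[key]--
	}
}

func main() {
	l := newKeyLimiter(2)
	fmt.Println(l.Acquire("sk-a"), l.Acquire("sk-a"), l.Acquire("sk-a")) // prints: true true false
}
```

A request handler would call `Acquire` before forwarding and `Release` (usually via `defer`) once the upstream response finishes streaming.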
## Tech Stack
| Component | Technology |
|---|---|
| Backend | Go 1.21+, Gin, GORM |
| Frontend | Vue 3.4+, Vite 5+, TailwindCSS |
| Database | PostgreSQL 15+ |
| Cache/Queue | Redis 7+ |
## Deployment

### Method 1: Script Installation (Recommended)
One-click installation script that downloads pre-built binaries from GitHub Releases.
#### Prerequisites
- Linux server (amd64 or arm64)
- PostgreSQL 15+ (installed and running)
- Redis 7+ (installed and running)
- Root privileges
#### Installation Steps

```bash
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install.sh | sudo bash
```
The script will:
- Detect your system architecture
- Download the latest release
- Install the binary to `/opt/sub2api`
- Create a systemd service
- Configure system user and permissions
#### Post-Installation

```bash
# 1. Start the service
sudo systemctl start sub2api

# 2. Enable auto-start on boot
sudo systemctl enable sub2api

# 3. Open the Setup Wizard in your browser
# http://YOUR_SERVER_IP:8080
```
The Setup Wizard will guide you through:
- Database configuration
- Redis configuration
- Admin account creation
#### Upgrade
You can upgrade directly from the Admin Dashboard by clicking the Check for Updates button in the top-left corner.
The web interface will:
- Check for new versions automatically
- Download and apply updates with one click
- Support rollback if needed
#### Useful Commands

```bash
# Check status
sudo systemctl status sub2api

# View logs
sudo journalctl -u sub2api -f

# Restart service
sudo systemctl restart sub2api

# Uninstall
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install.sh | sudo bash -s -- uninstall -y
```
### Method 2: Docker Compose
Deploy with Docker Compose, including PostgreSQL and Redis containers.
#### Prerequisites
- Docker 20.10+
- Docker Compose v2+
#### Installation Steps

```bash
# 1. Clone the repository
git clone https://github.com/Wei-Shaw/sub2api.git
cd sub2api

# 2. Enter the deploy directory
cd deploy

# 3. Copy environment configuration
cp .env.example .env

# 4. Edit configuration (set your passwords)
nano .env
```
Required configuration in `.env`:

```bash
# PostgreSQL password (REQUIRED - change this!)
POSTGRES_PASSWORD=your_secure_password_here

# Optional: Admin account
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=your_admin_password

# Optional: Custom port
SERVER_PORT=8080
```

Then continue with the remaining steps:

```bash
# 5. Start all services
docker-compose up -d

# 6. Check status
docker-compose ps

# 7. View logs
docker-compose logs -f sub2api
```
#### Access
Open http://YOUR_SERVER_IP:8080 in your browser.
#### Upgrade

```bash
# Pull latest image and recreate container
docker-compose pull
docker-compose up -d
```
#### Useful Commands

```bash
# Stop all services
docker-compose down

# Restart
docker-compose restart

# View all logs
docker-compose logs -f
```
### Method 3: Build from Source
Build and run from source code for development or customization.
#### Prerequisites
- Go 1.21+
- Node.js 18+
- PostgreSQL 15+
- Redis 7+
#### Build Steps

```bash
# 1. Clone the repository
git clone https://github.com/Wei-Shaw/sub2api.git
cd sub2api

# 2. Build frontend
cd frontend
npm install
npm run build
# Output will be in ../backend/internal/web/dist/

# 3. Build backend with embedded frontend
cd ../backend
go build -tags embed -o sub2api ./cmd/server

# 4. Create configuration file
cp ../deploy/config.example.yaml ./config.yaml

# 5. Edit configuration
nano config.yaml
```
> **Note:** The `-tags embed` flag embeds the frontend into the binary. Without this flag, the binary will not serve the frontend UI.
Key configuration in `config.yaml`:

```yaml
server:
  host: "0.0.0.0"
  port: 8080
  mode: "release"

database:
  host: "localhost"
  port: 5432
  user: "postgres"
  password: "your_password"
  dbname: "sub2api"

redis:
  host: "localhost"
  port: 6379
  password: ""

jwt:
  secret: "change-this-to-a-secure-random-string"
  expire_hour: 24

default:
  user_concurrency: 5
  user_balance: 0
  api_key_prefix: "sk-"
  rate_multiplier: 1.0
```
```bash
# 6. Run the application
./sub2api
```
#### Development Mode

```bash
# Backend (with hot reload)
cd backend
go run ./cmd/server

# Frontend (with hot reload)
cd frontend
npm run dev
```
#### Code Generation

When editing `backend/ent/schema`, regenerate Ent + Wire:

```bash
cd backend
go generate ./ent
go generate ./cmd/server
```
## Antigravity Support
Sub2API supports Antigravity accounts. After authorization, dedicated endpoints are available for Claude and Gemini models.
### Dedicated Endpoints

| Endpoint | Model |
|---|---|
| `/antigravity/v1/messages` | Claude models |
| `/antigravity/v1beta/` | Gemini models |
### Claude Code Configuration

```bash
export ANTHROPIC_BASE_URL="http://localhost:8080/antigravity"
export ANTHROPIC_AUTH_TOKEN="sk-xxx"
```
### Hybrid Scheduling Mode
Antigravity accounts support optional hybrid scheduling. When enabled, the general endpoints `/v1/messages` and `/v1beta/` will also route requests to Antigravity accounts.
> ⚠️ **Warning:** Anthropic Claude and Antigravity Claude cannot be mixed within the same conversation context. Use groups to isolate them properly.
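This is where the sticky sessions listed under Features matter: a conversation must keep hitting the same upstream account rather than hopping between incompatible ones. One deterministic way to pin a session is to hash its identifier over the candidate accounts. The sketch below is illustrative only; the account names and the FNV-hash scheme are assumptions, not the project's actual scheduler, which also has to handle account health and rebalancing.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickAccount deterministically maps a session ID to one of the
// available upstream accounts, so repeated requests from the same
// conversation stick to a single account.
func pickAccount(sessionID string, accounts []string) string {
	if len(accounts) == 0 {
		return ""
	}
	h := fnv.New32a()
	h.Write([]byte(sessionID))
	return accounts[h.Sum32()%uint32(len(accounts))]
}

func main() {
	accounts := []string{"acct-anthropic-1", "acct-antigravity-1"}
	// The same session always lands on the same account.
	fmt.Println(pickAccount("conv-42", accounts) == pickAccount("conv-42", accounts)) // prints: true
}
```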
## Project Structure

```text
sub2api/
├── backend/                 # Go backend service
│   ├── cmd/server/          # Application entry
│   ├── internal/            # Internal modules
│   │   ├── config/          # Configuration
│   │   ├── model/           # Data models
│   │   ├── service/         # Business logic
│   │   ├── handler/         # HTTP handlers
│   │   └── gateway/         # API gateway core
│   └── resources/           # Static resources
│
├── frontend/                # Vue 3 frontend
│   └── src/
│       ├── api/             # API calls
│       ├── stores/          # State management
│       ├── views/           # Page components
│       └── components/      # Reusable components
│
└── deploy/                  # Deployment files
    ├── docker-compose.yml   # Docker Compose configuration
    ├── .env.example         # Environment variables for Docker Compose
    ├── config.example.yaml  # Full config file for binary deployment
    └── install.sh           # One-click installation script
```
## License
MIT License
If you find this project useful, please give it a star!