Sub2API Deployment Files
This directory contains files for deploying Sub2API on Linux servers.
Deployment Methods
| Method | Best For | Setup Wizard |
|---|---|---|
| Docker Compose | Quick setup, all-in-one | Not needed (auto-setup) |
| Binary Install | Production servers, systemd | Web-based wizard |
Files
| File | Description |
|---|---|
| docker-compose.yml | Docker Compose configuration (named volumes) |
| docker-compose.local.yml | Docker Compose configuration (local directories, easy migration) |
| docker-deploy.sh | One-click Docker deployment script (recommended) |
| .env.example | Docker environment variables template |
| DOCKER.md | Docker Hub documentation |
| install.sh | One-click binary installation script |
| sub2api.service | Systemd service unit file |
| config.example.yaml | Example configuration file |
Docker Deployment (Recommended)
Method 1: One-Click Deployment (Recommended)
Use the automated preparation script for the easiest setup:
# Download and run the preparation script
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
# Or download first, then run
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh -o docker-deploy.sh
chmod +x docker-deploy.sh
./docker-deploy.sh
What the script does:
- Downloads docker-compose.local.yml and .env.example
- Automatically generates secure secrets (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
- Creates a .env file with the generated secrets
- Creates the necessary data directories (data/, postgres_data/, redis_data/)
- Displays the generated credentials (POSTGRES_PASSWORD, JWT_SECRET, etc.)
After running the script:
# Start services
docker-compose -f docker-compose.local.yml up -d
# View logs
docker-compose -f docker-compose.local.yml logs -f sub2api
# If admin password was auto-generated, find it in logs:
docker-compose -f docker-compose.local.yml logs sub2api | grep "admin password"
# Access Web UI
# http://localhost:8080
Method 2: Manual Deployment
If you prefer manual control:
# Clone repository
git clone https://github.com/Wei-Shaw/sub2api.git
cd sub2api/deploy
# Configure environment
cp .env.example .env
nano .env # Set POSTGRES_PASSWORD and other required variables
# Generate secure secrets (recommended)
JWT_SECRET=$(openssl rand -hex 32)
TOTP_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "JWT_SECRET=${JWT_SECRET}" >> .env
echo "TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}" >> .env
# Create data directories
mkdir -p data postgres_data redis_data
# Start all services using local directory version
docker-compose -f docker-compose.local.yml up -d
# View logs (check for auto-generated admin password)
docker-compose -f docker-compose.local.yml logs -f sub2api
# Access Web UI
# http://localhost:8080
Deployment Version Comparison
| Version | Data Storage | Migration | Best For |
|---|---|---|---|
| docker-compose.local.yml | Local directories (./data, ./postgres_data, ./redis_data) | ✅ Easy (tar entire directory) | Production, need frequent backups/migration |
| docker-compose.yml | Named volumes (/var/lib/docker/volumes/) | ⚠️ Requires docker commands | Simple setup, don't need migration |
Recommendation: Use docker-compose.local.yml (deployed by docker-deploy.sh) for easier data management and migration.
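If you started with the named-volume version and later want the portability of local directories, the data can be copied out of the volumes with a throwaway container. A minimal sketch, assuming default Compose volume names such as deploy_postgres_data (these names are an assumption; check the real ones with docker volume ls):

# Volume names below are assumptions - verify them first with: docker volume ls
docker-compose down
mkdir -p data postgres_data redis_data
docker run --rm -v deploy_postgres_data:/from -v "$(pwd)/postgres_data:/to" alpine sh -c "cp -a /from/. /to/"
docker run --rm -v deploy_redis_data:/from -v "$(pwd)/redis_data:/to" alpine sh -c "cp -a /from/. /to/"
docker run --rm -v deploy_data:/from -v "$(pwd)/data:/to" alpine sh -c "cp -a /from/. /to/"
# Start again with the local-directory compose file
docker-compose -f docker-compose.local.yml up -d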
How Auto-Setup Works
When using Docker Compose with AUTO_SETUP=true:
- On first run, the system automatically:
  - Connects to PostgreSQL and Redis
  - Applies database migrations (SQL files in backend/migrations/*.sql) and records them in schema_migrations
  - Generates a JWT secret (if not provided)
  - Creates the admin account (password auto-generated if not provided)
  - Writes config.yaml
- No manual Setup Wizard needed - just configure .env and start (see the sample .env sketch below)
- If ADMIN_PASSWORD is not set, check the logs for the generated password: docker-compose logs sub2api | grep "admin password"
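For orientation, a minimal .env for the auto-setup flow might look like the sketch below. Only POSTGRES_PASSWORD is strictly required; the defaults shown come from the Environment Variables table later in this document, the secret values are placeholders, and .env.example remains the authoritative reference.

# Minimal .env sketch (placeholder values - do not reuse these secrets)
AUTO_SETUP=true                        # enables the auto-setup flow described above (may already be set by the compose file)
POSTGRES_PASSWORD=change-me            # required
JWT_SECRET=<openssl rand -hex 32>      # recommended; keep it fixed so sessions survive restarts
TOTP_ENCRYPTION_KEY=<openssl rand -hex 32>
SERVER_PORT=8080                       # default
ADMIN_EMAIL=admin@sub2api.local        # default
# ADMIN_PASSWORD left unset -> auto-generated and printed in the logs
TZ=Asia/Shanghai                       # default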
Database Migration Notes (PostgreSQL)
- Migrations are applied in lexicographic order (e.g. 001_...sql, 002_...sql).
- schema_migrations tracks applied migrations (filename + checksum).
- Migrations are forward-only; rollback requires a DB backup restore or a manual compensating SQL script, so take a backup before upgrading (see the sketch below).
Verify users.allowed_groups → user_allowed_groups backfill
During the incremental GORM→Ent migration, users.allowed_groups (legacy BIGINT[]) is being replaced by a normalized join table user_allowed_groups(user_id, group_id).
Run this query to compare the legacy data vs the join table:
WITH old_pairs AS (
SELECT DISTINCT u.id AS user_id, x.group_id
FROM users u
CROSS JOIN LATERAL unnest(u.allowed_groups) AS x(group_id)
WHERE u.allowed_groups IS NOT NULL
)
SELECT
(SELECT COUNT(*) FROM old_pairs) AS old_pair_count,
(SELECT COUNT(*) FROM user_allowed_groups) AS new_pair_count;
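The query can be run directly inside the postgres container. A sketch, again assuming the Compose service is called postgres and the user/database names match your .env; if the backfill is complete, the two counts should be equal:

# Run the verification query inside the postgres container (user/db names are assumptions)
docker-compose -f docker-compose.local.yml exec -T postgres psql -U sub2api -d sub2api -c "
  WITH old_pairs AS (
    SELECT DISTINCT u.id AS user_id, x.group_id
    FROM users u
    CROSS JOIN LATERAL unnest(u.allowed_groups) AS x(group_id)
    WHERE u.allowed_groups IS NOT NULL
  )
  SELECT
    (SELECT COUNT(*) FROM old_pairs) AS old_pair_count,
    (SELECT COUNT(*) FROM user_allowed_groups) AS new_pair_count;"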
Commands
For local directory version (docker-compose.local.yml):
# Start services
docker-compose -f docker-compose.local.yml up -d
# Stop services
docker-compose -f docker-compose.local.yml down
# View logs
docker-compose -f docker-compose.local.yml logs -f sub2api
# Restart Sub2API only
docker-compose -f docker-compose.local.yml restart sub2api
# Update to latest version
docker-compose -f docker-compose.local.yml pull
docker-compose -f docker-compose.local.yml up -d
# Remove all data (caution!)
docker-compose -f docker-compose.local.yml down
rm -rf data/ postgres_data/ redis_data/
For named volumes version (docker-compose.yml):
# Start services
docker-compose up -d
# Stop services
docker-compose down
# View logs
docker-compose logs -f sub2api
# Restart Sub2API only
docker-compose restart sub2api
# Update to latest version
docker-compose pull
docker-compose up -d
# Remove all data (caution!)
docker-compose down -v
Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| POSTGRES_PASSWORD | Yes | - | PostgreSQL password |
| JWT_SECRET | Recommended | (auto-generated) | JWT secret (fixed for persistent sessions) |
| TOTP_ENCRYPTION_KEY | Recommended | (auto-generated) | TOTP encryption key (fixed for persistent 2FA) |
| SERVER_PORT | No | 8080 | Server port |
| ADMIN_EMAIL | No | admin@sub2api.local | Admin email |
| ADMIN_PASSWORD | No | (auto-generated) | Admin password |
| TZ | No | Asia/Shanghai | Timezone |
| GEMINI_OAUTH_CLIENT_ID | No | (builtin) | Google OAuth client ID (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
| GEMINI_OAUTH_CLIENT_SECRET | No | (builtin) | Google OAuth client secret (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
| GEMINI_OAUTH_SCOPES | No | (default) | OAuth scopes (Gemini OAuth) |
| GEMINI_QUOTA_POLICY | No | (empty) | JSON overrides for Gemini local quota simulation (Code Assist only) |
See .env.example for all available options.
Note: The docker-deploy.sh script automatically generates JWT_SECRET, TOTP_ENCRYPTION_KEY, and POSTGRES_PASSWORD for you.
Easy Migration (Local Directory Version)
When using docker-compose.local.yml, all data is stored in local directories, making migration simple:
# On source server: Stop services and create archive
cd /path/to/deployment
docker-compose -f docker-compose.local.yml down
cd ..
tar czf sub2api-complete.tar.gz deployment/
# Transfer to new server
scp sub2api-complete.tar.gz user@new-server:/path/to/destination/
# On new server: Extract and start
tar xzf sub2api-complete.tar.gz
cd deployment/
docker-compose -f docker-compose.local.yml up -d
Your entire deployment (configuration + data) is migrated!
Gemini OAuth Configuration
Sub2API supports three methods to connect to Gemini:
Method 1: Code Assist OAuth (Recommended for GCP Users)
No configuration needed - always uses the built-in Gemini CLI OAuth client (public).
- Leave GEMINI_OAUTH_CLIENT_ID and GEMINI_OAUTH_CLIENT_SECRET empty
- In the Admin UI, create a Gemini OAuth account and select the "Code Assist" type
- Complete the OAuth flow in your browser
Note: Even if you configure GEMINI_OAUTH_CLIENT_ID/GEMINI_OAUTH_CLIENT_SECRET for AI Studio OAuth, Code Assist OAuth will still use the built-in Gemini CLI client.
Requirements:
- Google account with access to Google Cloud Platform
- A GCP project (auto-detected or manually specified)
How to get Project ID (if auto-detection fails):
- Go to Google Cloud Console
- Click the project dropdown at the top of the page
- Copy the Project ID (not the project name) from the list
- Common formats: my-project-123456 or cloud-ai-companion-xxxxx
Method 2: AI Studio OAuth (For Regular Google Accounts)
Requires your own OAuth client credentials.
Step 1: Create OAuth Client in Google Cloud Console
- Go to Google Cloud Console - Credentials
- Create a new project or select an existing one
- Enable the Generative Language API:
- Go to "APIs & Services" → "Library"
- Search for "Generative Language API"
- Click "Enable"
- Configure OAuth Consent Screen (if not done):
- Go to "APIs & Services" → "OAuth consent screen"
- Choose "External" user type
- Fill in app name, user support email, developer contact
- Add scopes: https://www.googleapis.com/auth/generative-language.retriever (and optionally https://www.googleapis.com/auth/cloud-platform)
- Add test users (your Google account email)
- Create OAuth 2.0 credentials:
- Go to "APIs & Services" → "Credentials"
- Click "Create Credentials" → "OAuth client ID"
- Application type: Web application (or Desktop app)
- Name: e.g., "Sub2API Gemini"
- Authorized redirect URIs: Add http://localhost:1455/auth/callback
- Copy the Client ID and Client Secret
- ⚠️ Publish to Production (IMPORTANT):
- Go to "APIs & Services" → "OAuth consent screen"
- Click "PUBLISH APP" to move from Testing to Production
- Testing mode limitations:
- Only manually added test users can authenticate (max 100 users)
- Refresh tokens expire after 7 days
- Users must be re-added periodically
- Production mode: Any Google user can authenticate, tokens don't expire
- Note: For sensitive scopes, Google may require verification (demo video, privacy policy)
Step 2: Configure Environment Variables
GEMINI_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com
GEMINI_OAUTH_CLIENT_SECRET=GOCSPX-your-client-secret
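For the Docker deployment these go into the same .env used earlier. A sketch with placeholder values, assuming docker-compose.local.yml passes the variables through to the container as it does for the other variables in the table above:

# Append the Gemini OAuth credentials to .env (placeholder values) and recreate the service so it picks them up
echo "GEMINI_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com" >> .env
echo "GEMINI_OAUTH_CLIENT_SECRET=GOCSPX-your-client-secret" >> .env
docker-compose -f docker-compose.local.yml up -d sub2api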
Step 3: Create Account in Admin UI
- Create a Gemini OAuth account and select "AI Studio" type
- Complete the OAuth flow
- After consent, your browser will be redirected to http://localhost:1455/auth/callback?code=...&state=...
- Copy the full callback URL (recommended) or just the code and paste it back into the Admin UI
Method 3: API Key (Simplest)
- Go to Google AI Studio
- Click "Create API key"
- In Admin UI, create a Gemini API Key account
- Paste your API key (starts with AIza...)
Comparison Table
| Feature | Code Assist OAuth | AI Studio OAuth | API Key |
|---|---|---|---|
| Setup Complexity | Easy (no config) | Medium (OAuth client) | Easy |
| GCP Project Required | Yes | No | No |
| Custom OAuth Client | No (built-in) | Yes (required) | N/A |
| Rate Limits | GCP quota | Standard | Standard |
| Best For | GCP developers | Regular users needing OAuth | Quick testing |
Binary Installation
For production servers using systemd.
One-Line Installation
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install.sh | sudo bash
Manual Installation
- Download the latest release from GitHub Releases
- Extract and copy the binary to /opt/sub2api/
- Copy sub2api.service to /etc/systemd/system/
- Run:
  sudo systemctl daemon-reload
  sudo systemctl enable sub2api
  sudo systemctl start sub2api
- Open the Setup Wizard in your browser to complete configuration (the full manual sequence is sketched below)
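Putting the list above together, the manual sequence looks roughly like the sketch below. The release asset name is a placeholder and the archive layout is an assumption; copy the real download URL from the Releases page and adjust paths as needed:

# Download a release (ASSET NAME IS A PLACEHOLDER - copy the real URL from the Releases page)
curl -L -o sub2api.tar.gz https://github.com/Wei-Shaw/sub2api/releases/latest/download/<release-asset>.tar.gz
# Extract and install the binary (archive layout is an assumption)
tar xzf sub2api.tar.gz
sudo mkdir -p /opt/sub2api
sudo cp sub2api /opt/sub2api/sub2api
sudo chmod +x /opt/sub2api/sub2api
# Install the systemd unit shipped in deploy/
sudo cp sub2api.service /etc/systemd/system/
# Register and start the service
sudo systemctl daemon-reload
sudo systemctl enable sub2api
sudo systemctl start sub2api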
Commands
# Install
sudo ./install.sh
# Upgrade
sudo ./install.sh upgrade
# Uninstall
sudo ./install.sh uninstall
Service Management
# Start the service
sudo systemctl start sub2api
# Stop the service
sudo systemctl stop sub2api
# Restart the service
sudo systemctl restart sub2api
# Check status
sudo systemctl status sub2api
# View logs
sudo journalctl -u sub2api -f
# Enable auto-start on boot
sudo systemctl enable sub2api
Configuration
Server Address and Port
During installation, you will be prompted to configure the server listen address and port. These settings are stored in the systemd service file as environment variables.
To change after installation:
- Edit the systemd service:
  sudo systemctl edit sub2api
- Add or modify:
  [Service]
  Environment=SERVER_HOST=0.0.0.0
  Environment=SERVER_PORT=3000
- Reload and restart, then verify as sketched below:
  sudo systemctl daemon-reload
  sudo systemctl restart sub2api
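To confirm the override is in effect, systemd can print the unit's effective environment, and the listener can be checked directly:

# Show the environment systemd will pass to the service
systemctl show sub2api -p Environment
# Confirm the process is listening on the new port (3000 in the example above)
sudo ss -tlnp | grep 3000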
Gemini OAuth Configuration
If you need to use AI Studio OAuth for Gemini accounts, add the OAuth client credentials to the systemd service file:
- Edit the service file:
  sudo nano /etc/systemd/system/sub2api.service
- Add your OAuth credentials in the [Service] section (after the existing Environment= lines):
  Environment=GEMINI_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com
  Environment=GEMINI_OAUTH_CLIENT_SECRET=GOCSPX-your-client-secret
- Reload and restart:
  sudo systemctl daemon-reload
  sudo systemctl restart sub2api
Note: Code Assist OAuth does not require any configuration - it uses the built-in Gemini CLI client. See the Gemini OAuth Configuration section above for detailed setup instructions.
Application Configuration
The main config file is at /etc/sub2api/config.yaml (created by Setup Wizard).
Prerequisites
- Linux server (Ubuntu 20.04+, Debian 11+, CentOS 8+, etc.)
- PostgreSQL 14+
- Redis 6+
- systemd
Directory Structure
/opt/sub2api/
├── sub2api # Main binary
├── sub2api.backup # Backup (after upgrade)
└── data/ # Runtime data
/etc/sub2api/
└── config.yaml # Configuration file
Troubleshooting
Docker
For local directory version:
# Check container status
docker-compose -f docker-compose.local.yml ps
# View detailed logs
docker-compose -f docker-compose.local.yml logs --tail=100 sub2api
# Check database connection
docker-compose -f docker-compose.local.yml exec postgres pg_isready
# Check Redis connection
docker-compose -f docker-compose.local.yml exec redis redis-cli ping
# Restart all services
docker-compose -f docker-compose.local.yml restart
# Check data directories
ls -la data/ postgres_data/ redis_data/
For named volumes version:
# Check container status
docker-compose ps
# View detailed logs
docker-compose logs --tail=100 sub2api
# Check database connection
docker-compose exec postgres pg_isready
# Check Redis connection
docker-compose exec redis redis-cli ping
# Restart all services
docker-compose restart
Binary Install
# Check service status
sudo systemctl status sub2api
# View recent logs
sudo journalctl -u sub2api -n 50
# Check config file
sudo cat /etc/sub2api/config.yaml
# Check PostgreSQL
sudo systemctl status postgresql
# Check Redis
sudo systemctl status redis
Common Issues
- Port already in use: Change SERVER_PORT in .env or the systemd config (see the diagnostic sketch below)
- Database connection failed: Check that PostgreSQL is running and the credentials are correct
- Redis connection failed: Check that Redis is running and the password is correct
- Permission denied: Ensure proper file ownership for the binary install
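For the port conflict in particular, it helps to see which process already holds the port before changing SERVER_PORT; a quick check (8080 is the default from the table above):

# Find the process currently bound to the default port
sudo ss -tlnp | grep ':8080'
# or, if lsof is installed
sudo lsof -i :8080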
TLS Fingerprint Configuration
Sub2API supports TLS fingerprint simulation to make requests appear as if they come from the official Claude CLI (Node.js client).
💡 Tip: Visit tls.sub2api.org to get TLS fingerprint information for different devices and browsers.
Default Behavior
- Built-in claude_cli_v2 profile simulates Node.js 20.x + OpenSSL 3.x
- JA3 Hash: 1a28e69016765d92e3b381168d68922c
- JA4: t13d5911h1_a33745022dd6_1f22a2ca17c4
- Profile selection: accountID % profileCount (worked example below)
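As a worked example of the selection rule: with 3 profiles configured, an account with ID 7 is assigned profile index 7 % 3 = 1, while an account with ID 9 gets index 9 % 3 = 0, so accounts are spread deterministically across the configured profiles. (Zero-based indexing and the ordering of profiles are assumptions here; the formula itself is the one stated above.)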
Configuration
gateway:
tls_fingerprint:
enabled: true # Global switch
profiles:
# Simple profile (uses default cipher suites)
profile_1:
name: "Profile 1"
# Profile with custom cipher suites (use compact array format)
profile_2:
name: "Profile 2"
cipher_suites: [4866, 4867, 4865, 49199, 49195, 49200, 49196]
curves: [29, 23, 24]
point_formats: [0]
# Another custom profile
profile_3:
name: "Profile 3"
cipher_suites: [4865, 4866, 4867, 49199, 49200]
curves: [29, 23, 24, 25]
Profile Fields
| Field | Type | Description |
|---|---|---|
| name | string | Display name (required) |
| cipher_suites | []uint16 | Cipher suites in decimal. Empty = default |
| curves | []uint16 | Elliptic curves in decimal. Empty = default |
| point_formats | []uint8 | EC point formats. Empty = default |
Common Values Reference
Cipher Suites (TLS 1.3): 4865 (AES_128_GCM), 4866 (AES_256_GCM), 4867 (CHACHA20)
Cipher Suites (TLS 1.2): 49195, 49196, 49199, 49200 (ECDHE variants)
Curves: 29 (X25519), 23 (P-256), 24 (P-384), 25 (P-521)
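TLS registries and packet captures usually list these IDs in hexadecimal, while the profile fields above take decimal values; the conversion is direct. For example, TLS_AES_128_GCM_SHA256 is 0x1301 in the IANA registry, and 0x1301 = 4865. A quick shell conversion:

# Convert a hex cipher-suite or curve ID (as shown by IANA or Wireshark) to the decimal value expected above
printf '%d\n' 0x1301   # -> 4865 (TLS_AES_128_GCM_SHA256)
printf '%d\n' 0xC02F   # -> 49199 (an ECDHE TLS 1.2 suite)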