中文 | English
📝 Project Description
Note
This is an open-source project developed based on One API
Important
- This project is for personal learning purposes only, with no guarantee of stability or technical support.
- Users must comply with OpenAI's Terms of Use and applicable laws and regulations, and must not use it for illegal purposes.
- According to the "Interim Measures for the Management of Generative Artificial Intelligence Services", please do not provide any unregistered generative AI services to the public in China.
🤝 Trusted Partners
In no particular order
📚 Documentation
For detailed documentation, please visit our official Wiki: https://docs.newapi.pro/
You can also consult the AI-generated DeepWiki.
✨ Key Features
New API offers a wide range of features; please refer to Features Introduction for details:
- 🎨 Brand-new UI
- 🌍 Multi-language support
- 💰 Online top-up functionality (YiPay)
- 🔍 Support for querying usage quotas with keys (works with neko-api-key-tool)
- 🔄 Compatible with the original One API database
- 💵 Support for pay-per-use model pricing
- ⚖️ Support for weighted random channel selection
- 📈 Data dashboard (console)
- 🔒 Token grouping and model restrictions
- 🤖 Support for more authorization login methods (LinuxDO, Telegram, OIDC)
- 🔄 Support for Rerank models (Cohere and Jina), API Documentation
- ⚡ Support for OpenAI Realtime API (including Azure channels), API Documentation
- ⚡ Support for Claude Messages format, API Documentation
- Support for entering the chat interface via the /chat2link route
- 🧠 Support for setting reasoning effort through model name suffixes (see the example after this list):
  - OpenAI o-series models
    - Add the -high suffix for high reasoning effort (e.g., o3-mini-high)
    - Add the -medium suffix for medium reasoning effort (e.g., o3-mini-medium)
    - Add the -low suffix for low reasoning effort (e.g., o3-mini-low)
  - Claude thinking models
    - Add the -thinking suffix to enable thinking mode (e.g., claude-3-7-sonnet-20250219-thinking)
- 🔄 Thinking-to-content functionality
- 🔄 Model rate limiting for users
- 💰 Cache billing support, which allows billing at a set ratio when the cache is hit:
  - Set the Prompt Cache Ratio option in System Settings - Operation Settings
  - Set Prompt Cache Ratio on the channel, range 0-1; e.g., setting it to 0.5 means billing at 50% when the cache is hit
  - Supported channels:
    - OpenAI
    - Azure
    - DeepSeek
    - Claude
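For example, a request for high reasoning effort goes through the standard OpenAI-compatible chat endpoint with the suffixed model name. This is a minimal sketch, assuming a deployment at localhost:3000 and a placeholder token in $NEW_API_KEY:
# Request high reasoning effort by appending -high to the model name
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEW_API_KEY" \
  -d '{
    "model": "o3-mini-high",
    "messages": [{"role": "user", "content": "Hello"}]
  }'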
Model Support
This version supports multiple models; please refer to API Documentation-Relay Interface for details:
- Third-party models gpts (gpt-4-gizmo-*)
- Third-party channel Midjourney-Proxy(Plus) interface, API Documentation
- Third-party channel Suno API interface, API Documentation
- Custom channels, with support for entering a full API call URL
- Rerank models (Cohere and Jina), API Documentation
- Claude Messages format, API Documentation (see the example after this list)
- Dify, currently only supports chatflow
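As an illustration, a Claude Messages request can be relayed in the standard Anthropic shape. This is a minimal sketch, assuming a deployment at localhost:3000, a placeholder key in $NEW_API_KEY, and an example model name; depending on configuration, a standard Authorization header may also be accepted:
# Anthropic-style Messages request relayed through New API
curl http://localhost:3000/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $NEW_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}]
  }'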
Environment Variable Configuration
For detailed configuration instructions, please refer to Installation Guide-Environment Variables Configuration:
- GENERATE_DEFAULT_TOKEN: Whether to generate initial tokens for newly registered users, default is false
- STREAMING_TIMEOUT: Streaming response timeout, default is 300 seconds
- DIFY_DEBUG: Whether to output workflow and node information for Dify channels, default is true
- FORCE_STREAM_OPTION: Whether to override the client's stream_options parameter, default is true
- GET_MEDIA_TOKEN: Whether to count image tokens, default is true
- GET_MEDIA_TOKEN_NOT_STREAM: Whether to count image tokens in non-streaming cases, default is true
- UPDATE_TASK: Whether to update asynchronous tasks (Midjourney, Suno), default is true
- COHERE_SAFETY_SETTING: Cohere model safety setting, options are NONE, CONTEXTUAL, STRICT, default is NONE
- GEMINI_VISION_MAX_IMAGE_NUM: Maximum number of images for Gemini models, default is 16
- MAX_FILE_DOWNLOAD_MB: Maximum file download size in MB, default is 20
- CRYPTO_SECRET: Encryption key used for encrypting database content
- AZURE_DEFAULT_API_VERSION: Azure channel default API version, default is 2025-04-01-preview
- NOTIFICATION_LIMIT_DURATION_MINUTE: Notification rate-limit window, default is 10 minutes
- NOTIFY_LIMIT_COUNT: Maximum number of user notifications within the specified duration, default is 2
- ERROR_LOG_ENABLED: Whether to record and display error logs, default is false
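These variables can be passed straight to the container at startup; for example (a sketch with illustrative values and paths):
# Override a few defaults when starting the container
docker run --name new-api -d --restart always -p 3000:3000 \
  -e STREAMING_TIMEOUT=120 \
  -e GENERATE_DEFAULT_TOKEN=true \
  -e ERROR_LOG_ENABLED=true \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest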
Deployment
For detailed deployment guides, please refer to Installation Guide-Deployment Methods:
Tip
Latest Docker image:
calciumion/new-api:latest
Multi-machine Deployment Considerations
- The environment variable SESSION_SECRET must be set, otherwise login status will be inconsistent across machines
- If sharing Redis, CRYPTO_SECRET must be set, otherwise Redis content cannot be read across machines
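A sketch of a per-node startup command (the secrets and hosts are placeholders; the two secret values must be identical on every machine):
# Every node shares the same SESSION_SECRET, CRYPTO_SECRET, database, and Redis
docker run --name new-api -d --restart always -p 3000:3000 \
  -e SESSION_SECRET=your_fixed_session_secret \
  -e CRYPTO_SECRET=your_fixed_crypto_secret \
  -e SQL_DSN="root:123456@tcp(db-host:3306)/oneapi" \
  -e REDIS_CONN_STRING="redis://default:yourpassword@redis-host:6379" \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest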
Deployment Requirements
- Local database (default): SQLite (Docker deployments must mount the /data directory)
- Remote database: MySQL version >= 5.7.8, or PgSQL version >= 9.6
Deployment Methods
Using BaoTa Panel Docker Feature
Install BaoTa Panel (version 9.2.0 or above), find New-API in the application store, and install it. Tutorial with images
Using Docker Compose (Recommended)
# Download the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d
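For reference, a minimal docker-compose.yml looks roughly like this (a sketch, not the full file shipped in the repository):
services:
  new-api:
    image: calciumion/new-api:latest
    restart: always
    ports:
      - "3000:3000"
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - ./data:/data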
Using Docker Image Directly
# Using SQLite
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
# Using MySQL
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
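After the container starts, you can quickly check that the service is responding (assuming the One API-compatible /api/status endpoint; adjust host and port to your deployment):
# Should return a JSON status payload once the service is up
curl http://localhost:3000/api/status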
Channel Retry and Cache
Channel retry is supported; you can set the number of retries in Settings->Operation Settings->General Settings. It is recommended to enable caching alongside retries.
Cache Configuration Method
- REDIS_CONN_STRING: Use Redis as the cache
- MEMORY_CACHE_ENABLED: Enable the in-memory cache (no need to set manually if Redis is configured)
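For example, enabling the Redis cache at container startup (the connection string is illustrative):
# Use Redis as the cache backend
docker run --name new-api -d --restart always -p 3000:3000 \
  -e REDIS_CONN_STRING="redis://default:yourpassword@redis-host:6379" \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest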
API Documentation
For detailed API documentation, please refer to API Documentation.
Related Projects
- One API: Original project
- Midjourney-Proxy: Midjourney interface support
- chatnio: Next-generation one-stop AI solution for both business (B-end) and consumer (C-end) scenarios
- neko-api-key-tool: Query usage quota with key
Other projects based on New API:
- new-api-horizon: High-performance optimized version of New API
- VoAPI: Frontend beautified version based on New API
Help and Support
If you have any questions, please refer to Help and Support.