t0ng7u d0fb54fbfe feat(web): add model prefill group quick-add buttons to channel models selector
- Added support to fetch and render “model prefill groups” in `EditChannelModal.jsx`
- Users can now click a group button to instantly merge that group’s models into the models Select
- Mirrors the prefill-group UX used for tags/endpoints in `EditModelModal.jsx`

Details
- UI/UX:
  - Renders one button per model group inside the models field’s extra actions
  - Clicking a button merges its items into the selected models (trimmed, deduplicated), updating immediately
  - Non-destructive and works alongside existing actions (fill related/all models, fetch upstream, clear, copy)
- API:
  - GET `/api/prefill_group?type=model`
  - Handles `items` as either an array or a JSON-encoded string array for robustness (see the sketch after this list)
  - If request fails or returns no groups, buttons are simply not shown
- i18n:
  - Reuses existing i18n; group names come from backend and are displayed as-is
- Performance:
  - Simple set merge; negligible overhead
- Backward compatibility:
  - No changes required on the backend or elsewhere; feature is additive
- Testing (manual):
  1) Open channel modal (new or edit) and navigate to the Models section
  2) Confirm model group buttons render when groups are configured
  3) Click a group button → models Select updates with merged models (no duplicates)
  4) Verify other actions (fill related/all, fetch upstream, clear, copy) still work
  5) Close/reopen modal → state resets as expected
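
A minimal sketch of the items normalization and dedupe merge described above (helper names are illustrative, not the shipped code):

// Accept `items` either as an array or as a JSON-encoded string array.
const normalizeItems = (items) => {
  if (Array.isArray(items)) return items;
  try {
    const parsed = JSON.parse(items || '[]');
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return [];
  }
};

// Merge a group's items into the current selection: trim, drop empties, dedupe.
const mergeGroupModels = (currentModels, groupItems) => {
  const merged = new Set(currentModels);
  normalizeItems(groupItems)
    .map((m) => String(m).trim())
    .filter(Boolean)
    .forEach((m) => merged.add(m));
  return Array.from(merged);
};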

Implementation
- `web/src/components/table/channels/modals/EditChannelModal.jsx`
  - Added `modelGroups` state and `fetchModelGroups()` (GET `/api/prefill_group?type=model`)
  - Invoked `fetchModelGroups()` when the modal opens
  - Rendered group buttons in the models Select `extraText`, merging group items into current selection
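
A sketch of the wiring, reusing `mergeGroupModels` from the sketch above and assuming a request helper `API`, a `Button` component, and state names (`visible`, `inputs.models`, `handleModelsChange`) that are illustrative rather than the exact identifiers in the file; the group field `id` is also an assumption:

// Fetch groups when the modal opens; on failure no buttons are rendered.
const [modelGroups, setModelGroups] = useState([]);

const fetchModelGroups = async () => {
  try {
    const res = await API.get('/api/prefill_group?type=model');
    setModelGroups(res?.data?.data || []);
  } catch {
    setModelGroups([]);
  }
};

useEffect(() => {
  if (visible) fetchModelGroups();
}, [visible]);

// Rendered inside the models Select `extraText`: one button per group,
// merging that group's items into the current selection on click.
const groupButtons = modelGroups.map((g) => (
  <Button
    key={g.id}
    size="small"
    onClick={() => handleModelsChange(mergeGroupModels(inputs.models, g.items))}
  >
    {g.name}
  </Button>
));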

Chore
- Verified no new linter errors were introduced.

New API

🍥 Next-Generation Large Model Gateway and AI Asset Management System

📝 Project Description

Note: This is an open-source project developed based on One API.

📚 Documentation

For detailed documentation, please visit our official Wiki: https://docs.newapi.pro/

You can also access the AI-generated DeepWiki: Ask DeepWiki

Key Features

New API offers a wide range of features; please refer to Features Introduction for details:

  1. 🎨 Brand new UI interface
  2. 🌍 Multi-language support
  3. 💰 Online recharge functionality (YiPay)
  4. 🔍 Support for querying usage quotas with keys (works with neko-api-key-tool)
  5. 🔄 Compatible with the original One API database
  6. 💵 Support for pay-per-use model pricing
  7. ⚖️ Support for weighted random channel selection
  8. 📈 Data dashboard (console)
  9. 🔒 Token grouping and model restrictions
  10. 🤖 Support for more authorization login methods (LinuxDO, Telegram, OIDC)
  11. 🔄 Support for Rerank models (Cohere and Jina), API Documentation
  12. Support for OpenAI Realtime API (including Azure channels), API Documentation
  13. Support for Claude Messages format, API Documentation
  14. Support for entering chat interface via /chat2link route
  15. 🧠 Support for setting reasoning effort through model name suffixes (see the request sketch after this list):
    1. OpenAI o-series models
      • Add -high suffix for high reasoning effort (e.g.: o3-mini-high)
      • Add -medium suffix for medium reasoning effort (e.g.: o3-mini-medium)
      • Add -low suffix for low reasoning effort (e.g.: o3-mini-low)
    2. Claude thinking models
      • Add -thinking suffix to enable thinking mode (e.g.: claude-3-7-sonnet-20250219-thinking)
  16. 🔄 Thinking-to-content functionality
  17. 🔄 Model rate limiting for users
  18. 💰 Cache billing support, which allows billing at a set ratio when the cache is hit (a worked example follows this list):
    1. Set the Prompt Cache Ratio option in System Settings-Operation Settings
    2. Set Prompt Cache Ratio in the channel, range 0-1, e.g., setting to 0.5 means billing at 50% when cache is hit
    3. Supported channels:
      • OpenAI
      • Azure
      • DeepSeek
      • Claude
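
A minimal request sketch for the reasoning-effort suffixes in feature 15, assuming an OpenAI-compatible chat completions endpoint; BASE_URL and API_KEY are placeholders, not values from this project:

// Placeholders: point these at your own New API deployment and token.
const BASE_URL = 'https://your-new-api-host';
const API_KEY = 'sk-xxxx';

// The reasoning-effort suffix is simply part of the model name sent to the
// OpenAI-compatible chat completions endpoint.
async function askWithHighEffort() {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: 'o3-mini-high', // o3-mini with high reasoning effort
      messages: [{ role: 'user', content: 'Hello' }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}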
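
And a worked illustration of the cache-ratio semantics in feature 18 (a hypothetical helper for illustration only, not the project's actual billing code):

// Tokens that hit the prompt cache are charged at promptCacheRatio;
// the remaining prompt tokens are charged at the normal rate.
function billedPromptTokens(promptTokens, cachedTokens, promptCacheRatio) {
  const uncached = promptTokens - cachedTokens;
  return uncached + cachedTokens * promptCacheRatio;
}

// 1000 prompt tokens, 800 of them cached, ratio 0.5 → billed as 200 + 400 = 600 tokens.
console.log(billedPromptTokens(1000, 800, 0.5)); // 600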

Model Support

This version supports multiple models; please refer to API Documentation-Relay Interface for details:

  1. Third-party models gpts (gpt-4-gizmo-*)
  2. Third-party channel Midjourney-Proxy(Plus) interface, API Documentation
  3. Third-party channel Suno API interface, API Documentation
  4. Custom channels, supporting full call address input
  5. Rerank models (Cohere and Jina), API Documentation
  6. Claude Messages format, API Documentation
  7. Dify (currently only chatflow is supported)

Environment Variable Configuration

For detailed configuration instructions, please refer to Installation Guide-Environment Variables Configuration:

  • GENERATE_DEFAULT_TOKEN: Whether to generate initial tokens for newly registered users, default is false
  • STREAMING_TIMEOUT: Streaming response timeout, default is 120 seconds
  • DIFY_DEBUG: Whether to output workflow and node information for Dify channels, default is true
  • FORCE_STREAM_OPTION: Whether to override client stream_options parameter, default is true
  • GET_MEDIA_TOKEN: Whether to count image tokens, default is true
  • GET_MEDIA_TOKEN_NOT_STREAM: Whether to count image tokens in non-streaming cases, default is true
  • UPDATE_TASK: Whether to update asynchronous tasks (Midjourney, Suno), default is true
  • COHERE_SAFETY_SETTING: Cohere model safety settings, options are NONE, CONTEXTUAL, STRICT, default is NONE
  • GEMINI_VISION_MAX_IMAGE_NUM: Maximum number of images for Gemini models, default is 16
  • MAX_FILE_DOWNLOAD_MB: Maximum file download size in MB, default is 20
  • CRYPTO_SECRET: Encryption key used for encrypting database content
  • AZURE_DEFAULT_API_VERSION: Azure channel default API version, default is 2025-04-01-preview
  • NOTIFICATION_LIMIT_DURATION_MINUTE: Notification limit duration, default is 10 minutes
  • NOTIFY_LIMIT_COUNT: Maximum number of user notifications within the specified duration, default is 2
  • ERROR_LOG_ENABLED: Whether to record and display error logs, default is false

Deployment

For detailed deployment guides, please refer to Installation Guide-Deployment Methods:

Tip: The latest Docker image is calciumion/new-api:latest

Multi-machine Deployment Considerations

  • Environment variable SESSION_SECRET must be set, otherwise login status will be inconsistent across multiple machines
  • If sharing Redis, CRYPTO_SECRET must be set, otherwise Redis content cannot be accessed across multiple machines

Deployment Requirements

  • Local database (default): SQLite (Docker deployment must mount the /data directory)
  • Remote database: MySQL version >= 5.7.8, PgSQL version >= 9.6

Deployment Methods

Using BaoTa Panel Docker Feature

Install BaoTa Panel (version 9.2.0 or above), find New-API in the application store, and install it. A tutorial with images is available.

Using Docker Compose

# Download the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d

Using Docker Image Directly

# Using SQLite
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

# Using MySQL
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

Channel Retry and Cache

Channel retry functionality has been implemented; you can set the number of retries in Settings->Operation Settings->General Settings. It is recommended to enable caching.

Cache Configuration Method

  1. REDIS_CONN_STRING: Set Redis as cache
  2. MEMORY_CACHE_ENABLED: Enable memory cache (no need to set manually if Redis is set)

API Documentation

For detailed API documentation, please refer to API Documentation.

Other projects based on New API:

  • new-api-horizon: High-performance optimized version of New API
  • VoAPI: Frontend beautified version based on New API

Help and Support

If you have any questions, please refer to Help and Support.

🤝 Trusted Partners

Cherry Studio | Peking University | UCloud

(In no particular order)

🌟 Star History

Star History Chart
