Commit Graph

434 Commits

Author SHA1 Message Date
1808837298@qq.com
68097c132d feat: Improve decimal precision for quota and payment calculations
- Added github.com/shopspring/decimal for precise floating-point calculations
- Refactored quota and payment calculations in multiple files to use decimal arithmetic
- Updated go.mod and go.sum to include decimal library
- Improved precision in topup, relay, and quota service calculations
- Added support for more OpenAI model variants in cache ratio settings
2025-03-08 21:55:50 +08:00
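The decimal-precision commit above swaps float arithmetic for github.com/shopspring/decimal in the quota and payment paths. As a rough illustration of the idea only — the function name and values below are invented for this sketch, not taken from the repository:

```go
package main

import (
	"fmt"

	"github.com/shopspring/decimal"
)

// Illustrative only: compute a quota charge as ratio * tokens without
// accumulating binary floating-point error.
func quotaFromTokens(tokens int64, ratio string) decimal.Decimal {
	return decimal.RequireFromString(ratio).Mul(decimal.NewFromInt(tokens))
}

func main() {
	q := quotaFromTokens(1234, "0.000002")
	fmt.Println(q.String()) // 0.002468, exact in base 10
}
```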
1808837298@qq.com
a9bfcb0daf feat: Add prompt cache hit tokens support for DeepSeek channel #406 2025-03-08 16:50:53 +08:00
1808837298@qq.com
bb848b2fe0 refactor: Improve quota calculation precision using floating-point arithmetic 2025-03-08 16:44:08 +08:00
1808837298@qq.com
4f194f4e6a feat: Implement cache token ratio for more precise token pricing 2025-03-08 01:30:50 +08:00
1808837298@qq.com
81137e0533 refactor: Remove redundant user quota retrieval in audio relay 2025-03-07 19:59:00 +08:00
Sh1n3zZ
894dce7366 fix: possible incomplete return of the think field and incorrect occurrences of the reasoning field 2025-03-06 19:20:29 +08:00
Sh1n3zZ
b95142bbac fix: adapting return format for openrouter think content (#793) 2025-03-06 19:16:26 +08:00
1808837298@qq.com
e3f9ef1894 fix: error NotifyRootUser #812 2025-03-06 15:56:42 +08:00
1808837298@qq.com
558e625a01 fix: Prevent resource leaks by adding body close in stream handlers 2025-03-05 19:51:22 +08:00
1808837298@qq.com
37a83ecc33 refactor: Centralize stream handling and helper functions in relay package 2025-03-05 19:47:41 +08:00
1808837298@qq.com
8deab221f9 fix: vertex claude 2025-03-05 16:43:40 +08:00
1808837298@qq.com
cbdf26bf2c feat: Add context-aware goroutine pool for safer concurrent operations 2025-03-04 18:42:34 +08:00
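For the context-aware goroutine pool commit, a minimal sketch of the pattern: a bounded pool that refuses new work once its context is cancelled. The Pool type and its methods are illustrative, not the project's actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Pool is an illustrative context-aware goroutine pool: Submit rejects work
// once the context is cancelled, and Wait blocks until running tasks finish.
type Pool struct {
	ctx context.Context
	sem chan struct{} // bounds concurrency
	wg  sync.WaitGroup
}

func NewPool(ctx context.Context, size int) *Pool {
	return &Pool{ctx: ctx, sem: make(chan struct{}, size)}
}

func (p *Pool) Submit(task func(context.Context)) error {
	select {
	case <-p.ctx.Done():
		return p.ctx.Err() // do not start new work after cancellation
	case p.sem <- struct{}{}:
	}
	p.wg.Add(1)
	go func() {
		defer func() { <-p.sem; p.wg.Done() }()
		task(p.ctx)
	}()
	return nil
}

func (p *Pool) Wait() { p.wg.Wait() }

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	pool := NewPool(ctx, 4)
	for i := 0; i < 8; i++ {
		i := i
		_ = pool.Submit(func(context.Context) { fmt.Println("task", i) })
	}
	pool.Wait()
}
```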
1808837298@qq.com
eb46b71a71 fix: Ignore EOF errors in OpenAI stream scanner 2025-03-04 17:35:41 +08:00
1808837298@qq.com
b00dd8b405 fix: Handle scanner errors in OpenAI relay stream handler 2025-03-04 17:10:56 +08:00
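The stream-handler fixes above (closing the upstream body, handling scanner errors, tolerating EOF) share one small pattern. A minimal sketch, assuming an SSE-style upstream response; the function name and buffer sizes are this sketch's own choices:

```go
package relay

import (
	"bufio"
	"errors"
	"io"
	"log"
	"net/http"
)

// handleStream sketches the combined pattern: always close the upstream body,
// scan it line by line, treat EOF-style errors as a normal end of stream, and
// surface anything else.
func handleStream(resp *http.Response) {
	defer resp.Body.Close() // avoid leaking the upstream connection

	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for scanner.Scan() {
		line := scanner.Text()
		_ = line // forward the SSE line to the client here
	}
	if err := scanner.Err(); err != nil &&
		!errors.Is(err, io.EOF) && !errors.Is(err, io.ErrUnexpectedEOF) {
		log.Printf("stream scan error: %v", err)
	}
}
```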
1808837298@qq.com
b1be64bcf3 fix: vertex claude 2025-03-03 20:06:08 +08:00
1808837298@qq.com
d042a1bd55 refactor: Improve channel testing and model price handling 2025-03-02 15:47:12 +08:00
1808837298@qq.com
7dbb6b017c feat: Add self-use mode for model ratio and price configuration
- Introduce `SelfUseModeEnabled` setting to allow flexible model ratio configuration
- Update error handling to provide more informative messages when model ratios are not set
- Modify pricing and relay logic to support self-use mode
- Add UI toggle for enabling self-use mode in operation settings
- Implement fallback mechanism for model ratios when self-use mode is enabled
2025-03-01 21:13:48 +08:00
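For the self-use mode commit, the fallback mechanism it describes can be pictured as follows; the map, the default value, and the function name are hypothetical, not the repository's identifiers:

```go
package pricing

// Illustrative fallback for self-use mode: an unset model ratio falls back to
// a default instead of producing a "model ratio not set" error.
var modelRatio = map[string]float64{
	"gpt-4o": 2.5,
}

const defaultSelfUseRatio = 30.0 // hypothetical fallback value

func getModelRatio(model string, selfUseMode bool) (float64, bool) {
	if r, ok := modelRatio[model]; ok {
		return r, true
	}
	if selfUseMode {
		return defaultSelfUseRatio, true // fallback instead of rejecting the request
	}
	return 0, false // caller reports the missing ratio to the user
}
```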
1808837298@qq.com
ce1854847b fix: Enhance error message for missing model ratio configuration 2025-03-01 17:02:31 +08:00
1808837298@qq.com
a5085014cc fix: Improve model ratio and price management
- Update error message for missing model ratio to be more user-friendly
- Modify ModelRatioNotSetEditor to filter models without price or ratio
- Enhance model data initialization with fallback values
2025-02-28 23:28:47 +08:00
1808837298@qq.com
18d3706ff8 feat: Add new model management features
- Implement `/api/channel/models_enabled` endpoint to retrieve enabled models
- Add `EnabledListModels` handler in controller
- Create new `ModelRatioNotSetEditor` component for managing unset model ratios
- Update router to include new models_enabled route
- Add internationalization support for new model management UI
- Include GPT-4.5 preview model in OpenAI model list
2025-02-28 21:13:30 +08:00
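The commit above names a /api/channel/models_enabled route and an EnabledListModels handler. Assuming a Gin router (which this project appears to use), the handler's shape might look roughly like this; the response fields and the hard-coded model list are placeholders:

```go
package controller

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// EnabledListModels sketches the handler named in the commit; the data source
// is a placeholder (a real implementation would read the enabled channels).
func EnabledListModels(c *gin.Context) {
	models := []string{"gpt-4o", "gpt-4.5-preview"} // placeholder list
	c.JSON(http.StatusOK, gin.H{
		"success": true,
		"data":    models,
	})
}

// Registration, roughly:
//   apiRouter.GET("/channel/models_enabled", controller.EnabledListModels)
```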
1808837298@qq.com
152950497e fix 2025-02-28 20:28:44 +08:00
1808837298@qq.com
d6fd50e382 feat: add new GPT-4.5 preview model ratios 2025-02-28 19:17:15 +08:00
1808837298@qq.com
cfd3f6c073 feat: Enhance Claude default max tokens configuration
- Replace ThinkingAdapterMaxTokens with a more flexible DefaultMaxTokens map
- Add support for model-specific default max tokens configuration
- Update relay and web interface to use the new configuration approach
- Implement a fallback mechanism for default max tokens
2025-02-28 17:53:08 +08:00
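For the DefaultMaxTokens change, a sketch of a per-model map with a fallback entry, mirroring the move from a single ThinkingAdapterMaxTokens value to a map; the concrete numbers and the "default" key are examples only:

```go
package claude

// Illustrative per-model default max tokens with a fallback entry.
var defaultMaxTokens = map[string]int{
	"claude-3-7-sonnet-20250219": 8192,
	"default":                    4096,
}

func getDefaultMaxTokens(model string) int {
	if v, ok := defaultMaxTokens[model]; ok {
		return v
	}
	return defaultMaxTokens["default"] // fallback when the model is not listed
}
```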
1808837298@qq.com
45c56b5ded feat: Implement model-specific headers configuration for Claude 2025-02-28 16:47:31 +08:00
1808837298@qq.com
d0bc8d17d1 feat: Enhance Claude MaxTokens configuration handling
- Update Claude relay to set default MaxTokens dynamically
- Modify web interface to clarify default MaxTokens input purpose
- Improve token configuration logic for thinking adapter models
2025-02-27 22:10:29 +08:00
1808837298@qq.com
3a18c0ce9f fix: Correct model request configuration in Vertex Claude adaptor 2025-02-27 20:51:10 +08:00
1808837298@qq.com
929668bead feat: Refactor model configuration management with new config system
- Introduce a new configuration management approach for model-specific settings
- Update Gemini settings to use the new config system with more flexible management
- Add support for dynamic configuration updates in option handling
- Modify Claude and Vertex adaptors to use new configuration methods
- Enhance web interface to support namespaced configuration keys
2025-02-27 20:49:34 +08:00
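The model-configuration commit above describes namespaced configuration keys used by the Gemini, Claude, and Vertex adaptors. A minimal sketch of such a store, with an API invented for this illustration:

```go
package config

import "sync"

// Illustrative namespaced option store; keys such as "gemini.safety_settings"
// are examples of the namespacing idea, not the project's actual keys.
type Store struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewStore() *Store { return &Store{data: make(map[string]string)} }

func (s *Store) Set(namespace, key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[namespace+"."+key] = value
}

func (s *Store) Get(namespace, key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[namespace+"."+key]
	return v, ok
}
```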
1808837298@qq.com
5f0b3f6d6f fix: Improve AWS Claude adaptor request conversion error handling #796 2025-02-27 14:57:00 +08:00
1808837298@qq.com
19a318c943 init openrouter adaptor 2025-02-27 00:01:21 +08:00
1808837298@qq.com
13ab0f8e4f fix: gemini&claude tool call format #795 #766 2025-02-26 23:56:10 +08:00
1808837298@qq.com
6d8d40e67b fix: claude tool call format #795 #766 2025-02-26 23:40:16 +08:00
1808837298@qq.com
287caf8e38 feat: Add Jina reranking support for OpenAI adaptor 2025-02-26 21:46:06 +08:00
1808837298@qq.com
ed4e1c2332 fix: Update Gemini safety settings category 2025-02-26 19:18:00 +08:00
1808837298@qq.com
bf80d71ddf feat: Add Gemini version settings configuration support (close #568) 2025-02-26 18:19:09 +08:00
1808837298@qq.com
e19b244e73 feat: Add Gemini safety settings configuration support (close #703) 2025-02-26 16:54:43 +08:00
1808837298@qq.com
f451268830 feat: Update Claude relay temperature setting 2025-02-25 22:01:05 +08:00
1808837298@qq.com
069f2672c1 refactor: Enhance user context and quota management
- Add new context keys for user-related information
- Modify user cache and authentication middleware to populate context
- Refactor quota and notification services to use context-based user data
- Remove redundant database queries by leveraging context information
- Update various components to use new context-based user retrieval methods
2025-02-25 20:56:16 +08:00
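The user-context refactor above stores user data (ID, group, quota) in the request context once, so quota and notification code can avoid repeated database reads. A rough sketch with Gin, using key names invented for this illustration:

```go
package middleware

import "github.com/gin-gonic/gin"

// Illustrative context keys set by authentication middleware and read by
// downstream services instead of re-querying the database.
const (
	ctxKeyUserID    = "user_id"
	ctxKeyUserGroup = "user_group"
	ctxKeyUserQuota = "user_quota"
)

func setUserContext(c *gin.Context, id int, group string, quota int64) {
	c.Set(ctxKeyUserID, id)
	c.Set(ctxKeyUserGroup, group)
	c.Set(ctxKeyUserQuota, quota)
}

func userIDFromContext(c *gin.Context) (int, bool) {
	v, ok := c.Get(ctxKeyUserID)
	if !ok {
		return 0, false
	}
	id, ok := v.(int)
	return id, ok
}
```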
1808837298@qq.com
da4d1861fe fix: Adjust Claude thinking mode request parameters 2025-02-25 16:52:45 +08:00
1808837298@qq.com
607e3206b3 Merge branch 'main' into thinking
# Conflicts:
#	relay/channel/claude/dto.go
2025-02-25 15:21:22 +08:00
1808837298@qq.com
83feb492fb feat: Add support for Claude thinking parameter in request 2025-02-25 14:37:03 +08:00
MartialBE
4f212be45c feat: Add Claude 3.7 Sonnet thinking mode support 2025-02-25 14:10:43 +08:00
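For the Claude 3.7 Sonnet thinking-mode commits, the upstream request gains a thinking block. The JSON shape below ("thinking": {"type": "enabled", "budget_tokens": N}) follows Anthropic's published Messages API; the Go struct names are this sketch's own and need not match the repository's DTOs:

```go
package claude

import "encoding/json"

// Illustrative request fields for Claude extended thinking.
type Thinking struct {
	Type         string `json:"type"`                    // "enabled"
	BudgetTokens int    `json:"budget_tokens,omitempty"` // tokens reserved for reasoning
}

type Request struct {
	Model     string    `json:"model"`
	MaxTokens int       `json:"max_tokens"` // must exceed the thinking budget
	Thinking  *Thinking `json:"thinking,omitempty"`
}

func exampleRequestBody() ([]byte, error) {
	return json.Marshal(Request{
		Model:     "claude-3-7-sonnet-20250219",
		MaxTokens: 16000,
		Thinking:  &Thinking{Type: "enabled", BudgetTokens: 8000},
	})
}
```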
1808837298@qq.com
92918e3751 feat: Add Claude 3.7 Sonnet model to AWS channel mapping 2025-02-25 02:55:23 +08:00
1808837298@qq.com
de15551570 feat: Add support for Claude 3.7 Sonnet model 2025-02-25 02:51:31 +08:00
1808837298@qq.com
a81a28b7a5 feat: Support max_tokens parameter for Ollama channel #782 2025-02-24 17:35:49 +08:00
1808837298@qq.com
b6f95dca41 feat: Add support for different Dify bot types and request URLs 2025-02-24 14:18:30 +08:00
1808837298@qq.com
115a181db3 feat: Add thinking-to-content conversion for stream responses 2025-02-23 17:05:57 +08:00
1808837298@qq.com
88a2fec190 fix: mistral 2025-02-22 16:29:48 +08:00
1808837298@qq.com
27ea231d66 fix: fix image ratio calculation 2025-02-22 15:50:18 +08:00
1808837298@qq.com
bf75b30870 fix: mistral adaptor (close #774) 2025-02-21 22:21:19 +08:00
1808837298@qq.com
6e7587ab46 feat: Add reasoning content support in OpenAI response handling 2025-02-21 18:52:51 +08:00
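The reasoning-content commit above (together with the earlier thinking-to-content conversion) can be pictured as: a streamed delta may carry a reasoning field next to normal content, and one conversion strategy folds the reasoning into the content. The JSON field name follows DeepSeek-style responses; the struct, function, and <think> tag wrapping are illustrative, not necessarily the project's exact behaviour:

```go
package openai

import "strings"

// Illustrative streamed delta carrying reasoning alongside normal content.
type StreamDelta struct {
	Content          string `json:"content,omitempty"`
	ReasoningContent string `json:"reasoning_content,omitempty"`
}

// thinkingToContent is one possible conversion: wrap the reasoning in tags and
// prepend it to the visible content.
func thinkingToContent(d StreamDelta) string {
	var b strings.Builder
	if d.ReasoningContent != "" {
		b.WriteString("<think>")
		b.WriteString(d.ReasoningContent)
		b.WriteString("</think>")
	}
	b.WriteString(d.Content)
	return b.String()
}
```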