1808837298@qq.com
0aa30ed3f6
feat: Add new model management features
...
- Implement `/api/channel/models_enabled` endpoint to retrieve enabled models
- Add `EnabledListModels` handler in controller
- Create new `ModelRatioNotSetEditor` component for managing unset model ratios
- Update router to include new models_enabled route
- Add internationalization support for new model management UI
- Include GPT-4.5 preview model in OpenAI model list
2025-02-28 21:13:30 +08:00
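The `models_enabled` endpoint above could plausibly work like this: collect the deduplicated model names across enabled channels. A minimal sketch (function and field names are assumptions, not the project's actual identifiers):

```python
# Hypothetical sketch of an "enabled models" lookup: given each channel's
# status and comma-separated model list, return models on enabled channels.
def enabled_list_models(channels):
    """channels: list of dicts with 'status' (1 = enabled) and 'models' (CSV)."""
    enabled = set()
    for ch in channels:
        if ch.get("status") == 1:  # only count models on enabled channels
            enabled.update(m.strip() for m in ch["models"].split(",") if m.strip())
    return {"success": True, "data": sorted(enabled)}

channels = [
    {"status": 1, "models": "gpt-4o, gpt-4.5-preview"},
    {"status": 2, "models": "claude-3-7-sonnet"},   # disabled channel, ignored
    {"status": 1, "models": "gpt-4o"},
]
print(enabled_list_models(channels)["data"])  # ['gpt-4.5-preview', 'gpt-4o']
```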
1808837298@qq.com
1e388d9d68
fix
2025-02-28 20:28:44 +08:00
1808837298@qq.com
3447df85c4
feat: add new GPT-4.5 preview model ratios
2025-02-28 19:17:15 +08:00
1808837298@qq.com
86a88f8203
feat: Enhance Claude default max tokens configuration
...
- Replace ThinkingAdapterMaxTokens with a more flexible DefaultMaxTokens map
- Add support for model-specific default max tokens configuration
- Update relay and web interface to use the new configuration approach
- Implement a fallback mechanism for default max tokens
2025-02-28 17:53:08 +08:00
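The DefaultMaxTokens map with a fallback, as described above, reduces to a per-model lookup with a global default. A sketch under assumed names (the actual map contents and fallback value are illustrative):

```python
# Per-model default max_tokens with a global fallback; an explicit
# client-supplied max_tokens always takes precedence.
DEFAULT_MAX_TOKENS = {
    "claude-3-7-sonnet-20250219": 8192,
    "claude-3-5-haiku-20241022": 4096,
}
GLOBAL_FALLBACK = 4096

def default_max_tokens(model, requested=None):
    if requested:  # an explicit max_tokens wins over any default
        return requested
    return DEFAULT_MAX_TOKENS.get(model, GLOBAL_FALLBACK)

print(default_max_tokens("claude-3-7-sonnet-20250219"))  # 8192
print(default_max_tokens("unknown-model"))               # 4096
print(default_max_tokens("unknown-model", 2048))         # 2048
```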
1808837298@qq.com
29fc0a6b1d
feat: Implement model-specific headers configuration for Claude
2025-02-28 16:47:31 +08:00
1808837298@qq.com
6be0914bb6
fix: Simplify Claude settings value conversion logic
2025-02-27 22:26:21 +08:00
1808837298@qq.com
58a9c63657
fix: Prevent duplicate headers in Claude settings
2025-02-27 22:14:53 +08:00
1808837298@qq.com
9a6d84dbd6
refactor: Reorganize Claude MaxTokens configuration UI layout
2025-02-27 22:12:14 +08:00
1808837298@qq.com
96e73ad8e0
feat: Enhance Claude MaxTokens configuration handling
...
- Update Claude relay to set default MaxTokens dynamically
- Modify web interface to clarify default MaxTokens input purpose
- Improve token configuration logic for thinking adapter models
2025-02-27 22:10:29 +08:00
1808837298@qq.com
71682f1522
fix: Update Claude thinking adapter token percentage input guidance
2025-02-27 20:59:32 +08:00
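One plausible reading of the "token percentage" input mentioned above: the thinking budget is derived as a fraction of `max_tokens`, clamped to the API's documented minimum of 1024 and kept below `max_tokens`. A sketch, with the clamping policy assumed:

```python
# Derive Claude's thinking budget_tokens from a percentage of max_tokens.
def thinking_budget(max_tokens, percentage=0.8):
    budget = int(max_tokens * percentage)
    budget = max(budget, 1024)          # Anthropic rejects budgets under 1024
    return min(budget, max_tokens - 1)  # budget must stay below max_tokens

print(thinking_budget(8192))        # 6553
print(thinking_budget(1200, 0.5))   # 1024 (clamped up to the minimum)
```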
1808837298@qq.com
ed1a1c9b09
fix: Correct model request configuration in Vertex Claude adaptor
2025-02-27 20:51:10 +08:00
1808837298@qq.com
5371af0b42
feat: Refactor model configuration management with new config system
...
- Introduce a new configuration management approach for model-specific settings
- Update Gemini settings to use the new config system with more flexible management
- Add support for dynamic configuration updates in option handling
- Modify Claude and Vertex adaptors to use new configuration methods
- Enhance web interface to support namespaced configuration keys
2025-02-27 20:49:34 +08:00
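The namespaced configuration keys mentioned in the last bullet suggest a flat store keyed as `namespace.key`, falling back to the bare key. A sketch with illustrative key names:

```python
# Namespaced settings lookup: prefer "namespace.key", fall back to "key".
CONFIG = {
    "claude.default_max_tokens": "8192",
    "gemini.safety_setting": "OFF",
    "default_max_tokens": "4096",
}

def get_config(key, namespace=None):
    if namespace and f"{namespace}.{key}" in CONFIG:
        return CONFIG[f"{namespace}.{key}"]
    return CONFIG.get(key)

print(get_config("default_max_tokens", "claude"))  # '8192'
print(get_config("default_max_tokens", "vertex"))  # '4096' (falls back)
```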
1808837298@qq.com
fd6ae3ea78
feat: Add Claude model configuration management #791
2025-02-27 20:49:21 +08:00
1808837298@qq.com
fe9a3025d1
fix: Add pagination support to user search functionality
2025-02-27 16:55:02 +08:00
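Pagination for a search endpoint boils down to clamped page/offset arithmetic feeding a SQL `LIMIT`/`OFFSET`. A minimal sketch (parameter names and bounds are assumptions):

```python
# Clamp page inputs and compute the (offset, limit) pair for a query.
def paginate(page, page_size, max_page_size=100):
    page = max(page, 1)
    page_size = min(max(page_size, 1), max_page_size)  # clamp to sane bounds
    offset = (page - 1) * page_size
    return offset, page_size  # e.g. feed into SQL LIMIT/OFFSET

print(paginate(3, 20))    # (40, 20)
print(paginate(0, 500))   # (0, 100)
```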
1808837298@qq.com
bf9f5e59b5
chore: Update Azure OpenAI API version and embedding model detection
...
- Enhance channel test to detect more embedding models
- Update Azure OpenAI default API version to 2024-12-01-preview
- Remove redundant default API version setting in channel edit
- Add user cache writing in channel test
2025-02-27 16:49:32 +08:00
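"Detect more embedding models" in the channel test is most simply done by name matching, so the test can send an embeddings request instead of a chat request. A sketch; the hint list is an assumption:

```python
# Classify a model as an embedding model by well-known name fragments.
EMBEDDING_HINTS = ("embedding", "bge-", "m3e-", "jina-embeddings")

def is_embedding_model(model):
    m = model.lower()
    return any(h in m for h in EMBEDDING_HINTS)

print(is_embedding_model("text-embedding-3-large"))  # True
print(is_embedding_model("gpt-4o"))                  # False
```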
1808837298@qq.com
2d77733cd3
fix: Improve AWS Claude adaptor request conversion error handling #796
2025-02-27 14:57:00 +08:00
1808837298@qq.com
ac00e9bbb3
feat: Initialize OpenRouter adaptor
2025-02-27 00:01:21 +08:00
1808837298@qq.com
0646fa1892
fix: Correct Gemini & Claude tool call format #795 #766
2025-02-26 23:56:10 +08:00
1808837298@qq.com
23de62ec0d
fix: Correct Claude tool call format #795 #766
2025-02-26 23:40:16 +08:00
1808837298@qq.com
c3b0e57ea4
feat: Add Jina reranking support for OpenAI adaptor
2025-02-26 21:46:06 +08:00
1808837298@qq.com
ce03e77906
fix: Update Gemini safety settings to use 'OFF' as default
2025-02-26 19:20:17 +08:00
1808837298@qq.com
832f4b2b1a
fix: Update Gemini safety settings category
2025-02-26 19:18:00 +08:00
1808837298@qq.com
7100c787d4
fix: Update Gemini safety settings default value
2025-02-26 19:01:45 +08:00
1808837298@qq.com
8a30d64a75
feat: Add Gemini version settings configuration support (close #568)
2025-02-26 18:19:09 +08:00
1808837298@qq.com
0a369cc193
feat: Add Gemini safety settings configuration support (close #703)
2025-02-26 16:54:43 +08:00
1808837298@qq.com
5ba44f5ad5
feat: Update Claude relay temperature setting
2025-02-25 22:01:05 +08:00
1808837298@qq.com
d04d78a116
refactor: Enhance user context and quota management
...
- Add new context keys for user-related information
- Modify user cache and authentication middleware to populate context
- Refactor quota and notification services to use context-based user data
- Remove redundant database queries by leveraging context information
- Update various components to use new context-based user retrieval methods
2025-02-25 20:56:16 +08:00
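The bullets above describe a standard pattern: middleware writes user fields into the request context once, and downstream services read from the context instead of re-querying the database. A sketch with illustrative key names (not the project's actual constants):

```python
# Populate user info in the request context once, read it everywhere after.
CTX_USER_ID = "user_id"
CTX_USER_GROUP = "user_group"
CTX_USER_QUOTA = "user_quota"

def auth_middleware(ctx, user_record):
    ctx[CTX_USER_ID] = user_record["id"]
    ctx[CTX_USER_GROUP] = user_record["group"]
    ctx[CTX_USER_QUOTA] = user_record["quota"]

def get_user_quota(ctx):
    return ctx[CTX_USER_QUOTA]  # no extra DB round-trip

ctx = {}
auth_middleware(ctx, {"id": 42, "group": "default", "quota": 100_000})
print(get_user_quota(ctx))  # 100000
```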
1808837298@qq.com
8c2323d74d
feat: Add Redis pool size configuration
2025-02-25 19:39:29 +08:00
1808837298@qq.com
583678d9ff
fix: Adjust Claude thinking mode request parameters
2025-02-25 16:52:45 +08:00
1808837298@qq.com
fd38e59f78
docs: Update README
2025-02-25 16:31:42 +08:00
Calcium-Ion
f5cbab77cf
Merge pull request #788 from MartialBE/main
...
feat: Add Claude 3.7 Sonnet thinking mode support
2025-02-25 15:21:39 +08:00
1808837298@qq.com
d4706d6b8e
Merge branch 'main' into thinking
...
# Conflicts:
# relay/channel/claude/dto.go
2025-02-25 15:21:22 +08:00
1808837298@qq.com
6c8016e5f8
feat: Add support for Claude thinking parameter in request
2025-02-25 14:37:03 +08:00
MartialBE
7160012fe2
feat: Add Claude 3.7 Sonnet thinking mode support
2025-02-25 14:10:43 +08:00
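For context on the thinking-mode support above: the Anthropic Messages API enables extended thinking via a `thinking` object on the request. The values below are illustrative:

```python
import json

# A Messages API request enabling extended thinking for Claude 3.7 Sonnet.
request = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 8192,
    "thinking": {"type": "enabled", "budget_tokens": 4096},
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
}
print(json.dumps(request["thinking"]))
# {"type": "enabled", "budget_tokens": 4096}
```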
1808837298@qq.com
c62276fcc4
feat: Add Claude 3.7 Sonnet model to AWS channel mapping
2025-02-25 02:55:23 +08:00
1808837298@qq.com
15a3b44689
feat: Add support for Claude 3.7 Sonnet model
2025-02-25 02:51:31 +08:00
1808837298@qq.com
8f3c7280cf
feat: Support max_tokens parameter for Ollama channel #782
2025-02-24 17:35:49 +08:00
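Ollama does not accept OpenAI-style `max_tokens` directly; its chat/generate options use `num_predict`, so supporting the parameter means translating it. A sketch of such a converter (not the project's actual code):

```python
# Map OpenAI request fields onto Ollama's options object.
def to_ollama_options(openai_req):
    options = {}
    if openai_req.get("max_tokens"):
        options["num_predict"] = openai_req["max_tokens"]
    if openai_req.get("temperature") is not None:
        options["temperature"] = openai_req["temperature"]
    return options

print(to_ollama_options({"max_tokens": 256, "temperature": 0.7}))
# {'num_predict': 256, 'temperature': 0.7}
```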
Calcium-Ion
e5e73a33f0
Merge pull request #781 from zeyugao/main
...
feat: Pass extra_body in OpenAI request to the backend
2025-02-24 16:29:48 +08:00
Calcium-Ion
2d15f63eaa
Merge pull request #783 from Calcium-Ion/rate-limit
...
feat: Add model request rate limiting functionality
2025-02-24 16:29:23 +08:00
1808837298@qq.com
6f3072895a
feat: Add model rate limit settings in system configuration
2025-02-24 16:27:20 +08:00
1808837298@qq.com
1763145fea
feat: Add model request rate limiting functionality
2025-02-24 16:20:55 +08:00
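Model request rate limiting of the kind added above is commonly a per-user counter over a time window. A minimal fixed-window sketch (the real feature presumably sits in middleware, likely Redis-backed; this only illustrates the counting logic):

```python
import time

# Fixed-window rate limiter: at most `limit` requests per `window_secs`.
class RateLimiter:
    def __init__(self, limit, window_secs):
        self.limit, self.window = limit, window_secs
        self.counters = {}  # (user, window index) -> request count

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        key = (user, int(now // self.window))
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key] <= self.limit

rl = RateLimiter(limit=2, window_secs=60)
print([rl.allow("u1", now=0), rl.allow("u1", now=1), rl.allow("u1", now=2)])
# [True, True, False]
```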
1808837298@qq.com
66831a1bde
feat: Add support for different Dify bot types and request URLs
2025-02-24 14:18:30 +08:00
1808837298@qq.com
fd44ac7c0c
feat: Enhance token counting and content parsing for messages
2025-02-24 14:18:15 +08:00
Elsa
f5bf67c636
Pass extra_body to the backend
2025-02-24 10:52:55 +08:00
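Passing `extra_body` through means merging its fields into the upstream request instead of dropping them. A sketch of the merge in spirit (the precedence rule is an assumption):

```python
# Merge client-supplied extra_body fields into the outgoing request.
def apply_extra_body(request):
    extra = request.pop("extra_body", None) or {}
    merged = dict(request)
    merged.update(extra)  # upstream-specific fields win on conflict
    return merged

req = {"model": "gpt-4o", "messages": [], "extra_body": {"top_k": 40}}
print(apply_extra_body(req))
# {'model': 'gpt-4o', 'messages': [], 'top_k': 40}
```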
1808837298@qq.com
7becf62a7a
fix: Improve 429 error logging with detailed message
2025-02-23 21:26:31 +08:00
1808837298@qq.com
40c0333eaa
fix typo
2025-02-23 17:27:33 +08:00
1808837298@qq.com
65021e2e0e
feat: Add thinking-to-content option in channel extra settings #780
2025-02-23 17:13:08 +08:00
1808837298@qq.com
4597816a14
feat: Add thinking-to-content conversion for stream responses
2025-02-23 17:05:57 +08:00
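"Thinking-to-content" for stream responses plausibly rewrites reasoning deltas as ordinary content wrapped in `<think>` tags, so clients that ignore reasoning fields still see the text. A sketch assuming the DeepSeek-style `reasoning_content` field name:

```python
# Convert streamed reasoning deltas into visible <think>-tagged content.
def thinking_to_content(delta, state):
    out = ""
    reasoning = delta.get("reasoning_content")
    if reasoning:
        if not state.get("opened"):          # open the tag on first reasoning chunk
            out, state["opened"] = "<think>", True
        out += reasoning
    elif delta.get("content"):
        if state.get("opened") and not state.get("closed"):
            out, state["closed"] = "</think>\n", True  # close before real content
        out += delta["content"]
    return out

state = {}
chunks = [{"reasoning_content": "hmm"}, {"reasoning_content": "..."},
          {"content": "Answer."}]
print("".join(thinking_to_content(c, state) for c in chunks))
# <think>hmm...</think>
# Answer.
```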
1808837298@qq.com
991b6f8bb0
fix: mistral
2025-02-22 16:29:48 +08:00
1808837298@qq.com
0333576bee
fix: Correct image ratio calculation
2025-02-22 15:50:18 +08:00