Commit Graph

382 Commits

Author SHA1 Message Date
1808837298@qq.com
06da65a9d0 refactor: Simplify model mapping and pricing logic across relay modules 2025-02-20 16:41:46 +08:00
1808837298@qq.com
60aac77c08 fix: Correct Ollama channel authentication header setting 2025-02-20 01:28:15 +08:00
Coming
a13f4d6c56 fix: Fix Ollama channel authentication 2025-02-20 00:52:30 +08:00
1808837298@qq.com
971aea09ee feat: Improve image handling for Ollama channels 2025-02-19 20:45:42 +08:00
1808837298@qq.com
a4b2b9c935 feat: Enhance Ollama channel support with additional request parameters #771 2025-02-19 19:58:34 +08:00
1808837298@qq.com
ae5875d4c7 fix: Remove redundant error handling in distributor and relay modules 2025-02-19 18:47:28 +08:00
1808837298@qq.com
5937d850d9 refactor: Replace manual goroutine creation with gopool.Go 2025-02-19 18:38:29 +08:00
Calcium-Ion
2b7435500c Merge pull request #770 from Calcium-Ion/refactor_notify
feat: Add user notification settings and multiple notification methods
2025-02-19 14:54:54 +07:00
1808837298@qq.com
4e871507cf feat: Implement comprehensive webhook notification system 2025-02-19 15:40:54 +08:00
Calcium-Ion
2e3b920a2c Merge pull request #763 from Sh1n3zZ/support-imagen-3.0-generate-002
feat: add Gemini Imagen image generation support
2025-02-18 15:32:32 +07:00
1808837298@qq.com
812c188ab1 fix: Extend temperature handling for OpenAI-like models
- Add support for suppressing temperature for o1 models
- Expand model prefix check to include 'o1' alongside 'o3' models
2025-02-18 16:00:56 +08:00
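A minimal sketch of the temperature suppression described in the commit above, assuming a pointer-typed request field; the type and helper names are illustrative, not the repository's actual code:

```go
// Hypothetical sketch: drop the temperature parameter for o1/o3-style models
// before forwarding the request upstream.
package main

import (
	"fmt"
	"strings"
)

type GeneralOpenAIRequest struct {
	Model       string   `json:"model"`
	Temperature *float64 `json:"temperature,omitempty"`
}

// suppressTemperature clears temperature for models whose prefix indicates
// the parameter is not accepted (e.g. "o1", "o3").
func suppressTemperature(req *GeneralOpenAIRequest) {
	if strings.HasPrefix(req.Model, "o1") || strings.HasPrefix(req.Model, "o3") {
		req.Temperature = nil
	}
}

func main() {
	temp := 0.7
	req := GeneralOpenAIRequest{Model: "o3-mini", Temperature: &temp}
	suppressTemperature(&req)
	fmt.Println(req.Temperature == nil) // true
}
```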
1808837298@qq.com
3da1344897 feat: Add user notification settings with quota warning and multiple notification methods
- Implement user notification settings with email and webhook options
- Add new user settings for quota warning threshold and notification preferences
- Create backend API and database support for user notification configuration
- Enhance frontend personal settings with notification configuration UI
- Support custom notification email and webhook URL
- Add service layer for sending user notifications
2025-02-18 14:54:21 +08:00
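A hedged sketch of the webhook side of the notification feature above, assuming a simple JSON payload posted to a user-configured URL; the payload shape and function names are illustrative, not the project's actual webhook contract:

```go
// Hypothetical quota-warning webhook notification sender.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type webhookPayload struct {
	Type    string `json:"type"` // e.g. "quota_warning"
	Title   string `json:"title"`
	Content string `json:"content"`
}

// sendWebhook posts the payload as JSON and treats non-2xx responses as errors.
func sendWebhook(url string, p webhookPayload) error {
	body, err := json.Marshal(p)
	if err != nil {
		return err
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("webhook returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	_ = sendWebhook("https://example.com/hook", webhookPayload{
		Type:    "quota_warning",
		Title:   "Quota warning",
		Content: "Remaining quota has dropped below the configured threshold.",
	})
}
```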
Sh1n3zZ
61d2a2f92d feat: add Gemini Imagen image generation support 2025-02-18 01:41:58 +08:00
1808837298@qq.com
995b3a2403 Merge remote-tracking branch 'origin/main' 2025-02-17 18:15:13 +08:00
1808837298@qq.com
7b384cb933 feat: Add support for DeepSeek completions endpoint 2025-02-17 18:15:01 +08:00
Calcium-Ion
78f19d4690 Merge pull request #735 from jyc001/main
feat: Add support for FIM
2025-02-17 14:37:06 +07:00
nightcoffee
e7e5a16767 feat: add stream options support for VolcEngine (火山引擎) 2025-02-15 04:55:57 +08:00
1808837298@qq.com
6bf99f218c feat: Enhance VolcEngine channel support with bot model routing (fix #757) 2025-02-15 00:10:58 +08:00
1808837298@qq.com
bd4ce9cd91 fix: Improve OpenAI stream data parsing and handling 2025-02-14 23:52:25 +08:00
1808837298@qq.com
f5e3063f33 feat: Improve embedding request handling and support across channels
- Update EmbeddingRequest DTO to support more flexible input types
- Add input parsing method to handle various input formats
- Implement ConvertEmbeddingRequest for multiple channel adaptors
- Remove relayMode parameter from EmbeddingHelper
- Add input validation for embedding requests
- Simplify embedding request conversion for different channels
2025-02-12 14:39:36 +08:00
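A sketch of the flexible embedding input handling described above, assuming the input may arrive as either a single string or an array of strings; the DTO and method names are illustrative:

```go
// Hypothetical embedding request with a flexible input field.
package main

import (
	"encoding/json"
	"fmt"
)

type EmbeddingRequest struct {
	Model string `json:"model"`
	Input any    `json:"input"` // string, []string, or []any after JSON decoding
}

// ParseInput normalizes the flexible input field into a slice of strings.
func (r *EmbeddingRequest) ParseInput() []string {
	switch v := r.Input.(type) {
	case string:
		return []string{v}
	case []string:
		return v
	case []any:
		out := make([]string, 0, len(v))
		for _, item := range v {
			if s, ok := item.(string); ok {
				out = append(out, s)
			}
		}
		return out
	default:
		return nil
	}
}

func main() {
	var req EmbeddingRequest
	_ = json.Unmarshal([]byte(`{"model":"m3e-base","input":["hello","world"]}`), &req)
	fmt.Println(req.ParseInput()) // [hello world]
}
```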
1808837298@qq.com
eceb6afcdd feat: Add Baidu Qianfan V2 channel support #725
- Update channel constants to include Baidu V2 channel
- Create new Baidu V2 adaptor for relay
- Add Baidu V2 models and channel configuration
- Update relay adaptor to support Baidu V2 channel
- Modify web channel constants to include Baidu V2 option
2025-02-12 00:07:02 +08:00
1808837298@qq.com
28c13e5a0f feat: Add support for VolcEngine (Doubao) channel #313 #734 2025-02-11 23:47:15 +08:00
Calcium-Ion
88bdedd2c9 Merge pull request #723 from kuwork/main
Support for MokaAI M3E
2025-02-11 22:16:18 +07:00
xy3
8910efb1da Correct the SiliconFlow (硅基流动) SenseVoiceSmall model name 2025-02-08 11:54:08 +08:00
e.
ce4269955e feat: add FIM support for siliconflow 2025-02-08 00:23:35 +08:00
HynoR
efdd6fb657 chore: sync gemini aistudio model 2025-02-06 13:32:19 +08:00
kuwork
89d48a6618 Merge branch 'main' into main 2025-02-04 22:52:37 +08:00
1808837298@qq.com
187c336121 feat: add Azure default API version configuration
- Introduce `AZURE_DEFAULT_API_VERSION` environment variable
- Set default Azure API version to `2024-12-01-preview`
- Update README documentation for new environment configuration
- Modify Azure channel relay to use default API version when not specified
2025-02-03 22:38:23 +08:00
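A minimal sketch of the fallback order implied by the commit above, assuming channel setting, then `AZURE_DEFAULT_API_VERSION`, then the built-in default; the helper name is illustrative:

```go
// Hypothetical resolution of the Azure API version for a channel.
package main

import (
	"fmt"
	"os"
)

const fallbackAzureAPIVersion = "2024-12-01-preview"

// azureAPIVersion prefers the channel-level setting, then the
// AZURE_DEFAULT_API_VERSION environment variable, then the built-in default.
func azureAPIVersion(channelVersion string) string {
	if channelVersion != "" {
		return channelVersion
	}
	if v := os.Getenv("AZURE_DEFAULT_API_VERSION"); v != "" {
		return v
	}
	return fallbackAzureAPIVersion
}

func main() {
	fmt.Println(azureAPIVersion("")) // "2024-12-01-preview" unless the env var overrides it
}
```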
1808837298@qq.com
c68ea5654f feat: enhance model name handling and logging
- Add `RecodeModelName` to `RelayInfo` struct for more flexible model name tracking
- Update text relay and quota consumption to use `RecodeModelName`
- Move reasoning effort from admin info to other info in log generation
- Ensure consistent model name handling across relay components
2025-02-03 15:06:46 +08:00
1808837298@qq.com
834ceda827 feat: add reasoning effort logging and display
- Add `ReasoningEffort` field to `RelayInfo` struct
- Update log generation to include reasoning effort in admin info
- Modify logs table component to display reasoning effort when available
- Preserve reasoning effort information during request processing
2025-02-03 14:44:40 +08:00
1808837298@qq.com
a29e1e0aa3 fix: improve reasoning effort model suffix handling
- Remove model name suffixes after extracting reasoning effort
- Update upstream model name to reflect the base model
- Ensure clean model name is passed to the upstream service
2025-02-03 14:34:00 +08:00
1808837298@qq.com
ce77f25576 fix: update reasoning effort model suffix parsing
- Modify model suffix parsing to use hyphen-separated suffixes
- Ensure consistent parsing of `-high`, `-medium`, and `-low` reasoning effort indicators
2025-02-03 14:23:26 +08:00
1808837298@qq.com
d5746ac347 feat: add reasoning effort configuration for models
- Support setting reasoning effort via model name suffix
- Add `-high`, `-medium`, and `-low` suffixes to control reasoning effort
- Update README with new model configuration option
- Modify OpenAI adaptor to handle reasoning effort settings
2025-02-03 14:22:34 +08:00
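A sketch of the hyphen-separated suffix parsing described in the reasoning-effort commits above; the function name is illustrative, not the repository's actual code:

```go
// Hypothetical extraction of a reasoning-effort suffix from a model name.
package main

import (
	"fmt"
	"strings"
)

// splitReasoningEffort returns the base model name and the reasoning effort
// encoded in a "-high", "-medium", or "-low" suffix, if any.
func splitReasoningEffort(model string) (base, effort string) {
	for _, e := range []string{"high", "medium", "low"} {
		if strings.HasSuffix(model, "-"+e) {
			return strings.TrimSuffix(model, "-"+e), e
		}
	}
	return model, ""
}

func main() {
	base, effort := splitReasoningEffort("o3-mini-high")
	fmt.Println(base, effort) // o3-mini high
}
```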
1808837298@qq.com
cf63ab59cf feat: support channel request proxy 2025-02-02 22:15:06 +08:00
1808837298@qq.com
b80c1ee3a4 f*** o3-mini 2025-02-01 14:11:34 +08:00
1808837298@qq.com
69102d141f feat: add support for o3-mini models in model ratio and request handling 2025-02-01 13:41:25 +08:00
Jerry
7588c42b42 Fix M3E not working 2025-01-23 05:54:39 +08:00
Calcium-Ion
6cc9c36a22 Merge pull request #710 from hubutui/main
Fix temperature not being set to 0 due to json omitempty
2025-01-22 12:44:48 +07:00
Jerry
126f04e08f Support for MokaAI M3E 2025-01-22 04:21:08 +08:00
Butui Hu
eda7ef50e0 Fix temperature not being set to 0 due to json omitempty
The issue was caused by the `omitempty` tag in the Go struct, which prevented the `temperature` field from being included in the JSON output when it was set to 0.

Signed-off-by: Butui Hu <hot123tea123@gmail.com>
2025-01-21 12:54:09 +08:00
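A small demonstration of the `omitempty` pitfall the commit above fixes: with a plain `float64`, a temperature of 0 is dropped from the JSON body, while a pointer preserves the explicit zero. Struct names here are illustrative:

```go
// Demonstrates why `omitempty` on a value-typed field loses temperature = 0.
package main

import (
	"encoding/json"
	"fmt"
)

type withValue struct {
	Temperature float64 `json:"temperature,omitempty"`
}

type withPointer struct {
	Temperature *float64 `json:"temperature,omitempty"`
}

func main() {
	zero := 0.0

	a, _ := json.Marshal(withValue{Temperature: 0})
	b, _ := json.Marshal(withPointer{Temperature: &zero})

	fmt.Println(string(a)) // {}                 (temperature silently dropped)
	fmt.Println(string(b)) // {"temperature":0}  (explicit zero preserved)
}
```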
Calcium-Ion
7f8112a325 Merge pull request #705 from maranello-o/main
fix: incorrect whisper audio usage
2025-01-21 11:21:04 +07:00
HynoR
6e2c871015 feat: update models and model ratios 2025-01-21 00:53:10 +08:00
沈浩
2abf05b314 fix: incorrect whisper audio usage 2025-01-17 18:12:05 +08:00
1808837298@qq.com
8518ca65e2 Adjust streaming timeout for OpenAI models in OaiStreamHandler
- Implemented conditional logic to double the streaming timeout for models starting with "o1" or "o3".
- Improved handling of streaming timeout configuration to enhance performance based on model type.
2025-01-06 17:52:33 +08:00
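A minimal sketch of the timeout adjustment described above: doubling the streaming timeout for models whose names start with "o1" or "o3". The base timeout value and helper name are assumptions for illustration:

```go
// Hypothetical streaming-timeout selection based on model prefix.
package main

import (
	"fmt"
	"strings"
	"time"
)

// streamTimeout returns the streaming timeout for a model, doubling the
// configured base value for o1/o3-style models.
func streamTimeout(model string, base time.Duration) time.Duration {
	if strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") {
		return base * 2
	}
	return base
}

func main() {
	fmt.Println(streamTimeout("o1-preview", 60*time.Second)) // 2m0s
	fmt.Println(streamTimeout("gpt-4o", 60*time.Second))     // 1m0s
}
```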
1808837298@qq.com
99245e4c1f refactor: realtime quota 2025-01-04 15:46:35 +08:00
1808837298@qq.com
4a0a841e1d feat: support gpt-4o-mini-realtime-preview 2025-01-03 18:51:09 +08:00
1808837298@qq.com
ef4c1a2e48 fix: retry prompt tokens 2025-01-02 16:33:00 +08:00
CalciumIon
bb5e032dd2 refactor: token cache logic 2024-12-30 17:10:48 +08:00
CalciumIon
ed435e5c8f refactor: user cache logic 2024-12-29 16:50:26 +08:00
Yan
2a15dfccea fix: Gemini support for other file types (Base64URL) 2024-12-29 10:11:39 +08:00