RedwindA
ffa8a42784
fix(minimax): add MiniMax-M2 series models to ModelList
2026-01-09 20:46:47 +08:00
Seefs
0ef2804757
fix: fall back to the default HTTP client configuration when proxyURL is empty, and apply the relay timeout on the AWS calling side.
2026-01-05 17:56:24 +08:00
Calcium-Ion
43f5433e6a
Merge pull request #2578 from xyfacai/fix/gemini-mimetype
...
fix: Gemini not supporting the image/jpg file type
2026-01-04 22:19:16 +08:00
Xyfacai
eeccb2146f
fix: Gemini not supporting the image/jpg file type
2026-01-04 22:09:03 +08:00
Seefs
be2fdceaec
Merge pull request #2550 from shikaiwei1/patch-2
2026-01-04 18:11:46 +08:00
Seefs
44f9d9040b
feat: add support for Doubao /v1/responses (#2567)
...
* feat: add support for Doubao /v1/responses
2026-01-03 12:35:35 +08:00
Seefs
ea60d305bb
Merge pull request #2393 from prnake/fix-claude-haiku
2026-01-03 09:36:42 +08:00
Seefs
b2e52260e1
Merge pull request #2532 from feitianbubu/pr/620211e02bd55545f0fa4568f3d55c3b4d7f3305
2026-01-03 09:36:17 +08:00
CaIon
62020d00a4
feat(adaptor): update resolution handling for wan2.6 model
2025-12-31 00:44:06 +08:00
CaIon
6d0e316ee6
refactor(image): remove unnecessary logging in oaiImage2Ali function
2025-12-31 00:23:19 +08:00
John Chen
6a2da31946
fix: Zhipu and Moonshot channels failing to retrieve cachePrompt statistics when stream=true.
...
Root cause:
1. In the OaiStreamHandler streaming handler, applyUsagePostProcessing(info, usage, nil) was called with responseBody as nil, so cached tokens could not be extracted from the response body.
2. The two channels report cached_tokens in different locations:
- Zhipu: standard location usage.prompt_tokens_details.cached_tokens
- Moonshot: non-standard location choices[].usage.cached_tokens
Fix:
1. Pass the body into applyUsagePostProcessing.
2. Split the Zhipu and Moonshot parsing, with a dedicated parsing method for Moonshot.
2025-12-30 17:38:32 +08:00
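The split parsing the commit describes can be sketched as follows. `extractCachedTokens` is a hypothetical helper name, and the struct shapes mirror only the two `cached_tokens` locations named in the commit body, not the project's actual DTOs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractCachedTokens sketches the two-location lookup: Zhipu reports
// cached_tokens at the standard usage.prompt_tokens_details.cached_tokens
// path, while Moonshot nests it under choices[].usage.cached_tokens.
func extractCachedTokens(body []byte) int {
	// Standard (Zhipu) location.
	var std struct {
		Usage struct {
			PromptTokensDetails struct {
				CachedTokens int `json:"cached_tokens"`
			} `json:"prompt_tokens_details"`
		} `json:"usage"`
	}
	if json.Unmarshal(body, &std) == nil && std.Usage.PromptTokensDetails.CachedTokens > 0 {
		return std.Usage.PromptTokensDetails.CachedTokens
	}
	// Non-standard (Moonshot) fallback: per-choice usage.
	var ms struct {
		Choices []struct {
			Usage struct {
				CachedTokens int `json:"cached_tokens"`
			} `json:"usage"`
		} `json:"choices"`
	}
	if json.Unmarshal(body, &ms) == nil {
		for _, c := range ms.Choices {
			if c.Usage.CachedTokens > 0 {
				return c.Usage.CachedTokens
			}
		}
	}
	return 0
}

func main() {
	zhipu := []byte(`{"usage":{"prompt_tokens_details":{"cached_tokens":128}}}`)
	moonshot := []byte(`{"choices":[{"usage":{"cached_tokens":64}}]}`)
	fmt.Println(extractCachedTokens(zhipu), extractCachedTokens(moonshot))
}
```

This also shows why passing a nil responseBody broke the feature: with no body to unmarshal, neither location can be read.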
CaIon
b5a0c822d2
feat(adaptor): add support for multiple Bailian image generation models
...
- wan2.6 series image generation and editing, with multi-image generation billing
- wan2.5 series image generation and editing
- z-image-turbo image generation, with prompt_extend billing
2025-12-29 23:00:17 +08:00
Seefs
6526976453
fix: glm 4.7 finish reason (#2545)
2025-12-29 19:41:15 +08:00
Seefs
5423d6ed8c
feat: Add "wan2.6-i2v" video ratio configuration to Ali adaptor.
2025-12-29 14:13:33 +08:00
Seefs
b10f1f7b85
feat: ionet integrate (#2105)
...
* wip ionet integrate
* wip ionet integrate
* wip ionet integrate
* ollama wip
* wip
* feat: ionet integration & ollama manage
* fix merge conflict
* wip
* fix: test conn cors
* wip
* fix ionet
* fix ionet
* wip
* fix model select
* refactor: Remove `pkg/ionet` test files and update related Go source and web UI model deployment components.
* feat: Enhance model deployment UI with styling improvements, updated text, and a new description component.
* Revert "feat: Enhance model deployment UI with styling improvements, updated text, and a new description component."
This reverts commit 8b75cb5bf0d1a534b339df8c033be9a6c7df7964.
2025-12-28 15:55:35 +08:00
RedwindA
518563c7eb
feat: map OpenAI developer role to Gemini system instructions
2025-12-27 02:52:33 +08:00
feitianbubu
d014e0b471
fix: correct kling fail reason
2025-12-26 16:35:46 +08:00
papersnake
0271b6f145
Merge branch 'QuantumNous:main' into fix-claude-haiku
2025-12-26 16:23:34 +08:00
Calcium-Ion
15b38adf98
Merge pull request #2460 from seefs001/feature/gemini-flash-minial
...
fix(gemini): handle minimal reasoning effort budget
2025-12-26 13:57:56 +08:00
Seefs
07cb6e9626
Merge pull request #2493 from shikaiwei1/patch-1
2025-12-24 16:52:24 +08:00
feitianbubu
1dc7ab9a97
fix: check claudeResponse delta StopReason for nil pointer
2025-12-24 11:54:23 +08:00
John Chen
6dbe89f1cf
Add cached-token reading logic for Moonshot
...
Add cached-token reading logic for Moonshot. It is the same as the Zhipu V4 logic, so the two share an implementation.
2025-12-22 17:05:16 +08:00
Seefs
45649249b2
fix: filter content[].part[].functionResponse.id in the Vertex adapter
2025-12-21 17:22:04 +08:00
Seefs
39df47486c
fix(gemini): handle minimal reasoning effort budget
...
- Add minimal case to clampThinkingBudgetByEffort to avoid defaulting to full thinking budget
2025-12-18 08:10:46 +08:00
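A minimal sketch of the fix this commit describes, assuming an effort-keyed switch: without an explicit "minimal" case, the lookup fell through to the default and granted the full thinking budget. The function name comes from the commit; the cases and budget values are illustrative only:

```go
package main

import "fmt"

// clampThinkingBudgetByEffort is a hypothetical reconstruction: the
// "minimal" case is the one the commit adds, preventing fall-through
// to the full-budget default. All divisors are illustrative.
func clampThinkingBudgetByEffort(effort string, maxBudget int) int {
	switch effort {
	case "minimal":
		return 0 // new case: minimal effort gets (near) no thinking budget
	case "low":
		return maxBudget / 4
	case "medium":
		return maxBudget / 2
	default: // "high" or unspecified: full budget
		return maxBudget
	}
}

func main() {
	// Before the fix, "minimal" would have hit the default branch.
	fmt.Println(clampThinkingBudgetByEffort("minimal", 24576))
	fmt.Println(clampThinkingBudgetByEffort("high", 24576))
}
```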
t0ng7u
c2ed76ddfd
🛡️ fix: prevent OOM on large/decompressed requests; skip heavy prompt meta when token count is disabled
...
Clamp request body size (including post-decompression) to avoid memory exhaustion caused by huge payloads/zip bombs, especially with large-context Claude requests. Add a configurable `MAX_REQUEST_BODY_MB` (default `32`) and document it.
- Enforce max request body size after gzip/br decompression via `http.MaxBytesReader`
- Add a secondary size guard in `common.GetRequestBody` and cache-safe handling
- Return **413 Request Entity Too Large** on oversized bodies in relay entry
- Avoid building large `TokenCountMeta.CombineText` when both token counting and sensitive check are disabled (use lightweight meta for pricing)
- Update READMEs (CN/EN/FR/JA) with `MAX_REQUEST_BODY_MB`
- Fix a handful of vet/formatting issues encountered during the change
- `go test ./...` passes
2025-12-16 17:00:19 +08:00
CaIon
3822f4577c
fix(audio): correct TotalTokens calculation for accurate usage reporting
2025-12-13 17:49:57 +08:00
CaIon
be2a863b9b
feat(audio): enhance audio request handling with token type detection and streaming support
2025-12-13 17:24:23 +08:00
CaIon
a1299114a6
refactor(error): replace dto.OpenAIError with types.OpenAIError for consistency
2025-12-13 16:43:57 +08:00
CaIon
7d586ef507
fix(helper): improve error handling in FlushWriter and related functions
2025-12-13 13:29:21 +08:00
Calcium-Ion
2a01d1c996
Merge pull request #2429 from QuantumNous/feat/xhigh
...
feat(adaptor): add '-xhigh' suffix to reasoning effort options
2025-12-12 22:06:19 +08:00
CaIon
27dd42718b
feat(adaptor): add '-xhigh' suffix to reasoning effort options for model parsing
2025-12-12 20:53:48 +08:00
Calcium-Ion
3c5edc54b7
Merge pull request #2426 from QuantumNous/feat/auto-cross-group-retry
...
feat(token): add cross-group retry option for token processing
2025-12-12 20:45:54 +08:00
CaIon
c87deaa7d9
feat(token): add cross-group retry option for token processing
2025-12-12 17:59:21 +08:00
zdwy5
85ecad90a7
fix: support AWS invocation via global or channel-level parameter passthrough (#2423)
...
* fix: support AWS invocation via global or channel-level parameter passthrough
* fix(aws): replace json.Unmarshal with common.Unmarshal for request body processing
---------
Co-authored-by: r0 <liangchunlei@01.ai>
Co-authored-by: CaIon <i@caion.me>
2025-12-12 17:09:27 +08:00
Seefs
ee53a7b6bf
Merge pull request #2412 from seefs001/pr-2372
...
feat: add openai video remix endpoint
2025-12-11 23:35:23 +08:00
Calcium-Ion
a0f127496d
Merge pull request #2398 from seefs001/fix/video-proxy
...
fix: Use channel proxy settings for task query scenarios
2025-12-09 14:05:30 +08:00
Calcium-Ion
4e5c6297cb
Merge pull request #2356 from seefs001/feature/zhipiu_4v_image
...
feat: zhipu 4v image generations
2025-12-09 14:00:20 +08:00
Seefs
920e005048
fix: Use channel proxy settings for task query scenarios
2025-12-09 11:15:27 +08:00
Seefs
cf243588fa
Merge pull request #2229 from HynoR/chore/v1
...
fix: Set default to unsupported value for gpt-5 model series requests
2025-12-08 20:59:30 +08:00
Seefs
43c1068e50
Merge pull request #2375 from FlowerRealm/feat/add-claude-haiku-4-5
...
feat: add claude-haiku-4-5-20251001 model support
2025-12-08 20:46:02 +08:00
Papersnake
ae040d7db2
feat: support claude-haiku-4-5-20251001 on vertex
2025-12-08 17:28:36 +08:00
firstmelody
06c23ea562
fix(adaptor): fix reasoning suffix not processing in vertex adapter
2025-12-08 01:12:29 +08:00
FlowerRealm
a655801017
feat: add claude-haiku-4-5-20251001 model support
...
- Add model to Claude ModelList
- Add model ratio (0.5, $1/1M input tokens)
- Add completion ratio support (5x, $5/1M output tokens)
- Add cache read ratio (0.1, $0.10/1M tokens)
- Add cache write ratio (1.25, $1.25/1M tokens)
Model specs:
- Context window: 200K tokens
- Max output: 64K tokens
- Release date: October 1, 2025
2025-12-05 18:54:20 +08:00
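The ratios listed above can be checked arithmetically. The $2/1M base price at model ratio 1.0 is an assumption inferred from "ratio 0.5 => $1/1M input"; the other multipliers apply to the resulting input price, matching the dollar figures in the commit body:

```go
package main

import "fmt"

func main() {
	const basePerMillionUSD = 2.0 // assumed price at model ratio 1.0

	modelRatio := 0.5       // => $1.00/1M input
	completionRatio := 5.0  // => $5.00/1M output
	cacheReadRatio := 0.1   // => $0.10/1M cache read
	cacheWriteRatio := 1.25 // => $1.25/1M cache write

	input := modelRatio * basePerMillionUSD
	fmt.Printf("input: $%.2f/1M\n", input)
	fmt.Printf("output: $%.2f/1M\n", input*completionRatio)
	fmt.Printf("cache read: $%.2f/1M\n", input*cacheReadRatio)
	fmt.Printf("cache write: $%.2f/1M\n", input*cacheWriteRatio)
}
```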
Seefs
634651b463
feat: zhipu v4 image generations
2025-12-02 22:56:58 +08:00
Calcium-Ion
4c54836a53
Merge pull request #2344 from seefs001/feature/gemini-thinking-level
...
feat: gemini 3 thinking level gemini-3-pro-preview-high
2025-12-02 21:55:43 +08:00
CaIon
1fededceb3
feat: refactor token estimation logic
...
- Introduced new OpenAI text models in `common/model.go`.
- Added `IsOpenAITextModel` function to check for OpenAI text models.
- Refactored token estimation methods across various channels to use estimated prompt tokens instead of direct prompt token counts.
- Updated related functions and structures to accommodate the new token estimation approach, enhancing overall token management.
2025-12-02 21:34:39 +08:00
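The commit names a new `IsOpenAITextModel` helper in `common/model.go`. A prefix-based sketch of what such a predicate might look like is below; the prefix list is an assumption for illustration, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// IsOpenAITextModel reports whether a model name looks like an OpenAI
// text model. The prefixes here are a guess at the shape of the check;
// the real list lives in common/model.go.
func IsOpenAITextModel(model string) bool {
	for _, prefix := range []string{"gpt-", "chatgpt-", "o1", "o3"} {
		if strings.HasPrefix(model, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(IsOpenAITextModel("gpt-4o"), IsOpenAITextModel("claude-3-opus"))
}
```

Gating the estimated-vs-direct prompt-token path on such a predicate lets non-OpenAI channels keep their own counting while OpenAI text models use the estimator.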
CaIon
e19e9ad2fa
feat(gemini): implement markdown image handling in text processing
2025-12-01 17:54:41 +08:00
Seefs
607f7305b7
feat: gemini 3 thinking level gemini-3-pro-preview-high
2025-12-01 16:40:46 +08:00
CaIon
5d05cd9d32
feat(gemini): add validation and conversion for imageConfig parameters in extra_body
2025-11-30 19:31:08 +08:00
CaIon
d4fbe1cee9
fix(vertex): ensure sampleCount is a positive integer and update OtherRatios
2025-11-30 19:05:33 +08:00