- Introduce `AZURE_DEFAULT_API_VERSION` environment variable
- Set default Azure API version to `2024-12-01-preview`
- Update README documentation for new environment configuration
- Modify Azure channel relay to use default API version when not specified
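A minimal sketch of the fallback behaviour described above, assuming a small helper inside the Azure adaptor; the helper name and example URL are illustrative, not the relay's actual code.

```go
package main

import (
	"fmt"
	"os"
)

// defaultAzureAPIVersion mirrors the new built-in default.
const defaultAzureAPIVersion = "2024-12-01-preview"

// azureAPIVersion returns the version from AZURE_DEFAULT_API_VERSION,
// falling back to the built-in default when the variable is unset.
func azureAPIVersion() string {
	if v := os.Getenv("AZURE_DEFAULT_API_VERSION"); v != "" {
		return v
	}
	return defaultAzureAPIVersion
}

func main() {
	// The relay would append this as the api-version query parameter.
	fmt.Printf("https://example.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=%s\n",
		azureAPIVersion())
}
```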
- Add `RecodeModelName` to `RelayInfo` struct for more flexible model name tracking
- Update text relay and quota consumption to use `RecodeModelName`
- Move reasoning effort from admin info to other info in log generation
- Ensure consistent model name handling across relay components
- Add `ReasoningEffort` field to `RelayInfo` struct
- Update log generation to include reasoning effort in admin info
- Modify logs table component to display reasoning effort when available
- Preserve reasoning effort information during request processing
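A trimmed sketch of how the two new `RelayInfo` fields could be carried into log generation, assuming simple string fields and a hypothetical `buildOtherInfo` helper; the real struct and log builder contain many more fields.

```go
package main

import "fmt"

// RelayInfo here is a trimmed illustration of the fields mentioned above.
type RelayInfo struct {
	OriginModelName string // model name as sent by the client
	RecodeModelName string // model name used for logging and quota consumption
	ReasoningEffort string // "high", "medium", or "low" when set via suffix
}

// buildOtherInfo sketches attaching reasoning effort to the "other" section
// of a log entry rather than the admin section.
func buildOtherInfo(info *RelayInfo) map[string]any {
	other := map[string]any{}
	if info.ReasoningEffort != "" {
		other["reasoning_effort"] = info.ReasoningEffort
	}
	return other
}

func main() {
	info := &RelayInfo{
		OriginModelName: "o3-mini-high",
		RecodeModelName: "o3-mini-high",
		ReasoningEffort: "high",
	}
	fmt.Println(buildOtherInfo(info)) // map[reasoning_effort:high]
}
```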
- Remove model name suffixes after extracting reasoning effort
- Update upstream model name to reflect the base model
- Ensure clean model name is passed to the upstream service
- Modify model suffix parsing to use hyphen-separated suffixes
- Ensure consistent parsing of `-high`, `-medium`, and `-low` reasoning effort indicators
- Support setting reasoning effort via model name suffix
- Add `-high`, `-medium`, and `-low` suffixes to control reasoning effort
- Update README with new model configuration option
- Modify OpenAI adaptor to handle reasoning effort settings
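A minimal sketch of the hyphen-separated suffix parsing described above: `-high`, `-medium`, and `-low` select the reasoning effort and are stripped so the upstream service receives the base model name. The function name is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// parseReasoningSuffix strips a trailing "-high", "-medium", or "-low"
// and returns the base model name plus the extracted effort level.
func parseReasoningSuffix(model string) (baseModel, effort string) {
	for _, e := range []string{"high", "medium", "low"} {
		if strings.HasSuffix(model, "-"+e) {
			return strings.TrimSuffix(model, "-"+e), e
		}
	}
	return model, ""
}

func main() {
	base, effort := parseReasoningSuffix("o3-mini-high")
	fmt.Println(base, effort) // o3-mini high
	// The OpenAI adaptor would then set the request's reasoning effort to
	// `effort` and send `base` as the upstream model name.
}
```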
The issue was caused by the `omitempty` tag on the Go struct field, which omitted the `temperature` field from the JSON output whenever it was explicitly set to 0.
Signed-off-by: Butui Hu <hot123tea123@gmail.com>
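A small demonstration of the `omitempty` behaviour behind this fix. The struct names are illustrative, and the pointer field is one common remedy; the commit's exact change may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// With a plain float64 and omitempty, temperature=0 is dropped from the
// serialized JSON; a pointer field keeps the explicit zero.
type requestWithValue struct {
	Temperature float64 `json:"temperature,omitempty"`
}

type requestWithPointer struct {
	Temperature *float64 `json:"temperature,omitempty"`
}

func main() {
	zero := 0.0

	a, _ := json.Marshal(requestWithValue{Temperature: 0})
	b, _ := json.Marshal(requestWithPointer{Temperature: &zero})

	fmt.Println(string(a)) // {}                -- the reported bug
	fmt.Println(string(b)) // {"temperature":0} -- explicit zero survives
}
```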
- Implemented conditional logic to double the streaming timeout for models starting with "o1" or "o3".
- Adjusted the streaming timeout configuration per model type so that long-running models are not cut off prematurely (see the sketch below).
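A minimal sketch of the conditional, assuming the configured timeout is passed in as a `time.Duration`; the helper name is illustrative.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// streamingTimeout doubles the configured streaming timeout for models whose
// names start with "o1" or "o3".
func streamingTimeout(model string, baseTimeout time.Duration) time.Duration {
	if strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") {
		return baseTimeout * 2
	}
	return baseTimeout
}

func main() {
	fmt.Println(streamingTimeout("gpt-4o", 300*time.Second))  // 5m0s
	fmt.Println(streamingTimeout("o1-mini", 300*time.Second)) // 10m0s
}
```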
- Add file data DTO for structured file handling
- Implement file decoder service
- Update Claude and Gemini relay channels to handle various file types
- Reorganize worker service to cf_worker for clarity
- Update token counter and image service for new file types
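A guess at the shape of the new file data DTO, enough for the Claude and Gemini channels to decide how to forward a file; the type and field names are hypothetical, not the project's actual definition.

```go
package main

import "fmt"

// FileData is an illustrative DTO for structured file handling.
type FileData struct {
	MimeType string // e.g. "application/pdf", "image/png"
	Data     string // base64-encoded payload or a fetchable URL
	IsURL    bool   // true when Data is a URL rather than inline bytes
}

func main() {
	f := FileData{MimeType: "application/pdf", Data: "https://example.com/doc.pdf", IsURL: true}
	fmt.Printf("%+v\n", f)
}
```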
- Updated the GeminiChatHandler function to accept an additional parameter, RelayInfo, allowing for better context handling during chat operations.
- Modified the DoResponse method in the Adaptor to pass RelayInfo to GeminiChatHandler, ensuring consistent usage of upstream model information.
- Enhanced the GeminiChatStreamHandler to use the upstream model name from RelayInfo, so streamed Gemini responses report the correct model (see the sketch below).
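A sketch of the call-site change under simplified signatures; the real methods carry gin contexts, HTTP responses, and usage accounting.

```go
package main

import "fmt"

// Trimmed stand-ins for the project's types.
type RelayInfo struct {
	UpstreamModelName string
}

type Adaptor struct{}

// GeminiChatHandler now receives RelayInfo so the response it builds can
// report the upstream model name directly.
func GeminiChatHandler(info *RelayInfo, body []byte) error {
	fmt.Println("building response for model:", info.UpstreamModelName)
	return nil
}

// DoResponse passes RelayInfo straight through instead of a bare model string.
func (a *Adaptor) DoResponse(info *RelayInfo, body []byte) error {
	return GeminiChatHandler(info, body)
}

func main() {
	_ = (&Adaptor{}).DoResponse(&RelayInfo{UpstreamModelName: "gemini-2.0-flash"}, nil)
}
```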
- Removed the unused `context` import and the logging of `geminiRequest` from the `CovertGemini2OpenAI` function.
- This cleanup drops elements that did not contribute to functionality, improving maintainability and avoiding unnecessary logging overhead.
- Introduced a new `FunctionResponse` type to encapsulate function call responses, improving the clarity of data handling.
- Updated the `GeminiPart` struct to include the new `FunctionResponse` field, allowing for better representation of function call results in Gemini requests.
- Modified the `CovertGemini2OpenAI` function to handle tool calls more effectively by setting the message role and appending function responses to the Gemini parts, enhancing the integration with OpenAI and Gemini systems.
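Trimmed versions of the request parts described above, assuming the Gemini `functionResponse` payload carries a name and a response object; the real structs include additional fields (text, inline data, function calls, ...).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FunctionResponse encapsulates a function call result.
type FunctionResponse struct {
	Name     string         `json:"name"`
	Response map[string]any `json:"response"`
}

// GeminiPart now carries an optional FunctionResponse alongside text.
type GeminiPart struct {
	Text             string            `json:"text,omitempty"`
	FunctionResponse *FunctionResponse `json:"functionResponse,omitempty"`
}

func main() {
	// A tool result from the OpenAI side becomes a functionResponse part.
	part := GeminiPart{FunctionResponse: &FunctionResponse{
		Name:     "get_weather",
		Response: map[string]any{"content": "22°C, sunny"},
	}}
	out, _ := json.Marshal(part)
	fmt.Println(string(out))
}
```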
- Changed the type of ToolCalls in the Message struct from `any` to `json.RawMessage` for better type safety and clarity.
- Introduced ParseToolCalls and SetToolCalls methods to handle ToolCalls more effectively, improving code readability and maintainability.
- Updated the ParseContent method to work with the new MediaContent type instead of MediaMessage, enhancing the structure of content parsing.
- Refactored Gemini relay functions to utilize the new ToolCalls handling methods, streamlining the integration with OpenAI and Gemini systems.
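A trimmed illustration of the type change: ToolCalls is stored as raw JSON and decoded on demand. `ToolCallRequest` and the exact method signatures are stand-ins for the project's actual definitions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolCallRequest is an illustrative typed view of a tool call.
type ToolCallRequest struct {
	ID       string `json:"id"`
	Type     string `json:"type"`
	Function struct {
		Name      string `json:"name"`
		Arguments string `json:"arguments"`
	} `json:"function"`
}

// Message stores tool calls as raw JSON for type safety at the boundary.
type Message struct {
	Role      string          `json:"role"`
	ToolCalls json.RawMessage `json:"tool_calls,omitempty"`
}

// ParseToolCalls decodes the raw JSON into typed tool calls.
func (m *Message) ParseToolCalls() ([]ToolCallRequest, error) {
	if len(m.ToolCalls) == 0 {
		return nil, nil
	}
	var calls []ToolCallRequest
	err := json.Unmarshal(m.ToolCalls, &calls)
	return calls, err
}

// SetToolCalls encodes typed tool calls back into the raw field.
func (m *Message) SetToolCalls(calls []ToolCallRequest) error {
	raw, err := json.Marshal(calls)
	if err != nil {
		return err
	}
	m.ToolCalls = raw
	return nil
}

func main() {
	m := Message{
		Role:      "assistant",
		ToolCalls: json.RawMessage(`[{"id":"call_1","type":"function","function":{"name":"get_weather","arguments":"{}"}}]`),
	}
	calls, _ := m.ParseToolCalls()
	fmt.Println(len(calls), calls[0].Function.Name) // 1 get_weather
}
```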
- Removed redundant checks for non-empty properties in function parameters.
- Set function parameters to nil when no properties are needed, streamlining the logic for handling Gemini requests.
- Improved code clarity and maintainability by eliminating unnecessary complexity.
- Added logic to ensure that function parameters have non-empty properties.
- Implemented checks to add a default empty property if no parameters are needed.
- Updated the required field to match existing properties, improving the robustness of the Gemini function integration.
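A sketch of the workaround described above: if a tool takes no arguments, give the parameter schema a placeholder property so the declaration is accepted, and keep `required` limited to properties that actually exist. The property name, field names, and the "filter required" reading are assumptions; the real code may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FunctionParameters is an illustrative JSON-schema-like parameter block.
type FunctionParameters struct {
	Type       string         `json:"type"`
	Properties map[string]any `json:"properties"`
	Required   []string       `json:"required,omitempty"`
}

// ensureNonEmptyProperties adds a default property when none are defined and
// drops required entries that no longer match an existing property.
func ensureNonEmptyProperties(p *FunctionParameters) {
	if len(p.Properties) == 0 {
		p.Properties = map[string]any{
			"placeholder": map[string]any{"type": "string"},
		}
	}
	var required []string
	for _, name := range p.Required {
		if _, ok := p.Properties[name]; ok {
			required = append(required, name)
		}
	}
	p.Required = required
}

func main() {
	params := FunctionParameters{Type: "object", Required: []string{"missing"}}
	ensureNonEmptyProperties(&params)
	out, _ := json.Marshal(params)
	fmt.Println(string(out)) // {"type":"object","properties":{"placeholder":{"type":"string"}}}
}
```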
- Updated `CovertGemini2OpenAI` function to return an error alongside the GeminiChatRequest, improving error reporting for image processing.
- Modified `ConvertRequest` methods in both `adaptor.go` files to handle potential errors from the Gemini conversion, ensuring robust request handling.
- Improved clarity and maintainability of the code by explicitly managing error cases during request conversion.
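An illustration of the error-propagation change under heavily simplified stub types; the real signatures and failure conditions differ.

```go
package main

import (
	"errors"
	"fmt"
)

// Stub types standing in for the project's request structs.
type GeneralOpenAIRequest struct{ ImageURL string }
type GeminiChatRequest struct{}

// CovertGemini2OpenAI now reports conversion failures (for example, an image
// that cannot be fetched) instead of returning a partially built request.
func CovertGemini2OpenAI(req GeneralOpenAIRequest) (*GeminiChatRequest, error) {
	if req.ImageURL == "bad://url" {
		return nil, errors.New("failed to fetch image")
	}
	return &GeminiChatRequest{}, nil
}

// ConvertRequest in the adaptor propagates the error to the caller.
func ConvertRequest(req GeneralOpenAIRequest) (any, error) {
	geminiReq, err := CovertGemini2OpenAI(req)
	if err != nil {
		return nil, fmt.Errorf("gemini conversion failed: %w", err)
	}
	return geminiReq, nil
}

func main() {
	_, err := ConvertRequest(GeneralOpenAIRequest{ImageURL: "bad://url"})
	fmt.Println(err)
}
```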
- Introduced `GEMINI_VISION_MAX_IMAGE_NUM` and documented it in the README files for better user guidance.
- Updated `env.go` to retrieve the maximum image number from environment variables, defaulting to 16.
- Modified image handling logic in `relay-gemini.go` to respect the new configuration, allowing disabling of the limit by setting it to -1.
- Removed hardcoded constant for maximum image number in `constant.go` to streamline configuration management.
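A minimal sketch of the lookup and limit check, assuming a plain `os.Getenv` read; the helper names are illustrative, not the project's actual `env.go` functions.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// geminiVisionMaxImageNum reads GEMINI_VISION_MAX_IMAGE_NUM, defaulting to 16
// when the variable is unset or invalid. A value of -1 disables the limit.
func geminiVisionMaxImageNum() int {
	if v := os.Getenv("GEMINI_VISION_MAX_IMAGE_NUM"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return 16
}

// withinImageLimit shows how the relay could apply the setting.
func withinImageLimit(imageCount int) bool {
	limit := geminiVisionMaxImageNum()
	return limit == -1 || imageCount <= limit
}

func main() {
	os.Setenv("GEMINI_VISION_MAX_IMAGE_NUM", "-1")
	fmt.Println(withinImageLimit(100)) // true: limit disabled
}
```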
- Included additional versions: "gemini-2.0-flash-thinking-exp" and "gemini-2.0-flash-thinking-exp-1219".
- Added comments to categorize versions as old, experimental, and flash experimental for better clarity.
- Simplified conditional checks in UpdateChannelStatusById function in channel.go to enhance readability.
- Commented out unused image number check in relay-gemini.go for clarity.
- Updated en.json for currency consistency, changing the translation of "元" from "RMB/CNY" to "CNY", and adding a space in "实付金额:" for formatting.