This commit refactors the application's error handling mechanism by introducing a new standardized error type, `types.NewAPIError`. It also renames common JSON utility functions for better clarity.
Previously, internal error handling was tightly coupled to the `dto.OpenAIError` format. This change decouples the internal logic from the external API representation.
Key changes:
- A new `types.NewAPIError` struct is introduced to serve as a canonical internal representation for all API errors.
- All relay adapters (OpenAI, Claude, Gemini, etc.) are updated to return `*types.NewAPIError`.
- Controllers now convert the internal `NewAPIError` to the client-facing `OpenAIError` format at the API boundary, ensuring backward compatibility (a minimal sketch of this conversion follows the list).
- Channel auto-disable/enable logic is updated to use the new standardized error type.
- JSON utility functions are renamed to align with Go's standard library conventions (e.g., `UnmarshalJson` -> `Unmarshal`, `EncodeJson` -> `Marshal`).
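To make the boundary conversion concrete, here is a minimal sketch in Go; the field names on both structs are assumptions, since the commit message does not include the actual definitions:

```go
// Minimal sketch of the boundary conversion; field names on both structs
// are assumptions, as the commit does not show the real definitions.
package main

import "net/http"

// NewAPIError is the canonical internal error representation.
type NewAPIError struct {
	Err        error  // underlying cause
	ErrorType  string // provider-agnostic classification, e.g. "invalid_request_error"
	StatusCode int    // HTTP status to surface at the boundary
}

func (e *NewAPIError) Error() string { return e.Err.Error() }

// OpenAIError mirrors the client-facing dto.OpenAIError shape (assumed fields).
type OpenAIError struct {
	Message string `json:"message"`
	Type    string `json:"type"`
	Code    string `json:"code,omitempty"`
}

// ToOpenAIError converts the internal error at the API boundary,
// keeping the external representation backward compatible.
func (e *NewAPIError) ToOpenAIError() OpenAIError {
	return OpenAIError{
		Message: e.Err.Error(),
		Type:    e.ErrorType,
		Code:    http.StatusText(e.StatusCode),
	}
}
```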
* backend
- constant/endpoint_type.go
• Add EndpointTypeMidjourney, EndpointTypeSuno, EndpointTypeKling, EndpointTypeJimeng.
- common/endpoint_type.go
• Map Midjourney / MidjourneyPlus, SunoAPI, Kling, Jimeng channel types to the new endpoint types (a sketch of this mapping follows the summary below).
* frontend
- ModelPricing.js
• Add “Supported Endpoint Type” column.
• Implement renderSupportedEndpoints with `stringToColor` for consistent tag colors.
These changes allow `/api/pricing` and model lists to return accurate `supported_endpoint_types` covering all non-OpenAI providers and display them clearly in the UI.
No breaking changes.
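For illustration, a minimal sketch of the channel-type to endpoint-type mapping described above; the channel-type IDs are placeholders, not the repository's real values:

```go
// Sketch of the mapping added in common/endpoint_type.go; channel-type IDs
// below are placeholders, the real constants live in the repository.
package main

// EndpointType mirrors constant/endpoint_type.go; the values added in this
// change are Midjourney, Suno, Kling, and Jimeng.
type EndpointType string

const (
	EndpointTypeMidjourney EndpointType = "midjourney"
	EndpointTypeSuno       EndpointType = "suno"
	EndpointTypeKling      EndpointType = "kling"
	EndpointTypeJimeng     EndpointType = "jimeng"
)

// Placeholder channel-type IDs for the purposes of this sketch.
const (
	ChannelTypeMidjourney     = 100
	ChannelTypeMidjourneyPlus = 101
	ChannelTypeSunoAPI        = 102
	ChannelTypeKling          = 103
	ChannelTypeJimeng         = 104
)

// endpointTypesForChannel returns the endpoint types a channel supports,
// which feeds the supported_endpoint_types field in /api/pricing.
func endpointTypesForChannel(channelType int) []EndpointType {
	switch channelType {
	case ChannelTypeMidjourney, ChannelTypeMidjourneyPlus:
		return []EndpointType{EndpointTypeMidjourney}
	case ChannelTypeSunoAPI:
		return []EndpointType{EndpointTypeSuno}
	case ChannelTypeKling:
		return []EndpointType{EndpointTypeKling}
	case ChannelTypeJimeng:
		return []EndpointType{EndpointTypeJimeng}
	default:
		return nil
	}
}
```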
This update replaces instances of DecodeJson and DecodeJsonStr with UnmarshalJson and UnmarshalJsonStr in various relay handlers, making JSON processing consistent across the codebase and aligning the names with the recent refactoring efforts.
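Assuming the renamed helpers are thin wrappers over the standard library (the actual implementations may differ), they would look roughly like this:

```go
// Sketch of the renamed helpers, assumed to be thin wrappers over
// encoding/json; the real implementations in common may differ.
package common

import "encoding/json"

// UnmarshalJson decodes raw bytes into v (formerly DecodeJson).
func UnmarshalJson(data []byte, v any) error {
	return json.Unmarshal(data, v)
}

// UnmarshalJsonStr decodes a JSON string into v (formerly DecodeJsonStr).
func UnmarshalJsonStr(data string, v any) error {
	return json.Unmarshal([]byte(data), v)
}
```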
This update adds the IOCopyBytesGracefully function to the common package, simplifying how response bodies are copied in the OpenAI handlers. It encapsulates the logic for setting headers and writing response data, improving error handling and ensuring proper resource management. The OpenAI handlers have been refactored to use this new function, improving code clarity and maintainability.
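A sketch of what IOCopyBytesGracefully might look like; the real signature in the common package may differ (for instance, the project's handlers use gin, so it may accept a *gin.Context rather than a raw http.ResponseWriter):

```go
// Hypothetical shape of IOCopyBytesGracefully; the actual function in the
// common package may take different parameters.
package common

import (
	"fmt"
	"net/http"
)

// IOCopyBytesGracefully copies an upstream response body to the client,
// propagating upstream headers and the status code, and wrapping any write
// error so callers can handle it uniformly.
func IOCopyBytesGracefully(w http.ResponseWriter, resp *http.Response, body []byte) error {
	// Propagate upstream headers before writing the status line.
	for k, vs := range resp.Header {
		for _, v := range vs {
			w.Header().Add(k, v)
		}
	}
	// The body may have been transformed, so recompute Content-Length.
	w.Header().Set("Content-Length", fmt.Sprint(len(body)))
	w.WriteHeader(resp.StatusCode)

	if _, err := w.Write(body); err != nil {
		return fmt.Errorf("failed to write response body: %w", err)
	}
	return nil
}
```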
Reason: Steps 1 and 3 in the original redisRateLimitHandler method were not atomic, leading to poor precision under highly concurrent requests. For example, with a rate limit of 60, sending 200 concurrent requests could result in none being blocked, whereas roughly 140 should have been intercepted.
Solution: I chose not to merge steps 1 and 3 into a single Lua script, because one atomic operation spanning read, write, and delete steps could become a performance bottleneck under high concurrency. Instead, I implemented a token bucket algorithm, which reduces the atomic section to a single read-and-write step and significantly decreases the memory footprint.
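A minimal sketch of such a Redis token bucket, using go-redis and a short Lua script whose critical section is only a read and a write (no deletes); the key layout, function names, and parameters here are assumptions, not the repository's actual code:

```go
// Sketch of a Redis token-bucket limiter; key layout and names are
// assumptions for illustration, not the repository's actual code.
package limiter

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// tokenBucketScript refills the bucket based on elapsed time, then tries to
// take one token. Each key stores only two fields (tokens, last timestamp),
// keeping the memory footprint small compared to per-request records.
var tokenBucketScript = redis.NewScript(`
local key      = KEYS[1]
local capacity = tonumber(ARGV[1]) -- max tokens, i.e. the rate limit
local rate     = tonumber(ARGV[2]) -- tokens refilled per second
local now      = tonumber(ARGV[3]) -- current time in seconds

local bucket = redis.call('HMGET', key, 'tokens', 'ts')
local tokens = tonumber(bucket[1]) or capacity
local ts     = tonumber(bucket[2]) or now

-- Refill proportionally to elapsed time, capped at capacity.
tokens = math.min(capacity, tokens + (now - ts) * rate)

local allowed = 0
if tokens >= 1 then
    tokens = tokens - 1
    allowed = 1
end

redis.call('HMSET', key, 'tokens', tokens, 'ts', now)
redis.call('EXPIRE', key, math.ceil(capacity / rate) * 2)
return allowed
`)

// Allow reports whether one request may proceed under a limit of
// `limit` requests per minute for the given key.
func Allow(ctx context.Context, rdb *redis.Client, key string, limit int) (bool, error) {
	rate := float64(limit) / 60.0 // per-minute limit -> per-second refill
	res, err := tokenBucketScript.Run(ctx, rdb, []string{key},
		limit, rate, time.Now().Unix()).Int()
	if err != nil {
		return false, err
	}
	return res == 1, nil
}
```

Because the whole script runs atomically in Redis, the read-modify-write on the bucket cannot interleave across concurrent requests, which is what restores precision under load.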