A 10KB truncation limit is too aggressive for modern LLM API requests,
where conversation context routinely exceeds 1MB. Truncating at 10KB
leaves error logs with only a minimal placeholder, making it impossible
to debug upstream failures.
256KB retains enough context for effective debugging while the existing
multi-pass trimming logic handles larger payloads gracefully.
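For illustration, the multi-pass approach referenced above might look
roughly like this sketch (the names `trim_for_logging`, `MAX_LOG_BYTES`,
the `messages` payload shape, and the pass order are all assumptions for
the example, not the actual implementation):

```python
import json

MAX_LOG_BYTES = 256 * 1024  # raised from the former 10KB cap

def trim_for_logging(payload: dict, limit: int = MAX_LOG_BYTES) -> dict:
    """Shrink a request payload for error logging via successive passes,
    each discarding less-useful context, until it fits within `limit`."""
    def size(p: dict) -> int:
        return len(json.dumps(p).encode("utf-8"))

    if size(payload) <= limit:
        return payload

    trimmed = dict(payload)
    messages = list(trimmed.get("messages", []))

    # Pass 1: drop the oldest messages, keeping the most recent context.
    while len(messages) > 1 and size({**trimmed, "messages": messages}) > limit:
        messages.pop(0)
    trimmed["messages"] = messages

    # Pass 2: truncate the content of the remaining messages.
    if size(trimmed) > limit:
        budget = max(limit // (2 * max(len(messages), 1)), 256)
        trimmed["messages"] = [
            {**m, "content": str(m.get("content", ""))[:budget] + "…[truncated]"}
            for m in messages
        ]

    # Final fallback: the minimal placeholder the old 10KB cap produced
    # almost every time.
    if size(trimmed) > limit:
        trimmed = {"truncated": True, "original_bytes": size(payload)}
    return trimmed
```

Under the 256KB limit, pass 1 or 2 usually succeeds, so the placeholder
fallback becomes rare rather than the common case.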
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>