Replace the custom TestLLM Effect service with the real LLM layer +
TestLLMServer HTTP mock. Tests now exercise the full HTTP→SSE→AI SDK
pipeline instead of injecting Effect streams directly.
- Extend TestLLMServer with usage support on text responses and an
  httpError step type for non-200 responses
- Drop the reasoning test (reasoning events can't be produced via the
  @ai-sdk/openai-compatible SSE path)
- 9 tests pass, covering: text capture, token overflow, error retry,
structured errors, context overflow, abort/interrupt cleanup
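For reference, a minimal sketch of the SSE body shape an OpenAI-compatible mock like TestLLMServer might emit for a text response with usage: a content delta chunk, a final usage-bearing chunk (as sent when stream_options.include_usage is set), then the [DONE] sentinel. The names here (sseTextResponse, Usage) are illustrative, not the actual TestLLMServer API.

```typescript
// Illustrative only: shape of an OpenAI-compatible SSE text response
// with a trailing usage chunk, as a mock server might serve it.

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function sseTextResponse(text: string, usage: Usage): string {
  const id = "chatcmpl-test"; // hypothetical fixed id for the mock
  const event = (payload: object) =>
    "data: " + JSON.stringify(payload) + "\n\n";

  return (
    // Content delta chunk
    event({
      id,
      object: "chat.completion.chunk",
      choices: [
        { index: 0, delta: { role: "assistant", content: text }, finish_reason: null },
      ],
    }) +
    // Final chunk: empty choices, carries token usage
    event({ id, object: "chat.completion.chunk", choices: [], usage }) +
    // Stream terminator
    "data: [DONE]\n\n"
  );
}

const body = sseTextResponse("hello", {
  prompt_tokens: 3,
  completion_tokens: 1,
  total_tokens: 4,
});
console.log(body.includes('"usage"') && body.endsWith("data: [DONE]\n\n"));
// → true
```

An httpError step, by contrast, would short-circuit before any SSE framing and return a plain non-200 response body.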