fix(opencode): use openai-compatible SDK for e2e LLM mock routing
The e2eURL() path used createOpenAI().responses(), which produces Responses API requests; the mock server's SSE stream cannot correctly convey tool calls in that format. Switch to createOpenAICompatible().chatModel(), which uses the chat completions format, matching what the mock was designed for and what the integration tests use. This only affects the OPENCODE_E2E_LLM_URL code path (e2e tests only).
parent 9a87b785e6
commit d603d9da65
@@ -1458,11 +1458,11 @@ export namespace Provider {
     return yield* Effect.promise(async () => {
       const url = e2eURL()
       if (url) {
-        const language = createOpenAI({
+        const language = createOpenAICompatible({
           name: model.providerID,
           apiKey: "test-key",
           baseURL: url,
-        }).responses(model.api.id)
+        }).chatModel(model.api.id)
         s.models.set(key, language)
         return language
       }