## Summary

The README and `.env.example` both document `LLM_PROVIDER=ollama` as a valid option, but setting it causes an immediate crash at startup. The Ollama provider is referenced in the config but never implemented in the LLM client layer.
## Steps to reproduce

- Clone the repo and copy `.env.example` to `.env`
- Set `LLM_PROVIDER=ollama` and `OLLAMA_BASE_URL=http://localhost:11434`
- Run `docker compose up`
## Error

```
ValueError: Unsupported LLM provider: ollama
  File "/app/finbot/core/llm/client.py", line 35, in _get_client
    raise ValueError(f"Unsupported LLM provider: {self.provider}")
```
## Root cause

Two separate issues compound each other:

- `finbot/core/llm/client.py` has no `ollama` branch — only `openai` is handled
- Even when worked around via `LLM_PROVIDER=openai` + `OPENAI_BASE_URL`, `finbot/agents/chat.py` and `finbot/core/llm/openai_client.py` use the OpenAI Responses API (`client.responses.create()`), which Ollama does not implement. Ollama only supports Chat Completions (`/v1/chat/completions`).
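To make the mismatch concrete, here is a sketch of the two request shapes as plain dicts (no SDK); the field names follow the public OpenAI API docs, not FinBot's code, so treat them as illustrative:

```python
# Responses API request (what FinBot currently sends) — POST /v1/responses.
# Ollama's OpenAI-compatible server does not expose this endpoint.
responses_request = {
    "model": "qwen3.5:9b",
    "input": [{"role": "user", "content": "hello"}],
}

# Chat Completions request (what Ollama does implement) — POST /v1/chat/completions
chat_request = {
    "model": "qwen3.5:9b",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

# The payloads are not interchangeable: different endpoint, different
# top-level key ("input" vs "messages"), different streaming event format.
assert "input" in responses_request and "messages" not in responses_request
assert "messages" in chat_request
```

So even with the provider routing fixed, every `responses.create()` call site has to be translated, not just repointed.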
## Impact
Anyone wanting to run FinBot locally without an OpenAI API key cannot use the
platform at all, despite the README advertising local Ollama support.
## Proposed fix

- Route `LLM_PROVIDER=ollama` through the OpenAI-compatible client with `base_url` set to `OLLAMA_BASE_URL/v1`
- Rewrite the two `responses.create()` call sites to use `chat.completions.create()` with streaming delta handling
- Remap Responses API message formats (`function_call`, `function_call_output`, the `developer` role) to their Chat Completions equivalents
- Document the working `.env` setup for Ollama in the README
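The remapping bullet is the fiddly part, so here is a minimal sketch of what I have in mind. The function name and the exact item shapes are my assumptions, based on the public Responses API item format rather than FinBot's internal types:

```python
def to_chat_messages(items):
    """Convert Responses-API-style input items to Chat Completions messages.

    Sketch only: item field names (call_id, name, arguments, output) follow
    the documented Responses API item format, not FinBot's code.
    """
    messages = []
    for item in items:
        kind = item.get("type", "message")
        if kind == "function_call":
            # Responses tool call -> assistant message carrying tool_calls
            messages.append({
                "role": "assistant",
                "content": None,
                "tool_calls": [{
                    "id": item["call_id"],
                    "type": "function",
                    "function": {
                        "name": item["name"],
                        "arguments": item["arguments"],
                    },
                }],
            })
        elif kind == "function_call_output":
            # Responses tool result -> "tool" role message
            messages.append({
                "role": "tool",
                "tool_call_id": item["call_id"],
                "content": item["output"],
            })
        else:
            role = item["role"]
            if role == "developer":
                # Chat Completions has no developer role; system is closest
                role = "system"
            messages.append({"role": role, "content": item["content"]})
    return messages
```

With something like this in place, the `ollama` branch can build Chat Completions payloads from the existing Responses-style history without touching the agent code above it.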
I have a working patch and can submit a PR if this approach is acceptable.
## Environment

- Ollama 0.x, WSL2 Ubuntu 24.04
- Tested with `qwen3.5:9b` as the target model
- All challenges functional after patching