title: "Add Ollama provider for local LLM generation"
labels: ["good first issue", "provider", "enhancement"]
## Summary

Add an Ollama provider so users can run devdocs-forge-agent entirely locally with open-weight LLMs (Llama 3, Mistral, Phi, etc.) without any API key or internet connection.
## Background

devdocs-forge-agent uses a provider abstraction (`src/providers/provider.types.ts`) that makes adding new providers straightforward. Each provider implements two fields (`name`, `model`) and one method (`generate(options)`). All existing providers use native `fetch` with no external SDK packages.
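The contract described above can be sketched as follows. This is a sketch, not the actual contents of `provider.types.ts`: only `name`, `model`, and `generate(options)` are named in this issue, and the `GenerateOptions` shape (a `prompt` field) is inferred from the implementation guide below. The toy `EchoProvider` exists only to show the contract in use.

```typescript
// Sketch of the Provider contract (field/method names from this issue;
// GenerateOptions is assumed to carry at least the prompt string).
interface GenerateOptions {
  prompt: string;
}

interface Provider {
  readonly name: string;
  readonly model: string;
  generate(options: GenerateOptions): Promise<string>;
}

// A trivial in-memory provider, only to illustrate the shape a new
// provider (like Ollama) has to fill in.
class EchoProvider implements Provider {
  readonly name = 'echo';
  readonly model = 'none';
  async generate(options: GenerateOptions): Promise<string> {
    return `echo: ${options.prompt}`;
  }
}
```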
## Files to Touch

| File | Change |
| --- | --- |
| `src/providers/ollama.provider.ts` | Create: implement the `Provider` interface |
| `src/providers/provider-registry.ts` | Add `'ollama'` case to the `getProvider()` switch |
| `.env.example` | Add `OLLAMA_BASE_URL` and `OLLAMA_MODEL` |
| `docs/PROVIDERS.md` | Document Ollama setup |
| `tests/provider-registry.test.ts` | Add test for `DEVDOCS_PROVIDER=ollama` |
## Implementation Guide
- Create `src/providers/ollama.provider.ts`:

  ```ts
  import type { Provider, GenerateOptions } from './provider.types.js';
  import { DocuForgeError } from '../utils/errors.js';

  export class OllamaProvider implements Provider {
    readonly name = 'ollama';
    readonly model: string;
    private readonly baseUrl: string;

    constructor() {
      this.baseUrl = process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434';
      this.model = process.env.OLLAMA_MODEL ?? 'llama3';
    }

    async generate(options: GenerateOptions): Promise<string> {
      // A rejected fetch means the Ollama daemon is unreachable.
      const response = await fetch(`${this.baseUrl}/api/generate`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          model: this.model,
          prompt: options.prompt,
          stream: false,
        }),
      }).catch(() => {
        throw new DocuForgeError(
          `Could not connect to Ollama at ${this.baseUrl}`,
          'OLLAMA_UNREACHABLE',
          'Make sure Ollama is running: https://ollama.ai',
        );
      });
      if (!response.ok) {
        throw new DocuForgeError(`Ollama error: ${response.statusText}`, 'OLLAMA_ERROR');
      }
      const data = await response.json() as { response: string };
      return data.response;
    }
  }
  ```
- Register in `src/providers/provider-registry.ts` (remember to import the new class at the top of the file):

  ```ts
  import { OllamaProvider } from './ollama.provider.js';

  // inside getProvider()'s switch:
  case 'ollama': return new OllamaProvider();
  ```
- Add to `.env.example`:

  ```bash
  # --- Ollama (local LLMs, no API key needed) ---
  # DEVDOCS_PROVIDER=ollama
  # OLLAMA_BASE_URL=http://localhost:11434
  # OLLAMA_MODEL=llama3
  ```
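The `tests/provider-registry.test.ts` change can be sketched roughly as follows. To keep the snippet self-contained, `getProvider` and `OllamaProvider` are stubbed inline; the real test should import them from `src/providers/` and use whatever runner and assertion style the existing 54 tests already use (this issue does not specify which).

```typescript
// Self-contained sketch: these stubs stand in for the real imports
// from src/providers/ in the actual test file.
class OllamaProvider {
  readonly name = 'ollama';
  readonly model = process.env.OLLAMA_MODEL ?? 'llama3';
}

// Hypothetical mirror of getProvider()'s switch, showing only the new case.
function getProvider(name: string): { name: string; model: string } {
  switch (name) {
    case 'ollama':
      return new OllamaProvider();
    default:
      throw new Error(`Unknown provider: ${name}`);
  }
}

// The behavior the new test should pin down: DEVDOCS_PROVIDER=ollama
// yields the Ollama provider, with the model defaulting when unset.
delete process.env.OLLAMA_MODEL;
process.env.DEVDOCS_PROVIDER = 'ollama';
const provider = getProvider(process.env.DEVDOCS_PROVIDER);
```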
## Acceptance Criteria

- `DEVDOCS_PROVIDER=ollama npm run generate -- --file ...` works when Ollama is running
- `OLLAMA_BASE_URL` and `OLLAMA_MODEL` are read from the environment
- `npm run doctor` does not fail if Ollama is not set up
- `docs/PROVIDERS.md` updated with Ollama setup instructions
- `.env.example` updated
## Difficulty
Low — ~80 lines of new code, no external packages needed.
## How to Get Started

```bash
git clone https://github.com/AnkitParekh007/devdocs-forge-agent.git
cd devdocs-forge-agent
npm install
npm test   # all 54 tests should pass before you start
```
Questions? Comment on this issue before opening a PR.