Improve summarization model output quality#16

Open
jonocodes wants to merge 1 commit into main from claude/improve-summarization-quality-OE8QV
Conversation

@jonocodes

Adds support for LLM-based summarization via Ollama, configurable as an alternative to the existing extractive (LexRank) summarizer. Users can toggle between backends with the STASHCAST_SUMMARIZER env var.

Features:

  • New STASHCAST_SUMMARIZER setting: 'extractive' (default) or 'ollama'
  • Ollama config: STASHCAST_OLLAMA_HOST, STASHCAST_OLLAMA_MODEL
  • Default model: qwen2.5:1.5b (good quality, 128K context, CPU-friendly)
  • Automatic model availability detection with helpful error messages
  • Status indicator in admin sidebar showing summarizer state
  • New ./manage.py check_ollama command to verify setup
  • Summarization remains non-blocking (separate Huey task)
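To illustrate how the settings above might fit together, here is a minimal sketch of reading the backend toggle and Ollama options from the environment. The function name `resolve_summarizer_config` and the `http://localhost:11434` host fallback (Ollama's standard local port) are assumptions for illustration; only the variable names and defaults listed in this PR come from the source.

```python
import os

# The two backends named in the PR description.
VALID_BACKENDS = {"extractive", "ollama"}

def resolve_summarizer_config(env=os.environ):
    """Hypothetical helper: read the summarizer backend and Ollama
    settings from the environment, using the defaults this PR names
    ('extractive' backend, 'qwen2.5:1.5b' model)."""
    backend = env.get("STASHCAST_SUMMARIZER", "extractive")
    if backend not in VALID_BACKENDS:
        raise ValueError(
            f"STASHCAST_SUMMARIZER must be one of "
            f"{sorted(VALID_BACKENDS)}, got {backend!r}"
        )
    config = {"backend": backend}
    if backend == "ollama":
        # Host fallback below is an assumed value (Ollama's default port),
        # not taken from this PR.
        config["host"] = env.get("STASHCAST_OLLAMA_HOST", "http://localhost:11434")
        config["model"] = env.get("STASHCAST_OLLAMA_MODEL", "qwen2.5:1.5b")
    return config
```

With no env vars set, this falls back to the extractive backend, so existing deployments keep their current behavior.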
@jonocodes jonocodes marked this pull request as ready for review January 21, 2026 16:52