Hi @daaain — first off, huge thanks for claude-code-log. The SQLite cache it produces (claude-code-log-cache.db) turned out to be the perfect substrate for a project I've been building, and I wanted to share it here in case it's interesting to you or other users of this tool.
What I built
claude-code-conversation-analyzer — a dashboard that reads the cclog SQLite DB and layers on:
- Cost analysis — per-session and per-model cost breakdowns using current Anthropic pricing (input/output/cache-creation/cache-read tokens)
- Quality scoring — weighted scoring across cache usage, conversation length, query specificity, context utilization, and prompt efficiency, with letter grades (A–F)
- Compaction detection — flags cache drops >10k tokens as compaction events
- Better UI — Next.js 16 + React 19 dashboard with:
  - Project overview grid with grade distribution
  - Conversation list with sortable metrics
  - Expandable conversation cards with token charts (Recharts) and message transcripts
  - Full-text search across projects and sessions
  - Dark/light theme
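To give a feel for the cost-analysis piece, here's a minimal sketch of the per-session math across the four token categories. The rates below are placeholders I made up for illustration — check Anthropic's published pricing for real, per-model values:

```python
# Hypothetical per-million-token rates in USD. These are NOT Anthropic's
# actual prices -- substitute the current published rates per model.
PRICING = {
    "input": 3.00,
    "output": 15.00,
    "cache_creation": 3.75,
    "cache_read": 0.30,
}

def session_cost(tokens: dict[str, int]) -> float:
    """Sum cost across input/output/cache-creation/cache-read tokens
    for one session, at the placeholder rates above."""
    return sum(
        PRICING[kind] * tokens.get(kind, 0) / 1_000_000
        for kind in PRICING
    )

# Example session: 200k input, 50k output, 100k cache-creation, 1M cache-read
example = {
    "input": 200_000,
    "output": 50_000,
    "cache_creation": 100_000,
    "cache_read": 1_000_000,
}
# session_cost(example) -> 2.025 (USD, at the placeholder rates)
```

The heavy cache-read discount is why cache usage ends up weighted so strongly in the quality score — a session that re-reads context from cache can be an order of magnitude cheaper than one that resends it as fresh input.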
How it integrates with claude-code-log
The pipeline runs claude-code-log as the data-generation step, then augments the resulting SQLite DB with two extra tables (session_costs, session_quality). Nothing in the cclog DB itself is modified — the analyzer is purely additive. A make all target handles the full flow: clone/update cclog fork → regenerate transcripts → run analysis → serve dashboard.