A knowledge-grounded agent runtime. Agents that draw from a knowledge graph make better decisions.
Local-first. No cloud. No lock-in. Your conversations, notes, and wikilinks form a knowledge graph that agents draw from and contribute to — all as markdown files you own.
Early Development: APIs and storage formats may change. Contributions welcome!
Memory and knowledge are too fundamental to be an afterthought. Most AI tools treat conversations as disposable — Crucible makes them the foundation.
- Knowledge-grounded agents. Precognition auto-injects relevant context from your knowledge graph before each LLM turn. Block-level embeddings power semantic search at paragraph granularity. The more you use it, the smarter your agents get.
- Sessions are notes. Every chat saves as markdown in your kiln. Search them, link them with wikilinks, version them in git. Conversations become permanent, connectable knowledge.
- Neovim-like architecture. Lua/Fennel plugins, TUI-first, headless daemon with RPC. Most behaviors beyond the knowledge core can be scripted.
- Bring any LLM. Ollama, OpenAI, Anthropic, local GGUF models. Swap freely.
- Plaintext first. No proprietary formats. Files are the source of truth. The database is optional acceleration.
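The block-level search idea above can be pictured with a small sketch: split each note into paragraph blocks, embed every block, then rank blocks against the query. This is purely illustrative and not Crucible's implementation; real systems use learned embedding vectors, while dependency-free bag-of-words counts stand in here.

```python
import re
from collections import Counter
from math import sqrt

def blocks(note: str) -> list[str]:
    """Split a markdown note into paragraph-level blocks."""
    return [b.strip() for b in note.split("\n\n") if b.strip()]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-count vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(notes: dict[str, str], query: str, k: int = 3) -> list[tuple[float, str, str]]:
    """Rank individual blocks (not whole notes) against the query."""
    q = embed(query)
    scored = [
        (cosine(embed(block), q), name, block)
        for name, note in notes.items()
        for block in blocks(note)
    ]
    return sorted(scored, reverse=True)[:k]
```

Ranking at block rather than note granularity is what lets a long note contribute only its one relevant paragraph to the context window.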
| | Crucible | ChatGPT | Obsidian + AI | OpenClaw |
|---|---|---|---|---|
| Local-first | ✅ | ❌ | ✅ | ✅ |
| Sessions as markdown | ✅ | ❌ | ❌ | ❌ |
| Knowledge graph | ✅ | ❌ | ✅ | ❌ |
| Bring your own LLM | ✅ | ❌ | Partial | ✅ |
| Plugin system | ✅ Lua/Fennel | ❌ | ✅ JS | ✅ TS |
| MCP server | ✅ | ❌ | ❌ | ❌ |
| Semantic search | ✅ Block-level | ❌ | Plugin | ❌ |
| Setup time | ~2 min | 0 | ~5 min | 2-7 hrs |
Pre-built binaries (Linux x86_64/aarch64, macOS Apple Silicon):
```bash
curl -fsSL https://github.com/Mootikins/crucible/releases/latest/download/crucible-cli-installer.sh | sh
```

From source:

```bash
cargo install --git https://github.com/Mootikins/crucible.git crucible-cli
```

```bash
# Start a chat session
cru chat

# Chat with Claude Code, enriched by your knowledge base
cru chat -a claude

# Or start the MCP server for Claude/GPT integration
cru mcp
```

First run prompts for a kiln path and detects available LLM providers. A background daemon auto-spawns via `cru daemon serve` to manage session state, file watching, and multi-session support. It communicates over a Unix socket and restarts automatically if stopped.
In a chat session:
- Type naturally; the agent responds with access to your knowledge base
- `/search query` injects relevant notes into context
- `:model`, `:set`, `:export` for REPL commands
- `BackTab` cycles modes: Normal → Plan → Auto
Interactive conversations with full session persistence. The TUI supports streaming markdown, tool calls, and multi-turn context. Sessions save as markdown files organized by workspace.
Wikilinks ([[Note Name]]) define your graph. No extraction step, no special syntax beyond what you'd write naturally. Query by graph traversal, semantic similarity, tags, or full-text search.
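As a simplified sketch of what "wikilinks define your graph" means, the `[[Note Name]]` targets in a note are its outgoing edges, and incoming edges are found by scanning other notes. This is illustrative only, not Crucible's parser, and it ignores any extended link syntax:

```python
import re

# Matches [[Note Name]] and captures the target name.
WIKILINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def outlinks(markdown: str) -> list[str]:
    """Note names this note links to, in document order."""
    return WIKILINK.findall(markdown)

def inlinks(notes: dict[str, str], target: str) -> list[str]:
    """Names of notes that link to `target` (the backlinks)."""
    return [name for name, body in notes.items() if target in outlinks(body)]
```

Because the links are just text you would write anyway, the graph exists in the markdown itself; any index built from it can be thrown away and rebuilt.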
Expose your knowledge base to any MCP-compatible AI (Claude Desktop, Claude Code, GPT, local models):
```bash
cru mcp
```

Tools include `semantic_search`, `create_note`, `get_outlinks`, `get_inlinks`, and more.
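As an example, MCP clients that launch stdio servers (Claude Desktop does this through the `mcpServers` map in its `claude_desktop_config.json`) can register the server with an entry like the following; the server name `crucible` is arbitrary, and the exact file and schema depend on the client:

```json
{
  "mcpServers": {
    "crucible": {
      "command": "cru",
      "args": ["mcp"]
    }
  }
}
```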
Crucible can spawn and orchestrate external AI agents through the Agent Client Protocol (ACP). Your agent gets full access to Crucible's knowledge graph, semantic search, and tools.
```bash
# Use Claude Code with your knowledge base
cru chat -a claude

# Use OpenCode
cru chat -a opencode

# Use Gemini CLI
cru chat -a gemini
```

Built-in agents (auto-discovered if installed):
| Agent | Command | Install |
|---|---|---|
| opencode | `opencode` | `go install github.com/grafana/opencode@latest` |
| claude | `npx @zed-industries/claude-agent-acp` | `npm install -g @zed-industries/claude-agent-acp` |
| gemini | `gemini` | `npm install -g gemini-cli` |
| codex | `npx @zed-industries/codex-acp` | `npm install -g @zed-industries/codex-acp` |
| cursor | `cursor-acp` | `npm install -g cursor-acp` |
Agents can delegate tasks to each other. An ACP agent like Claude can hand off work to Cursor or OpenCode mid-conversation using the `delegate_session` tool, then incorporate the results. Delegation works in both directions: internal agents can delegate to ACP agents, and ACP agents can delegate to other ACP agents.
Custom profiles go in `crucible.toml`:

```toml
[acp.agents.my-claude]
extends = "claude"
env = { ANTHROPIC_BASE_URL = "http://localhost:4000" }
```

Then: `cru chat -a my-claude`
Drop `.lua` or `.fnl` files into `~/.config/crucible/plugins/` or your kiln's `plugins/` directory:

```lua
-- @tool name="summarize" description="Summarize notes matching query"
-- @param query string "Search query"
function summarize(args)
  local results = crucible.search(args.query)
  return { summary = "Found " .. #results .. " notes" }
end
```

See the plugin guide for the full API.
- Documentation Site — searchable, organized reference
- docs/ is both the user guide and a working example kiln (155 interlinked notes)
- AGENTS.md covers architecture and AI agent instructions
| Command | Alias | Description |
|---|---|---|
| `cru chat` | `c` | Interactive AI chat with session persistence |
| `cru chat -a <agent>` | | Use an ACP agent (claude, opencode, gemini, etc.) |
| `cru chat --resume <id>` | | Resume a previous session |
| `cru mcp` | | Start MCP server for external AI agents |
| `cru process` | `p` | Parse, enrich, and store markdown files |
| `cru init` | `i` | Initialize a new kiln (workspace) |
| `cru session create` | | Create a new session (add `--agent <profile>` for ACP) |
| `cru session list` | `s` | List sessions (live by default, `--all` includes persisted) |
| `cru session show <id>` | | Show session details (daemon first, file fallback) |
| `cru session open <id>` | | Open a previous session in the TUI |
| `cru session send <id> "msg"` | | Send a message and stream the response |
| `cru session configure <id>` | | Set agent backend (provider, model, endpoint) |
| `cru session pause <id>` | | Pause a running daemon session |
| `cru session resume <id>` | | Resume a paused daemon session |
| `cru session end <id>` | | End a daemon session |
| `cru session export <id>` | | Export session to markdown |
| `cru session search <q>` | | Search sessions by title |
| `cru set <id> key=val` | | Tweak runtime settings (model, temperature, etc.) |
| `cru stats` | | Display kiln statistics |
| `cru status` | | Storage status and metrics |
| `cru models` | | List available LLM models |
| `cru config init` | `cfg` | Initialize config file |
| `cru config show` | | Show effective configuration |
| `cru agents list` | | List registered agent cards |
| `cru skills list` | | List discovered agent skills |
| `cru tasks list` | | Manage tasks from TASKS.md |
| `cru daemon start` | | Start background daemon |
| `cru daemon status` | | Check daemon status |
| `cru storage verify` | | Verify content integrity |
| `cru auth login` | | Store LLM provider API key |
Run `cru <command> --help` for full options.
- TUI chat with session persistence and resume
- MCP server for external agents
- Lua/Fennel plugin system (17+ API modules)
- Block-level semantic search with reranking
- Precognition (auto-RAG before each turn)
- Daemon with auto-spawn, file watching, multi-session support
- Web chat interface
- ACP host mode (use Claude Code, Cursor, OpenCode through Crucible)
- ACP agent mode (embed Crucible in editors like Zed, JetBrains, Neovim)
MIT or Apache-2.0, at your option.

