This guide walks through connecting Baft's multi-agent analytical pipeline to Claude Desktop (macOS or Windows), so that a Claude chat session becomes the human interface (HI-A) to the ITP analytical system.
┌─────────────────────┐     MCP (stdio or HTTP)      ┌───────────────────┐
│   Claude Desktop    │ ◄──────────────────────────► │ Heddle MCP Server │
│    (HI-A node)      │                              │  (baft gateway)   │
└─────────────────────┘                              └─────────┬─────────┘
                                                               │ NATS bus
                                               ┌───────────────┼───────────────┐
                                               ▼               ▼               ▼
                                          ┌─────────┐    ┌──────────┐    ┌──────────┐
                                          │ Router  │    │ Workers  │    │  DuckDB  │
                                          │         │    │ SP,IA,DE │    │ Queries  │
                                          │         │    │ XV,TN,...│    │          │
                                          └─────────┘    └──────────┘    └──────────┘
Claude Desktop connects to the Heddle MCP server, which exposes Baft's workers and pipelines as MCP tools. When you ask Claude to process a source or run an analysis, Claude calls the appropriate tool, which routes through NATS to the right worker, and the structured result flows back into the chat.
What Claude sees as tools:
- process_sources — Extract claims from raw source material (SP worker)
- analyze_intelligence — Analytical assessment against the ITP framework (IA worker)
- update_database — Persist validated changes to the YAML database (DE worker)
- validate_cross_refs — Check entity reference consistency (XV worker)
- submit_input — Quick note capture for time-sensitive findings (IN worker)
- run_quick_pipeline — Tier 1: direct database operation (XV → DE)
- run_standard_pipeline — Tier 2: full analytical cycle (SP → IA → XV → DE)
- run_audit_pipeline — Tier 3: publication audit with blind review (TN → LA+PA+RT → AS)
- itp_search, itp_filter, itp_stats, itp_get — DuckDB entity queries
- workshop.worker.list, workshop.worker.get, workshop.worker.update — Worker config management
- workshop.worker.test — Test a worker against a sample payload
- workshop.eval.run, workshop.eval.compare — Run evaluations and compare against baselines
- workshop.impact.analyze — Check which pipelines are affected by a config change
- workshop.deadletter.list, workshop.deadletter.replay — Dead-letter queue inspection and retry
What Claude sees as resources:
- variables.yaml, observations.yaml, scenarios.yaml, traps.yaml, gaps.yaml, modules.yaml, sessions.yaml — readable ITP baseline data files
A separate MCP server (itp-telegram on 127.0.0.1:8765/mcp/) exposes
live Telegram capture from 30+ Persian/Arabic/English channels with
bias-weighted search and LLM-backed corroboration analysis. It runs
alongside the main Baft gateway documented here — both can be registered
in claude_desktop_config.json simultaneously. See
Telegram Capture + MCP for setup and tool details.
| Requirement | Install | Verify |
|---|---|---|
| Python 3.11+ | python.org or brew install python@3.11 | python3 --version |
| uv | curl -LsSf https://astral.sh/uv/install.sh \| sh | uv --version |
| NATS server | brew install nats-server or Docker | nats-server --version |
| Ollama | ollama.com | ollama --version |
| Claude Desktop | claude.ai/download | Open the app |
| Key | For | Get it |
|---|---|---|
| ANTHROPIC_API_KEY | IA worker (frontier tier) and audit workers | console.anthropic.com |
All three repos must be siblings in the same parent directory:
ITP_ROOT/ # e.g. ~/Projects/ITP
├── baseline/ # ITP YAML database
│ └── data/
│ ├── variables.yaml
│ ├── observations.yaml
│ └── ...
├── heddle/ # Actor mesh framework
└── baft/ # This repo — ITP application layer
cd /path/to/baft
# Install baft + heddle (heddle resolved from ../heddle automatically)
uv sync --extra dev

This creates .venv/ and installs all dependencies, including Heddle as an editable path dependency.
Local-tier workers (SP, DE, XV, IN, TN) use Ollama. Pull the default model:
ollama pull llama3.2:3b

You can use a different model by setting OLLAMA_MODEL:
export OLLAMA_MODEL="llama3.2:3b" # default
export OLLAMA_MODEL="qwen2.5:7b"   # alternative

To test which models work best for each role, use the audition script:
uv run python scripts/audition_models.py --role de --all-providers

The DuckDB import creates the queryable entity database used by the itp_search, itp_filter, itp_stats, and itp_get tools:
export ITP_ROOT="/path/to/ITP" # parent of baseline/, heddle/, baft/
uv run python pipeline/scripts/itp_import_to_duckdb.py

After the initial import, use --incremental for updates:
uv run python pipeline/scripts/itp_import_to_duckdb.py --incremental

Add these to your shell profile (~/.zshrc, ~/.bashrc, etc.):
# Required
export ANTHROPIC_API_KEY="sk-ant-api03-..."
export ITP_ROOT="/path/to/ITP"
# Optional (defaults shown)
export NATS_URL="nats://localhost:4222"
export OLLAMA_URL="http://localhost:11434"
export OLLAMA_MODEL="llama3.2:3b"

This is the simplest setup. Claude Desktop spawns the MCP server directly as a child process via stdio. No HTTP, no ports, no network configuration.
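Before starting the stack, a small POSIX-sh helper can confirm that the required variables are actually exported. A sketch — the variable names come from this guide, but the `check_env` function itself is hypothetical, not part of Baft:

```shell
# check_env NAME... — print "ok" if every named variable is set and
# non-empty, otherwise list the missing ones. Hypothetical helper.
check_env() {
  missing=""
  for var in "$@"; do
    eval "val=\$$var"
    [ -n "$val" ] || missing="$missing $var"
  done
  if [ -z "$missing" ]; then
    echo "ok"
  else
    echo "missing:$missing"
  fi
}

# example: check_env ANTHROPIC_API_KEY ITP_ROOT
```

Run it in the same shell that will launch the workers, so you catch a profile that was edited but never re-sourced.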
The unified script starts NATS, the router, and all workers:
cd /path/to/baft
# Start everything except MCP (MCP will be spawned by Claude Desktop)
bash scripts/run_workers.sh

Verify workers are running:
# Check process status
cat .worker-pids
# Check NATS health
curl -s http://localhost:8222/varz | python3 -m json.tool

Open Claude Desktop's config file:
- Open Claude Desktop
- Click Claude menu (top menu bar) → Settings...
- Go to Developer tab
- Click Edit Config
This opens the config file at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add the Baft MCP server configuration (macOS example shown first):
{
"mcpServers": {
"baft": {
"command": "/path/to/baft/.venv/bin/heddle",
"args": [
"mcp",
"--config",
"/path/to/baft/configs/mcp/itp.yaml"
],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-api03-...",
"ITP_ROOT": "/path/to/ITP",
"NATS_URL": "nats://localhost:4222",
"OLLAMA_URL": "http://localhost:11434",
"OLLAMA_MODEL": "llama3.2:3b"
}
}
}
}

The same configuration on Windows, with escaped backslashes in paths:

{
"mcpServers": {
"baft": {
"command": "C:\\path\\to\\baft\\.venv\\Scripts\\heddle.exe",
"args": [
"mcp",
"--config",
"C:\\path\\to\\baft\\configs\\mcp\\itp.yaml"
],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-api03-...",
"ITP_ROOT": "C:\\path\\to\\ITP",
"NATS_URL": "nats://localhost:4222",
"OLLAMA_URL": "http://localhost:11434",
"OLLAMA_MODEL": "llama3.2:3b"
}
}
}
}

Finding the heddle binary path:
# macOS / Linux
which heddle # if installed globally
# or use the venv path directly:
echo "$(cd /path/to/baft && pwd)/.venv/bin/heddle"
# Windows (PowerShell)
(Get-Command heddle).Source
# or:
Join-Path (Resolve-Path .\baft\.venv\Scripts) "heddle.exe"

Fully quit Claude Desktop (don't just close the window):
- macOS: Right-click dock icon → Quit, or Cmd+Q
- Windows: Right-click system tray icon → Exit
Reopen Claude Desktop. You should see the MCP tools icon (hammer) in the chat input area. Click it to verify Baft's tools are listed.
In a new Claude conversation, try:
Search for entities related to "IRGC"
Claude should call itp_search and return structured results from the DuckDB database.
Use this if:
- You want the MCP server running independently of Claude Desktop
- You're connecting from claude.ai (web) instead of Claude Desktop
- You want to share the MCP server across multiple clients
The unified script can start the full stack including an HTTP MCP server:
cd /path/to/baft
bash scripts/baft.sh start --http

This starts: NATS → router → workers → MCP server (HTTP on port 8765).
Verify:
# Health check
curl http://127.0.0.1:8765/health
# Should return: {"status": "ok", "name": "baft"}

In claude_desktop_config.json:
{
"mcpServers": {
"baft": {
"url": "http://127.0.0.1:8765/mcp"
}
}
}

Restart Claude Desktop after saving.
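When scripting against the HTTP transport, it helps not to race server startup. A minimal retry wrapper could look like this — a sketch, with a hypothetical `retry` helper and the /health endpoint from this guide:

```shell
# retry N CMD... — run CMD up to N times, pausing 1s between attempts.
# Hypothetical helper for waiting on a slow-starting local server.
retry() {
  n="$1"; shift
  i=0
  while [ "$i" -lt "$n" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# example: retry 10 curl -sf http://127.0.0.1:8765/health
```

This keeps startup scripts from failing just because NATS, the workers, and the MCP server come up in sequence.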
- Go to claude.ai → Settings → Connectors
- Scroll to Add custom connector
- Enter the URL: http://127.0.0.1:8765/mcp
- Click Add
Note: claude.ai custom connectors require the server to be reachable from your browser. For local development this works if both run on the same machine. For remote access you would need to expose the port (with appropriate authentication — see "Production considerations" below).
bash scripts/baft.sh status # Show running processes
bash scripts/baft.sh logs # Tail all logs
bash scripts/baft.sh logs ia_intelligence_analyst # Tail specific worker
bash scripts/baft.sh stop              # Stop everything

Claude Code connects to MCP servers via stdio. Add the server to your Claude Code MCP config:
claude mcp add baft -- /path/to/baft/.venv/bin/heddle mcp --config /path/to/baft/configs/mcp/itp.yaml

Or manually in ~/.claude/settings.json:
{
"mcpServers": {
"baft": {
"command": "/path/to/baft/.venv/bin/heddle",
"args": ["mcp", "--config", "/path/to/baft/configs/mcp/itp.yaml"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-...",
"ITP_ROOT": "/path/to/ITP",
"NATS_URL": "nats://localhost:4222"
}
}
}
}

Start workers separately (Claude Code spawns only the MCP server, not the workers):
bash scripts/run_workers.sh

Once connected, here are the common patterns for using Baft through Claude:
Update the status of variable VAR-042 to "active"
Claude calls run_quick_pipeline → XV validates the entity ID → DE writes the change.
Here is a new report from Fars News about IRGC economic activities. [paste or attach source text] Process this through the standard pipeline.
Claude calls run_standard_pipeline:
- SP extracts structured claims with epistemic tags
- IA interprets claims against the analytical framework, produces integration spec
- XV validates all cross-references
- DE persists to the YAML database
Run a publication audit on Brief BR-015 before we publish.
Claude calls run_audit_pipeline:
- TN strips ITP-specific terminology for blind review
- LA, PA, RT run in parallel — each receives only neutralized text
- AS synthesizes findings and produces an integration patch
How many active observations do we have by epistemic tag?
Claude calls itp_stats with group_by: epistemic_tag.
Show me all gaps related to nuclear program
Claude calls itp_search with query: "nuclear program", entity_type: "gap".
Validate these cross-references: ENT-001, ENT-002, ENT-015
Claude calls validate_cross_refs directly with the entity list.
- Check the config JSON is valid. Use a JSON validator. A single trailing comma breaks it.
- Verify the heddle binary path. Run the command + args manually in a terminal:

  /path/to/baft/.venv/bin/heddle mcp --config /path/to/baft/configs/mcp/itp.yaml

  You should see no output (it's waiting on stdio). Press Ctrl-C to exit.
- Check Claude Desktop logs:
  - macOS: ~/Library/Logs/Claude/mcp*.log (e.g. tail -50 ~/Library/Logs/Claude/mcp-server-baft.log)
  - Windows: %APPDATA%\Claude\logs\mcp*.log
- Ensure NATS is running before the MCP server starts:

  curl -s http://localhost:8222/varz
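The config-validity check above can be scripted with Python's stdlib json.tool. A sketch — the `validate_json` function name is hypothetical:

```shell
# validate_json FILE — print "ok" if FILE parses as JSON, else "invalid".
# Hypothetical helper; python3 -m json.tool is the actual validator.
validate_json() {
  if python3 -m json.tool "$1" >/dev/null 2>&1; then
    echo "ok"
  else
    echo "invalid"
  fi
}

# example:
# validate_json "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
```

Note that json.tool rejects trailing commas, so it catches exactly the failure mode called out above.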
- Default worker timeout is 60 seconds; pipeline timeout is 300 seconds.
- Check that workers are running: bash scripts/baft.sh status or cat .worker-pids
- Check worker logs for errors: bash scripts/baft.sh logs ia_intelligence_analyst
- Verify Ollama is serving: curl http://localhost:11434/api/tags
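To confirm the configured model has actually been pulled, a helper can query Ollama's /api/tags endpoint. A sketch — the `check_model` function name is hypothetical, and matching by substring is only a rough check:

```shell
# check_model NAME — "present" if NAME appears in Ollama's /api/tags
# listing, else "absent". URL and endpoint are from this guide.
check_model() {
  if curl -s --max-time 5 "${OLLAMA_URL:-http://localhost:11434}/api/tags" \
      | grep -q "\"$1\""; then
    echo "present"
  else
    echo "absent"
  fi
}

# example: check_model "${OLLAMA_MODEL:-llama3.2:3b}"
```

If the model is absent, `ollama pull` it — local-tier workers fail with timeouts rather than clear errors when the model is missing.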
Workers and the MCP server need NATS to be running:
# Option 1: native
nats-server -p 4222 --http_port 8222 &
# Option 2: Docker
docker run -d --name nats-baft -p 4222:4222 -p 8222:8222 nats:latest --http_port 8222

The resolve_config.py script expands silo: references in worker configs. Ensure ITP_ROOT points to the correct directory and that baseline/data/ exists:
ls $ITP_ROOT/baseline/data/
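The same check can be made slightly more thorough with a small helper that verifies the whole sibling-repo layout. A sketch — the `check_layout` function name is hypothetical; the directory names come from the layout shown earlier:

```shell
# check_layout ROOT — verify the sibling-repo layout this guide expects
# (baseline/data, heddle, baft under one parent). Hypothetical helper.
check_layout() {
  for d in baseline/data heddle baft; do
    [ -d "$1/$d" ] || { echo "missing: $d"; return 1; }
  done
  echo "layout ok"
}

# example: check_layout "$ITP_ROOT"
```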
- Use forward slashes or escaped backslashes in JSON paths: "C:/path/to/baft" or "C:\\path\\to\\baft"
- If heddle.exe is not found, try using python as the command:

  {
    "command": "C:\\path\\to\\baft\\.venv\\Scripts\\python.exe",
    "args": ["-m", "heddle.cli.main", "mcp", "--config", "C:\\path\\to\\baft\\configs\\mcp\\itp.yaml"]
  }

- Ensure NATS is installed or running in Docker Desktop.
Once connected, you also have access to tools for monitoring and quality management:
Ask Claude to test a worker or run evaluations:
Test the source processor with this sample text: [text]
Run the eval suite for the intelligence analyst
Compare eval results against the baseline
See the Analyst Guide for detailed workflows.
Failed tasks land in the dead-letter queue. Ask Claude:
Show me the dead-letter queue
Replay dead-letter entry DL-042
For real-time monitoring, open a terminal and run:
uv run heddle ui --nats-url nats://localhost:4222

For hands-on worker management:
uv run heddle workshop --port 8080

See the Operations Guide for technical details.
For deployments beyond local development:
- Authentication: The streamable-http transport currently has no authentication. For network-exposed deployments, add a reverse proxy (nginx, Caddy) with TLS and bearer token validation.
- Process management: Use systemd (Linux), launchd (macOS), or a process manager like supervisord to keep NATS, workers, and the MCP server running.
- Monitoring: NATS exposes metrics at :8222. Worker logs are in .worker-logs/. Use the TUI dashboard for real-time observation. Set up OpenTelemetry tracing for end-to-end pipeline visibility.
- Scaling: Workers use NATS queue groups for competing-consumer load balancing. Start multiple instances of the same worker for horizontal scaling.
- Quality tracking: Use eval baselines to detect quality regressions when changing models or prompts.
| Task | Command |
|---|---|
| Install | cd baft && uv sync --extra dev |
| Import DuckDB | uv run python pipeline/scripts/itp_import_to_duckdb.py |
| Start everything (stdio) | Workers: bash scripts/run_workers.sh + Claude Desktop config |
| Start everything (HTTP) | bash scripts/baft.sh start --http |
| Stop everything | bash scripts/baft.sh stop |
| Check status | bash scripts/baft.sh status |
| View logs | bash scripts/baft.sh logs [worker_name] |
| Run tests | uv run pytest tests/ -v -m "not e2e" |
| Audition models | uv run python scripts/audition_models.py --role de --all-providers |