How to use Synix memory from your agent.
Synix runs offline — it processes sources into structured memory artifacts. Your agent reads the output at inference time. They're decoupled.
```
sources ──→ synix build ──→ artifacts (immutable)
                                │
                           synix release
                                │
                                ▼
                       search.db + context.md
                                │
                    your agent reads at runtime
```
You rebuild when new sources arrive. Your agent always reads the latest release. They don't need to be running at the same time.
The Python SDK is the most direct integration: import `synix`, open your project, and query the release.
```python
import synix

project = synix.open_project("/path/to/my-project")
mem = project.release("local")

# Keyword search (fastest, no embeddings needed)
results = mem.search("return policy", mode="keyword", limit=5)

# Hybrid search (keyword + semantic, requires embedding_config in pipeline)
results = mem.search("return policy", mode="hybrid", limit=5)

for r in results:
    print(f"[{r.layer}] {r.label}: {r.score:.2f}")
    print(f"  Source: {' → '.join(r.provenance)}")
    print(f"  {r.content[:300]}")
```

Each result includes:

- `content` — the matching artifact text
- `label` — artifact identifier
- `layer` — which pipeline layer produced it (e.g., "episodes", "monthly")
- `score` — relevance score
- `provenance` — list of ancestor labels back to the source
- `metadata` — arbitrary metadata from the pipeline
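These fields are enough to assemble a retrieval block for a prompt. A minimal sketch (the `format_results` helper is illustrative, not part of the Synix SDK):

```python
def format_results(results, max_chars=300):
    """Render search results as a compact context block for a prompt."""
    lines = []
    for r in results:
        lines.append(f"[{r.layer}] {r.label} (score {r.score:.2f})")
        lines.append(f"  via {' → '.join(r.provenance)}")
        lines.append(f"  {r.content[:max_chars]}")
    return "\n".join(lines)
```

The truncation keeps long artifacts from crowding out the rest of the prompt; tune `max_chars` to your context budget.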
Search specific layers to control the level of detail:
```python
# Only search episode summaries (granular)
results = mem.search("return policy", layers=["episodes"], limit=5)

# Only search the core memory (high-level)
results = mem.search("return policy", layers=["core"], limit=5)
```

If your pipeline includes a FlatFile projection, you can load it directly into your agent's system prompt:
```python
context = mem.flat_file("context-doc")
# context is a string — the rendered markdown of your core memory
# inject it into your agent's system prompt
```

You can also fetch artifacts directly and walk their lineage:

```python
# Get a specific artifact by label
core = mem.artifact("core-memory")
print(core.content)

# Walk provenance
lineage = mem.lineage("core-memory")
for ancestor in lineage:
    print(f"← {ancestor.label} ({ancestor.layer})")
```

The full write-read cycle looks like this:

```python
import synix

# Open project and load pipeline
project = synix.open_project("./my-project")
project.load_pipeline()

# Add a new source
src = project.source("transcripts")
src.add_text(new_content, label="session-2026-03-10")

# Rebuild (incremental — only new artifacts process)
result = project.build()
print(f"Built: {result.built}, Cached: {result.cached}")

# Release and search
project.release_to("local")
mem = project.release("local")
results = mem.search("what happened today")
```

The SDK raises typed errors you can catch:

```python
from synix.sdk import (
    SynixNotFoundError,      # no .synix/ directory found
    ReleaseNotFoundError,    # named release doesn't exist
    ArtifactNotFoundError,   # label not in snapshot
    EmbeddingRequiredError,  # semantic search requested but no embeddings
    PipelineRequiredError,   # operation needs a loaded pipeline
)
```

For agents running in Claude Desktop, Cursor, or any MCP-compatible host, Synix exposes its full SDK as MCP tools over stdio.
```json
{
  "mcpServers": {
    "synix": {
      "command": "uvx",
      "args": ["--from", "synix[mcp]", "python", "-m", "synix.mcp"],
      "env": {
        "SYNIX_PROJECT": "/path/to/my-project"
      }
    }
  }
}
```

| Tool | What it does |
|---|---|
| `open_project` | Open a synix project directory |
| `build` | Run the pipeline (incremental) |
| `release` | Materialize search index from latest build |
| `search` | Query the search index |
| `get_artifact` | Read a specific artifact by label |
| `lineage` | Walk provenance chain for an artifact |
| `list_artifacts` | List all artifacts, optionally filtered by layer |
| `list_layers` | List layers with artifact counts |
| `get_flat_file` | Read a flat file projection (e.g., context.md) |
| `source_add_text` | Add a source as text |
| `source_add_file` | Add a source from a file path |
| `source_list` | List source files |
| `list_releases` | List all named releases |
A typical agent session:
- `open_project` → connect to the memory project
- `search` → retrieve relevant memory for the current query
- Use results in the response
- `source_add_text` → save the current session as a new source
- `build` → rebuild memory (incremental)
- `release` → update the search index
See docs/mcp.md for the full tool reference.
Pipe Synix CLI output into your automation scripts:
```bash
# Search with JSON output
uvx synix search "return policy" --release local --mode keyword --limit 5

# Get a specific artifact
uvx synix show core-memory

# List all artifacts in a layer
uvx synix list episodes
```

Useful for cron jobs, CI/CD pipelines, or shell-based agent frameworks.
The release directory contains standard files you can read directly:
```
.synix/releases/local/
├── search.db     # SQLite FTS5 database — query with any SQLite client
├── context.md    # Flat markdown file — load as agent context
└── receipt.json  # Release metadata (snapshot ref, timestamps, adapter status)
```
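Because search.db is a standard SQLite FTS5 database, you can prototype query shapes against a throwaway in-memory table before pointing at the real file. A sketch (the column layout here is assumed to mirror the query example below; check your release's actual schema):

```python
import sqlite3

# Throwaway in-memory database mirroring the assumed search.db layout:
# an FTS5 table named `search` with label, layer, and content columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE search USING fts5(label, layer, content)")
conn.executemany(
    "INSERT INTO search (label, layer, content) VALUES (?, ?, ?)",
    [
        ("ep-1", "episodes", "Customer asked about the return policy."),
        ("core-memory", "core", "Support themes: shipping and refunds."),
    ],
)

# MATCH treats "return policy" as two required terms; rank orders by relevance.
rows = conn.execute(
    "SELECT label, layer FROM search WHERE search MATCH ? ORDER BY rank",
    ("return policy",),
).fetchall()
print(rows)  # [('ep-1', 'episodes')]
```

Once the query shape works here, swap `":memory:"` for the release's search.db path.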
```python
import sqlite3

conn = sqlite3.connect(".synix/releases/local/search.db")
cursor = conn.execute(
    "SELECT label, layer, content, rank FROM search WHERE search MATCH ? ORDER BY rank LIMIT 5",
    ("return policy",),
)
for row in cursor:
    print(f"[{row[1]}] {row[0]}: {row[3]:.2f}")
```

```python
with open(".synix/releases/local/context.md") as f:
    context = f.read()
# inject into system prompt
```

Rebuild when new sources arrive. Synix handles the rest:
- New sources: Only new episodes process. Existing artifacts stay cached.
- Changed prompts: Only downstream artifacts rebuild.
- Changed model config: Affected layers rebuild (fingerprint includes model settings).
- Nothing changed: Build completes instantly (everything cached).
Automate rebuilds with cron, a file watcher, or trigger from your application when new data arrives.
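For instance, a minimal polling watcher might look like this (a sketch: the paths, poll interval, and CLI invocation are illustrative, and the command runner is injectable so it can be stubbed in tests):

```python
import subprocess
import time
from pathlib import Path

def watch_and_rebuild(source_dir, run=subprocess.run, poll_seconds=30, once=False):
    """Poll source_dir for changed files; rebuild and re-release on change."""
    seen = {}
    while True:
        changed = False
        for path in Path(source_dir).rglob("*"):
            if path.is_file():
                mtime = path.stat().st_mtime
                if seen.get(path) != mtime:
                    seen[path] = mtime
                    changed = True
        if changed:
            # Incremental build, then refresh the search index.
            run(["uvx", "synix", "build"], check=True)
            run(["uvx", "synix", "release", "HEAD", "--to", "local"], check=True)
        if once:
            return changed
        time.sleep(poll_seconds)
```

Because builds are incremental, polling aggressively is cheap: an unchanged tree completes instantly.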
```bash
# Example cron: rebuild every hour
0 * * * * cd /path/to/project && uvx synix build && uvx synix release HEAD --to local
```

| Mode | What it does | Requirements |
|---|---|---|
| `keyword` | BM25 full-text search | Default. No extra config. |
| `semantic` | Cosine similarity on embeddings | Requires `embedding_config` on SearchSurface |
| `hybrid` | Keyword + semantic with rank fusion | Requires `embedding_config` |
| `layered` | Search with layer-level weighting | Requires `embedding_config` |
If your pipeline only declares `modes=["fulltext"]`, use `keyword` mode. If it declares `modes=["fulltext", "semantic"]` with an `embedding_config`, all modes are available.
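That mapping can be expressed as a small helper (illustrative only; the rules are transcribed from the table above, and the function name is not part of the SDK):

```python
def available_search_modes(declared_modes):
    """Map a SearchSurface's declared modes to usable search modes.

    Assumes the rules above: "fulltext" enables keyword search, and
    "semantic" (with an embedding_config) unlocks the remaining modes.
    """
    modes = []
    if "fulltext" in declared_modes:
        modes.append("keyword")
    if "semantic" in declared_modes:
        modes.extend(["semantic", "hybrid", "layered"])
    return modes
```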
- SDK Reference — full Python API with all methods and types
- MCP Server — complete tool reference and configuration
- Architecture — why memory needs tiers
- Getting Started — build your first pipeline