Releases: cdeust/Cortex

v3.0.0

30 Mar 00:15

  • feat: redesign unified graph visualization UX
  • docs: update hero image alt text for v2.6.0 screenshots
  • docs: update hero images with v2.6.0 unified graph screenshots

Full Changelog: v2.6.0...v3.0.0

v2.6.0

29 Mar 21:25

  • fix: add HF model cache to release workflow
  • fix: add sentence-transformers and networkx to dev deps for CI
  • fix: move model pre-download after pip install, allow soft failure
  • fix: cache HuggingFace model in CI to prevent download failures
  • fix: ruff format http_standalone.py
  • release: v2.6.0 — remove memory dashboard, global memory dedup fix, cleanup

Full Changelog: v2.5.5...v2.6.0

v2.5.2

29 Mar 17:42

  • release: v2.5.2 — schema resilience, lint fixes, all backends aligned
  • fix: ruff format + remove unused variable to pass CI lint
  • fix: consistent per-statement schema init across PG, SQLite, and Docker
  • feat: add cortex-remember-global and cortex-recall-global skills
  • feat: global memories — cross-project knowledge with auto-detection and visualization
  • fix: split DDL into individual statements to prevent cascading schema failures
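The per-statement schema init described above can be sketched roughly like this (a hypothetical helper, not Cortex's actual code):

```python
def apply_schema(conn, ddl: str) -> None:
    """Run DDL one statement at a time so a single failing statement
    does not cascade into aborting the whole schema init (sketch)."""
    for stmt in ddl.split(";"):
        stmt = stmt.strip()
        if not stmt:
            continue
        try:
            conn.execute(stmt)
        except Exception:
            # A broken statement is skipped; the remaining ones still apply.
            continue
```

Executing one monolithic script means the first error aborts everything after it; splitting isolates each CREATE so unrelated tables still come up.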

Full Changelog: v2.5.1...v2.5.2

v2.5.1

29 Mar 15:27

  • release: v2.5.1 — version bump (2.5.0 tarball burned on PyPI)

Full Changelog: v2.5.0...v2.5.1

v2.5.0

29 Mar 14:47

  • fix: align package.json and safeskill-report.json to v2.5.0
  • release: v2.5.0
  • fix: 5 user-reported bugs — path validation, hierarchical recall, causal chain, narrative, seed
  • fix: persistent memory across Docker container restarts
  • fix: use shields.io badge for SafeSkill — their badge API is broken
  • fix: SafeSkill badge uses dynamic API from published npm package
  • fix: Docker entrypoint registers MCP in .claude.json, strips host config
  • fix: badge links to local SafeSkill report
  • docs: add SafeSkill 94/100 scan report, remove stale badge link
  • fix: add minimal package.json for SafeSkill scanner, update badge to 94/100
  • fix: remove legacy Node.js files flagged by SafeSkill security scan
  • Merge pull request #2 from OyaAIProd/safeskill-scan-1774780323456
  • Add SafeSkill security badge (80/100)
  • fix: Docker runs Claude Code inside container with stdio MCP (no HTTP bridge)

What's Changed

  • Add SafeSkill security badge (80/100 — Passes with Notes) by @OyaAIProd in #2

Full Changelog: v2.4.1...v2.5.0

v2.4.1 — Fix UI path resolution for plugin installs

29 Mar 01:41

  • fix: release workflow installs postgresql extras for test isolation
  • release: v2.4.1 — fix UI path resolution for uv/plugin installs
  • fix: UI files not found when installed via uv plugin cache
  • docs: add Docker install path to README, update plugin descriptions
  • feat: Docker image with PostgreSQL, pgvector, and Claude Code CLI
  • fix: bundle UI in wheel, standalone HTTP servers survive MCP shutdown (v2.4.0)

Full Changelog: v2.3.0...v2.4.1

v2.4.0

28 Mar 23:09

What's Changed

Bug Fixes

  • UI files bundled in package — memory-dashboard.html, unified-viz.html, and methodology-viz.html were missing from pip/uv installs. Added hatch force-include to bundle ui/ into the wheel, plus get_ui_root() to resolve both package and dev layouts.

  • Visualization servers survive session end — HTTP servers previously ran as daemon threads inside the MCP process and died when Claude's session ended, leaving a dead browser page. Now spawn as detached subprocesses (start_new_session=True) with their own DB connections and 10-minute idle self-termination. No human intervention needed — call the tool, browser opens, server stays alive.

  • Tree-sitter fallback hardened — ast_parser now gracefully falls back to regex parsing when tree-sitter-language-pack is not installed, instead of crashing with ModuleNotFoundError.
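A minimal sketch of the detached-subprocess pattern behind the second fix (the script path and function name are illustrative, not Cortex's actual API):

```python
import subprocess
import sys

def spawn_detached_server(script_path: str) -> int:
    """Launch a server script in its own session so it outlives the
    parent process, instead of dying as a daemon thread with it."""
    proc = subprocess.Popen(
        [sys.executable, script_path],
        start_new_session=True,        # setsid(): child survives parent exit
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.pid
```

With start_new_session=True the child gets its own session and process group, so it is not torn down when the MCP process exits; the idle self-termination timer would live inside the spawned script.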

Testing

  • Removed all test skips — tests adapt to available dependencies (tree-sitter, networkx, PostgreSQL) instead of skipping
  • Fixed test isolation — stats and checkpoint tests no longer assume empty DB, spell benchmark tests fall back to direct store queries when semantic recall is unavailable
  • All CI jobs passing on Python 3.10–3.13

v2.3.0 — Codebase Intelligence + SQLite Fallback

28 Mar 18:12

What's New

Codebase Intelligence (replaces GitNexus)

Index your codebase directly into Cortex memory — no external tools needed.

```python
codebase_analyze(directory="/path/to/project")
```
  • Tree-sitter AST parsing for Python, TypeScript, Go, Swift, Rust (regex fallback)
  • Cross-file import resolution — resolves module names to actual files
  • Type-reference resolution — for Swift/Go implicit cross-file type usage
  • Class-method binding — methods scoped to parent class
  • Inheritance tracking — extends edges in the knowledge graph
  • Community detection — Louvain clustering via networkx
  • Impact analysis — upstream/downstream BFS
  • Incremental — SHA-256 hash, only re-processes changed files
  • Works on both PostgreSQL and SQLite (Cowork compatible)
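The incremental re-indexing step above can be sketched like so (hypothetical helper; Cortex's real cache layout may differ):

```python
import hashlib
from pathlib import Path

def changed_files(root: Path, seen: dict) -> list:
    """Return files whose SHA-256 digest differs from the recorded one,
    updating the cache in place. Only these need re-parsing."""
    changed = []
    for path in sorted(root.rglob("*.py")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if seen.get(str(path)) != digest:
            seen[str(path)] = digest
            changed.append(path)
    return changed
```

On an unchanged tree the second pass returns nothing, so a re-run of codebase_analyze only pays for files that actually changed.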

Pipeline → Memory Integration

Pipeline events flow back to Cortex memory. Each stage stores observations with structured tags for cross-session recall.
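Conceptually, each stage's observation carries structured tags that later sessions can query; a rough sketch (class and function names are illustrative, not Cortex's API):

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    stage: str
    content: str
    tags: list = field(default_factory=list)

def record_stage(store: list, stage: str, content: str) -> Observation:
    """Store a pipeline-stage event with a structured tag so it can be
    recalled across sessions by stage name."""
    obs = Observation(
        stage=stage,
        content=content,
        tags=[f"pipeline:{stage}"],  # structured tag for later lookup
    )
    store.append(obs)
    return obs
```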

SQLite Fallback

Cortex runs in sandboxed environments without PostgreSQL. SQLite + sqlite-vec provides the same API with automatic fallback.
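The automatic fallback might look roughly like this (a sketch under assumed names; the real selection logic and DSN handling live inside Cortex):

```python
import sqlite3

def open_store(pg_dsn=None, sqlite_path=":memory:"):
    """Connect to PostgreSQL when a DSN is given and psycopg is
    importable; otherwise fall back to SQLite transparently."""
    if pg_dsn:
        try:
            import psycopg  # optional [postgresql] extra
            return psycopg.connect(pg_dsn)
        except Exception:
            pass  # no driver or no server: fall through to SQLite
    return sqlite3.connect(sqlite_path)
```

Callers get a DB-API connection either way, which is what lets the same tool surface run unchanged in sandboxes without PostgreSQL.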

Spell Alteration Benchmark

Needle-in-a-haystack: ingest 1.5M tokens as 3000+ memories, replace 2 terms, test detection.

  • Test A: Spot fakes in 3372 memories — PASS
  • Test B: Compare original vs altered, identify correct pairing — PASS

Stats

  • 35 MCP tools (+1 codebase_analyze)
  • 2000+ tests passing across Python 3.10–3.13
  • 7 benchmarks

Install

```bash
claude plugin marketplace add cdeust/Cortex
claude plugin install cortex
```

Optional:
```bash
pip install neuro-cortex-memory[codebase] # tree-sitter + networkx
pip install neuro-cortex-memory[postgresql] # psycopg + pgvector
```

v2.1.0

27 Mar 22:35
