Evolve is a system designed to help agents improve over time by learning from their trajectories. It uses a combination of an MCP server for tool integration, vector storage for memory, and LLM-based conflict resolution to refine its knowledge base.
- MCP Server: Exposes tools to get guidelines and save trajectories.
- Conflict Resolution: Intelligently merges new insights with existing guidelines using LLMs.
- Trajectory Analysis: Automatically analyzes agent trajectories to generate guidelines and best practices.
- Milvus Integration: Uses Milvus (or Milvus Lite) for efficient vector storage and retrieval.
Prerequisites:
- Python 3.12 or higher
- uv (recommended) or pip
git clone <repository_url>
cd altk-evolve
uv venv --python=3.12 && source .venv/bin/activate
uv sync

For direct OpenAI usage:
export OPENAI_API_KEY=sk-...

For LiteLLM proxy usage and model selection (including a global fallback via EVOLVE_MODEL_NAME), see the configuration guide.
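As a rough sketch, pointing Evolve at a LiteLLM proxy usually means redirecting the OpenAI-compatible client to the proxy endpoint and optionally pinning a fallback model. Of the variables below, only EVOLVE_MODEL_NAME is confirmed by this README; the others are common OpenAI SDK conventions and may differ from what Evolve actually reads, so treat this as illustrative and defer to the configuration guide:

```shell
# Hypothetical LiteLLM proxy setup -- verify variable names in the configuration guide.
export OPENAI_BASE_URL=http://localhost:4000   # assumed: OpenAI-compatible proxy endpoint
export OPENAI_API_KEY=sk-anything              # assumed: proxies often accept any key
export EVOLVE_MODEL_NAME=gpt-4o                # global fallback model (example value)
```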
Evolve provides both a standard MCP server and a full Web UI (Dashboard & Entity Explorer).
Important
Building from Source: If you cloned this repository (rather than installing a pre-built package), you must build the UI before it can be served.
cd altk_evolve/frontend/ui
npm ci && npm run build
cd ../../../

See altk_evolve/frontend/ui/README.md for more frontend development details.
The easiest way to start both the MCP Server (on standard input/output) and the HTTP UI backend is to run the module directly:
uv run python -m evolve.frontend.mcp

This will start the UI server in the background on port 8000 and the MCP server in the foreground. You can then access the UI locally by opening your browser to:
http://127.0.0.1:8000/ui/
If you only want to access the Web UI and API (without the MCP server stdio blocking the terminal), you can run the FastAPI application directly using uvicorn:
uv run uvicorn evolve.frontend.mcp.mcp_server:app --host 127.0.0.1 --port 8000

Then navigate to http://127.0.0.1:8000/ui/.
If you're attaching Evolve to an MCP client that requires a direct command (like Claude Desktop):
uv run fastmcp run altk_evolve/frontend/mcp/mcp_server.py --transport stdio

Or for SSE transport:
uv run fastmcp run altk_evolve/frontend/mcp/mcp_server.py --transport sse --port 8201

Verify it's running:
npx @modelcontextprotocol/inspector@latest http://127.0.0.1:8201/sse --cli --method tools/list

Available tools:
- get_entities(task: str, entity_type: str): Get relevant entities for a specific task, filtered by type (e.g., 'guideline', 'policy').
- get_guidelines(task: str): Get relevant guidelines for a specific task (backward-compatibility alias).
- save_trajectory(trajectory_data: str, task_id: str | None): Save a conversation trajectory and generate new guidelines.
- create_entity(content: str, entity_type: str, metadata: str | None, enable_conflict_resolution: bool): Create a single entity in the namespace.
- delete_entity(entity_id: str): Delete a specific entity by its ID.
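The tool signatures above show that save_trajectory takes its trajectory as a string. The payload schema is not specified in this README; assuming it accepts a JSON-serialized list of chat messages, a client might prepare the arguments as follows (the message format and field names here are illustrative assumptions, not Evolve's documented schema):

```python
import json

# Hypothetical trajectory payload: a JSON-serialized list of chat messages.
# The actual schema expected by save_trajectory may differ; check the docs.
trajectory = [
    {"role": "user", "content": "Refactor the payment module."},
    {"role": "assistant", "content": "I split PaymentService into two classes."},
]

# Arguments as an MCP client would pass them to the save_trajectory tool.
arguments = {
    "trajectory_data": json.dumps(trajectory),  # the tool expects a string
    "task_id": "task-42",                       # optional; may be None
}

print(arguments["trajectory_data"])
```

Serializing to a string on the client side keeps the tool interface transport-agnostic; the server can validate and parse the JSON before analysis.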
Evolve automatically tracks the origin of every guideline it generates or stores. Every tip entity contains metadata identifying its source:
- creation_mode: Identifies how the tip was created (auto-phoenix via trace observability, auto-mcp via the trajectory-saving tools, or manual).
- source_task_id: The ID of the original trace or task that inspired the tip, providing full auditability.
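This provenance metadata makes it easy to separate automatically generated tips from manually curated ones. As a small illustration (the dictionaries below are hypothetical records, not Evolve's actual storage schema):

```python
# Illustrative tip records carrying the provenance metadata described above.
tips = [
    {"content": "Prefer small diffs.", "creation_mode": "auto-phoenix", "source_task_id": "trace-101"},
    {"content": "Run the linter first.", "creation_mode": "auto-mcp", "source_task_id": "task-7"},
    {"content": "Ask before force-pushing.", "creation_mode": "manual", "source_task_id": None},
]

def tips_by_mode(tips, mode):
    """Return tips whose creation_mode matches the given provenance mode."""
    return [t for t in tips if t["creation_mode"] == mode]

# Trace-derived tips keep a source_task_id pointing back at the original trace.
print([t["source_task_id"] for t in tips_by_mode(tips, "auto-phoenix")])
```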
See the Low-Code Tracing Guide for more details.
Evolve is an active project, and real‑world usage helps guide its direction.
If Evolve is useful or aligned with your work, consider giving the repo a ⭐ — it helps others discover it.
If you’re experimenting with Evolve or exploring on‑the‑job learning for agents, feel free to open an issue or discussion to share use cases, ideas, or feedback.
- Documentation Home - Overview of guides, reference docs, and tutorials
- Installation - Setup instructions for supported platforms
- Configuration - Environment variables and backend options
- CLI Reference - Command-line interface documentation
- Evolve Lite - Lightweight Claude Code plugin mode
- Claude Code Demo - End-to-end demo walkthrough
- Policies - Policy support and schema
The test suite is organized into four cleanly isolated tiers based on their infrastructure requirements:

- Default Local Suite: Runs both the fast logic tests (unit) and the filesystem script verifications (platform_integrations).

  uv run pytest

- Unit Tests (Only): Fast, fully mocked tests verifying core logic and offline pipeline schemas.

  uv run pytest -m unit

- Platform Integration Tests: Fast filesystem-level integration tests verifying local tool installation and idempotency.

  uv run pytest -m platform_integrations

- End-to-End Infrastructure Tests: Heavy tests that autonomously spin up a background Phoenix server and simulate full agent workflows. (See the Low-Code Tracing Guide for more details.)

  uv run pytest -m e2e --run-e2e

- LLM Evaluation Tests: Tests that require live LLM inference to exercise the resolution pipelines (LLM API keys required).

  uv run pytest -m llm
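Custom markers like the four above are typically registered in the project's pytest configuration so that `-m` selections run without "unknown marker" warnings. A sketch of what such an entry might look like in pyproject.toml (the repository's actual registration may differ):

```toml
# Illustrative marker registration; check the repository's real pytest config.
[tool.pytest.ini_options]
markers = [
    "unit: fast, fully mocked logic tests",
    "platform_integrations: filesystem-level integration tests",
    "e2e: heavy end-to-end infrastructure tests (require --run-e2e)",
    "llm: tests requiring live LLM inference",
]
```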
