This project is designed as a reference implementation you can study, fork, or extend to fit your needs. Each branch introduces a single concept:
| Branch | Concept |
|---|---|
| `main` | Core agent: REPL, context slicing, auto-summarization |
| `skills` | `/skill` — inject reusable Markdown playbooks at runtime |
| `tools-skills` | Tool execution — read files, grep, run commands |
- Start with `main` to understand the basics, then check out feature branches to explore more.
- Fork any branch as a starting point for your own project.
- Supports multiple providers (`anthropic`, `openai`).
- Acts as a read-only engineering helper: inspects problems, explains root causes, and shows you how to fix them manually.
- Normalizes context messages across providers.
- Maintains a rolling conversation context (20-turn window) with automatic summarization when the turn count or token usage (~20k) is exceeded, keeping the 5 most recent turns after each summarization.
- Offers a basic REPL with multiline input (`\`), plus a paste mode with `/paste` and `/submit` to send the request.
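The rolling-context policy above can be sketched in a few lines. This is a minimal illustration, not the project's actual API: the names `should_summarize`, `compact`, and the `summarize` callback are assumptions; the real logic lives in `core/context.py`.

```python
# Sketch of the rolling-context policy: summarize when either the turn
# window (20) or the token budget (~20k) is exceeded, then keep only the
# 5 most recent turns verbatim. Names here are illustrative.

MAX_TURNS = 20        # rolling window size
MAX_TOKENS = 20_000   # approximate token budget
KEEP_RECENT = 5       # turns kept verbatim after each summarization

def should_summarize(turns, token_count):
    """Trigger summarization when either limit is exceeded."""
    return len(turns) > MAX_TURNS or token_count > MAX_TOKENS

def compact(turns, summarize):
    """Replace older turns with a summary, keeping the most recent ones."""
    old, recent = turns[:-KEEP_RECENT], turns[-KEEP_RECENT:]
    summary = summarize(old)  # e.g. an extra provider call
    return [summary] + recent
```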
```
tiny_agent/
    cli.py              # REPL loop and argument parsing
    core/
        messages.py     # Role enum, Message normalization
        context.py      # Message history, context slicing, summarization
        state.py        # Hypothesis, actions (with timestamps), context info, summary
        utils.py        # Shared helpers (colorize, token counting, env loading)
    ai_providers.py
        LocalProvider       # Offline heuristic provider (testing/prototyping)
        AnthropicProvider   # Anthropic Messages API
        OpenAIProvider      # OpenAI Chat Completions API
```
The CLI reads user input, adds it to the `ContextManager` (which holds the conversation history), checks if the history needs summarizing, then calls the provider. The provider grabs a context slice, converts it to the right API format, sends the request, and updates the `StateManager` with any hypothesis or action from that turn. The reply goes back into the `ContextManager` and gets printed.
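The turn flow above can be sketched with stand-in objects. The class and method names below (`StubProvider`, `complete`, `run_turn`) are assumptions for illustration; the real classes live in `core/` and `ai_providers.py`.

```python
# Illustrative single-turn flow: append user input, slice the context
# window, call the provider, append the reply back into history.

class StubProvider:
    """Minimal stand-in for AnthropicProvider/OpenAIProvider."""
    def complete(self, context_slice):
        return f"reply to: {context_slice[-1]}"

def run_turn(history, provider, user_input, window=20):
    history.append(("user", user_input))
    context_slice = history[-window:]           # context slicing
    reply = provider.complete([text for _, text in context_slice])
    history.append(("assistant", reply))        # reply goes back into history
    return reply
```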
See docs/HOW_IT_WORKS.md for more detail.
- Python 3.11+
- `requests`, `python-dotenv`
- API keys for external providers (Anthropic/OpenAI)
- Install

  ```sh
  git clone git@github.com:rgcr/tiny-agent.git ./tiny-agent
  cd tiny-agent
  pip install -e .
  # with uv
  uv pip install -e .
  ```
- Configure environment variables (optional)

  ```sh
  cp .env.example .env
  echo "ANTHROPIC_API_KEY=sk-..." >> .env
  echo "OPENAI_API_KEY=sk-..." >> .env
  # or export directly in your shell
  export ANTHROPIC_API_KEY=sk-...
  export OPENAI_API_KEY=sk-...
  ```
- Run the CLI

  ```sh
  tiny-agent --provider anthropic
  ```
Options:

- `--provider` – `local`, `anthropic`, `openai`
- `--model` – override the provider's default model slug
- `--api-key` – inline API key for Anthropic/OpenAI
- `--debug` – print debug info; accepts categories: `state`, `context`, `requests` (e.g. `--debug state,context`)
- `--no-color` – disable colorized output
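For reference, a flag set like the one above could be wired up with `argparse` along these lines. This is a sketch, not the project's actual `cli.py`; defaults and exact behavior are assumptions.

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the documented flags.
    p = argparse.ArgumentParser(prog="tiny-agent")
    p.add_argument("--provider", choices=["local", "anthropic", "openai"],
                   default="local")
    p.add_argument("--model", help="override the provider's default model slug")
    p.add_argument("--api-key", help="inline API key for Anthropic/OpenAI")
    p.add_argument("--debug", nargs="?", const="state,context,requests",
                   help="comma-separated categories: state,context,requests")
    p.add_argument("--no-color", action="store_true")
    return p
```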
| Provider | Default model |
|---|---|
| `anthropic` | `claude-3-haiku-20240307` |
| `openai` | `gpt-4o-mini` |
| `local` | N/A (offline provider for testing) |
Override with `--model <slug>`, e.g. `--model claude-3-5-sonnet-20241022`.
- Use `/paste` + `/submit` to paste multi-line prompts
- End a line with `\` to keep typing on the next prompt
- Hit `Ctrl+C` during a long request to cancel it without closing the agent loop
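The `Ctrl+C` cancellation behavior can be sketched as a `KeyboardInterrupt` handler around the provider call. This is an illustration under assumed names (`request_with_cancel`, `send_request`), not the actual loop in `cli.py`.

```python
# Sketch: Ctrl+C during a request raises KeyboardInterrupt, which we
# catch so only the in-flight request is cancelled, not the REPL itself.

def request_with_cancel(send_request):
    """Run a provider call; Ctrl+C cancels just this request."""
    try:
        return send_request()
    except KeyboardInterrupt:
        print("request cancelled")
        return None  # the agent loop continues with the next prompt
```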
Run with `--debug` to inspect what the agent sends and receives:

```sh
tiny-agent --provider openai --debug
```

This prints:
- The full request payload sent to the provider
- The raw JSON response
- State snapshots (hypothesis, actions, summary, context_info) after each turn
- Fork the repo
- Create your feature branch: `git checkout -b my-new-feature`
- Commit your changes: `git commit -m 'Add some feature'`
- Push the branch: `git push origin my-new-feature`
- Open a Pull Request 🚀
© Rogelio Cedillo – Licensed under the MIT License


