🇧🇷 Português | 🇺🇸 English
"Corrigiu sozinho." — Autonomous bug-fixing pipeline, from ticket to production deploy, zero human intervention.
Author: Daniel Plácido License: MIT
Fixei receives a bug report (GitHub Issue or Jira), deeply analyzes your codebase using semantic search and LLMs, generates a complete code fix, writes new tests, opens a Pull Request, waits for CI/CD to pass, merges automatically, and closes the original ticket — all without touching a keyboard.
- How it works
- Agent Pipeline & Context Exchange
- Services
- Project Structure
- Quick Start
- Configuration Reference
- Running in Production
- Webhook Setup
- REST API
- Dashboard
- Running Tests
- Security Considerations
GitHub Issue / Jira ticket (labeled "ai-fix")
│
▼
[1] TicketAgent → normalizes and structures the raw ticket
│
▼
[2] DocumentationAgent → ensures codebase docs are fresh
│ (.bugfix-agent/BACKEND.md + FRONTEND.md)
│ triggers vector index rebuild
▼
[3] AnalysisAgent → semantic search over codebase + LLM analysis
│ confirms root cause, identifies affected files
▼
[4] CodeAgent → fetches current file contents + Context7 best practices
│ generates complete fix via LLM (structured format)
│ creates branch, commits files
▼
[5] TestAgent → generates new test file via LLM
│ commits tests, triggers GitHub Actions CI
│ polls until pass/fail (up to 10 min)
▼ if tests fail ────────────────────────────────────┐
[6] DeployAgent → creates PR + labels │ retry loop
│ waits for CI checks │ (up to MAX_RETRIES)
│ auto-merges │
▼ ◄┘
[7] TicketAgent → closes ticket as fixed
│
▼
Slack notification + GitHub audit trail comment
If all retries are exhausted, the agent escalates — posts a Slack alert, adds a needs-human label to the ticket, and stops.
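The retry/escalation loop above can be sketched in a few lines. This is an illustrative simplification: the agent objects and method names here are assumptions standing in for the real classes under src/.

```javascript
// Illustrative sketch of the retry loop described above. Agents are
// hypothetical stand-ins; the real Orchestrator lives in src/orchestrator.js.
async function runFixLoop(ctx, { codeAgent, testAgent, deployAgent, ticketAgent }) {
  while (ctx.retries < ctx.maxRetries) {
    ctx.fix = await codeAgent.run(ctx);        // generate (or regenerate) the fix
    ctx.tests = await testAgent.run(ctx);      // commit tests, trigger CI, poll result
    if (ctx.tests.passed) {
      ctx.deploy = await deployAgent.run(ctx); // open PR, wait for checks, auto-merge
      await ticketAgent.closeAsFixed(ctx);
      ctx.status = 'success';
      return ctx;
    }
    ctx.fix.feedback = ctx.tests.failureDetails; // feed failures into the next attempt
    ctx.retries += 1;
  }
  await ticketAgent.escalate(ctx); // Slack alert + needs-human label
  ctx.status = 'escalated';
  return ctx;
}
```

The key property is that failure details flow back into the next CodeAgent attempt, so each retry targets the specific test failures rather than starting blind.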
Every agent is a stateless class that receives dependencies through its constructor. The Orchestrator manages a ctx object that accumulates results at each stage and passes them to the next agent. Below is the exact data that flows between each step.
### src/orchestrator.js
The Orchestrator owns the pipeline lifecycle. It instantiates every agent at startup and runs them sequentially via the run(ticketPayload) method. Each step emits a numbered Step N/6 — … log on start and a completion log when done, providing real-time progress visibility in the terminal.
The shared context object ctx evolves through the pipeline:
ctx = {
runId: "run_1234567890",
ticket: null, // ← filled by TicketAgent
docs: null, // ← filled by DocumentationAgent
analysis: null, // ← filled by AnalysisAgent
fix: null, // ← filled by CodeAgent (updated each retry)
tests: null, // ← filled by TestAgent
deploy: null, // ← filled by DeployAgent
retries: 0,
maxRetries: 3,
status: 'running',
auditLog: [], // ← each agent appends an entry
}

The auditLog is posted as a structured comment on the GitHub Issue at the end of every run (success or failure), giving full traceability.
### src/agents/ticket-agent.js
Input: raw webhook payload (GitHub Issue or Jira format)
Output:
{
id: "123",
title: "Form does not show validation errors",
description: "Full normalized description",
stepsToReproduce: ["1. Open form", "2. Submit empty"],
expectedBehavior: "Should show error messages",
actualBehavior: "Form silently fails",
environment: "production",
severity: "medium",
labels: ["ai-fix", "bug"],
reporter: "john",
rawLogs: "...",
_provider: "github" // or "jira"
}

The LLM normalizes the raw ticket into a typed structure regardless of the source format. If parsing fails, a fallback extracts data from the raw text directly.
Context passed forward: ctx.ticket is used by all subsequent agents as the source of truth for the bug description.
Also responsible for closeAsFixed(), closeAsInvalid(), and escalate() which post comments and update the ticket state.
### src/agents/documentation-agent.js
Input: ctx.ticket
Output: combined string of BACKEND.md + FRONTEND.md
What it does:
Maintains two living documentation files inside the target repository itself:
(target repo)/
.bugfix-agent/
BACKEND.md ← architecture, routes, services, models, auth, queue, patterns
FRONTEND.md ← components, routing, state management, API calls, i18n, build
Each document has 11 structured sections generated by the LLM after reading the most relevant source files (up to 35 files per layer).
Staleness check (_needsUpdate): extracts all tokens ≥ 5 characters from the current ticket. If more than 30% of those tokens are absent from the existing docs, the documentation is considered stale and is regenerated. This ensures the vector index always has context relevant to the current bug.
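A minimal sketch of that heuristic, assuming simple word tokenization (the real _needsUpdate may tokenize differently):

```javascript
// Illustrative staleness check: extract all tokens of 5+ characters from
// the ticket text and measure how many are absent from the current docs.
// More than 30% missing means the docs are considered stale.
function docsAreStale(ticketText, docsText) {
  const tokens = [...new Set(
    ticketText.toLowerCase().match(/[a-z0-9_]{5,}/g) || []
  )];
  if (tokens.length === 0) return false;
  const docs = docsText.toLowerCase();
  const missing = tokens.filter((t) => !docs.includes(t)).length;
  return missing / tokens.length > 0.3;
}
```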
After updating docs: triggers an async rebuild of the VectorStore index so the AnalysisAgent's semantic search is fresh.
Context passed forward: ctx.docs is injected into the AnalysisAgent prompt so the LLM understands the full architecture before looking at code.
### src/agents/analysis-agent.js
Input: ctx.ticket + ctx.docs
Output:
{
confirmed: true,
reason: "Validation errors are caught but not forwarded to the component",
rootCause: "ContactController returns 422 but ContactForm ignores non-2xx responses",
codeLocations: ["ContactController.ts:L87", "ContactForm/index.js:L134"],
affectedFiles: [
"backend/src/controllers/ContactController.ts",
"frontend/src/components/ContactForm/index.js"
],
affectedFunctions: ["store()", "handleSubmit()"],
bugType: "error-handling",
backendChanges: "Return validation errors in response body under `errors` key",
frontendChanges: "Read `errors` from 422 response and display per-field messages",
suggestedApproach: "Catch non-2xx in ContactForm, map errors to form state",
riskLevel: "low",
estimatedComplexity: "simple"
}

How the code context is fetched (_fetchCodeContext):
- Lists all repository files via GitHub API
- Filters out irrelevant extensions (images, lock files, binaries, etc.)
- Fast path (vector index is ready): embeds the ticket text → runs vectorStore.searchPaths() → fetches the top-12 semantically similar files in parallel
- Fallback (no index): up to 3 rounds of LLM-based triage asking for 4 files at a time, plus a dedicated frontend pass if no .vue/.jsx/.tsx files were selected
- Full file contents are concatenated and injected into the analysis prompt
The agent also posts a formatted comment directly on the GitHub Issue summarizing the root cause and affected code locations.
Context passed forward: ctx.analysis is the most critical handoff — it tells the CodeAgent exactly what to fix, where, and how.
### src/agents/code-agent.js
Input: ctx.analysis + optional ctx.fix.feedback (test failure details from a previous attempt)
Output:
{
branch: "bugfix/auto-1711234567890",
prTitle: "fix(contacts): show validation errors in ContactForm",
prDescription: "## Root Cause\n...",
fileChanges: [
{ path: "backend/src/controllers/ContactController.ts", operation: "update", content: "..." },
{ path: "frontend/src/components/ContactForm/index.js", operation: "update", content: "..." }
],
testHints: "Test 422 response handling; test empty form submission",
breakingChange: false,
rollbackPlan: "Revert PR #N or cherry-pick the previous controller commit",
feedback: null
}

What it does:
- Fetches current file contents for every affectedFile from GitHub
- Fetches best practices from Context7 (see Context7Service)
- Builds a structured prompt that includes:
- Root cause, bug type, risk level, suggested approach
- Backend and frontend change descriptions (from AnalysisAgent)
- Previous test failure feedback (on retries)
- Full current file contents
- Context7 best practices for the tech stack (capped at ~3000 chars)
- Calls the LLM requesting a structured response format (no JSON-embedded code):
<<<PLAN>>>
{ "prTitle": "...", "files": [{"path": "...", "operation": "update"}] }
<<<END_PLAN>>>
<<<FILE: backend/src/controllers/ContactController.ts>>>
(full file content — every single line)
<<<END_FILE>>>
<<<FILE: frontend/src/components/ContactForm/index.js>>>
(full file content)
<<<END_FILE>>>
This format separates metadata (JSON) from file content, preventing JSON parse failures caused by TypeScript braces, template literals, and other syntax inside strings.
- Truncation detection: if the LLM used placeholders like // ..., /* existing code */, or // unchanged, the agent runs a dedicated merge pass (_expandTruncated) that combines the original file + the partial fix into a complete file
- Path validation: if the LLM invented a file path not in the repository, it is remapped to the closest real path by filename match, or dropped
- Branch creation and commits via GitHub API
Recovery pass: if the LLM output cannot be parsed at all (e.g. truncated mid-response), _recoverJson() sends the broken text back to the LLM and asks it to reformat using the structured format.
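A parser for the marker format above could look like the sketch below (the marker strings come from the example shown; the function name and error handling are illustrative assumptions):

```javascript
// Illustrative parser for the <<<PLAN>>> / <<<FILE: ...>>> response format.
// Metadata stays in one small JSON blob; file bodies are plain text, so
// braces and template literals inside code cannot break JSON.parse.
function parseStructuredFix(text) {
  const planMatch = text.match(/<<<PLAN>>>([\s\S]*?)<<<END_PLAN>>>/);
  if (!planMatch) throw new Error('missing PLAN block');
  const plan = JSON.parse(planMatch[1].trim());
  const files = [];
  const fileRe = /<<<FILE: (.+?)>>>\n([\s\S]*?)<<<END_FILE>>>/g;
  for (const m of text.matchAll(fileRe)) {
    files.push({ path: m[1].trim(), content: m[2].replace(/\n$/, '') });
  }
  return { plan, files };
}
```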
Context passed forward: ctx.fix contains the branch name, PR metadata, and the list of changed files — used by TestAgent, DeployAgent, and the audit log.
### src/agents/test-agent.js
Input: ctx.fix + ctx.analysis
Output:
{
passed: true,
total: 24,
failureDetails: null, // or LLM summary of failures on failure
newTestsFile: "tests/controllers/ContactController.test.ts",
ciRunUrl: "https://github.com/owner/repo/actions/runs/12345"
}

What it does:
- Generates a new test file — the LLM receives the bug description, root cause, test hints from CodeAgent, and the changed file contents. It produces a complete test file (Jest/Vitest/Mocha — inferred from existing imports in the repo). The test path is resolved from the source path: src/foo/bar.ts → tests/foo/bar.test.ts
- Commits the test file to the fix branch (if COMMIT_TESTS !== false)
- Triggers GitHub Actions CI — calls POST /repos/{owner}/{repo}/actions/workflows/{ciWorkflowId}/dispatches, then retries up to 10 times (every 5 seconds, totaling 50 seconds) waiting for the workflow run to appear in the API
- Polls CI status — checks every 15 seconds until CI_TIMEOUT_MS (default: 10 minutes). Reads step counts from the workflow run to estimate passed/total
- On failure: _interpretFailure(logs) asks the LLM to summarize the test failure in 2-3 sentences
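The source-to-test path mapping can be sketched as follows (illustrative, assuming only the src/ to tests/ convention shown above):

```javascript
// Maps a source file path to a test file path following the convention
// above: src/foo/bar.ts -> tests/foo/bar.test.ts. Illustrative only.
function toTestPath(srcPath) {
  const m = srcPath.match(/^(?:.*\/)?src\/(.+)\.([a-z]+)$/);
  if (!m) return null; // path does not follow the src/ convention
  return `tests/${m[1]}.test.${m[2]}`;
}
```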
Context passed forward: if tests.passed === false, the Orchestrator sets ctx.fix.feedback = tests.failureDetails and loops back to CodeAgent with this feedback, so the next attempt addresses the specific test failures.
### src/agents/deploy-agent.js
Input: ctx.fix + ctx.tests
Output:
{
prUrl: "https://github.com/owner/repo/pull/42",
prNumber: 42,
branch: "bugfix/auto-1711234567890",
environment: "production", // or "staging" if not merged
merged: true
}

What it does:
- Creates a Pull Request with a fully formatted markdown body that includes: description, test results (pass count), files changed table, breaking change flag, and rollback plan
- Adds labels auto-fix and bugfix to the PR
- Waits for CI checks — polls mergeable_state every 5 seconds for up to 60 seconds
- Auto-merges using the configured method (squash/merge/rebase) if AUTO_MERGE=true
### src/services/vector-store.js
Provides semantic search over the target repository's source files without any external vector database.
How it works:
- Chunking: every source file is split into ~1200-character chunks with 200-character overlap
- Embeddings: uses CodeBERT (@xenova/transformers, ~90MB, downloaded once and cached locally) to generate 768-dimensional embedding vectors for each chunk
- Similarity search: cosine similarity between the query embedding and all chunk embeddings; top-K results deduplicated by file path
- Persistence: the index is saved as .bugfix-agent/vector-index.json inside the target repository — shared across runs
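The chunking and similarity steps can be sketched as follows. This is illustrative: the real service embeds chunks with CodeBERT; only the geometry is shown here.

```javascript
// Illustrative versions of the two core VectorStore operations:
// fixed-size chunking with overlap, and cosine similarity between vectors.
function chunkText(text, size = 1200, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```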
TF-IDF fallback:
If the CodeBERT model cannot be downloaded (network restrictions, air-gapped environments), the service automatically switches to an enhanced TF-IDF mode with:
- 3× weight for tokens that appear in the file path (e.g. a query for "contact" will rank ContactController.ts higher)
- camelCase/PascalCase decomposition (ContactController → contact, controller)
- 5-character prefix matching for cross-language cognates (contato ↔ contact, clique ↔ click)
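Two of these heuristics can be sketched as illustrative helper functions (not the actual source; a strict 5-character prefix comparison is one possible reading of the cognate heuristic):

```javascript
// camelCase/PascalCase decomposition: ContactController -> [contact, controller]
function splitIdentifier(name) {
  return name
    .replace(/([a-z0-9])([A-Z])/g, '$1 $2') // break at lower->upper boundaries
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(Boolean);
}

// Two tokens "match" if their first 5 characters agree, which is enough
// for cognates like contato/contact.
function prefixMatch(a, b, len = 5) {
  return a.slice(0, len) === b.slice(0, len);
}
```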
The scheduleBuild() method accepts a Promise<string[]> so it can run asynchronously while the pipeline continues. waitReady() is called before any search to ensure the index is fully built.
### src/services/context7.js
Injects up-to-date framework documentation and best practices directly into the CodeAgent's LLM prompt.
Why this matters: LLMs have a training cutoff and may suggest outdated patterns. Context7 provides live documentation for the exact framework versions in use, so the generated code follows current best practices.
How it works:
- The CodeAgent reads STACK_BACKEND and STACK_FRONTEND from the environment (e.g. TypeScript/NestJS, Vue)
- getBestPractices(frameworks, topic, tokensEach) is called with the framework names and a topic derived from the bug type + suggested approach
- For each framework, resolveLibrary(name) maps it to a Context7 library ID using a built-in lookup table (TypeScript, Express, Vue, React, Next.js, NestJS, Laravel, Django, FastAPI, Prisma, Mongoose, Jest, Vitest, and others — or via API search for unknown frameworks)
- Docs are fetched from https://context7.com/api/v1/{libraryId}?tokens=800&topic=...
- Results from up to 2 frameworks are concatenated and capped at ~3000 characters before being injected into the code generation prompt
If Context7 is unavailable or the framework is not found, the agent proceeds without best practices — it is entirely non-blocking.
To disable: set CONTEXT7_ENABLED=false.
To configure the token budget per library: CONTEXT7_TOKENS=2500 (default).
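A sketch of how the request URL is assembled (the endpoint shape is taken from the list above; the library ID here is hypothetical, and the real service may build the query differently):

```javascript
// Illustrative builder for the Context7 docs URL described above.
function context7DocsUrl(libraryId, topic, tokens = 800) {
  const params = new URLSearchParams({ tokens: String(tokens), topic });
  return `https://context7.com/api/v1/${libraryId}?${params}`;
}
```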
### src/orchestrator.js (LLMService class)
All agents share a single LLMService instance. Each call passes the agentName so the correct model is selected.
llm.call(agentName, systemPrompt, userPrompt, maxTokens)

Single model per agent:
Each agent uses exactly one model, configured via environment variable. Fixei works with both OpenRouter (hosted models) and Ollama (local models) — the LLM_PROVIDER env var selects the backend.
# OpenRouter (default)
LLM_PROVIDER=openrouter
MODEL_ANALYSIS=deepseek/deepseek-chat
# Ollama (local)
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
MODEL_ANALYSIS=qwen2.5-coder:7b

Startup model validation (validateModels()):
On startup, Fixei calls the OpenRouter models list API and verifies that every configured model ID actually exists. If a model is not found, an error is thrown before any ticket is processed — no surprises mid-pipeline.
This step also caches each model's context_length for the token budget guard. If a model does not return context length metadata, a startup warning is logged:
WARN [LLM] Context-length guard disabled for: qwen/qwen3-next-80b-a3b-instruct:free — token budget cannot be enforced.
Token budget guard:
Before sending a request, LLMService estimates the input token count using chars / 4 (conservative approximation). If context_length metadata is available for the model:
- max_tokens is automatically capped to contextLength - inputTokens - 256 (safety margin)
- If the prompt already exceeds the model's context window, an error is thrown immediately with an actionable message
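A sketch of the guard, using the chars / 4 estimate described above (the function name and exact error message are assumptions):

```javascript
// Illustrative token budget guard: estimate input tokens as chars / 4 and
// cap max_tokens to what the context window allows, with a 256-token margin.
function capMaxTokens(promptChars, requestedMax, contextLength) {
  const inputTokens = Math.ceil(promptChars / 4);
  const budget = contextLength - inputTokens - 256;
  if (budget <= 0) {
    throw new Error(
      `Prompt (~${inputTokens} tokens) exceeds the model context window (${contextLength})`
    );
  }
  return Math.min(requestedMax, budget);
}
```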
Reactive finish_reason=length detection:
If the model responds with finish_reason: "length" (output truncated due to token budget exhaustion), the service throws a descriptive error:
[LLM] Model "deepseek/deepseek-chat" (agent: analysis) ran out of output tokens — finish_reason=length
(model context_length: 131072 tokens). The prompt is too large for this model.
Use a model with a larger context window, reduce the number of files sent, or lower CONTEXT7_TOKENS.
Config error propagation (isLLMConfigError):
HTTP 400 ("not a valid model ID"), 401, and 403 responses set err.isLLMConfigError = true. Agents check this flag and re-throw immediately, so the pipeline fails fast with a clear message rather than silently producing wrong results.
### src/services/github.js
Thin wrapper around the GitHub REST API v3. All calls use the GITHUB_TOKEN bearer token.
| Method | Description |
|---|---|
| getFileContent(path) | Reads a file from the default branch (base64 decode) |
| createBranch(name) | Resolves HEAD SHA → creates ref |
| commitFile(branch, path, content, message) | Creates or updates a file (fetches existing SHA for updates) |
| listFiles(subPath?) | Recursive tree listing → flat array of blob paths |
| getWorkflowRun(runId) | Fetches Actions workflow run data |
| postAnalysisComment(issueNumber, analysis) | Posts root cause + locations as a formatted issue comment |
| postAuditComment(issueNumber, ctx) | Posts the full pipeline audit trail as an issue comment |
### src/services/state-manager.js
Persists pipeline run state to data/state.json on disk (upsert by runId). Used by the REST API to serve /api/runs.
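The upsert semantics can be sketched as follows (illustrative; the real StateManager also handles reading and writing the JSON file):

```javascript
// Illustrative upsert-by-runId, as used to persist runs to data/state.json.
// Returns a new array: existing runs are merged in place, new runs appended.
function upsertRun(runs, run) {
  const i = runs.findIndex((r) => r.runId === run.runId);
  if (i === -1) return [...runs, run];
  const next = [...runs];
  next[i] = { ...next[i], ...run };
  return next;
}
```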
### src/services/notification.js
Provides structured, colored terminal logging and Slack notifications.
Terminal logger (logger):
Writes to process.stdout / process.stderr with ANSI color codes. A built-in spinner shows the current operation in real time. Spinner lines are committed with a result marker when an operation finishes:
| Marker | Color | Triggered by |
|---|---|---|
| ✓ | Green | logger.success() and clean process exit |
| ✗ | Red | logger.error() and logger.fail() |
logger.fail() is used by the Orchestrator for pipeline-level failures (max retries exceeded, config errors). It writes a red FAIL badge to stderr, making terminal output immediately actionable.
Slack notifications: sent via incoming webhook. If SLACK_WEBHOOK_URL is not configured, messages are printed to stdout instead.
fixei/
├── src/
│ ├── orchestrator.js # Pipeline coordinator + LLMService
│ ├── agents/
│ │ ├── ticket-agent.js # Parses & manages GitHub/Jira tickets
│ │ ├── analysis-agent.js # Root cause analysis + file triage
│ │ ├── code-agent.js # Fix generation + branch/commit
│ │ ├── test-agent.js # Test generation + CI polling
│ │ ├── deploy-agent.js # PR creation + auto-merge
│ │ └── documentation-agent.js# Codebase docs maintenance
│ ├── api/
│ │ ├── server.js # Express server + webhook handlers
│ │ └── config.js # Environment variable loader
│ └── services/
│ ├── github.js # GitHub REST API wrapper
│ ├── vector-store.js # CodeBERT / TF-IDF semantic index
│ ├── context7.js # Framework best practices injection
│ ├── llm-utils.js # Robust JSON extractor for LLM output
│ ├── state-manager.js # Run state persistence (data/state.json)
│ ├── notification.js # Slack notifications
│ └── logger.js # ANSI-colored structured logger
├── tests/
│ ├── agents/ # Unit tests for each agent
│ └── services/ # Unit tests for each service
├── dashboard/
│ └── index.html # Real-time monitoring dashboard (vanilla JS)
├── data/ # Runtime state (auto-created, gitignored)
│ └── state.json
├── .env.example # Environment variable template
├── package.json
└── README.md
- Node.js 20 or higher
- An OpenRouter account and API key
- A GitHub personal access token with repo and workflow scopes
- The target repository must have GitHub Actions configured with a CI workflow
# 1. Clone the repository
git clone https://github.com/danielplacido/fixei.git
cd fixei
# 2. Install dependencies
npm install
# 3. Configure environment variables
cp .env.example .env
# Edit .env with your credentials (see Configuration Reference below)
# 4. Start in development mode (auto-reload on file changes)
npm run dev
# → Server running at http://localhost:3000
# 5. Open the monitoring dashboard
# Open dashboard/index.html in your browser

curl -X POST http://localhost:3000/api/trigger \
-H "Content-Type: application/json" \
-d '{
"ticket": {
"id": "BUG-123",
"title": "Contact form does not show validation errors",
"description": "When submitting an empty contact form, no error message is displayed. The form silently fails and the user does not know what went wrong.",
"stepsToReproduce": "1. Open Add Contact screen\n2. Click Save without filling in any fields",
"expectedBehavior": "Show per-field validation error messages",
"actualBehavior": "Form closes or stays open with no feedback",
"rawLogs": "POST /contacts → 422 Unprocessable Entity"
}
}'

| Variable | Description |
|---|---|
| OPENROUTER_API_KEY | API key from openrouter.ai/keys |
| GITHUB_TOKEN | Personal access token — needs repo + workflow scopes |
| GITHUB_REPO | Target repository in owner/repo format |
| Variable | Default | Description |
|---|---|---|
| MODEL_ANALYSIS | anthropic/claude-3.5-sonnet | Model for root cause analysis (heavier reasoning) |
| MODEL_CODE | anthropic/claude-3.5-sonnet | Model for code generation |
| MODEL_TEST | anthropic/claude-3.5-sonnet | Model for test generation |
| MODEL_TICKET | anthropic/claude-3.5-sonnet | Model for ticket parsing |
| MODEL_DOCUMENTATION | same as MODEL_ANALYSIS | Model for codebase docs generation |
Browse available models at openrouter.ai/models.
| Variable | Example | Description |
|---|---|---|
| STACK_BACKEND | TypeScript/NestJS | Backend language/framework — used to fetch best practices |
| STACK_FRONTEND | Vue | Frontend framework — used to fetch best practices |
| CONTEXT7_ENABLED | true | Set to false to disable Context7 best practice injection |
| CONTEXT7_TOKENS | 2500 | Max tokens fetched per library from Context7 |
| Variable | Default | Description |
|---|---|---|
| DEFAULT_BRANCH | main | Branch the agent reads code from and creates fix branches off |
| GITHUB_WEBHOOK_SECRET | — | HMAC secret for webhook signature verification (strongly recommended) |
| TRIGGER_LABEL | ai-fix | GitHub Issue label that activates the pipeline |
| CI_WORKFLOW_ID | ci.yml | Filename of the GitHub Actions workflow the agent triggers |
| CI_TIMEOUT_MS | 600000 | How long to wait for CI to complete (milliseconds) |
| AUTO_MERGE | true | Automatically merge the PR when CI passes |
| MERGE_METHOD | squash | squash / merge / rebase |
| DEPLOY_ENV | production | Label shown in notifications when merged |
| Variable | Default | Description |
|---|---|---|
| MAX_RETRIES | 3 | Maximum fix attempts before escalating to a human |
| COMMIT_TESTS | true | Set to false to skip committing generated tests |
| Variable | Default | Description |
|---|---|---|
| TICKET_PROVIDER | github | github or jira |
| JIRA_BASE_URL | — | e.g. https://yourcompany.atlassian.net |
| JIRA_EMAIL | — | Jira account email |
| JIRA_TOKEN | — | Jira API token |
| JIRA_TRANSITION_DONE_ID | 31 | Transition ID for "Done" status |
| Variable | Default | Description |
|---|---|---|
| SLACK_WEBHOOK_URL | — | Incoming webhook URL for Slack notifications |
| SLACK_CHANNEL | #engineering | Channel name (informational only; the actual channel is set in the webhook itself) |
# /etc/systemd/system/fixei.service
[Unit]
Description=Fixei
After=network.target
[Service]
Type=simple
User=deploy
WorkingDirectory=/opt/fixei
EnvironmentFile=/opt/fixei/.env
ExecStart=/usr/bin/node src/api/server.js
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable --now fixei
sudo journalctl -u fixei -f

FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "src/api/server.js"]docker build -t fixei .
docker run -d \
--name fixei \
--env-file .env \
-p 3000:3000 \
-v fixei-data:/app/data \
  fixei

npm install -g pm2
pm2 start src/api/server.js --name fixei
pm2 save
pm2 startup

- Set GITHUB_WEBHOOK_SECRET — the server verifies HMAC-SHA256 on every webhook
- The server must be publicly reachable by GitHub (use a reverse proxy like Nginx or expose with a tunnel like Cloudflare Tunnel for private networks)
- The data/ directory must be writable (state persistence and vector index cache)
- First run will download the CodeBERT model (~90MB) — pre-warm with: node -e "import('./src/services/vector-store.js')"
- Review MAX_RETRIES and CI_TIMEOUT_MS for your CI speed
- Set STACK_BACKEND and STACK_FRONTEND for optimal Context7 best practices injection
server {
listen 80;
server_name fixei.yourcompany.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 300s; # long enough for CI polling
}
}

- Go to your target repository → Settings → Webhooks → Add webhook
- Payload URL: https://your-server.com/webhook/github
- Content type: application/json
- Secret: same value as GITHUB_WEBHOOK_SECRET in your .env
- Events: select "Issues" only
- Create the label ai-fix in your repository (Issues → Labels → New label)
Any issue with the ai-fix label added will trigger the pipeline automatically.
- Go to Jira → Settings → System → Webhooks → Create a WebHook
- URL: https://your-server.com/webhook/jira
- Events: Issue Created, Issue Updated
- Filter (optional): labels = "ai-fix"
| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Health check: { ok: true, ts: "..." } |
| POST | /webhook/github | GitHub Issues webhook receiver |
| POST | /webhook/jira | Jira webhook receiver |
| POST | /api/trigger | Manual trigger: { ticket: {...} } — synchronous, returns the pipeline result |
| GET | /api/runs | List all runs (sorted by updatedAt desc) |
| GET | /api/runs/:runId | Full details of a specific run including the audit log |
Open dashboard/index.html directly in your browser (no build step required). It connects to http://localhost:3000 by default.
Features:
- Live list of all pipeline runs with color-coded status (green = success, red = error, amber = running, purple = escalated)
- Per-run detail: full audit trail for every pipeline step, files changed, PR link, CI run link, failure details
- Auto-polling — updates in real time without manual refresh
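The polling behavior can be sketched as follows (illustrative; the endpoint comes from the REST API section, while the interval and function name are assumptions):

```javascript
// Illustrative polling loop like the dashboard's: fetch the run list on a
// fixed interval and hand it to a render callback. Network errors are
// swallowed so polling survives server restarts.
function startPolling(baseUrl, onRuns, intervalMs = 3000) {
  const tick = async () => {
    try {
      const res = await fetch(`${baseUrl}/api/runs`);
      if (res.ok) onRuns(await res.json());
    } catch {
      // server unreachable: keep polling
    }
  };
  tick(); // initial load
  return setInterval(tick, intervalMs);
}
```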
# Run all tests with coverage report
npm test
# Run without coverage (faster)
npm test -- --no-coverage
# Run a specific test file
npm test -- tests/agents/code-agent.test.js

The test suite covers all 6 agents and all services (13 test files, ~197 assertions). GitHub service and Express server are intentionally excluded from coverage as they require live API calls.
- Webhook verification: all GitHub webhooks are verified using HMAC-SHA256 (crypto.timingSafeEqual). Always set GITHUB_WEBHOOK_SECRET.
- Token scope: the GitHub token only needs repo + workflow scopes. Do not use a token with admin or org-wide permissions.
- .env file: never commit your .env file. It contains API keys. The .gitignore in this repository excludes it.
- State file: data/state.json may contain ticket titles and PR URLs. Keep the data/ directory private.
- LLM output: generated code is committed to a branch and goes through CI before merge. The pipeline never pushes directly to the default branch.
- Auto-merge: if your CI is not comprehensive, set AUTO_MERGE=false and review PRs manually.
- CORS: the API has open CORS for the local dashboard. If you expose the API publicly, restrict CORS to known origins.