An automation tool that picks up Jira issues labeled `llm-candidate`, implements them using an LLM, and opens a pull request.
The dashboard engine uses the Vercel AI SDK and supports multiple LLM providers (Anthropic, OpenAI, Google). The CLI scripts in `cli/` are an alternative entry point that use Claude Code directly.
Unshift runs four phases per issue:
- **Discover** - Queries the Jira REST API for issues labeled `llm-candidate` and determines which issues to process.
- **Plan** - Reads the Jira issue, maps it to a repo via `repos.yaml`, creates a branch, and generates an implementation plan (`prd.json`).
- **Implement** - Works through the plan one entry at a time. If a validation step fails, it automatically retries once with the error context. This keeps token usage flat and gives every entry the full context window.
- **Deliver** - Commits, pushes, opens a PR, updates Jira, and cleans up. When started from the dashboard, the run pauses here for approval before proceeding.
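The retry-once behavior in the Implement phase can be sketched as a small shell helper. This is an illustration, not unshift's actual code; `run_with_retry` and `flaky` are hypothetical names:

```shell
#!/bin/sh
# run_with_retry CMD... : run a validation step; on failure, retry once.
# The first failure's stderr is kept in RETRY_CONTEXT so the retry
# (in unshift, the next LLM call) can see what went wrong.
run_with_retry() {
  err_file=$(mktemp)
  if "$@" 2>"$err_file"; then
    rm -f "$err_file"
    return 0
  fi
  RETRY_CONTEXT=$(cat "$err_file")
  rm -f "$err_file"
  echo "validation failed, retrying once with error context:" >&2
  echo "$RETRY_CONTEXT" >&2
  "$@"   # second and final attempt
}

# Demo: a step that fails on the first call and succeeds on the second.
flaky() {
  if [ ! -f /tmp/unshift_demo_flag ]; then
    touch /tmp/unshift_demo_flag
    echo "simulated failure" >&2
    return 1
  fi
  echo "ok"
}

rm -f /tmp/unshift_demo_flag
run_with_retry flaky   # prints "ok" after one retry
```

The single bounded retry is what keeps token usage flat: each plan entry gets at most two attempts, each with a fresh context window.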
Each run executes in an isolated git worktree, so multiple runs against the same repository can proceed in parallel without branch conflicts.
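The isolation model can be reproduced with plain git (illustrative only; unshift manages worktrees internally):

```shell
# Create a throwaway repo with one commit.
base=$(mktemp -d)
git init -q -b main "$base/repo"
git -C "$base/repo" -c user.email=a@b.c -c user.name=demo \
  commit -q --allow-empty -m init

# Each run gets its own worktree and branch, so two runs against the
# same repository never fight over the checkout or the current branch.
git -C "$base/repo" worktree add -q -b run-1 "$base/wt-run-1"
git -C "$base/repo" worktree add -q -b run-2 "$base/wt-run-2"

git -C "$base/repo" worktree list   # main checkout plus both run worktrees
```

Both worktrees share one object database, so the extra disk cost per run is only the checked-out files.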
This is the fastest way to run unshift; only Docker is required.
```shell
git clone https://github.com/CryptoRodeo/unshift.git
cd unshift
cp .env.example .env
```

Edit `.env` and fill in your credentials (see Credentials Reference):
- LLM provider (at least one required):
  - `ANTHROPIC_API_KEY` - for Anthropic / Claude (default provider)
  - `OPENAI_API_KEY` - for OpenAI / GPT
  - `GOOGLE_GENERATIVE_AI_API_KEY` - for Google / Gemini
  - Or Vertex AI config (see Credentials Reference)
- `UNSHIFT_PROVIDER` - which provider to use: `anthropic` (default), `openai`, `google`, or `vertex`
- `UNSHIFT_MODEL` - model ID to use (defaults: `claude-sonnet-4-6`, `gpt-4o`, `gemini-2.0-flash`)
- `JIRA_BASE_URL`, `JIRA_USER_EMAIL`, `JIRA_API_TOKEN` - required for Jira integration
- `GH_TOKEN` or `GITLAB_TOKEN` - required for PR/MR creation
- `GIT_USER_NAME`, `GIT_USER_EMAIL` - used for git commits inside the container
You can also select the provider and model from the dashboard UI when starting a run.
Create a `repos.yaml` to map your Jira projects to repositories. See `repos.yaml.example` for the schema and a starting template.
Inside the container, repos are cloned under `/app/workspace/` (bind-mounted to `./workspace` on the host). Keep `local_dir` set to the path on your host machine (e.g. `~/work/my-repo`) - the dashboard uses it for the "Open Locally" dialog.
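Because of the bind mount, the host's `./workspace/<repo>` and the container's `/app/workspace/<repo>` are the same directory. A hypothetical helper translating between them (the function name and logic are illustrative, not part of unshift):

```shell
# Translate a host path under a workspace/ directory to its
# in-container location. Assumes the compose file bind-mounts
# ./workspace to /app/workspace.
to_container_path() {
  host_path=$1
  case "$host_path" in
    */workspace/*) echo "/app/workspace/${host_path##*/workspace/}" ;;
    *) echo "not under workspace: $host_path" >&2; return 1 ;;
  esac
}

to_container_path "$HOME/unshift/workspace/my-repo"
# → /app/workspace/my-repo
```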
Example repos.yaml
```yaml
# Frontend monorepo - two Jira projects share this repo,
# disambiguated by component and labels
- jira_projects: [FRONTEND, DESIGN]
  component: ComponentLibrary
  labels: [ui-core]
  repo_url: git@gitlab.example.com:my-org/ui-packages.git
  local_dir: ~/work/ui-packages
  default_branch: main
  host: GitLab
  validation: [npm test, npx tsc --noEmit]

# Standalone app - simple 1:1 project-to-repo mapping
- jira_projects: [TRUSTY]
  repo_url: git@github.com:my-org/trusty-ui.git
  local_dir: ~/work/trusty-ui
  default_branch: main
  host: GitHub
  validation: [npm test, npx tsc --noEmit]

# Console UI
- jira_projects: [CONSOLE]
  repo_url: git@github.com:my-org/console-ui.git
  local_dir: ~/work/console-ui
  default_branch: main
  host: GitHub
  validation: [npm test, npx tsc --noEmit]

# Same Jira project as the first entry, but a different
# component routes issues to a separate repo
- jira_projects: [FRONTEND]
  component: Automation
  repo_url: git@github.com:my-org/automation-tools.git
  local_dir: ~/projects/automation-tools
  default_branch: main
  host: GitHub
  validation: []
```

This shows several common patterns: multiple Jira projects sharing a repo, the same project key routing to different repos via `component`, label-based disambiguation, mixed GitHub/GitLab hosts, and empty `validation` when no checks are needed.
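A quick way to sanity-check the file before a run is to confirm every entry declares a `repo_url`. The snippet below is a crude, hypothetical grep-based count (unshift itself does real YAML parsing):

```shell
# Illustrative check: the number of mapping entries should equal the
# number of repo_url lines. Assumes the 2-space indentation used above.
check_repos_yaml() {
  entries=$(grep -c '^- jira_projects:' "$1")
  urls=$(grep -c '^ *repo_url:' "$1")
  if [ "$entries" -eq "$urls" ]; then
    echo "ok: $entries entries"
  else
    echo "mismatch: $entries entries, $urls repo_url lines" >&2
    return 1
  fi
}

demo=$(mktemp)
cat > "$demo" <<'EOF'
- jira_projects: [TRUSTY]
  repo_url: git@github.com:my-org/trusty-ui.git
- jira_projects: [CONSOLE]
  repo_url: git@github.com:my-org/console-ui.git
EOF
check_repos_yaml "$demo"   # → ok: 2 entries
```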
```shell
docker compose up --build
```

The dashboard will be available at http://localhost:3000.
- Run data (SQLite) persists in a Docker volume
- Cloned repos persist in `./workspace` on the host
- To stop: `docker compose down` (add `-v` to also delete the database volume)
**Vertex AI users:** Run `gcloud auth application-default login` on the host before starting the container. The compose file mounts your ADC credentials file automatically.
| Issue | Solution |
|---|---|
| Native module build failures (`better-sqlite3`) | Ensure you're building on a supported architecture (amd64/arm64). Run `docker compose build --no-cache` to rebuild from scratch. |
| `gh` not found inside container | The Dockerfile installs it during build. If the build was cached before it was added, run `docker compose build --no-cache`. |
| Permission errors on `./workspace` volume | The container runs as UID 1000 (`unshift` user). If the host user has a different UID, adjust ownership: `sudo chown -R $(id -u):$(id -g) ./workspace` |
| Git push/clone fails inside container | Verify `GH_TOKEN` (or `GITLAB_TOKEN`) is set in `.env`. The token needs `repo` scope (GitHub) or `api` scope (GitLab). |
| Container exits immediately | Check logs with `docker compose logs dashboard`. Common cause: missing required env vars in `.env`. |
| Database lost after `docker compose down` | Data is stored in a named volume (`dashboard-data`). Use `docker compose down` (without `-v`) to preserve it. Adding `-v` removes volumes. |
| Vertex AI: auth errors | Ensure `gcloud auth application-default login` was run on the host and the ADC file exists before starting the container. |
| Stale worktrees in `workspace/` | If a run was interrupted, orphaned worktrees may remain. Clean up with `git -C workspace/<repo> worktree prune`. |
Use this setup if you want to develop unshift itself (modify the dashboard, CLI scripts, etc.).
| Tool | Purpose |
|---|---|
| Node.js (v18+) | Runtime for the dashboard (uses the Vercel AI SDK for LLM calls) |
| Git | Version control (pre-installed on most systems) |
Only needed if using the CLI scripts (`cli/`):
| Tool | Purpose |
|---|---|
| Claude Code | CLI agent that runs each phase (`npm install -g @anthropic-ai/claude-code`) |
| `jq` | Used by the CLI orchestrator |
For PR/MR creation, install `gh` (GitHub) and/or `glab` (GitLab), depending on where your repos are hosted.
Git must be configured with push access to your target repositories (e.g. via SSH keys or a credential helper).
```shell
git clone https://github.com/CryptoRodeo/unshift.git
cd unshift
./cli/init.sh
```

Copy the template and fill in your tokens:
```shell
cp .env.example .unshift.env
```

Then source it (or export the variables in your shell):

```shell
source .unshift.env
```

With Anthropic (default):
```shell
ANTHROPIC_API_KEY=sk-ant-...
```

With OpenAI:

```shell
OPENAI_API_KEY=sk-...
UNSHIFT_PROVIDER=openai
```

With Google Gemini:

```shell
GOOGLE_GENERATIVE_AI_API_KEY=your-key
UNSHIFT_PROVIDER=google
```

Common credentials (required regardless of provider):
```shell
JIRA_BASE_URL=https://mycompany.atlassian.net
JIRA_USER_EMAIL=you@company.com
JIRA_API_TOKEN=your-jira-token
GH_TOKEN=ghp_...
# Or, if using GitLab instead:
# GITLAB_TOKEN=glpat-...
```

With Vertex AI (Google Cloud):
For the dashboard, set the Vertex AI environment variables - the provider is auto-detected when `ANTHROPIC_API_KEY` is not set:

```shell
ANTHROPIC_VERTEX_PROJECT_ID=<your-gcp-project-id>
CLOUD_ML_REGION=us-east5
UNSHIFT_PROVIDER=vertex  # optional - auto-detected when ANTHROPIC_API_KEY is absent
```

For the CLI scripts, set the Claude Code-specific flag instead:

```shell
CLAUDE_CODE_USE_VERTEX=1
CLOUD_ML_REGION=us-east5
ANTHROPIC_VERTEX_PROJECT_ID=<your-gcp-project-id>
```

In both cases you need active GCP credentials (`gcloud auth application-default login`).
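A quick pre-flight check for the ADC file can save a failed first run. The path below is the standard gcloud location on Linux/macOS; adjust it if you use a custom `CLOUDSDK_CONFIG` (the helper name is illustrative):

```shell
# Verify Application Default Credentials exist before starting a run.
check_adc() {
  adc=${1:-"$HOME/.config/gcloud/application_default_credentials.json"}
  if [ -f "$adc" ]; then
    echo "ADC found: $adc"
  else
    echo "ADC missing - run: gcloud auth application-default login" >&2
    return 1
  fi
}

check_adc || true   # on a fresh machine this prints the reminder
```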
See Credentials Reference for how to create each token and for Data Center configuration.
The dashboard is a web UI for starting, monitoring, and approving unshift runs.
```shell
cd dashboard
npm install
npm run dev
```

This starts both the Express/WebSocket server and the Vite dev server using `concurrently`. The client is available at http://localhost:5173 and the API server runs on http://localhost:3000.
- Start and stop runs, view per-phase progress, and stream logs
- Select an LLM provider and model per run, or use the defaults from your `.env`
- Multiple runs on the same repo can execute in parallel (each gets its own worktree)
- After Phase 2 completes, the run pauses for your approval - review changes, then approve, reject, or retry before Phase 3 creates the PR
- Run history is stored in a local SQLite database (`dashboard/server/data/runs.db`) and persists across server restarts
- Issues that already completed successfully are skipped automatically; use the force option to re-run
The `cli/` directory contains the shell-based orchestrator that uses Claude Code directly. It runs the same phases without a web UI and requires a Claude Code installation.
```shell
./cli/unshift.sh
```

You can also target a single issue or just list what's available:

```shell
./cli/unshift.sh --issue PROJ-123           # process one issue
./cli/unshift.sh --discover                 # list llm-candidate issues and exit
./cli/unshift.sh --retry --issue PROJ-123   # retry from prd.json (skips planning)
```

`--retry` resets the branch to its merge-base, marks all `prd.json` entries as incomplete, and re-runs Phases 2 and 3. It requires the `UNSHIFT_CONTEXT_FILE` env var pointing at the context file from the original run.
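The branch reset that `--retry` performs can be illustrated with plain git. This is a sketch of just the reset step (the real script also re-marks `prd.json` entries):

```shell
# Build a throwaway repo: main with one commit, plus a feature branch
# carrying a failed attempt's commit.
base=$(mktemp -d)
git init -q -b main "$base"
g() { git -C "$base" -c user.email=a@b.c -c user.name=demo "$@"; }
g commit -q --allow-empty -m base
g checkout -q -b feature
g commit -q --allow-empty -m attempt-1

# The reset: drop the failed attempt's commits by rewinding the feature
# branch to where it diverged from the default branch.
g reset --hard -q "$(g merge-base HEAD main)"
```

After this, `feature` points at the same commit as `main`, and Phase 2 starts implementing from a clean slate.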
Jira Cloud: Create a token at Atlassian API token management. Use the email of the account that created the token as JIRA_USER_EMAIL.
Jira Data Center / Server: Create a Personal Access Token from your Jira profile (Profile > Personal Access Tokens). Set JIRA_AUTH_TYPE=bearer and JIRA_API_VERSION=2 in your .unshift.env. You do not need to set JIRA_USER_EMAIL when using bearer auth.
Create a token with the `repo` scope (classic) or Contents + Pull requests read/write (fine-grained) at GitHub token settings. The `gh` CLI recognizes `GH_TOKEN` automatically - no separate `gh auth login` is needed.
Create a token with the `api` scope at GitLab access tokens. The `glab` CLI recognizes `GITLAB_TOKEN` automatically - no separate `glab auth login` is needed.
Unshift also ships as a Claude Code custom skill that you can invoke inside any Claude Code session with `/unshift`. The skill uses Jira MCP tools directly (instead of `acli`) and runs the full Jira-to-PR workflow from within Claude Code.
The skill uses the `gh` or `glab` CLI to create pull/merge requests - see Install prerequisites and Credentials Reference.
From the project where you want to use the skill, run:
```shell
mkdir -p .claude/skills/unshift
curl -fsSL https://raw.githubusercontent.com/CryptoRodeo/unshift/main/.claude/skills/unshift/SKILL.md \
  -o .claude/skills/unshift/SKILL.md
```

Claude Code automatically discovers skills in `.claude/skills/`.
The skill communicates with Jira via the Atlassian MCP server.
Add the MCP server with your credentials in `.claude/settings.local.json` (this file should not be committed):
```json
{
  "mcpServers": {
    "atlassian": {
      "type": "url",
      "url": "https://mcp.atlassian.com/v1/sse",
      "headers": {
        "Authorization": "Basic <base64-encoded email:api-token>"
      }
    }
  }
}
```

To generate the Base64 value, run:

```shell
echo -n "you@company.com:your-jira-api-token" | base64
```

See Credentials Reference for how to get the token.
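To double-check the value before pasting it into `settings.local.json`, you can round-trip it (illustrative; substitute your real email and token):

```shell
email="you@company.com"
token="your-jira-api-token"

# Build the header value...
basic=$(printf '%s:%s' "$email" "$token" | base64)
echo "Authorization: Basic $basic"

# ...then decode it back to confirm nothing was mangled. A trailing
# newline from using `echo` instead of `echo -n` is the usual culprit.
[ "$(printf '%s' "$basic" | base64 -d)" = "$email:$token" ] && echo "round-trip ok"
```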
Inside a Claude Code session, run:
```shell
/unshift            # discover and process all llm-candidate issues
/unshift PROJ-123   # process a specific issue
```
The skill reads `repos.yaml` from this repo's root to map Jira projects to repositories. See `repos.yaml.example` for the schema.
| File | Purpose |
|---|---|
| `dashboard/` | Web UI for starting, monitoring, and approving runs |
| `dashboard/server/src/engine/` | Agentic engine (orchestrator, phase runner, prompts, tools, providers) |
| `cli/unshift.sh` | Shell orchestrator - drives all four phases |
| `cli/ralph/ralph.sh` | Implementation loop - one `claude -p` per `prd.json` entry, with automatic retry on failure |
| `cli/prompts/phase1.md` | Phase 1 prompt template for repo setup and planning |
| `cli/prompts/phase3.md` | Phase 3 prompt template for PR creation and Jira update |
| `cli/init.sh` | Configures Claude Code permissions for CLI usage |
| `.claude/skills/unshift/SKILL.md` | Claude Code custom skill - run `/unshift` inside a session |
| `compose.yml` | Docker Compose service definition for the dashboard |
| `dashboard/Dockerfile` | Multi-stage Docker build (no Claude Code - the engine calls LLM APIs directly) |
| `dashboard/entrypoint.sh` | Container entrypoint - sets git identity and GCP credentials |
| `.dockerignore` | Files excluded from Docker build context |
| `repos.yaml` | Project-to-repository mapping (shared by dashboard and CLI) |
| `prd.json` | Implementation plan, created per issue, cleaned up after (in target repo at runtime) |
| `progress.txt` | Append-only execution log, cleaned up after (in target repo at runtime) |
| `runs.db` | SQLite database storing run history, logs, and progress (in `dashboard/server/data/` at runtime) |
