Colony Frontier is a complete, browser-playable colony management game built with modular ES modules.
You build structures, manage colonists and resources, research technologies, and push your settlement to victory.
Contributor workflow and PR checklist mapping are documented in CONTRIBUTING.md.
CONTRIBUTING.md and .github/pull_request_template.md are intentionally synchronized process docs.
If command/script names change, update this README, CONTRIBUTING.md, and .github/pull_request_template.md together.
package.json scripts are the canonical source for command names used in docs and CI.
For command-rename PRs, include a brief docs-sync note in the PR summary listing updated docs/CI references.
Start with the New Contributor Quick Start subsection in CONTRIBUTING.md for the shortest command path.
Opening your first tuning PR? Review .github/pull_request_template.md and follow the tuning checklist items before requesting review.
- Complete colony loop: build → produce → research → expand → win/lose.
- 12+ building types across housing, production, infrastructure, culture, and defense.
- Colonist simulation with jobs, movement, skills, and needs (hunger, rest, health, morale).
- Construction queue with builder-driven progress.
- Research tree with prerequisites and unlocks.
- Objective tracker with milestone rewards and progression guidance.
- Objective cards show explicit reward details before completion.
- Objective rewards automatically scale with scenario difficulty.
- Run analytics: track peak population, completions, deaths, run outcomes, and balance profile context.
- Runtime state invariants to detect and pause on simulation corruption.
- Invariant violations are logged in run stats for debugging and postmortems.
- Scenario presets: Frontier, Prosperous, and Harsh with distinct start conditions and ongoing production/workforce tuning.
- Balance profiles: Standard, Forgiving, and Brutal simulation tuning.
- Deterministic seeded simulation support for reproducible runs.
- Save/Load/Reset controls backed by `localStorage`.
- Save Export/Import for portable JSON save files.
- Versioned save schema with migration support for legacy save payloads.
- Strict save validation for imports with actionable error feedback.
- Import safety limits to reject unexpectedly large save files.
- Invariant-checked save loading to reject structurally unsafe game states.
- 3D rendering with Three.js, plus an automatic 2D fallback when WebGL is unavailable.
- Responsive UI that supports both desktop and touch interactions.
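Deterministic seeded runs depend on a reproducible pseudo-random source instead of `Math.random()`. The game's actual generator is not shown in this README; the following is a minimal sketch of the idea using the well-known mulberry32 algorithm with a hypothetical string-hash helper for turning a `?seed=` value into a number:

```javascript
// Hypothetical sketch of seeded randomness for reproducible runs.
// Colony Frontier's real generator may differ; this only illustrates the pattern.
function hashSeed(str) {
  // FNV-1a-style 32-bit hash: turns an arbitrary seed string into an integer.
  let h = 2166136261;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

function mulberry32(seed) {
  // Deterministic PRNG: the same seed always replays the same float sequence in [0, 1).
  let a = seed >>> 0;
  return function () {
    let t = (a += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rngA = mulberry32(hashSeed("any-string-you-like"));
const rngB = mulberry32(hashSeed("any-string-you-like"));
// Identical seeds produce identical sequences.
const deterministic = rngA() === rngB() && rngA() === rngB();
```

Because every draw flows through one seeded generator, a whole run can be replayed bit-for-bit from just the seed string.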
- Added scenario tuning trend reporting with dual comparison sources:
  - baseline dashboard artifact (when present),
  - committed baseline signatures/intensity maps (fallback).
- Added a baseline capture command for dashboard-to-dashboard trend workflows:
  `npm run simulate:capture:tuning-dashboard-baseline`
- Expanded scenario tuning baseline suggestions to include:
  - signature drift snippets,
  - total tuning intensity drift snippets.
- Added optional strict intensity enforcement:
  - local: `SIM_SCENARIO_TUNING_ENFORCE_INTENSITY=1 npm run simulate:check:tuning-baseline`
  - CI opt-in via repository variable: `SIM_SCENARIO_TUNING_ENFORCE_INTENSITY=1`

```
SCENARIO_DEFINITIONS
└─> simulate:report:tuning ---------------------> scenario-tuning-dashboard.{json,md}
└─> simulate:capture:tuning-dashboard-baseline -> scenario-tuning-dashboard.baseline.json
└─> simulate:report:tuning:trend --------------> scenario-tuning-trend.{json,md}
└─> simulate:suggest:tuning-baseline ----------> scenario-tuning-baseline-suggestions.{json,md}
└─> simulate:check:tuning-baseline
    (optional strict: SIM_SCENARIO_TUNING_ENFORCE_INTENSITY=1)
```

- Vanilla HTML/CSS/JavaScript (ES modules)
- Three.js (installed via npm, served from `node_modules`)
- Node built-in test runner for unit tests
```
index.html
styles.css
src/
  content/      # data definitions (resources, buildings, research)
  game/         # game engine, state, selectors, event bus
  systems/      # simulation systems (colonists, economy, construction, research, outcomes)
  render/       # Three.js renderer + fallback renderer
  ui/           # UI controller and interactions
  persistence/  # save/load helpers
tests/          # unit tests for pure logic modules
```

```
npm install
npm start
```

Then open: http://localhost:8000
Optional URL parameters:
- `?scenario=frontier|prosperous|harsh`
- `?seed=any-string-you-like`
- `?balance=standard|forgiving|brutal`
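Reading these parameters in the browser is straightforward with the standard `URL` / `URLSearchParams` APIs. A minimal sketch follows; the parameter names match this README, but the allowed-value lists, defaults, and function name are assumptions for illustration, not the game's actual code:

```javascript
// Sketch: parse optional URL parameters with safe fallbacks.
// Allowed values and defaults below are assumptions for illustration.
function parseGameOptions(href) {
  const params = new URL(href).searchParams;
  const pick = (key, allowed, fallback) => {
    const value = params.get(key);
    return allowed.includes(value) ? value : fallback;
  };
  return {
    scenario: pick("scenario", ["frontier", "prosperous", "harsh"], "frontier"),
    balance: pick("balance", ["standard", "forgiving", "brutal"], "standard"),
    seed: params.get("seed") ?? null, // any string is accepted as a seed
  };
}

// In the browser this would be parseGameOptions(window.location.href).
const opts = parseGameOptions("http://localhost:8000/?scenario=harsh&seed=run-42");
```

Unknown or missing values fall back to defaults, so malformed URLs never put the game into an invalid configuration.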
```
npm test
```

Unit tests cover:
- economy system
- construction system
- research system
- colonist simulation behavior
- scenario setup behavior
- deterministic simulation behavior
- state serialization validity
- scripted integration progression milestones
- objective progression behavior
Run a deterministic scenario simulation matrix from CLI:
```
npm run simulate
```

Most simulation CLI commands support:
- `SIM_STRATEGY_PROFILE` (defaults to `baseline`)
Run deterministic regression assertions (fails on balance regressions):
```
npm run simulate:assert
```

Validate scenario tuning maps for invalid keys or unsafe multipliers:
```
npm run simulate:validate:tuning
```

Generate a compact scenario tuning dashboard (JSON + Markdown):
```
npm run simulate:report:tuning
```

Capture the current scenario tuning dashboard as a reusable trend baseline artifact:
```
npm run simulate:capture:tuning-dashboard-baseline
```

Generate a scenario tuning trend report against baseline signatures (or a baseline dashboard file when available):
```
npm run simulate:report:tuning:trend
```

When a baseline dashboard artifact is unavailable, the trend report falls back to committed signature and intensity baselines.
Troubleshooting:
- if you see a message that the baseline dashboard is missing, run:
  `npm run simulate:capture:tuning-dashboard-baseline`
- then rerun `npm run simulate:report:tuning:trend` to switch the comparison source to dashboard mode.
Trend baseline path can be overridden with:
- `SIM_SCENARIO_TUNING_TREND_BASELINE_PATH` (read path used by trend report)
- `SIM_SCENARIO_TUNING_DASHBOARD_BASELINE_PATH` (write path used by baseline capture)
Enforce scenario tuning signature baseline consistency:
```
npm run simulate:check:tuning-baseline
```

Optional strict mode:
- set `SIM_SCENARIO_TUNING_ENFORCE_INTENSITY=1` to fail when total tuning intensity baselines drift (not just signature baselines).
Optional machine-readable diagnostics mode:
- set `REPORT_DIAGNOSTICS_JSON=1` to emit one-line JSON diagnostics in addition to the normal human-readable logs. This is useful for CI log parsing and custom automation.
- optional: set `REPORT_DIAGNOSTICS_RUN_ID=<value>` to attach a shared correlation ID across all emitted diagnostics in a run.
- currently supported by:
  - `npm run simulate:report:tuning:trend`
  - `npm run reports:validate`
  - `npm run simulate:check:tuning-baseline`
  - `npm run simulate:baseline:check`
  - `npm run diagnostics:smoke`
  - `npm run diagnostics:smoke:validate`
- `npm run diagnostics:smoke` executes a lightweight end-to-end diagnostics contract check, then writes consolidated JSON + Markdown reports (`reports/report-diagnostics-smoke.json`/`.md` by default) with counts by script/level/code plus per-scenario pass/fail details.
- optional: set `REPORT_DIAGNOSTICS_SMOKE_OUTPUT_PATH=<path>` to control where the consolidated smoke report is written.
- optional: set `REPORT_DIAGNOSTICS_SMOKE_MD_OUTPUT_PATH=<path>` to control where the markdown smoke report is written.
- `npm run diagnostics:smoke:validate` validates smoke JSON + markdown artifacts by default and fails if `failedScenarioCount > 0`.
- optional: set `REPORT_DIAGNOSTICS_SMOKE_VALIDATE_MARKDOWN=0` to skip markdown artifact validation (JSON summary validation is still enforced).
- smoke summary payloads are versioned and tagged for automation: `type: "report-diagnostics-smoke-summary"`, `schemaVersion: 1`, plus aggregate counters (`diagnosticsByCode`, `diagnosticsByLevel`, `diagnosticsByScript`) and per-scenario contract verdicts.
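With `REPORT_DIAGNOSTICS_JSON=1` set, downstream tooling can separate the one-line JSON records from the surrounding human-readable log lines. A hypothetical filtering sketch in Node follows; the log content shown is invented for illustration:

```javascript
// Sketch: pull one-line JSON diagnostics out of a mixed log stream.
// The log lines below are invented; real output comes from the report scripts.
const log = [
  "checking tuning baseline...",
  '{"type":"report-diagnostic","schemaVersion":1,"level":"warn","code":"scenario-tuning-signature-drift","message":"signature drift detected"}',
  "done.",
].join("\n");

function extractDiagnostics(text) {
  return text
    .split("\n")
    // Only lines that parse as JSON and carry the diagnostic type tag survive.
    .flatMap((line) => {
      try {
        return [JSON.parse(line)];
      } catch {
        return []; // plain log lines are not JSON; skip them
      }
    })
    .filter((d) => d && d.type === "report-diagnostic");
}

const diagnostics = extractDiagnostics(log);
```

The same idea works in CI: capture a script's stdout, extract the diagnostic records, and aggregate them by `code` or `level` for custom dashboards.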
Common diagnostic codes:
- artifact/baseline read + validation: `artifact-missing`, `artifact-invalid-json`, `artifact-invalid-payload`, `artifact-read-error`
- baseline drift checks: `scenario-tuning-signature-drift`, `scenario-tuning-intensity-drift`, `scenario-tuning-intensity-drift-strict`, `baseline-signature-drift`
- diagnostics smoke validation: `diagnostics-smoke-run-summary`, `diagnostics-smoke-validation-summary`, `diagnostics-smoke-failed-scenarios`
Diagnostic JSON line format:
```json
{
  "type": "report-diagnostic",
  "schemaVersion": 1,
  "generatedAt": "2026-02-13T12:34:56.789Z",
  "script": "npm-script-or-null",
  "runId": "optional-correlation-id-or-null",
  "level": "info|warn|error",
  "code": "stable-diagnostic-code",
  "message": "human-readable summary",
  "context": { "optional": "object payload" }
}
```

Contract notes:
- `type` is always `report-diagnostic`.
- `schemaVersion` is an integer and currently `1`.
- `generatedAt` is canonical ISO-8601 (`Date.toISOString()` format).
- `script` is either `null` or a non-empty command identifier string.
- `runId` is either `null` or a non-empty string (`REPORT_DIAGNOSTICS_RUN_ID` can set this globally).
- `code` values are from a fixed, validated code catalog (unknown codes are rejected).
- `context` is either `null` or an object (never an array/string).
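A consumer can enforce these contract notes line by line before trusting a record. The following is a hypothetical validator sketch derived solely from the rules above, not the project's actual validation code; `KNOWN_CODES` is a stand-in subset of the real code catalog:

```javascript
// Sketch: validate one emitted diagnostic line against the documented contract.
// KNOWN_CODES is a stand-in; the real catalog lives in the project's validators.
const KNOWN_CODES = new Set(["artifact-missing", "scenario-tuning-signature-drift"]);

function isValidDiagnostic(line) {
  let d;
  try {
    d = JSON.parse(line);
  } catch {
    return false; // not a JSON line at all
  }
  const nullableString = (v) => v === null || (typeof v === "string" && v.length > 0);
  return (
    d.type === "report-diagnostic" &&
    Number.isInteger(d.schemaVersion) &&
    !Number.isNaN(Date.parse(d.generatedAt)) && // ISO-8601 timestamp
    nullableString(d.script) &&
    nullableString(d.runId) &&
    ["info", "warn", "error"].includes(d.level) &&
    KNOWN_CODES.has(d.code) && // unknown codes are rejected
    typeof d.message === "string" &&
    (d.context === null || (typeof d.context === "object" && !Array.isArray(d.context)))
  );
}
```

Rejecting unknown `code` values (rather than passing them through) is what makes the catalog a safe automation contract.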
Diagnostics compatibility policy:
- diagnostics are treated as an automation contract.
- adding new diagnostic codes is allowed, but existing codes should not be renamed or removed without coordinated consumer updates.
- schema changes should be explicit and versioned through `schemaVersion`.
- contract fixtures in `tests/reportDiagnosticsCompatibility.test.js` and `tests/reportDiagnosticsContractsScriptIntegration.test.js` intentionally fail on unreviewed drift.
Enable strict mode in CI:
- Open repository Settings → Secrets and variables → Actions → Variables.
- Add variable `SIM_SCENARIO_TUNING_ENFORCE_INTENSITY` with value `1`.
- Re-run CI to activate the optional strict intensity enforcement step.
Generate scenario tuning baseline suggestions (JSON + Markdown):
```
npm run simulate:suggest:tuning-baseline
```

| Command | Purpose | Primary Outputs |
|---|---|---|
| `npm run simulate:report:tuning` | Build current tuning dashboard snapshot | `reports/scenario-tuning-dashboard.json`/`.md` |
| `npm run simulate:capture:tuning-dashboard-baseline` | Capture dashboard baseline artifact for trend comparisons | `reports/scenario-tuning-dashboard.baseline.json` |
| `npm run simulate:report:tuning:trend` | Compare current tuning against dashboard/signature+intensity baselines | `reports/scenario-tuning-trend.json`/`.md` |
| `npm run simulate:suggest:tuning-baseline` | Suggest baseline updates for signatures and total intensity | `reports/scenario-tuning-baseline-suggestions.json`/`.md` |
| `npm run simulate:check:tuning-baseline` | Enforce tuning baseline drift policy | console output + exit status |
| `npm run simulate:tuning:session` | Run the recommended manual tuning command sequence | all tuning reports + baseline check output |
| `npm run simulate:tuning:session:strict` | Run the same tuning sequence but fail on intensity drift | all tuning reports + strict baseline check output |
| `npm run simulate:tuning:prepr` | Run strict tuning session plus report artifact schema checks | strict session output + report validation summary |
| `npm run diagnostics:smoke` | Execute diagnostics contract smoke checks across report scripts | `reports/report-diagnostics-smoke.json`/`.md` |
| `npm run diagnostics:smoke:validate` | Validate smoke JSON (+ markdown by default) artifacts and enforce zero failed scenarios | console output + exit status |
For local balancing sessions, use this order to get deterministic, review-friendly outputs:
1. Edit scenario tuning multipliers in `src/content/scenarios.js`.
2. Run the full tuning workflow:
   ```
   npm run simulate:tuning:session
   ```
3. If you are intentionally redefining dashboard-based trend comparisons, capture a fresh dashboard baseline:
   ```
   npm run simulate:capture:tuning-dashboard-baseline
   ```
4. Use strict mode when you want CI-parity gating locally (for example, before opening a balancing PR):
   ```
   npm run simulate:tuning:session:strict
   ```
Before opening a tuning-focused PR, run:
```
npm run simulate:tuning:prepr
```

Then review these artifacts:
- `reports/scenario-tuning-dashboard.md` for current multiplier deltas/rankings.
- `reports/scenario-tuning-trend.md` to confirm intended scenario changes only.
- `reports/scenario-tuning-baseline-suggestions.md` for copy-ready baseline updates (signature + total intensity).

The suggestion report now includes copy-ready snippets for both:
- `EXPECTED_SCENARIO_TUNING_SIGNATURES`
- `EXPECTED_SCENARIO_TUNING_TOTAL_ABS_DELTA`
Generate a machine-readable regression report:
```
npm run simulate:report
```

Run multi-seed drift checks against baseline bounds:
```
npm run simulate:drift
```

Run deterministic snapshot signature checks:
```
npm run simulate:snapshot
```

Run balance profile regression checks:
```
npm run simulate:balance
```

Generate suggested baseline updates from current deterministic behavior:
```
npm run simulate:baseline:suggest
```

This produces:
- `reports/baseline-suggestions.json` (structured data + deltas + snippets)
- `reports/baseline-suggestions.md` (human-readable summary with copy-ready snippets)

Fail CI/local checks if suggested baselines diverge from committed baselines:
```
npm run simulate:baseline:check
```

Validate generated JSON report artifacts against schema-tagged payload contracts:
```
npm run reports:validate
```

This writes:
- `reports/report-artifacts-validation.json`
- `reports/report-artifacts-validation.md`
CI now runs:
- `npm test`
- `npm run simulate:validate:tuning` (uploaded as artifact)
- `npm run simulate:report:tuning` (uploaded as artifact)
- `npm run simulate:report:tuning:trend` (uploaded as artifact)
- `npm run simulate:suggest:tuning-baseline` (uploaded as artifact)
- `npm run simulate:check:tuning-baseline` (enforced)
- optional strict intensity enforcement when repo/org variable `SIM_SCENARIO_TUNING_ENFORCE_INTENSITY=1` is set
- `npm run simulate:assert`
- `npm run simulate:report` (uploaded as artifact)
- `npm run simulate:drift` (uploaded as artifact)
- `npm run simulate:snapshot` (uploaded as artifact, enforced)
- `npm run simulate:balance` (uploaded as artifact)
- `npm run simulate:baseline:suggest` (uploaded as artifact)
- `npm run reports:validate` → `reports/report-artifacts-validation.json`/`.md` (uploaded as artifact)
- `npm run diagnostics:smoke` → `reports/report-diagnostics-smoke.json`/`.md` (local observability contract summary)
- `npm run diagnostics:smoke:validate`
- `npm run simulate:baseline:check` (enforced)
One-command local verification:
```
npm run verify
```

`verify` now runs:
- `npm test`
- `npm run simulate:validate:tuning`
- `npm run simulate:report:tuning`
- `npm run simulate:report:tuning:trend`
- `npm run simulate:suggest:tuning-baseline`
- `npm run simulate:check:tuning-baseline`
- `npm run simulate:assert`
- `npm run simulate:drift`
- `SIM_SNAPSHOT_ENFORCE=1 npm run simulate:snapshot`
- `npm run simulate:balance`
- `npm run simulate:baseline:suggest`
- `npm run reports:validate`
- `npm run diagnostics:smoke`
- `npm run diagnostics:smoke:validate`
- `npm run simulate:baseline:check`
- Use `npm run verify` for full local parity with the default CI gate.
- Use `npm run simulate:tuning:session:strict` when iterating on tuning and you want intensity drift enforcement to match strict CI mode before opening a PR.
- For tuning-focused PRs, use `npm run simulate:tuning:prepr` as the required local pre-submit gate (strict tuning checks + report schema validation).
- If the same PR also changes non-tuning gameplay/simulation code, run `npm run verify` in addition to `npm run simulate:tuning:prepr`.
- Build: choose category + building in right panel, then click/tap terrain.
- Hire Colonist: left panel button (costs food).
- Research: start technologies from the research panel when enough knowledge is available.
- Speed/Pause: top controls (1x/2x/4x).
- Save/Load/Reset: top controls.
- Export/Import: export current game to JSON or import a previous exported save.
- Build food + material production first.
- Expand housing capacity to grow population.
- Produce knowledge via schools/libraries to unlock higher-tier tech.
- Reach the late-game charter condition to win.
- Colony can collapse from starvation/despair pressure.
- If all colonists die, the game ends immediately.
The codebase is intentionally split by domain boundaries:
- data (content) vs simulation (systems) vs presentation (render/ui)
- deterministic update loop in `GameEngine`
- pure, testable logic modules for critical systems
This structure keeps game mechanics scalable and makes iterative balancing safer than monolithic scripts.
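In practice, "pure, testable logic modules" means each system takes the current state (plus a time step) and returns a new state, with no hidden side effects. A simplified, hypothetical example of that shape; the function name and state fields are invented for illustration and do not mirror the actual `systems/` code:

```javascript
// Hypothetical sketch of a pure simulation system in the systems/ style:
// state in, new state out, no mutation, trivially unit-testable.
function applyFoodConsumption(state, dt) {
  const demand = state.colonists.length * state.foodPerColonistPerSecond * dt;
  const food = Math.max(0, state.resources.food - demand);
  const starving = food === 0 && demand > 0;
  return {
    ...state,
    resources: { ...state.resources, food },
    colonists: state.colonists.map((c) =>
      starving ? { ...c, hunger: Math.min(1, c.hunger + 0.1 * dt) } : c
    ),
  };
}

const before = {
  resources: { food: 10 },
  foodPerColonistPerSecond: 1,
  colonists: [{ hunger: 0 }, { hunger: 0 }],
};
// 2 colonists * 1 food/s * 2s = 4 food consumed.
const after = applyFoodConsumption(before, 2);
```

Because the function never mutates its input, unit tests can assert on both the returned state and the untouched original, and the deterministic update loop can replay the same inputs to the same outputs.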