
Add production Dockerfile and docker-compose for self-hosted deployment #1

Open

forge-atlas wants to merge 30 commits into `main` from `feat/docker-self-hosted`
Conversation

@forge-atlas
Collaborator

Summary

  • Multi-stage Dockerfile using oven/bun:1 with Next.js standalone output for a slim production image
  • app service added to docker-compose.yml with postgres healthcheck + redis-rest dependency
  • .env.docker template with Docker networking defaults and placeholder env vars
  • docker-entrypoint.sh auto-runs prisma migrate deploy on boot
  • .dockerignore to keep build context small
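The multi-stage layout described above might look roughly like this. This is a sketch, not the PR's actual Dockerfile: the stage names, workspace paths, and standalone output locations are assumptions based on common Next.js + Bun monorepo conventions.

```dockerfile
# deps: install workspace dependencies; the Prisma schema must be present
# so the postinstall hook's `prisma generate` succeeds (see commits below)
FROM oven/bun:1 AS deps
WORKDIR /app
COPY package.json bun.lock ./
COPY apps/web/package.json apps/web/
COPY apps/web/prisma apps/web/prisma
RUN bun install --frozen-lockfile

# build: compile the Next.js app with standalone output enabled
FROM oven/bun:1 AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN bun run --cwd apps/web build

# runner: slim image containing only the standalone server bundle
FROM oven/bun:1 AS runner
WORKDIR /app
COPY --from=build /app/apps/web/.next/standalone ./
COPY --from=build /app/apps/web/.next/static ./apps/web/.next/static
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
```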

Setup

  1. Copy apps/web/.env.docker to apps/web/.env and fill in GitHub OAuth credentials
  2. Run docker compose up
  3. App is accessible at http://localhost:3000

Test plan

  • docker compose up brings up postgres, redis-rest, and the app
  • Prisma migrations run automatically on first boot
  • App is accessible at http://localhost:3000
  • GitHub OAuth login flow works with configured credentials

Full-Stack Engineer and others added 15 commits March 31, 2026 07:31
…ted deployment

- Multi-stage Dockerfile (deps → build → runner) using oven/bun:1 with standalone Next.js output
- App service in docker-compose.yml with postgres healthcheck dependency
- .env.docker template with Docker networking defaults (postgres:5432, redis-rest:80)
- docker-entrypoint.sh runs prisma migrate deploy on startup
- Enable Next.js standalone output in next.config.ts
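Enabling standalone output is a one-line change in next.config.ts; a minimal sketch (any other options the project sets are omitted here):

```typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Emit a self-contained server bundle under .next/standalone,
  // so the runner stage can copy it without the full node_modules.
  output: "standalone",
};

export default nextConfig;
```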

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Move app service and postgres healthcheck from docker-compose.yml
to docker-compose.override.yml so the base file stays untouched
and upstream syncs remain conflict-free.
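The override file might look like this sketch. Service names, the env_file path, and the healthcheck command are assumptions; only the "app depends on a healthy postgres" relationship is taken from the commit.

```yaml
# docker-compose.override.yml (sketch)
services:
  app:
    build: .
    env_file: apps/web/.env
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
```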

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Paperclip <noreply@paperclip.ing>
The apps/web package.json has a postinstall script that runs
prisma generate. Without the schema files present during bun install,
this fails. Copy prisma.config.ts and the prisma directory into the
deps stage so the postinstall completes successfully.
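In Dockerfile terms, the fix amounts to copying the schema files before the install step, roughly (paths assumed):

```dockerfile
# deps stage: make the Prisma schema available before `bun install`,
# so the postinstall hook's `prisma generate` can find it
COPY apps/web/prisma.config.ts apps/web/
COPY apps/web/prisma apps/web/prisma
RUN bun install --frozen-lockfile
```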

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Bun hoists all dependencies to the root node_modules in workspace
mode, so apps/web/node_modules does not exist in the deps stage.
The generated Prisma client is already in root node_modules/.prisma.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
The schema.prisma has output = "../src/generated/prisma" so the
generated client lives at apps/web/src/generated/prisma, not
node_modules/.prisma. Updated all three stages to use the correct
path. Also copy prisma.config.ts to runner for migrate deploy.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
The prisma.config.ts imports dotenv/config which wasn't available
in the slim runner stage, causing prisma migrate deploy to fail
on container startup.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
… deps

Instead of copying prisma.config.ts and chasing its dependency chain
(dotenv, effect, etc.), pass --schema directly to prisma migrate deploy.
DATABASE_URL is already in the environment via docker-compose env_file.
This removes the need for prisma.config.ts, dotenv, and effect in the
slim runner stage.
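The resulting entrypoint might look like this sketch (the schema path and server launch command are assumptions, not taken from the actual script):

```sh
#!/bin/sh
# docker-entrypoint.sh (sketch)
set -e

# --schema bypasses prisma.config.ts and its dependency chain entirely;
# DATABASE_URL is already in the environment via the compose env_file
bunx prisma migrate deploy --schema apps/web/prisma/schema.prisma

exec bun apps/web/server.js
```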

Co-Authored-By: Paperclip <noreply@paperclip.ing>
The datasource block had no url field, relying solely on
prisma.config.ts which has heavy transitive deps (dotenv, effect)
not available in the Docker runner. Adding url = env("DATABASE_URL")
to the schema lets prisma migrate deploy --schema work directly.
The prisma.config.ts still takes precedence for local development.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Prisma v7 no longer supports url in the datasource schema block.
Reverted that change and instead:
- Created prisma.docker.config.ts (no dotenv import, uses process.env directly)
- Copy it as prisma.config.ts in the runner stage
- Copy effect package (transitive dep of @prisma/config's defineConfig)
- Entrypoint runs from /app where the Docker config lives
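A Docker-specific config without the dotenv import could be as small as this sketch (the exact `defineConfig` options the project uses are assumptions):

```typescript
// prisma.docker.config.ts (sketch) — copied into the runner as prisma.config.ts.
// No dotenv import: the container environment already provides DATABASE_URL.
import { defineConfig } from "prisma/config";

export default defineConfig({
  schema: "prisma/schema.prisma",
});
```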

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Selectively copying individual packages (prisma, effect, fast-check...)
is fragile due to deep transitive dependencies. Instead, install prisma
fresh in /prisma-cli with bun install to get the complete dependency
tree. Separate directory avoids conflicts with standalone Next.js output.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
The config at /app/prisma.config.ts imports from "prisma/config" but
prisma is installed at /prisma-cli/node_modules/. Setting NODE_PATH
tells the runtime to also look in /prisma-cli/node_modules/ when
resolving imports from the config file.
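Together, the last two commits describe a runner-stage arrangement along these lines (a sketch; the exact install commands are assumptions):

```dockerfile
# runner stage: install the Prisma CLI fresh in its own directory,
# pulling in its complete transitive dependency tree
WORKDIR /prisma-cli
RUN bun add prisma

# let prisma.config.ts at /app resolve "prisma/config" imports
# from the CLI's node_modules
ENV NODE_PATH=/prisma-cli/node_modules
WORKDIR /app
```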

Co-Authored-By: Paperclip <noreply@paperclip.ing>
The project's migrations are incomplete — many tables and columns
in schema.prisma have no corresponding migration files (e.g.
subscription, usage_logs, ai_call_logs, credit_ledger, etc.).
Using prisma db push syncs the database to match the full schema
without requiring migration files.
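The entrypoint's migration step then becomes (sketch; schema path assumed, and note the later commit that drops the v7-removed `--skip-generate` flag):

```sh
# sync the database directly to schema.prisma instead of replaying
# the (incomplete) migration history
bunx prisma db push --schema apps/web/prisma/schema.prisma
```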

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Paperclip <noreply@paperclip.ing>
Prisma v7 removed the --skip-generate option from db push.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
@cave-internal (bot) force-pushed the feat/docker-self-hosted branch from c1eaf8d to 3bdf2e0 on March 31, 2026 at 07:32
akoenig added 13 commits March 31, 2026 10:21
- Added pgbouncer service in docker-compose.override.yml for connection pooling
- Updated app to connect to pgbouncer instead of postgres directly
- Reduced application-side connection pool size since pgbouncer handles pooling
- Added resource limits to docker-compose.yml to prevent container starvation
- Updated .env.docker to point to pgbouncer service
…e correct env vars

The previous edoburu/pgbouncer image expected a config file at /etc/pgbouncer/pgbouncer.ini.
Switched to bitnami/pgbouncer image which can be configured via environment variables.
Updated environment variable names to match Bitnami's expectations:
- POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE_NAME, POSTGRESQL_USERNAME, POSTGRESQL_PASSWORD
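A pgbouncer service configured via these variables might look like the following sketch (the image tag, credentials wiring, and database name are assumptions; a later commit also swaps the registry):

```yaml
pgbouncer:
  image: bitnami/pgbouncer:latest
  environment:
    POSTGRESQL_HOST: postgres
    POSTGRESQL_PORT: "5432"
    POSTGRESQL_DATABASE_NAME: better_hub
    POSTGRESQL_USERNAME: ${POSTGRES_USER}
    POSTGRESQL_PASSWORD: ${POSTGRES_PASSWORD}
```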

The Docker Hub image 'bitnami/pgbouncer:latest' could not be pulled (not found). Switched to the publicly accessible Amazon ECR mirror of the image.
…it listen port

The PgBouncer container was starting correctly (listening on 0.0.0.0:5432) but being marked as unhealthy due to health check timing issues.
- Increased health check interval from 5s to 10s
- Increased health check retries from 5 to 10
- Added explicit PGBOUNCER_LISTEN_PORT=5432 environment variable for clarity

The health check was failing because pg_isready was checking the default postgres database while PgBouncer is configured for the 'better_hub' database. Added '-d better_hub' to the pg_isready command so it checks the correct database.

The pg_isready health check remained unreliable due to host-resolution and database-specification issues. Switched to 'nc -z localhost 5432', which simply checks whether the port is open and accepting connections, a more reliable signal that PgBouncer is ready.

The healthcheck was interfering with container startup. Removing it lets the container start based on the successful execution of the entrypoint script, without additional health verification.

Changed the port binding from '5433:5432' to '127.0.0.1:5433:5432' so PgBouncer is only reachable on the localhost interface rather than all network interfaces. This improves security by preventing external access to the database connection pooler.
…ed for pgbouncer

Since we removed the pgbouncer healthcheck, changed the dependency condition to service_started so the app waits for pgbouncer to start (not necessarily be healthy).

Changed the PgBouncer container port from 5432 to 6432 to avoid conflicting with the postgres healthcheck, which also binds to 5432. Updated the app .env to connect to pgbouncer on port 6432, added a healthcheck using nc to verify PgBouncer is accepting connections, and changed the app's depends_on back to service_healthy now that a healthcheck exists.

The Bitnami pgbouncer image defaults to listening on port 5432 internally. Changed back from 6432 to 5432 for both the port mapping and the healthcheck.

Correction: the Bitnami pgbouncer image actually listens on port 6432 internally by default, not 5432. Updated the port mapping and healthcheck to use 6432, matching what the container actually listens on.
akoenig added 2 commits March 31, 2026 12:36
…connect to postgres database

- Removed ports section from pgbouncer (not externally accessible)
- Changed healthcheck to use bash with /dev/tcp instead of nc
- Changed DATABASE_URL to connect to 'postgres' database instead of 'better_hub' (pgbouncer handles database routing)
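After these iterations, the pgbouncer healthcheck might settle into something like this sketch (interval and retry values carried over from the earlier tuning commit; the port follows the Bitnami default noted above):

```yaml
pgbouncer:
  # no ports: section — reachable only on the internal compose network
  healthcheck:
    # bash's /dev/tcp avoids needing nc inside the image; 6432 is
    # the Bitnami image's internal listen port
    test: ["CMD", "bash", "-c", "exec 3<>/dev/tcp/127.0.0.1/6432"]
    interval: 10s
    retries: 10
```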
Docker Compose overrides:
- PgBouncer: increased pool size to 30, min pool to 10, reserve pool to 5
- PostgreSQL: added memory tuning (shared_buffers, work_mem, cache settings)
- PostgreSQL: disabled synchronous_commit and full_page_writes for better write performance

Application optimizations:
- Next.js staleTimes increased from 300/180 to 600/600 for better edge caching
- Database connection string added statement_cache_capacity=100 for query caching
- Added connection-level timeouts (statement_timeout=30s, lock_timeout=10s)
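The connection-string changes might read like this illustrative .env line. The parameter names are taken from the commit message, but the exact query-string syntax, credentials, and host are assumptions:

```sh
# apps/web/.env (illustrative sketch)
DATABASE_URL="postgresql://user:password@pgbouncer:6432/postgres?statement_cache_capacity=100&statement_timeout=30s&lock_timeout=10s"
```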