This document describes the security measures implemented in the discogsography Docker deployment.
All services run as a non-root user with configurable UID/GID:
```yaml
user: "${UID:-1000}:${GID:-1000}"
```

- Default: UID=1000, GID=1000
- Customize by setting the UID and GID environment variables
- Matches the host user to avoid permission issues with volumes
All application containers drop all Linux capabilities:
```yaml
cap_drop:
  - ALL
```

This prevents containers from:
- Modifying network configuration
- Loading kernel modules
- Accessing raw sockets
- Other privileged operations
The no-new-privileges option prevents privilege escalation:

```yaml
security_opt:
  - no-new-privileges:true
```

Application containers use read-only root filesystems:

```yaml
read_only: true
tmpfs:
  - /tmp
```

- Prevents malicious writes to the container filesystem
- /tmp is mounted as tmpfs for temporary files
- Application data uses explicit volumes
All services implement HTTP health endpoints:
```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 60s
```

Services use a dedicated Docker network:
```yaml
networks:
  discogsography:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
```

Production deployment includes automatic restart policies:
```yaml
deploy:
  restart_policy:
    condition: any
    delay: 5s
    max_attempts: 3
```

The secrets listed below are never passed as plain environment variables in production. Instead, they are mounted as in-memory tmpfs files via Docker Compose runtime secrets and read through the _FILE convention. See Production Secrets Setup below.
| Secret | _FILE env var | Plain env var (dev only) |
|---|---|---|
| RabbitMQ password | RABBITMQ_PASSWORD_FILE | RABBITMQ_PASSWORD |
| RabbitMQ username | RABBITMQ_USERNAME_FILE | RABBITMQ_USERNAME |
| PostgreSQL password | POSTGRES_PASSWORD_FILE | POSTGRES_PASSWORD |
| PostgreSQL username | POSTGRES_USERNAME_FILE | POSTGRES_USERNAME |
| Neo4j password | (via entrypoint wrapper) | NEO4J_AUTH |
| JWT secret key | JWT_SECRET_KEY_FILE | JWT_SECRET_KEY |
| Encryption master key | ENCRYPTION_MASTER_KEY_FILE | ENCRYPTION_MASTER_KEY |
| Brevo API key | BREVO_API_KEY_FILE | BREVO_API_KEY |
| RabbitMQ mgmt user | RABBITMQ_MANAGEMENT_USER_FILE | RABBITMQ_MANAGEMENT_USER |
| RabbitMQ mgmt password | RABBITMQ_MANAGEMENT_PASSWORD_FILE | RABBITMQ_MANAGEMENT_PASSWORD |
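In Compose terms, the wiring for one of these secrets looks roughly like the following sketch (illustrative only; the service name app is a placeholder, and the project's actual docker-compose.prod.yml may differ):

```yaml
# Illustrative sketch only — the real wiring lives in docker-compose.prod.yml.
secrets:
  rabbitmq_password:
    file: ./secrets/rabbitmq_password.txt

services:
  app:  # placeholder service name
    secrets:
      - rabbitmq_password
    environment:
      # Compose mounts the secret at /run/secrets/<name> (tmpfs);
      # the application resolves it via the _FILE convention.
      RABBITMQ_PASSWORD_FILE: /run/secrets/rabbitmq_password
```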
Plain env vars work in development. The production overlay (docker-compose.prod.yml) switches to the _FILE convention automatically — application code handles both via get_secret() in common/config.py.
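The resolution precedence of get_secret() can be sketched in shell (a minimal bash sketch of the _FILE convention, not the actual Python implementation in common/config.py):

```shell
# Bash sketch of the _FILE convention: prefer the file-based secret,
# fall back to the plain environment variable for development.
get_secret() {
  local name="$1"
  local file_var="${name}_FILE"
  if [ -n "${!file_var:-}" ] && [ -f "${!file_var}" ]; then
    # Production path: read the value from the mounted secret file.
    cat "${!file_var}"
  else
    # Development path: fall back to the plain environment variable.
    printf '%s' "${!name:-}"
  fi
}
```

For example, `RABBITMQ_PASSWORD_FILE=/run/secrets/rabbitmq_password get_secret RABBITMQ_PASSWORD` would return the file's contents, while an environment with only RABBITMQ_PASSWORD set would return that value directly.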
Set these to match your host user:

```bash
export UID=$(id -u)
export GID=$(id -g)
```

Note: bash marks UID as a read-only variable, so if the export fails, set UID and GID in .env instead.

The production overlay mounts secrets as in-memory tmpfs files at /run/secrets/<name>. Secret values are never visible in docker inspect, never written to disk, and are flushed when the container stops.
Step 1 — Generate secrets (idempotent, skips existing files):

```bash
bash scripts/create-secrets.sh
```

This creates secrets/ (mode 700) with one file per secret (mode 600):
```text
secrets/
├── jwt_secret_key.txt          # openssl rand -hex 32
├── encryption_master_key.txt   # base64-urlsafe 32-byte HKDF master key
├── brevo_api_key.txt           # Brevo API key (empty disables email delivery)
├── postgres_username.txt       # discogsography
├── postgres_password.txt       # openssl rand -base64 24
├── rabbitmq_username.txt       # discogsography
├── rabbitmq_password.txt       # openssl rand -base64 24
└── neo4j_password.txt          # openssl rand -base64 24
```
Use secrets.example/ as a reference for each file's format and generation command.
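The generation steps can be sketched as follows (a hedged sketch mirroring the tree above; scripts/create-secrets.sh is the authoritative version, and the fixed-value username files are omitted here):

```shell
# Sketch of idempotent secret generation: each file is created only if missing,
# so re-running never overwrites existing secrets.
mkdir -p secrets && chmod 700 secrets

[ -f secrets/jwt_secret_key.txt ]    || openssl rand -hex 32    > secrets/jwt_secret_key.txt
[ -f secrets/postgres_password.txt ] || openssl rand -base64 24 > secrets/postgres_password.txt
[ -f secrets/rabbitmq_password.txt ] || openssl rand -base64 24 > secrets/rabbitmq_password.txt
[ -f secrets/neo4j_password.txt ]    || openssl rand -base64 24 > secrets/neo4j_password.txt

# Restrict each secret file to the owner.
chmod 600 secrets/*.txt
```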
Step 2 — Start with production overlay:

```bash
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

Neo4j note: Neo4j does not natively support the _FILE convention. The production overlay overrides Neo4j's entrypoint with scripts/neo4j-entrypoint.sh, which reads /run/secrets/neo4j_password and sets NEO4J_AUTH=neo4j/<password> before delegating to the official Neo4j entrypoint.
Development:

```bash
# Copy and configure environment
cp .env.example .env
# Edit .env to set UID/GID

# Run with security features
docker-compose up -d
```

Production:

```bash
# 1. Generate secrets (first time only — safe to re-run)
bash scripts/create-secrets.sh

# 2. Start with production overlay (secrets + restart policies)
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

- Regular Updates
  - Update base images regularly
  - Rebuild containers for security patches
  - Monitor for vulnerabilities with tools like Trivy

- Secrets Management
  - Use docker-compose.prod.yml with scripts/create-secrets.sh for Docker Compose deployments
  - For Kubernetes, use Kubernetes Secrets or an external secrets operator
  - For cloud deployments, consider AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault
  - Never use default passwords in production
  - Rotate secrets by updating the file in secrets/ and restarting the affected container

- Monitoring
  - Monitor container logs for suspicious activity
  - Set up alerts for health check failures
  - Track resource usage for anomalies

- Network Security
  - Use TLS for all external connections
  - Restrict exposed ports to the minimum required
  - Consider using a reverse proxy (nginx, traefik) in front of services
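The rotation step above can be sketched for one secret (the service name rabbitmq and the overlay file names are assumptions; adapt to the containers that actually consume the secret):

```shell
# Sketch: rotate the RabbitMQ password by regenerating its secret file.
mkdir -p secrets
openssl rand -base64 24 > secrets/rabbitmq_password.txt
chmod 600 secrets/rabbitmq_password.txt

# Then recreate the containers that consume it, e.g.:
#   docker-compose -f docker-compose.yml -f docker-compose.prod.yml \
#     up -d --force-recreate rabbitmq
```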
```bash
# Verify user execution (extractor runs as two services)
docker-compose exec extractor-discogs id
docker-compose exec extractor-musicbrainz id

# Check capabilities
docker-compose exec extractor-discogs capsh --print

# Verify read-only filesystem
docker-compose exec extractor-discogs touch /test.txt  # Should fail

# Check security options
docker inspect discogsography-extractor-discogs | jq '.[0].HostConfig.SecurityOpt'
```

```bash
# Scan images for vulnerabilities
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image discogsography/extractor-discogs:latest
# Also scan the MusicBrainz variant:
# aquasec/trivy image discogsography/extractor-musicbrainz:latest

# Check for misconfigurations
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy config .
```

If you encounter permission errors:
- Check UID/GID match your user:

  ```bash
  echo "Host: UID=$(id -u) GID=$(id -g)"
  docker-compose exec service id
  ```

- Fix volume permissions:

  ```bash
  sudo chown -R $(id -u):$(id -g) ./volumes/
  ```
Some applications may need writable directories:
- Add specific tmpfs mounts:

  ```yaml
  tmpfs:
    - /tmp
    - /run
    - /var/cache
  ```

- Or use volumes for persistent data:

  ```yaml
  volumes:
    - app_cache:/app/.cache
  ```