Production-ready Docker setup for ComfyUI
A complete containerized deployment of ComfyUI with GPU acceleration, flexible deployment profiles, and persistent data management. Built with Docker Buildx Bake for efficient multi-stage builds.
- Key Features
- Quick Start
- Deployment Profiles
- Data & Storage
- Configuration
- Documentation
- Related Resources
- Contributing
- FAQ
- License
- 🚀 GPU-Accelerated: NVIDIA CUDA 12.9 support with optimized runtime
- 🎯 Multiple Profiles: Core (minimal), Complete (full-featured), CPU-only
- 📁 Persistent Storage: Individual volume mounts for models, outputs, custom nodes, etc.
- 🐳 Production Ready: Multi-stage builds, layer caching, and pre-built GHCR images
- ⚡ Performance Optimized: SageAttention for 2-3x faster attention computation
- 🔧 Extensible: Custom node support via volume mounts
- 🔄 CI/CD Ready: Automated builds, weekly dependency updates
- 🔒 Security: API/Swarm/K8s ready with arbitrary user support
- Docker 20.10+ and Docker Compose 2.x
- NVIDIA GPU + drivers (for GPU modes) - Install Guide
- 8GB+ VRAM recommended for complete mode
- 20GB+ disk space for models and images
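Before starting a GPU profile, it can help to confirm that Docker can actually reach the GPU. A hedged check (any CUDA-capable image works in place of `ubuntu`; the messages are illustrative, not from the project):

```shell
# Check whether Docker can pass the GPU through to a container.
# Falls through cleanly if Docker or the NVIDIA toolkit is missing.
if docker run --rm --gpus all ubuntu nvidia-smi >/dev/null 2>&1; then
  GPU_OK="yes"
else
  GPU_OK="no (CPU profile will still work)"
fi
echo "GPU passthrough available: $GPU_OK"
```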
```bash
git clone https://github.com/pixeloven/ComfyUI-Docker.git
cd ComfyUI-Docker
```

Choose an example directory and start the service:
Core GPU (recommended for most users):

```bash
cd examples/core-gpu
docker compose up -d
```

Complete GPU (optimized dependencies + SageAttention):

```bash
cd examples/complete-gpu
docker compose up -d
```

Core CPU (no GPU required):

```bash
cd examples/core-cpu
docker compose up -d
```

Open your browser to: http://localhost:8188
Place your Stable Diffusion checkpoints in ./data/models/checkpoints/ or download them through the ComfyUI interface.
ComfyUI Docker offers three deployment profiles to match your use case:
| Example | Container | Image | Best For | Features |
|---|---|---|---|---|
| `core-gpu` | `comfyui-core-gpu` | `ghcr.io/pixeloven/comfyui/core:cuda-latest` | Most users | Essential ComfyUI + GPU acceleration |
| `complete-gpu` | `comfyui-complete-gpu` | `ghcr.io/pixeloven/comfyui/complete:cuda-latest` | Power users | Pre-installed Python deps + SageAttention optimization |
| `core-cpu` | `comfyui-core-cpu` | `ghcr.io/pixeloven/comfyui/core:cpu-latest` | Testing/compatibility | No GPU required |
Fast, lightweight ComfyUI with GPU support.
```bash
cd examples/core-gpu
docker compose up -d
```

- ✅ Essential ComfyUI functionality
- ✅ GPU acceleration (CUDA 12.9)
- ✅ Fast startup
- ✅ Smaller image size
Optimized deployment with pre-installed Python dependencies and SageAttention.
```bash
cd examples/complete-gpu
docker compose up -d
```

- ✅ Everything Core has
- ✅ Pre-installed Python dependencies for common custom node setups
- ✅ SageAttention 2.2.0 + SageAttn3 3.0.0 optimization (2-3x faster)
- ⚠️ Larger image size
No GPU required, universal compatibility.
```bash
cd examples/core-cpu
docker compose up -d
```

- ✅ Works without an NVIDIA GPU
- ✅ Lower resource requirements
- ⚠️ Slower generation times
ComfyUI Docker uses individual volume mounts for each data directory, providing granular control:
```text
./data/
├── models/        → /app/models        (AI models, checkpoints, LoRAs)
├── custom_nodes/  → /app/custom_nodes  (Extensions and plugins)
├── input/         → /app/input         (Input images/workflows)
├── output/        → /app/output        (Generated outputs)
├── temp/          → /app/temp          (Temporary files)
└── user/          → /app/user          (User configurations)
```
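Pre-creating this layout on the host before first start keeps the bind-mounted directories owned by your user instead of being created root-owned by the container. A minimal sketch mirroring the mounts above:

```shell
# Create the host-side directories that are bind-mounted into the container
mkdir -p ./data/models ./data/custom_nodes ./data/input \
         ./data/output ./data/temp ./data/user
ls ./data
```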
Customize paths via environment variables:
```bash
COMFY_MODEL_PATH=/path/to/models \
COMFY_OUTPUT_PATH=/path/to/outputs \
docker compose up -d  # from within an examples/ directory
```

See the Data Management Guide for details.
Common configuration options:
```bash
# Server Configuration
COMFY_PORT=8188        # Web interface port
PUID=1000              # User ID for file ownership (default: 1000)
PGID=1000              # Group ID for file ownership (default: 1000)

# Performance Tuning
CLI_ARGS="--lowvram"   # ComfyUI launch arguments

# Custom Paths
COMFY_MODEL_PATH=./data/models    # Override model directory
COMFY_OUTPUT_PATH=./data/output   # Override output directory
```

Match your host user's UID/GID to avoid permission issues with mounted volumes:

```bash
PUID=$(id -u) PGID=$(id -g) docker compose up -d  # from within an examples/ directory
```

For complete configuration options, see:
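Rather than exporting these on every invocation, Docker Compose also reads a `.env` file from the working directory. A minimal sketch that writes one from your current user:

```shell
# Write PUID/PGID into a .env file so Docker Compose picks them up
# automatically (Compose reads .env from the working directory)
printf 'PUID=%s\nPGID=%s\n' "$(id -u)" "$(id -g)" > .env
cat .env
```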
- Running Containers Guide - Environment variables and Docker Compose
- Performance Tuning Guide - CLI arguments and optimization
Getting Started:
- Quick Start - Get running in 5 minutes
Core Guides:
- Building Images - Build locally or use pre-built GHCR images
- Running Containers - Docker Compose operations and `.env` configuration
- Data Management - Models, workflows, and persistent storage
- Performance Tuning - CLI arguments and resource optimization
Advanced:
- Custom Nodes Snapshot Spec - How the Complete image manages bundled dependencies
For developers and contributors, see the Building Images Guide for local development and the Contributing section below.
📖 View Full Documentation Index
- ComfyUI GitHub - Official ComfyUI repository
- ComfyUI Examples - Official workflow examples
- ComfyUI Wiki - Documentation and guides
- ComfyUI Manager - Custom node manager
- Civitai - Model sharing platform
- NVIDIA Container Toolkit - GPU support for Docker
- Docker Buildx - Build system documentation
- Docker Compose - Compose reference
We welcome contributions! Whether it's bug reports, feature requests, documentation improvements, or code contributions.
- Report Issues: Use GitHub Issues with our templates
- Suggest Features: Open a Feature Request
- Submit PRs: See Building Images Guide for development setup
- Improve Docs: Documentation PRs are always appreciated!
```bash
# Clone the repository
git clone https://github.com/pixeloven/ComfyUI-Docker.git
cd ComfyUI-Docker

# Build images locally
docker buildx bake all --load

# Test a specific example
cd examples/core-gpu
docker compose up -d

# View logs
docker compose logs -f
```

For detailed build instructions, see the Building Images Guide.
- Follow existing code style and structure
- Test your changes with all three profiles
- Update documentation for new features
- Add meaningful commit messages
- Ensure CI/CD checks pass
ComfyUI Docker is a production-ready containerization of ComfyUI, a powerful node-based interface for Stable Diffusion and other AI image generation models. This project provides:
- Multiple deployment profiles (core, complete, CPU-only)
- Multi-stage Docker builds using Docker Buildx Bake
- GPU acceleration with NVIDIA CUDA support
- Persistent data management with granular volume mounting
- Pre-built images available on GitHub Container Registry
- Flexible configuration via environment variables
Perfect for local development, production deployments, or CI/CD pipelines.
- Core Mode: Best for most users - fast startup, essential features, GPU acceleration
- Complete Mode: Best for power users - pre-installed Python dependencies for common custom nodes, SageAttention optimization
- CPU Mode: Best for testing or when no GPU is available
For Core and Complete modes, yes - an NVIDIA GPU with CUDA support is required. For CPU mode, no GPU is needed, but image generation will be significantly slower.
Everything is stored in the ./data/ directory with subdirectories for models, outputs, custom nodes, etc. You can customize these paths using environment variables. See the Data Management Guide for details.
Install custom nodes through the ComfyUI interface or mount them to ./data/custom_nodes/. See the Data Management Guide for details.
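As a sketch of the volume-mount route: a custom node cloned into `./data/custom_nodes/` is picked up on the next container restart. ComfyUI-Manager is used here purely as a well-known example, and the clone is guarded so the command degrades gracefully offline:

```shell
# Install a custom node by cloning it into the mounted directory;
# restart the container afterwards so ComfyUI loads it
mkdir -p ./data/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager \
  ./data/custom_nodes/ComfyUI-Manager 2>/dev/null \
  || echo "clone skipped (already present or offline)"
```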
Yes! Place your checkpoints, LoRAs, and other models in the appropriate subdirectories under ./data/models/. ComfyUI will automatically detect them.
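If you prefer to stage models before first start, the usual subdirectories can be created ahead of time; the folder names below follow ComfyUI's default model layout:

```shell
# Pre-create common model subdirectories under the mounted models path
mkdir -p ./data/models/checkpoints ./data/models/loras \
         ./data/models/vae ./data/models/embeddings
ls ./data/models
```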
Pull the latest image from within your example directory:

```bash
docker compose pull
docker compose up -d
```

For local builds, rebuild the images:

```bash
docker buildx bake all --no-cache
```

Complete mode has a larger image due to pre-installed Python dependencies and SageAttention. If startup time is a concern and you don't need the extra optimizations, consider using Core mode.
This project is licensed under the MIT License.
ComfyUI itself is licensed under GPL-3.0 - see the ComfyUI repository for details.
Questions? Check out GitHub Discussions or open an issue.