All external dependencies required to run the Medium Format Studio web UI.
- Version: 18 or higher (developed on v25; see `.nvmrc`)
- Install: https://nodejs.org/ or via nvm
- Check:

  ```bash
  node --version && npm --version
  ```
- Version: 3.10 or higher (for running ComfyUI)
- Install: https://www.python.org/downloads/
- Check:

  ```bash
  python --version
  ```
- Repository: https://github.com/comfyanonymous/ComfyUI
- Install:

  ```bash
  git clone https://github.com/comfyanonymous/ComfyUI.git
  cd ComfyUI
  pip install -r requirements.txt
  ```

- Start (CORS required):

  ```bash
  python main.py --enable-cors-header
  ```
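Before launching the web UI, it can be useful to confirm that the ComfyUI server actually answers. A minimal sketch using only the standard library — the `/system_stats` endpoint path is an assumption based on ComfyUI's HTTP API, and the helper name is illustrative:

```python
import urllib.request
import urllib.error

def comfyui_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if a ComfyUI server answers at base_url."""
    # /system_stats is assumed to be a lightweight status endpoint;
    # any 200 response means the server is up.
    try:
        with urllib.request.urlopen(f"{base_url}/system_stats", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("ComfyUI reachable:", comfyui_reachable("http://127.0.0.1:8188"))
```

If this returns `False` with the server running, check that the port matches your ComfyUI settings and that `--enable-cors-header` did not fail at startup.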
The Medium Format Studio workflow uses several custom nodes. Install each via ComfyUI Manager or by cloning into ComfyUI/custom_nodes/.
| Node Pack | Required For |
|---|---|
| comfyui-kjnodes | EmptyLatentImageCustomPresets (film format presets) |
| rgthree-comfy | Power Lora Loader, Seed (rgthree) |
| ComfyUI-SeedVR2 | SeedVR2 AI upscaling (Final Print stage) |
Installing via ComfyUI Manager (recommended):
- Install ComfyUI Manager:

  ```bash
  cd ComfyUI/custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Manager.git
  ```

- Restart ComfyUI
- Open ComfyUI web UI → Manager → Install Missing Custom Nodes
Place models in the indicated directories within your ComfyUI installation. All are required unless noted.
Place in ComfyUI/models/unet/ or ComfyUI/models/diffusion_models/:
- `flux-2-klein-9b.safetensors` (FP16, ~18GB) — primary model, or
- a GGUF quantized variant: `flux-2-klein-9b-Q8_0.gguf` (~9GB), `flux-2-klein-9b-Q6_K.gguf` (~7GB), etc.
The UI auto-discovers all flux-2-klein-9b* variants present on the server and lets you select among them. At least one variant is required.
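The discovery step amounts to filtering the model names the server reports by filename prefix. A sketch of the idea (the function name and input format are illustrative, not the actual UI code):

```python
def discover_klein_variants(model_names):
    """Filter a server-reported model list down to flux-2-klein-9b* variants.

    model_names may include subdirectory prefixes such as
    "FluxKlein/model.gguf", so match on the basename only.
    """
    return sorted(
        name for name in model_names
        if name.rsplit("/", 1)[-1].startswith("flux-2-klein-9b")
    )

models = [
    "flux-2-klein-9b.safetensors",
    "flux-2-klein-9b-Q8_0.gguf",
    "sd15/v1-5-pruned.safetensors",
]
print(discover_klein_variants(models))
# → ['flux-2-klein-9b-Q8_0.gguf', 'flux-2-klein-9b.safetensors']
```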
Place in ComfyUI/models/clip/:
`qwen_3_8b_fp8mixed.safetensors` (~8GB)
Place in ComfyUI/models/vae/:
`flux2-vae.safetensors` (~300MB)
Place in ComfyUI/models/ — check SeedVR2 docs for exact subdirectory:
`seedvr2_ema_7b_sharp_fp16.safetensors` (~14GB)
Place in ComfyUI/models/loras/FluxKlein/:
- `detail_slider_klein_9b_20260123_065513.safetensors`
- `klein_slider_chiaroscuro.safetensors`
LoRAs are optional at runtime (each can be disabled in the UI), but both files must be present for the workflow to load without errors.
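Since a missing file only surfaces when the workflow fails to load, a quick pre-flight check can save a round trip. A hedged sketch — the helper is hypothetical, the paths mirror the directories listed above, and the SeedVR2 checkpoint is omitted because its subdirectory varies:

```python
from pathlib import Path

# Expected locations, relative to the ComfyUI root (see the sections above).
REQUIRED = [
    "models/clip/qwen_3_8b_fp8mixed.safetensors",
    "models/vae/flux2-vae.safetensors",
    "models/loras/FluxKlein/detail_slider_klein_9b_20260123_065513.safetensors",
    "models/loras/FluxKlein/klein_slider_chiaroscuro.safetensors",
]

def missing_models(comfy_root: str) -> list[str]:
    """List required files not found under comfy_root."""
    root = Path(comfy_root)
    missing = [p for p in REQUIRED if not (root / p).is_file()]
    # At least one flux-2-klein-9b* variant must exist in either model dir.
    klein = [
        f for d in ("models/unet", "models/diffusion_models")
        for f in (root / d).glob("flux-2-klein-9b*") if f.is_file()
    ]
    if not klein:
        missing.append("flux-2-klein-9b* (any variant)")
    return missing

if __name__ == "__main__":
    for item in missing_models("/path/to/ComfyUI"):
        print("MISSING:", item)
```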
| Component | Approximate Size |
|---|---|
| node_modules | ~200MB |
| ComfyUI installation | ~500MB |
| Flux 2 Klein 9B (FP16 .safetensors) | ~18GB |
| — or GGUF variants | 7–9GB |
| Qwen 8B text encoder | ~8GB |
| Flux2 VAE | ~300MB |
| SeedVR2 7B upscaler | ~14GB |
| LoRAs (both) | ~1GB |
Total (FP16 path): ~42GB
Total (GGUF Q8 path): ~33GB
- NVIDIA GPU (CUDA): Recommended; supports FlashAttention, SageAttention, full performance
- AMD GPU (ROCm): Supported by ComfyUI; performance varies
- Apple Silicon (MPS): Works via ComfyUI's MPS backend; lacks CUDA-only optimizations (FlashAttention, Triton). GGUF quantized variants reduce VRAM requirements.
- CPU: Technically supported but impractically slow for these model sizes
| Setup | Minimum VRAM |
|---|---|
| Klein 9B FP16 | ~20GB |
| Klein 9B Q8 GGUF | ~12GB |
| Klein 9B Q6_K GGUF | ~10GB |
| + SeedVR2 (Final Print) | +8–10GB (loaded separately) |
ComfyUI offloads models to system RAM as needed, so lower-VRAM setups can still run, at the cost of slower generation.
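The VRAM table above can be read as a simple selection heuristic. A hypothetical helper, not part of the UI; the thresholds follow the table and are approximate:

```python
def suggest_variant(vram_gb: float) -> str:
    """Suggest a Klein 9B variant from the VRAM figures in the table above."""
    if vram_gb >= 20:
        return "flux-2-klein-9b.safetensors"    # FP16
    if vram_gb >= 12:
        return "flux-2-klein-9b-Q8_0.gguf"      # Q8 GGUF
    if vram_gb >= 10:
        return "flux-2-klein-9b-Q6_K.gguf"      # Q6_K GGUF
    # Below the table's minimums: still runs, but expect heavy offloading.
    return "flux-2-klein-9b-Q6_K.gguf (expect model offloading)"

print(suggest_variant(24))  # → flux-2-klein-9b.safetensors
```

Remember that SeedVR2 (Final Print) adds 8–10GB on top of whichever variant is chosen.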
- 8188 — ComfyUI backend (default; adjustable in ComfyUI settings)
- 5173 — Vite dev server
Both ports must be free on the local machine. The web UI can also connect to a remote ComfyUI instance (e.g. RunPod) — configure the URL via the server settings panel in the UI.
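Whether both ports are actually free can be checked up front. A minimal standard-library sketch — it briefly binds each port, so run it before starting either server:

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in (8188, 5173):  # ComfyUI backend, Vite dev server
        print(f"port {port}: {'free' if port_free(port) else 'IN USE'}")
```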
```bash
# Web UI
git pull && npm install

# ComfyUI
cd /path/to/ComfyUI && git pull && pip install -r requirements.txt

# Custom nodes (via ComfyUI Manager):
# Open ComfyUI web UI → Manager → Update All
```