This guide explains how to run InsightFace-REST directly on a machine where Docker cannot be installed.
On Ubuntu/Debian, install runtime libraries first:

```bash
sudo apt-get update
sudo apt-get install -y libgl1 libglib2.0-0 libgomp1 libturbojpeg
```

Python 3.10 is recommended.
Example using conda:

```bash
conda create -y -n ifr-local python=3.10
conda activate ifr-local
```

Example using venv:

```bash
python3.10 -m venv .venv
source .venv/bin/activate
```

Use the local runtime requirements file:

```bash
pip install -r requirements-local.txt
```

From the repository root:
```bash
python -m if_rest.run_local --host 0.0.0.0 --port 18080
```

This command will:
- set local defaults for model/image directories
- download and prepare models automatically
- start FastAPI with Uvicorn
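The "set local defaults" step above can be sketched as follows. This is an illustrative sketch, not the project's actual implementation: the function name and fallback logic are assumptions based on the defaults documented below (repo-relative directories, with `/models` and `/images` used when present).

```python
import os
from pathlib import Path


def apply_local_defaults(repo_root: str) -> dict:
    """Illustrative sketch: choose local default directories and backend,
    without overriding environment variables the user already set."""
    root = Path(repo_root)
    defaults = {
        # Prefer a system-wide /models or /images dir if it exists,
        # otherwise fall back to repo-relative locations.
        "MODELS_DIR": "/models" if Path("/models").is_dir() else str(root / "models"),
        "ROOT_IMAGES_DIR": "/images" if Path("/images").is_dir() else str(root / "misc"),
        "INFERENCE_BACKEND": "onnx",  # CPU-friendly backend for local runs
    }
    for key, value in defaults.items():
        os.environ.setdefault(key, value)  # explicit env vars always win
    return defaults
```

Because `setdefault` is used, exporting any of these variables before launch takes precedence over the computed defaults.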
Open API docs at:
http://127.0.0.1:18080/docs
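Once the server is up you can exercise it programmatically. The helper below builds a request body for the extraction endpoint; note that the exact endpoint path (`/extract`) and payload shape are assumptions based on common InsightFace-REST usage, so check the `/docs` page above for the authoritative schema.

```python
import base64
import json


def build_extract_payload(image_path: str) -> str:
    """Read a local image and build a JSON body with the image
    base64-encoded, ready to POST to the extraction endpoint
    (payload keys here are assumptions; verify against /docs)."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "images": {"data": [b64]},  # assumed key for inline image data
        "extract_embedding": True,  # assumed flag to request embeddings
    }
    return json.dumps(payload)
```

A typical call would then POST this body with `Content-Type: application/json` to `http://127.0.0.1:18080/extract`.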
The following environment variables control the local runtime:

- `MODELS_DIR`: model storage directory (default: `<repo>/models`, or `/models` if present)
- `ROOT_IMAGES_DIR`: local image root for non-URL paths (default: `<repo>/misc`, or `/images` if present)
- `INFERENCE_BACKEND`: inference backend (`onnx` recommended for local CPU)
- `DET_NAME`: detector model name
- `REC_NAME`: recognition model name
- `GA_NAME`: gender/age model (`None` to disable)
- `MASK_DETECTOR`: mask detection model (`None` to disable)
- `NUM_WORKERS`: worker count for `if_rest.run_local`
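A minimal override example before launching. The specific model names below are illustrative, not guaranteed defaults of this project; substitute the detector/recognition models you actually have under `MODELS_DIR`.

```shell
# Illustrative local configuration (model names are placeholders):
export MODELS_DIR="$PWD/models"
export INFERENCE_BACKEND=onnx        # CPU-friendly backend
export DET_NAME=scrfd_10g_gnkps      # assumption: example detector name
export REC_NAME=glintr100            # assumption: example recognition model
export GA_NAME=None                  # disable gender/age
export MASK_DETECTOR=None            # disable mask detection
export NUM_WORKERS=2
```

After exporting, start the server with the `python -m if_rest.run_local` command shown earlier; explicit exports take precedence over the computed defaults.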
- If model download from Google Drive is blocked by your network, place ONNX files manually under `MODELS_DIR/onnx/<model_name>/`.
- If startup fails after code updates, clear Python caches:

```bash
find . | grep -E "(__pycache__|\.pyc$)" | xargs rm -rf
```
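An equivalent cleanup can be done from Python's standard library. This sketch is slightly safer than the `find | xargs rm -rf` pipeline because it only touches `__pycache__` directories and `.pyc` files:

```python
import shutil
from pathlib import Path


def clear_python_caches(root: str = ".") -> int:
    """Remove __pycache__ directories and stray .pyc files under root.
    Returns the number of filesystem entries removed."""
    removed = 0
    # Materialize matches first so deletion doesn't disturb the walk.
    for cache_dir in list(Path(root).rglob("__pycache__")):
        shutil.rmtree(cache_dir, ignore_errors=True)
        removed += 1
    for pyc in list(Path(root).rglob("*.pyc")):
        pyc.unlink(missing_ok=True)
        removed += 1
    return removed
```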