camwatch_object_detection is a production-oriented Rust service that accepts an uploaded image and reports whether the frame contains people, vehicles, and/or animals.
It is designed as a decoupled, standalone object-detection service for the next version of CamWatch, or for any simple automation or camera pipeline that needs a stable HTTP contract, predictable deployment, and coarse scene classification rather than a fully custom ML platform.
Technology choices:

- `axum` + `tokio` for a modern async HTTP API with strong ecosystem support.
- `ort` for ONNX Runtime inference, which is the most practical way to serve current YOLO-family models from Rust today.
- YOLO11n ONNX as the default detector because it balances detection quality, latency, and deployment size.
- `clap`, `tracing`, `clippy`, `rustfmt`, pre-commit hooks, and CI to keep operations predictable.
- `GET /health`
- `POST /v1/detect`
  - Request content type: `multipart/form-data`
  - Required field: `image`
  - Response content type: `application/json`

Example response:
```json
{
  "people": true,
  "vehicles": false,
  "animals": true,
  "processing_time_ms": 24,
  "detections": [
    {
      "label": "person",
      "class_id": 0,
      "confidence": 0.92,
      "bbox": {
        "x": 21.4,
        "y": 48.9,
        "width": 201.2,
        "height": 402.1
      }
    }
  ]
}
```

Response fields:
- `people`: `true` when at least one COCO `person` detection passes the confidence threshold.
- `vehicles`: `true` when at least one mapped vehicle class is detected.
- `animals`: `true` when at least one mapped animal class is detected.
- `processing_time_ms`: time spent in the synchronous detection pipeline for that request.
- `detections`: raw filtered detections kept for debugging and downstream inspection.
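For scripting against the endpoint, the top-level booleans can be read directly from the response body. A minimal sketch, assuming `jq` is installed; `resp` here stands in for a captured response (the sample above, with `detections` trimmed for brevity):

```shell
# A captured /v1/detect response (sample from above, detections trimmed).
resp='{"people":true,"vehicles":false,"animals":true,"processing_time_ms":24,"detections":[]}'

# React only when a person was detected.
if [ "$(printf '%s' "$resp" | jq -r '.people')" = "true" ]; then
  echo "person detected"
fi
```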
- Download the default ONNX model:

  ```shell
  ./scripts/download-model.sh
  ```

- Run the API:

  ```shell
  cargo run -- --port 8123
  ```

- Call the detector:

  ```shell
  curl -X POST http://127.0.0.1:8123/v1/detect \
    -F image=@/path/to/frame.jpg
  ```

- Check the health endpoint:

  ```shell
  curl http://127.0.0.1:8123/health
  ```
- Build the image:

  ```shell
  docker build -t camwatch-object-detection .
  ```

- Download the model onto the host:

  ```shell
  ./scripts/download-model.sh
  ```

- Run the container and mount the model directory:

  ```shell
  docker run --rm \
    -p 8123:8123 \
    -v "$(pwd)/models:/app/models" \
    camwatch-object-detection
  ```

  If `models/yolo11n.onnx` is missing, the container entrypoint downloads it automatically. If you prefer fully offline startup, download the model first on the host.

- Override runtime config if needed:

  ```shell
  docker run --rm \
    -p 9000:9000 \
    -e CAMWATCH_PORT=9000 \
    -v "$(pwd)/models:/app/models" \
    camwatch-object-detection
  ```
- Download the model:

  ```shell
  ./scripts/download-model.sh
  ```

- Start the service:

  ```shell
  docker compose up --build
  ```

- Stop it when finished:

  ```shell
  docker compose down
  ```

The included `compose.yaml` mounts `./models` into the container and publishes port 8123. If `./models/yolo11n.onnx` is missing or obviously invalid, the container downloads a fresh copy on startup.
Every setting is available through CLI flags and environment variables:
| Setting | Env var | Default |
|---|---|---|
| host | `CAMWATCH_HOST` | `0.0.0.0` |
| port | `CAMWATCH_PORT` | `8123` |
| model path | `CAMWATCH_MODEL_PATH` | `models/yolo11n.onnx` |
| confidence threshold | `CAMWATCH_CONFIDENCE_THRESHOLD` | `0.35` |
| IoU threshold | `CAMWATCH_IOU_THRESHOLD` | `0.45` |
| max image bytes | `CAMWATCH_MAX_IMAGE_BYTES` | `10485760` |
| max detections | `CAMWATCH_MAX_DETECTIONS` | `300` |
| model dimension | `CAMWATCH_MODEL_DIMENSION` | `640` |
Examples:

```shell
cargo run -- --port 9000 --confidence-threshold 0.4
CAMWATCH_PORT=9000 CAMWATCH_MODEL_PATH=models/yolo11n.onnx cargo run
```

- Default model file: `models/yolo11n.onnx`
- Expected model family: COCO-trained YOLO detection model exported to ONNX
- Startup checks reject obviously invalid model files before inference begins

If you previously downloaded `models/yolov8n.onnx`, delete it and run `./scripts/download-model.sh` again. The old Ultralytics asset URL now returns a 404 placeholder file, which causes ONNX parsing failures.
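One quick way to spot that bad download is to inspect the first bytes of the file. The helper below is a sketch, not part of the repo's tooling, and assumes the 404 placeholder is a small HTML document while a real export is binary ONNX protobuf:

```shell
# Heuristic (an assumption, not part of the repo scripts): flag model files
# whose first bytes look like HTML instead of ONNX protobuf.
check_model() {
  if head -c 64 "$1" | grep -qiE '<html|<!doctype'; then
    echo "placeholder"
  else
    echo "ok"
  fi
}

# Demo against a fake placeholder file:
printf '<!DOCTYPE html><html>404</html>' > /tmp/fake.onnx
check_model /tmp/fake.onnx   # prints: placeholder
```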
- `processing_time_ms` reflects request-local detection time, not end-to-end network latency.
- CPU is the default deployment target.
- Smaller images and stricter confidence thresholds usually reduce postprocessing work.
- The default `yolo11n.onnx` model favors portability and fast startup over maximum accuracy.
- `model file not found`: run `./scripts/download-model.sh` or mount `./models` into the container.
- `Protobuf parsing failed`: delete the bad model file and download it again.
- `multipart field image is required`: ensure the request uses `-F image=@file.jpg`.
- `failed to decode image`: verify the upload is a valid JPEG, PNG, or WebP image.
- Container startup issues: check `docker compose logs` and confirm `models/yolo11n.onnx` exists on the host if you disabled auto-download.
- Format: `cargo fmt --all`
- Lint: `cargo clippy --all-targets --all-features -- -D warnings`
- Tests: `cargo test --all-targets`
- Pre-commit: `pre-commit run --all-files`
- Real-image fixture tests live in `tests/images/`.
- Name each image with expected categories in the filename, such as `test_photo1-person-vehicle.jpg`, `some_name-animal.jpg`, or `empty-yard-none.jpg`.
- Supported expectation tags are `person`/`people`, `vehicle`/`vehicles`, `animal`/`animals`, and `none` for negative samples.
- These tests are opt-in because they require a downloaded model and user-provided images.

Validate fixture filenames before running the model-backed test:

```shell
./scripts/check-image-fixtures.sh
```

Run them manually with:

```shell
cargo test --test image_fixtures -- --ignored --nocapture
```

Use `CAMWATCH_MODEL_PATH` if you want to validate against a different ONNX model.
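To illustrate the naming convention, here is a hypothetical sketch of mapping a fixture filename to its expectation tags by scanning the stem for the supported tags. The authoritative logic lives in `scripts/check-image-fixtures.sh` and may differ:

```shell
# Hypothetical illustration only: list the expectation tags found in a
# fixture filename by scanning the stem for each supported tag.
expected_tags() {
  stem="${1%.*}"
  for tag in person people vehicle vehicles animal animals none; do
    case "-$stem-" in *"-$tag-"*) echo "$tag" ;; esac
  done
}

expected_tags "test_photo1-person-vehicle.jpg"   # prints "person" then "vehicle"
expected_tags "empty-yard-none.jpg"              # prints "none"
```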
- `src/config.rs`: CLI and environment-based configuration.
- `src/api.rs`: multipart request handling and error mapping.
- `src/detector.rs`: model loading, preprocessing, inference, and postprocessing.
- `src/app.rs`: router wiring, startup, tracing, and graceful shutdown.
- `tests/image_fixtures.rs`: opt-in real-image integration test driven by filename labels.
- `tests/images/`: user-supplied image fixtures and naming guidance.
- `scripts/check-image-fixtures.sh`: validates fixture filenames before running the opt-in image test.
- `scripts/download-model.sh`: fetches the default ONNX model.
- `scripts/container-entrypoint.sh`: container startup helper for model auto-download.
More detail lives in `docs/architecture.md`, `docs/api.md`, and `AGENTS.md`.
This project is licensed under the Apache License 2.0. See LICENSE.