
# C++ MOT Pipeline


A high-performance computer vision engine that tracks objects in real time with no external Python dependencies. It combines the speed of YOLOv5/YOLOv8 inference (via OpenCV DNN) with the robustness of SORT (Kalman filters + Hungarian algorithm) to deliver smooth, consistent object tracks.


## ✨ Features

- ⚡ **Blazing Fast C++17**: Pure C++ implementation for maximum performance and easy embedding.
- 🧠 **YOLO Integration**: Seamlessly loads standard ONNX models (YOLOv5, YOLOv8) using OpenCV's DNN backend.
- 🎯 **Robust Tracking**: Implements SORT (Simple Online and Realtime Tracking) with:
  - Kalman filtering for state estimation and motion prediction.
  - The Hungarian algorithm for optimal data association.
- 🔌 **Modular Architecture**: Clean interfaces for Source, Detector, and Tracker allow for easy extension.
- 📊 **Versatile Outputs**:
  - Headless mode: JSONL output for server-side processing.
  - Visual mode: real-time debug visualization window.
  - Video export: save annotated video results directly to MP4.

## 🏗️ Architecture

```mermaid
graph LR
    A[Frame Source] --> B[Detector]
    B -- Detections --> C[Tracker]
    C -- Tracks --> D[Renderer/Writer]

    subgraph Core Loop
    B
    C
    end
```
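The three stages in the diagram correspond to the Source, Detector, and Tracker interfaces mentioned under Features. A rough, hypothetical sketch of how such interfaces and the core loop could be wired (type and method names here are illustrative guesses, not the actual headers in `include/mot/`):

```cpp
#include <optional>
#include <vector>

// Illustrative data types; the real library's types may differ.
struct Frame     { /* pixel data, timestamp, ... */ };
struct Detection { float x, y, w, h, score; int class_id; };
struct Track     { int id; float x, y, w, h; };

class ISource {
public:
    virtual ~ISource() = default;
    // Returns the next frame, or std::nullopt when the stream ends.
    virtual std::optional<Frame> next() = 0;
};

class IDetector {
public:
    virtual ~IDetector() = default;
    virtual std::vector<Detection> detect(const Frame& frame) = 0;
};

class ITracker {
public:
    virtual ~ITracker() = default;
    // Consumes per-frame detections, returns tracks with stable IDs.
    virtual std::vector<Track> update(const std::vector<Detection>& dets) = 0;
};

// Core loop mirroring the diagram: Source -> Detector -> Tracker.
// Returns the number of frames processed.
int run(ISource& src, IDetector& det, ITracker& trk) {
    int frames = 0;
    while (auto f = src.next()) {
        auto tracks = trk.update(det.detect(*f));
        (void)tracks;  // hand tracks off to a renderer/writer here
        ++frames;
    }
    return frames;
}
```

Because each stage sits behind an abstract interface, swapping a file source for a webcam source, or SORT for another tracker, only requires a new subclass.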

## 🚀 Quick Start

### Dependencies

- Linux (Ubuntu/Debian recommended)
- OpenCV 4.x (`libopencv-dev`)
- CMake 3.15+
- GCC/Clang supporting C++17

```shell
# Install dependencies on Ubuntu
sudo apt-get update
sudo apt-get install build-essential cmake libopencv-dev
```

## 🛠️ Build

```shell
mkdir build && cd build
cmake ..
make -j$(nproc)
```

## 🏃 Usage

**1. Download Assets**

We provide helper scripts to get you started with a model and sample video:

```shell
chmod +x scripts/*.sh
./scripts/download_sample_video.sh
./scripts/download_yolo_onnx.sh
```

**2. Run the Pipeline**

Visual Mode (Desktop):

```shell
./apps/mot_run --input assets/video.mp4 --model models/yolov5n.onnx
```

Headless / Server Mode:

```shell
./apps/mot_run --input assets/video.mp4 \
               --model models/yolov5n.onnx \
               --headless \
               --out-jsonl tracks.jsonl
```

Live Webcam:

```shell
./apps/mot_run --input camera:0 --model models/yolov5n.onnx --conf 0.6
```

## ⚙️ Configuration

| Flag | Description | Default |
|------|-------------|---------|
| `-i, --input` | Path to video file or `camera:<id>` | Required |
| `-m, --model` | Path to `.onnx` model file | Required |
| `--conf` | Confidence threshold | `0.5` |
| `--nms` | NMS IoU threshold | `0.45` |
| `--track-iou` | Tracker association threshold | `0.3` |
| `--headless` | Run without GUI window | `false` |
| `--out-video` | Path to save output MP4 | None |
| `--out-jsonl` | Path to save tracks JSONL | None |
| `--cuda` | Attempt to use CUDA backend | `false` |
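For context on `--track-iou`: a detection is associated with a track only when their IoU exceeds this threshold, and before matching, SORT advances each track with a constant-velocity Kalman prediction. A heavily simplified sketch of that predict step (the state layout follows the original SORT paper: box center, scale, aspect ratio, plus their velocities; the library's internal representation may differ):

```cpp
#include <array>

// Simplified SORT-style track state: [cx, cy, s, r, vx, vy, vs],
// i.e. box center, scale (area), aspect ratio, and their velocities;
// the aspect ratio r is assumed constant. Illustrative only.
using State = std::array<float, 7>;

// Constant-velocity predict step: each velocity component is added onto
// its position component (dt = 1 frame), so x_{k+1} = F * x_k for a
// transition matrix F with ones on the diagonal and the velocity links.
State predict(State x) {
    x[0] += x[4];  // cx += vx
    x[1] += x[5];  // cy += vy
    x[2] += x[6];  // s  += vs
    if (x[2] < 0.0f) x[2] = 0.0f;  // keep scale (area) non-negative
    return x;
}
```

A full filter also propagates the state covariance and runs a measurement update against the matched detection; this sketch shows only the motion model.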

## 🧪 Testing

The project uses Catch2 for unit testing, verifying the correctness of core algorithms (IoU, Hungarian matching).

```shell
cd build
./tests/mot_tests
```

## 📂 Project Structure

- `apps/` - CLI executable entry point.
- `include/mot/` - Public API headers.
- `src/` - Core library implementation.
- `tests/` - Unit tests.
- `scripts/` - Asset helper scripts.
