
# AnomaVision πŸ”

**High-performance visual anomaly detection. Fast, lightweight, production-ready.**

AnomaVision detects defects without ever seeing defective examples during training.
<br>

[![PyPI](https://img.shields.io/pypi/v/anomavision?label=PyPI&color=blue)](https://pypi.org/project/anomavision/)
[![PyPI Downloads](https://img.shields.io/pypi/dm/anomavision?color=blue)](https://pypi.org/project/anomavision/)
[![Python](https://img.shields.io/badge/Python-3.10--3.12-blue)](https://www.python.org/)
[![PyTorch](https://img.shields.io/badge/PyTorch-2.0%2B-red)](https://pytorch.org/)
[![License: MIT](https://img.shields.io/badge/License-MIT-green)](LICENSE)
[![ONNX](https://img.shields.io/badge/ONNX-Export%20Ready-orange)](https://onnx.ai/)
[![TensorRT](https://img.shields.io/badge/TensorRT-Supported-76b900)](https://developer.nvidia.com/tensorrt)
[![OpenVINO](https://img.shields.io/badge/OpenVINO-Supported-0071C5)](https://docs.openvino.ai/)
[![HuggingFace](https://img.shields.io/badge/πŸ€—%20Demo-Live-yellow)](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection)

<br>

[**Live Demo**](#-live-demo) Β· [**Docs**](docs/quickstart.md) Β· [**Quickstart**](#-quickstart) Β· [**Models**](#-models--performance) Β· [**Tasks**](#-tasks--modes) Β· [**Integrations**](#-integrations) Β· [**Issues**](https://github.com/DeepKnowledge1/AnomaVision/issues) Β· [**Discussions**](https://github.com/DeepKnowledge1/AnomaVision/discussions)

</div>

---

## πŸ€— Live Demo

> **Try AnomaVision instantly β€” no installation required.**

[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-xl-dark.svg)](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection)

The live demo runs a **PaDiM model trained on MVTec bottle images** and shows:

- 🌑️ **Anomaly Heatmap** β€” spatial score map highlighting defect regions
- πŸ–ΌοΈ **Overlay** β€” original image with anomaly contours drawn
- 🎭 **Predicted Mask** β€” binary segmentation of detected defects
- ⚑ **Real-time inference** β€” results in milliseconds on CPU

Upload your own bottle image or pick from the provided samples to see anomaly detection in action.

---

## What is AnomaVision?

AnomaVision delivers **visual anomaly detection** optimized for production deployment. Based on PaDiM, it learns the distribution of normal images in a **single forward pass** β€” no labels, no segmentation masks, no lengthy training loops.

The result: a 15 MB model that runs at **43 FPS on CPU** and **547 FPS on GPU**, with higher AUROC than the existing best-in-class baseline.
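The recipe behind this is compact enough to sketch in generic NumPy. The snippet below illustrates PaDiM-style scoring with random stand-in features; it is not AnomaVision's actual API, and every name in it is hypothetical:

```python
import numpy as np

# Illustrative PaDiM-style scoring (NOT AnomaVision's API): fit one Gaussian
# per spatial location over patch embeddings of normal images, then score a
# test image's patches by Mahalanobis distance.
rng = np.random.default_rng(0)

N, L, D = 32, 196, 32                 # normal images, spatial locations, embedding dim
train = rng.normal(size=(N, L, D))    # stand-in for backbone features

mean = train.mean(axis=0)             # (L, D) per-location mean
cov = np.empty((L, D, D))
for i in range(L):                    # per-location covariance + regularizer
    cov[i] = np.cov(train[:, i, :], rowvar=False) + 0.01 * np.eye(D)
cov_inv = np.linalg.inv(cov)

test = rng.normal(size=(L, D))        # embeddings of one test image
delta = test - mean
# Mahalanobis distance per location -> a flat anomaly score map
scores = np.sqrt(np.einsum("ld,ldk,lk->l", delta, cov_inv, delta))
print(scores.shape)  # (196,)
```

In the real method the embeddings come from a pretrained CNN backbone and the per-location Gaussians are fitted once, in that single training pass.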



---

## πŸš€ Quickstart

```bash
pip install uv
```

---

#### Option A β€” From Source (development)

```bash
uv pip install "anomavision[cu124]"  # CUDA 12.4
```

---


#### Option C β€” Already installed without extras?

If you're seeing `ModuleNotFoundError: No module named 'torch'`, add PyTorch into your current environment:
```bash
uv pip install torch torchvision torchaudio --index-url https://download.pytorch
python -c "import anomavision, torch; print('βœ… Ready β€”', torch.__version__)"
```

---

### CLI

```bash
anomavision export --help
```

Use the Python API when you want to embed AnomaVision into a larger pipeline,
run it inside a notebook, or integrate it with your own data loading logic.

```python
import torch
import anomavision
```

Full docs at **http://localhost:8000/docs** once the server is running.

</details>

---

<details>
<summary><strong>πŸ“Š Models & Performance</strong></summary>
<br>


### MVTec AD β€” Average over 15 Classes

| Model | Image AUROC ↑ | Pixel AUROC ↑ | CPU FPS ↑ | GPU FPS ↑ | Size ↓ |
|---|---|---|---|---|---|
| wood | 0.986 | 0.915 | 0.973 | 0.975 | 45.3 |
| zipper | 0.914 | 0.979 | 0.972 | 0.971 | 41.0 |

</details>

---
```bash
anomavision export \
```

---

</details>

<details>
<summary><strong>πŸ“Ί Streaming Sources</strong></summary>
<br>



Run inference on **live sources** without changing your model or code:

| Source | `stream_source.type` | Use case |
|---|---|---|
```yaml
enable_visualization: true
```

```bash
anomavision detect --config stream_config.yml
```

---
</details>

<details>
<summary><strong>βš™οΈ Configuration</strong></summary>
<br>


All scripts accept `--config config.yml` and CLI overrides. **CLI always wins.**

```yaml
log_level: INFO
```

Full key reference: [`docs/config.md`](docs/config.md)
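As a sketch of what "CLI always wins" means in practice, here is a generic merge of file values and CLI overrides. This is an illustration, not AnomaVision's actual config loader:

```python
# Precedence rule sketch: values from the config file are the base, and any
# CLI flag that was actually set (i.e. is not None) overrides them.
def merge_config(file_cfg: dict, cli_overrides: dict) -> dict:
    merged = dict(file_cfg)
    merged.update({k: v for k, v in cli_overrides.items() if v is not None})
    return merged

cfg = merge_config(
    {"batch_size": 8, "device": "cpu"},     # from config.yml
    {"device": "cuda", "batch_size": None}, # --device cuda, --batch-size unset
)
print(cfg)  # {'batch_size': 8, 'device': 'cuda'}
```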

---

</details>

<details>
<summary><strong>πŸ”Œ Integrations</strong></summary>
<br>



| Integration | Description |
|---|---|
| **FastAPI** | REST API β€” `/predict`, `/predict/batch`, Swagger UI at `/docs` |
| **Streamlit** | Browser demo β€” heatmap overlay, threshold slider, batch upload |
| **Gradio** | [Live HuggingFace Space](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection) β€” try it instantly |
| **C++ Runtime** | ONNX + OpenCV, no Python required β€” see [`docs/cpp/`](docs/cpp/README.md) |
| **OpenVINO** | Intel CPU/VPU edge optimization |
| **TensorRT** | NVIDIA GPU maximum throughput |
| **INT8 Quantization** | Dynamic + static INT8 via ONNX Runtime |

**Start the demo stack:**

```bash
# Terminal 1 β€” backend
uvicorn apps.api.fastapi_app:app --host 0.0.0.0 --port 8000
# Terminal 2 — frontend
streamlit run apps/ui/streamlit_app.py -- --port 8000
```

Open **http://localhost:8501**

<div align="center">
<img src="docs/images/streamlit.png" alt="Streamlit Demo" width="65%"/>
</div>

---

</details>

<details>
<summary><strong>πŸ“‚ Dataset Format</strong></summary>
<br>



AnomaVision uses [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) layout. Custom datasets work with the same structure:

```
dataset/
├── train/
│   └── good/            ← normal training images
└── test/
    ├── good/            ← normal test images
    └── <defect_name>/   ← anomalous test images (any subfolder name)
```
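Before training on a custom dataset, a stdlib-only check like the following can catch layout mistakes early. This is a generic sketch, independent of AnomaVision's own validation:

```python
from pathlib import Path

def check_mvtec_layout(root: str) -> list[str]:
    """Return a list of problems found in an MVTec-style dataset folder."""
    base = Path(root)
    problems = []
    if not (base / "train" / "good").is_dir():
        problems.append("missing train/good/")
    test_dir = base / "test"
    if not test_dir.is_dir():
        problems.append("missing test/")
    elif not any(p.is_dir() for p in test_dir.iterdir()):
        problems.append("test/ has no subfolders")
    return problems

print(check_mvtec_layout("dataset"))  # prints any problems found; empty list if none
```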

</details>

---

## πŸ—οΈ Architecture


<img src="docs/images/archti.png" width="100%" alt="AnomaVision architecture"/>

**Key design decisions:**

**Adaptive Gaussian post-processing** is applied to score maps after inference. The kernel is sized relative to the image resolution, which is a key factor behind the Pixel AUROC gain over baseline.
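A minimal sketch of what resolution-adaptive smoothing looks like; the scaling rule used here (base sigma 4.0 at a 224 px reference) is an assumption for illustration, not AnomaVision's exact formula:

```python
import numpy as np

def smooth_score_map(score_map, base_sigma=4.0, ref_size=224):
    """Blur a (H, W) anomaly score map with a Gaussian sized to its resolution."""
    h = score_map.shape[0]
    sigma = base_sigma * h / ref_size          # larger images get wider kernels
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Separable convolution: smooth along rows, then along columns.
    out = np.apply_along_axis(np.convolve, 1, score_map, kernel, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")
    return out

smoothed = smooth_score_map(np.random.rand(448, 448))
print(smoothed.shape)  # (448, 448)
```

Smoothing suppresses isolated high-score pixels, so thresholded masks follow defect regions instead of noise.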

---

## πŸ› οΈ Development

Expand All @@ -448,7 +449,7 @@ source .venv/bin/activate # Windows: .venv\Scripts\Activate.ps1
# Install with dev dependencies
uv sync --extra cpu # or --extra cu121 for GPU

# Install the package in editable mode
uv pip install -e .

# Verify CLI is working
```

PRs must pass `pytest` + `flake8` and include doc updates if behavior changes.

<details>
<summary><strong>Docker</strong></summary>

```dockerfile
# Slim base image; pin a digest or exact version for reproducible builds
FROM python:3.11-slim

# System libraries needed by OpenCV, plus git/git-lfs for fetching model assets
RUN apt-get update && apt-get install -y \
    git git-lfs libsm6 libxext6 libgl1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Run the app as a non-root user
RUN useradd -m -u 1000 user

RUN pip install --upgrade pip setuptools wheel && \
    pip install --no-cache-dir uv && \
    uv pip install --system "anomavision[cpu]"

USER user
ENV PATH="/home/user/.local/bin:/usr/local/bin:$PATH"

WORKDIR /home/user/app
COPY --chown=user . .

EXPOSE 7860
CMD ["python", "app.py"]
```

```bash
docker build -t anomavision .
docker run -p 7860:7860 -v $(pwd)/distributions:/home/user/app/distributions anomavision
```

</details>
More: [`docs/troubleshooting.md`](docs/troubleshooting.md)

- πŸ› [Issues](https://github.com/DeepKnowledge1/AnomaVision/issues) β€” bug reports
- πŸ’‘ [Discussions](https://github.com/DeepKnowledge1/AnomaVision/discussions) β€” questions, ideas, show & tell
- πŸ€— [Live Demo](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection) β€” try it in your browser
- πŸ“§ [deepp.knowledge@gmail.com](mailto:deepp.knowledge@gmail.com) β€” direct contact

---