diff --git a/README.md b/README.md
index dcfceff..65413d0 100644
--- a/README.md
+++ b/README.md
@@ -3,36 +3,53 @@
# AnomaVision 🔍
-**high-performance visual anomaly detection. Fast, lightweight, production-ready.**
+**High-performance visual anomaly detection. Fast, lightweight, production-ready.**
AnomaVision detects defects without ever seeing defective examples during training.
[](https://pypi.org/project/anomavision/)
[](https://pypi.org/project/anomavision/)
-[](https://www.python.org/)
+[](https://www.python.org/)
[](https://pytorch.org/)
[](LICENSE)
[](https://onnx.ai/)
[](https://developer.nvidia.com/tensorrt)
[](https://docs.openvino.ai/)
+[](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection)
-[**Docs**](docs/quickstart.md) · [**Quickstart**](#-quickstart) · [**Models**](#-models--performance) · [**Tasks**](#-tasks--modes) · [**Integrations**](#-integrations) · [**Issues**](https://github.com/DeepKnowledge1/AnomaVision/issues) · [**Discussions**](https://github.com/DeepKnowledge1/AnomaVision/discussions)
+[**Live Demo**](#-live-demo) · [**Docs**](docs/quickstart.md) · [**Quickstart**](#-quickstart) · [**Models**](#-models--performance) · [**Tasks**](#-tasks--modes) · [**Integrations**](#-integrations) · [**Issues**](https://github.com/DeepKnowledge1/AnomaVision/issues) · [**Discussions**](https://github.com/DeepKnowledge1/AnomaVision/discussions)
---
+## 🤗 Live Demo
+
+> **Try AnomaVision instantly — no installation required.**
+
+[](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection)
+
+The live demo runs a **PaDiM model trained on MVTec bottle images** and shows:
+
+- 🌡️ **Anomaly Heatmap** — spatial score map highlighting defect regions
+- 🖼️ **Overlay** — original image with anomaly contours drawn
+- 🎭 **Predicted Mask** — binary segmentation of detected defects
+- ⚡ **Real-time inference** — results in milliseconds on CPU
+
+Upload your own bottle image or pick from the provided samples to see anomaly detection in action.
+
+---
+
## What is AnomaVision?
AnomaVision delivers **visual anomaly detection** optimized for production deployment. Based on PaDiM, it learns the distribution of normal images in a **single forward pass** — no labels, no segmentation masks, no lengthy training loops.
The result: a 15 MB model that runs at **43 FPS on CPU** and **547 FPS on GPU**, with higher AUROC than the best-in-class baseline.
-
-
+---
## 🚀 Quickstart
@@ -47,6 +64,7 @@ pip install uv
```
---
+
#### Option A — From Source (development)
```bash
@@ -84,7 +102,6 @@ uv pip install "anomavision[cu124]" # CUDA 12.4
---
-
#### Option C — Already installed without extras?
If you're seeing `ModuleNotFoundError: No module named 'torch'`, add PyTorch into your current environment:
@@ -105,6 +122,7 @@ uv pip install torch torchvision torchaudio --index-url https://download.pytorch
python -c "import anomavision, torch; print('✅ Ready —', torch.__version__)"
```
+---
### CLI
@@ -140,6 +158,7 @@ anomavision export --help
Use the Python API when you want to embed AnomaVision into a larger pipeline,
run it inside a notebook, or integrate it with your own data loading logic.
+
```python
import torch
import anomavision
@@ -211,11 +230,12 @@ Full docs at **http://localhost:8000/docs** once the server is running.
+---
+
## 📊 Models & Performance
-
### MVTec AD — Average over 15 Classes
| Model | Image AUROC ↑ | Pixel AUROC ↑ | CPU FPS ↑ | GPU FPS ↑ | Size ↓ |
@@ -255,6 +275,7 @@ Full docs at **http://localhost:8000/docs** once the server is running.
| wood | 0.986 | 0.915 | 0.973 | 0.975 | 45.3 |
| zipper | 0.914 | 0.979 | 0.972 | 0.971 | 41.0 |
+
---
@@ -289,15 +310,11 @@ anomavision export \
---
-
-
## 📺 Streaming Sources
-
-
-Run inference on **live sources** without changing your model or code — just update the config:
+Run inference on **live sources** without changing your model or code:
| Source | `stream_source.type` | Use case |
|---|---|---|
@@ -321,15 +338,12 @@ enable_visualization: true
anomavision detect --config stream_config.yml
```
----
-⚙️ Configuration
## ⚙️ Configuration
-
All scripts accept `--config config.yml` and CLI overrides. **CLI always wins.**
```yaml
@@ -360,27 +374,22 @@ log_level: INFO
Full key reference: [`docs/config.md`](docs/config.md)
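The "CLI always wins" rule above can be pictured as a simple dict merge. This is an illustrative sketch only, not AnomaVision's actual config loader; it assumes unset CLI flags arrive as `None`:

```python
def merge_config(file_cfg: dict, cli_overrides: dict) -> dict:
    """Merge config-file values with CLI overrides; CLI always wins.

    Hypothetical helper for illustration: assumes an unset CLI flag
    comes through as None and therefore does not override the file.
    """
    merged = dict(file_cfg)
    # Only keys the user actually passed on the CLI replace file values.
    merged.update({k: v for k, v in cli_overrides.items() if v is not None})
    return merged
```

For example, `merge_config({"batch_size": 8, "device": "cpu"}, {"batch_size": 32, "device": None})` keeps the CLI batch size but falls back to the file's device.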
----
-
## 🔌 Integrations
-
-
| Integration | Description |
|---|---|
| **FastAPI** | REST API — `/predict`, `/predict/batch`, Swagger UI at `/docs` |
| **Streamlit** | Browser demo — heatmap overlay, threshold slider, batch upload |
+| **Gradio** | [Live HuggingFace Space](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection) — try it instantly |
| **C++ Runtime** | ONNX + OpenCV, no Python required — see [`docs/cpp/`](docs/cpp/README.md) |
| **OpenVINO** | Intel CPU/VPU edge optimization |
| **TensorRT** | NVIDIA GPU maximum throughput |
| **INT8 Quantization** | Dynamic + static INT8 via ONNX Runtime |
**Start the demo stack:**

```bash
# Terminal 1 — backend
uvicorn apps.api.fastapi_app:app --host 0.0.0.0 --port 8000
@@ -391,20 +400,12 @@ streamlit run apps/ui/streamlit_app.py -- --port 8000
Open **http://localhost:8501**
-
-

-
-
----
-
## 📂 Dataset Format
-
-
AnomaVision uses the [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) layout. Custom datasets work with the same structure:
```
@@ -417,12 +418,13 @@ dataset/
└── / ← anomalous test images (any subfolder name)
```
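Before training, a tree like the one above can be sanity-checked with a few lines of `pathlib`. This is a hypothetical helper, not part of the AnomaVision API:

```python
from pathlib import Path

def count_dataset_images(root: str) -> dict:
    """Count images per split/subfolder in an MVTec-style dataset tree.

    Illustrative only: walks root/{train,test}/<subfolder>/ and tallies
    files with common image extensions.
    """
    exts = {".png", ".jpg", ".jpeg", ".bmp"}
    counts = {}
    for split in ("train", "test"):
        split_dir = Path(root) / split
        if not split_dir.is_dir():
            continue
        for sub in sorted(split_dir.iterdir()):
            if sub.is_dir():
                n = sum(1 for p in sub.iterdir() if p.suffix.lower() in exts)
                counts[f"{split}/{sub.name}"] = n
    return counts
```

An empty `train/good` count is the usual sign of a mislaid dataset root.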
+
+
---
## 🏗️ Architecture
-
-
+
**Key design decisions:**
@@ -433,7 +435,6 @@ dataset/
**Adaptive Gaussian post-processing** is applied to score maps after inference. The kernel is sized relative to the image resolution, which is a key factor behind the Pixel AUROC gain over baseline.
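The resolution-relative kernel idea can be sketched as a separable Gaussian blur over the score map. This is a numpy-only illustration, not AnomaVision's implementation; the `rel_sigma` default (sigma 4 at 224 px) is an assumed value, not the library's tuned setting:

```python
import numpy as np

def gaussian_kernel1d(sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_score_map(score_map: np.ndarray, rel_sigma: float = 4 / 224) -> np.ndarray:
    """Blur a score map with a Gaussian whose width scales with resolution.

    Sketch of the adaptive idea: sigma = rel_sigma * longest side, so a
    448 px map is smoothed twice as widely as a 224 px one.
    """
    sigma = rel_sigma * max(score_map.shape)
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    padded = np.pad(score_map, pad, mode="edge")
    # Separable convolution: filter rows, then columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```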
---
-
## 🛠️ Development
@@ -448,7 +449,7 @@ source .venv/bin/activate # Windows: .venv\Scripts\Activate.ps1
# Install with dev dependencies
uv sync --extra cpu # or --extra cu121 for GPU
-# Install the package in editable mode (registers the `anomavision` CLI command)
+# Install the package in editable mode
uv pip install -e .
# Verify CLI is working
@@ -481,50 +482,31 @@ PRs must pass `pytest` + `flake8` and include doc updates if behavior changes. S
## 🐳 Docker
```dockerfile
-# Use a specific digest or version for reproducibility
FROM python:3.11-slim
-# Install uv directly from the official binary to keep the image lean
-COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
-
-# Set production environment variables
-ENV PYTHONDONTWRITEBYTECODE=1 \
- PYTHONUNBUFFERED=1 \
- UV_COMPILE_BYTECODE=1 \
- UV_LINK_MODE=copy
-
-WORKDIR /app
-
-# Install dependencies first (layer caching)
-# We use --no-install-project because we only want the libs here
-RUN --mount=type=cache,target=/root/.cache/uv \
- --mount=type=bind,source=uv.lock,target=uv.lock \
- --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
- uv sync --frozen --no-install-project --extra cpu
-
-# Copy the rest of the application
-COPY . .
-
-# Install the project itself
-RUN --mount=type=cache,target=/root/.cache/uv \
- uv sync --frozen --extra cpu
+RUN apt-get update && apt-get install -y \
+ git git-lfs libsm6 libxext6 libgl1 libglib2.0-0 \
+ && rm -rf /var/lib/apt/lists/*
- # GPU build? Replace --extra cpu with --extra cu121 (or your CUDA version)
- # in both uv sync steps.
+RUN useradd -m -u 1000 user
+RUN pip install --upgrade pip setuptools wheel && \
+ pip install --no-cache-dir uv && \
+ uv pip install --system "anomavision[cpu]"
-# Place uv-installed binaries on the PATH
-ENV PATH="/app/.venv/bin:$PATH"
+USER user
+ENV PATH="/home/user/.local/bin:/usr/local/bin:$PATH"
-EXPOSE 8000
+WORKDIR /home/user/app
+COPY --chown=user . .
-# Use the venv's uvicorn directly
-CMD ["uvicorn", "apps.api.fastapi_app:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
+EXPOSE 7860
+CMD ["python", "app.py"]
```
```bash
docker build -t anomavision .
-docker run -p 8000:8000 -v $(pwd)/distributions:/app/distributions anomavision
+docker run -p 7860:7860 -v $(pwd)/distributions:/home/user/app/distributions anomavision
```
@@ -631,6 +613,7 @@ More: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- 🐛 [Issues](https://github.com/DeepKnowledge1/AnomaVision/issues) — bug reports
- 💡 [Discussions](https://github.com/DeepKnowledge1/AnomaVision/discussions) — questions, ideas, show & tell
+- 🤗 [Live Demo](https://huggingface.co/spaces/DeepKnowledge1/mvtec-anomaly-detection) — try it in your browser
- 📧 [deepp.knowledge@gmail.com](mailto:deepp.knowledge@gmail.com) — direct contact
---