A browser-based Gaussian splat generator built on top of Apple SHARP. ✨
This project lets you:
- upload one image
- generate Gaussian splats in the browser
- preview the result
- download a `.ply` file
- Project repo: bring-shrubbery/ml-sharp-web
- Upstream SHARP repo (Apple): apple/ml-sharp
- SHARP project page: apple.github.io/ml-sharp
- SHARP paper: arXiv:2512.10685
Apple's SHARP repository has separate licenses for code and model weights.
- SHARP code license: LICENSE
- SHARP model license: LICENSE_MODEL
If you use Apple's released SHARP checkpoint/weights, you must follow LICENSE_MODEL (research-use restrictions apply).
- Bun installed
- A modern desktop browser (Chrome or Edge recommended)
- Enough disk space and RAM for the SHARP model (the exported ONNX sidecar is large, ~2.4 GB)
If this project helps you, please star it:
```
bun install
```
This also copies the ONNX Runtime Web WASM assets into `public/ort/` automatically.
```
bun dev
```
Open the URL shown by Vite (usually http://localhost:5173).
- Upload an image.
- Click **Generate Splat**.
- Preview the result and download the `.ply` file.
SHARP exports usually produce two files:
- `sharp_web_predictor.onnx`
- `sharp_web_predictor.onnx.data`

Both files must be served together from the same folder (for example `public/models/`).
Why this matters:
- The `.onnx` file contains only the graph and metadata.
- The `.onnx.data` file contains most of the model weights.
Uploading only the `.onnx` file directly in the browser usually will not work, because the `.onnx.data` sidecar is separate. For that reason, the app uses the hosted model by default.
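The pairing logic above can be sketched in TypeScript. This is a hypothetical helper, not code from this repo; the `externalData` session option referenced in the comment exists in ONNX Runtime Web, but the exact wiring shown is an assumption.

```typescript
// Derive the expected sidecar URL for a model exported with external data.
// Assumption: the .onnx.data file sits next to the .onnx file and shares
// its name, matching the public/models/ layout described above.
function sidecarUrl(modelUrl: string): string {
  if (!modelUrl.endsWith(".onnx")) {
    throw new Error(`expected a .onnx URL, got: ${modelUrl}`);
  }
  return `${modelUrl}.data`;
}

// Sketch of how the sidecar could be passed to ONNX Runtime Web when
// creating the session (names are illustrative):
//
//   await ort.InferenceSession.create(modelUrl, {
//     externalData: [
//       { path: "sharp_web_predictor.onnx.data", data: sidecarUrl(modelUrl) },
//     ],
//   });
```

If the sidecar URL 404s (or returns HTML), session creation fails with the "missing external data" style of error covered in the troubleshooting notes.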
Everything runs in the browser, but you still need an exported SHARP ONNX model.
```
git clone https://github.com/apple/ml-sharp /tmp/ml-sharp-upstream
```
You need Python plus the SHARP dependencies and the ONNX export dependencies.
The easiest route is to follow the upstream SHARP setup first, then run this exporter script from this repo.
From this repo:
```
python3 scripts/export_sharp_onnx.py \
  --sharp-repo /tmp/ml-sharp-upstream \
  --output public/models/sharp_web_predictor.onnx
```
If the model is large (it is), the script will also write:
```
public/models/sharp_web_predictor.onnx.data
```
Optional flags:
- `--checkpoint /path/to/sharp_2572gikvuh.pt` to use a manually downloaded checkpoint
- `--device cuda` to export on the GPU (if your environment supports it)
- `--opset 20` to change the ONNX opset (the default is `20`)
If you want a static build instead of running `bun dev`:
```
bun run build
bun run preview
```
Notes:
- `bun run build` copies `public/` into `dist/`, including the model files.
- If `sharp_web_predictor.onnx.data` is present, the build output will be very large.
- React + TypeScript UI (`src/`)
- ONNX Runtime Web worker for inference (`src/workers/sharpWorker.ts`)
- Browser-side SHARP postprocessing (NDC -> metric gaussian conversion)
- Browser-side PLY writer
- In-page preview with `@mkkellogg/gaussian-splats-3d`
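To illustrate the PLY-writer step, here is a minimal sketch of building a PLY header in TypeScript. The function name and the exact property list are hypothetical, not taken from this repo's writer; real gaussian-splat PLY files typically carry more fields (e.g. spherical-harmonic color coefficients).

```typescript
// Build the header of a binary little-endian PLY file for a point-like
// element with float properties. The body (packed float32 records) would
// be appended after "end_header".
function plyHeader(vertexCount: number, properties: string[]): string {
  const lines = [
    "ply",
    "format binary_little_endian 1.0",
    `element vertex ${vertexCount}`,
    ...properties.map((p) => `property float ${p}`),
    "end_header",
  ];
  return lines.join("\n") + "\n";
}

// Example: a 3-splat file with position, scale, rotation, and opacity fields.
const header = plyHeader(3, [
  "x", "y", "z",
  "scale_0", "scale_1", "scale_2",
  "rot_0", "rot_1", "rot_2", "rot_3",
  "opacity",
]);
```

A viewer such as `@mkkellogg/gaussian-splats-3d` parses this header to learn the per-vertex record layout before reading the binary body.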
This means a WASM file request returned HTML instead.
Try:
- Run the app with `bun dev` (not `file://...`).
- Restart the dev server after `bun install`.
- Verify these load in your browser:
  - `/ort/ort-wasm-simd-threaded.asyncify.mjs`
  - `/ort/ort-wasm-simd-threaded.asyncify.wasm`
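The failure mode above can be detected programmatically: if a request for a `.wasm` or `.mjs` asset comes back with an HTML content type, the server is almost certainly returning an index-page fallback instead of the file. A hypothetical TypeScript check (not code from this repo):

```typescript
// Returns true when a WASM/JS-module asset request was answered with HTML,
// which is the classic symptom of a dev-server 404 fallback page.
function looksLikeHtmlFallback(requestedPath: string, contentType: string): boolean {
  const wantsWasm =
    requestedPath.endsWith(".wasm") || requestedPath.endsWith(".mjs");
  return wantsWasm && contentType.toLowerCase().startsWith("text/html");
}
```

You can run the same check by hand in the browser's Network tab: the response for the `.wasm` file should have `Content-Type: application/wasm`, not `text/html`.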
This means the ONNX sidecar file is missing or not served correctly.
Check:
- `public/models/sharp_web_predictor.onnx`
- `public/models/sharp_web_predictor.onnx.data`
- The app can reach the hosted model files in your deployment/browser environment
SHARP is large and browser inference is heavy.
Try:
- Chrome or Edge (desktop)
- a smaller **Max gaussians** value in the UI
- closing other memory-heavy tabs/apps
- waiting longer on first run (model + runtime initialization can take time)
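To see why lowering **Max gaussians** helps, here is a rough size estimate. The 68-bytes-per-gaussian figure is an assumption (17 float32 fields: position, normal, base color, opacity, scale, rotation) and not the app's exact PLY layout; splats carrying full spherical-harmonic color are several times larger.

```typescript
// Order-of-magnitude estimate of PLY output size for a given splat budget.
// ASSUMPTION: 17 float32 fields per gaussian (68 bytes); the real layout
// used by this app may differ.
const BYTES_PER_GAUSSIAN = 17 * 4;

function estimatePlyBytes(maxGaussians: number): number {
  return maxGaussians * BYTES_PER_GAUSSIAN;
}
```

Under that assumption, one million gaussians is on the order of 68 MB of PLY data before the viewer builds its own GPU-side buffers, so halving the budget meaningfully reduces both download size and memory pressure.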
Working prototype / experimental. 🧪
The app runs end-to-end in the browser, but performance and compatibility depend heavily on browser WebGPU/WASM support and your machine's available memory.