Is there an existing issue for this?
What happened?
PictoPy currently lacks a standardized approach to evaluating and optimizing inference performance.
This issue proposes:
- Exporting models to ONNX format
- Benchmarking inference performance (PyTorch vs ONNX Runtime)
- Establishing a foundation for hardware-accelerated inference
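As a starting point for the benchmarking item above, the comparison could be driven by a small timing harness like the sketch below. It is stdlib-only and framework-agnostic: the `benchmark` helper and the warmup/run counts are illustrative assumptions, not code from PR #1248. In a real run, the callable passed in would wrap a PyTorch forward pass (`model(input_tensor)`) in one case and an ONNX Runtime session call (`session.run(None, {"input": array})`) in the other.

```python
import statistics
import time


def benchmark(fn, warmup=5, runs=50):
    """Time a zero-argument callable; return mean and p95 latency in ms.

    Warmup iterations are discarded so one-time costs (lazy init,
    kernel compilation, caches) do not skew the measured samples.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples),
        "p95_ms": samples[min(len(samples) - 1, int(0.95 * len(samples)))],
    }


# Placeholder workload standing in for a model forward pass.
result = benchmark(lambda: sum(i * i for i in range(10_000)))
print(result)
```

Reporting p95 alongside the mean matters for inference workloads, since tail latency is what users of an interactive gallery app actually feel.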
This is being addressed in PR #1248.