# ModelCriticism.jl

Observable-based model evaluation, Pareto optimization, and Bayesian stacking for scientific model criticism.
ModelCriticism.jl provides a structured framework for evaluating scientific
simulation models against data via observable-based scoring, multi-objective
Pareto optimization, and Bayesian model stacking.
This is the Julia implementation of the model-criticism framework; a Python implementation exists under the same name.

**Status:** pre-alpha. The API is still being designed.
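As a sketch of the kind of multi-objective comparison the package targets, here is a minimal Pareto-front computation in plain Julia. This is illustrative only, not the package API (which is still in design); the function names `dominates` and `pareto_front` are hypothetical.

```julia
# Each model is summarized by a vector of observable-based scores
# (lower is better). A model is dominated if another model is at least
# as good on every score and strictly better on at least one.
dominates(a, b) = all(a .<= b) && any(a .< b)

# Keep only the non-dominated score vectors: the Pareto front.
pareto_front(scores) = [s for s in scores if !any(o -> dominates(o, s), scores)]

scores = [[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]]
front = pareto_front(scores)  # [3.0, 3.0] is dominated by [2.0, 2.0]
```

Models on the front represent distinct, non-improvable trade-offs between observables; dominated models can be discarded before stacking.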
## Related packages

| Package | Description |
|---|---|
| model-criticism | Python implementation of this same framework |
| OpEngine.jl | Operator-partitioned solver (planned consumer) |
| OpSystem.jl | System specification compiler (planned consumer) |
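The Bayesian stacking component can likewise be sketched in plain Julia: stacking chooses mixture weights on the simplex that maximize the summed log pointwise predictive density of the combined model. The two-model grid search below is a toy illustration under assumed per-observation densities, not the package API.

```julia
# Per-observation predictive densities for two candidate models
# (assumed values for illustration).
p1 = [0.6, 0.5, 0.1, 0.4]
p2 = [0.2, 0.3, 0.7, 0.3]

# Log pointwise predictive density of the w-weighted mixture.
lppd(w) = sum(log.(w .* p1 .+ (1 - w) .* p2))

# Grid search over the 1-simplex (a real implementation would use
# convex optimization over K model weights).
ws = 0.0:0.01:1.0
w_best = ws[argmax(lppd.(ws))]
```

The stacked predictive distribution `w_best * p1 + (1 - w_best) * p2` scores at least as well as either model alone on the fitted data.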
## Installation

```julia
using Pkg
Pkg.add("ModelCriticism")
```

## Development

```shell
julia --project=. -e 'using Pkg; Pkg.instantiate()'
just test
```

## License

MIT