Explore PyO3 Python API designs for solverang bindings #5
Conversation
Implements the complete architecture from docs/plans/solver-first-v3.md:

Tier 1 - Core Infrastructure:
- id.rs: Generational index types (ParamId, EntityId, ConstraintId, ClusterId)
- param/store.rs: Central ParamStore with alloc/free/fix/unfix and SolverMapping
- entity/mod.rs: Entity trait (named groups of parameters)
- constraint/mod.rs: Constraint trait (residuals + Jacobian via ParamId)
- graph/: Bipartite constraint graph, RigidCluster, decompose wrapper
- solve/sub_problem.rs: ReducedSubProblem bridges new traits to existing Problem
- system.rs: ConstraintSystem coordinator with solve pipeline

Tier 2 - Differentiating Features:
- reduce/: Fixed-param substitution, coincident merging, trivial elimination
- graph/redundancy.rs: Jacobian rank analysis for redundant/conflicting constraints
- graph/dof.rs: Per-entity degrees-of-freedom via null space analysis
- dataflow/: ChangeTracker for incremental solving, SolutionCache for warm starts

Tier 3 - Advanced Solving:
- solve/drag.rs: Null-space projection for under-constrained interactive drag
- solve/branch.rs: Multi-branch selection (closest-to-previous, smallest-residual)
- graph/pattern.rs: Solvable pattern detection (scalar, two-distances, H+V, etc.)
- solve/closed_form.rs: Analytical solvers for matched patterns

Tier 4 - Geometry Plugins:
- sketch2d/: Point2D, LineSegment2D, Circle2D, Arc2D, InfiniteLine2D entities; 15 constraint types with squared formulations; Sketch2DBuilder
- sketch3d/: Point3D, LineSegment3D, Plane, Axis3D; 7 constraint types
- assembly/: RigidBody with quaternion orientation; Mate, Coaxial, Insert, Gear

All 875 tests pass. Existing solver infrastructure preserved and reused via the ReducedSubProblem adapter pattern.

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
Analyzes five distinct API design approaches (callback-based, pre-built types, hybrid, protocol-based, decorator-based) with concrete code examples, Rust implementation sketches, and a comparison matrix evaluating performance, flexibility, and pythonic-ness tradeoffs. https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
Explores using Python operator overloading (__add__, __mul__, __pow__, etc.) to build Rust-side expression trees that are symbolically differentiated, lowered to ConstraintOp opcodes, JIT-compiled via Cranelift, and solved entirely in Rust with the GIL released. This is the only design that achieves both custom user-defined math and full GIL-free native performance. Updates recommendation from Design C (hybrid callbacks) to Design F + B (expression graphs + pre-built geometry) as the primary API direction. https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
Detailed 4-phase plan for Design F: operator overloading builds Rust-side expression trees that are symbolically differentiated, JIT-compiled via Cranelift, and solved with GIL released. Key addition: control flow via branchless FCmp + Select opcodes (hardware cmov/csel). Both branches are always evaluated; the result is selected without branches. Derivatives route through Select nodes, avoiding the NaN-contamination problem that PyTorch's torch.where has. Phases: 1. RuntimeExpr in core crate (~1,255 LOC) 2. PyO3 bindings with maturin (~895 LOC) 3. Geometry integration (~600 LOC) 4. Extended capabilities (Exp/Ln, Parameters, batch solve) https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
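The branchless-select semantics above can be sketched in plain Python: both branch values (and their derivatives) are computed up front, and only the result is selected, so the derivative follows the taken branch instead of mixing in NaNs from the untaken one. This is illustrative only; the real design lowers to FCmp + Select opcodes, and the function names here are made up.

```python
def select(cond, a, b):
    """Select between two already-computed values. The `if` here is
    illustrative; the real lowering emits a hardware cmov/csel."""
    return a if cond else b

def d_select(cond, da, db):
    """Derivatives route through the Select node: pick the derivative
    of the taken branch, avoiding NaN contamination from the other."""
    return da if cond else db

def f_and_grad(x):
    """f(x) = select(x > 0, x*x, -x); both branches evaluated eagerly."""
    cond = x > 0.0
    true_val, d_true = x * x, 2.0 * x    # true branch value + derivative
    false_val, d_false = -x, -1.0        # false branch value + derivative
    return select(cond, true_val, false_val), d_select(cond, d_true, d_false)
```

For example, `f_and_grad(3.0)` returns `(9.0, 6.0)` and `f_and_grad(-2.0)` returns `(2.0, -1.0)`: the gradient always comes from the selected branch.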
Comprehensive comparison of Python AST extraction (@sr.compile decorator) vs operator overloading (Design F) for the solverang Python API, drawing on lessons from JAX, PyTorch, Numba, Taichi, and SymPy. Concludes operator overloading is the clear winner due to Jupyter/REPL/lambda support, composability, simplicity, and industry precedent (Google abandoned AST approach with Tangent in favor of JAX's tracing). https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
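The tracing mechanism that makes operator overloading work in Jupyter/REPL/lambda contexts can be sketched in a few lines. `Expr` and `Var` here are illustrative stand-ins for the Rust-backed types; the point is that any Python callable can be traced simply by calling it with expression objects.

```python
class Expr:
    """Toy expression node: operators build a tree instead of computing."""
    def __init__(self, op, args):
        self.op, self.args = op, args
    def __add__(self, other):
        return Expr("add", [self, _lift(other)])
    def __mul__(self, other):
        return Expr("mul", [self, _lift(other)])
    __radd__ = __add__
    __rmul__ = __mul__

class Var(Expr):
    def __init__(self, name):
        super().__init__("var", [])
        self.name = name

def _lift(v):
    """Wrap plain numbers as constant nodes."""
    return v if isinstance(v, Expr) else Expr("const", [v])

# Any lambda, closure, or REPL snippet is traceable by calling it
# with Var objects -- no AST extraction or source access needed:
f = lambda x, y: x * x + 2 * y
tree = f(Var("x"), Var("y"))
```

Calling `f` produces an `add` node over two `mul` nodes, which a Rust backend could then differentiate and lower, exactly the property that makes tracing compose where AST extraction does not.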
Foundation for the pluggable solve pipeline architecture: - pipeline/types.rs: ClusterData, ClusterAnalysis, ReducedCluster, ClusterSolution - pipeline/traits.rs: Decompose, Analyze, Reduce, SolveCluster, PostProcess traits https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
- pipeline/decompose.rs: DefaultDecompose with union-find (extracted from system.rs) - pipeline/post_process.rs: DefaultPostProcess + DiagnosticPostProcess - pipeline/mod.rs: module declarations - id.rs: add Default derive to ClusterId - lib.rs: declare pipeline module https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
- pipeline/analyze.rs: DefaultAnalyze wrapping redundancy/DOF/pattern modules - pipeline/solve_phase.rs: DefaultSolve with closed-form + LM fallback - Updated mod.rs with new module declarations https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
- SubstituteReducer: wraps reduce/substitute for fixed-param elimination - MergeReducer: wraps reduce/merge for coincident-param merging - EliminateReducer: wraps reduce/eliminate for trivial constraints - ChainedReducer: composes reducers sequentially - DefaultReduce: chains all three in order - NoopReduce: passthrough for benchmarking https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
Major refactor: system.rs now delegates to SolvePipeline instead of hardcoding the solve loop. All previously orphaned V3 modules are now wired in through the pipeline phases:

Pipeline orchestrator (pipeline/mod.rs):
- SolvePipeline: holds boxed trait impls for each phase
- PipelineBuilder: builder pattern for custom pipelines
- Orchestrator: decompose → analyze → reduce → solve → postprocess
- Integrated ChangeTracker for incremental solving
- Integrated SolutionCache for warm starts

System.rs refactored:
- Removed inline UnionFind + decompose_into_clusters (moved to pipeline)
- solve() delegates to pipeline.run()
- Added solve_incremental(), drag(), analyze_redundancy(), analyze_dof(), diagnose()
- Mutation methods notify ChangeTracker + pipeline.invalidate()

919 tests pass, 0 failures (44 new tests from pipeline modules).

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
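The decompose → analyze → reduce → solve → postprocess orchestration in this commit can be sketched as a toy Python analogue. The real SolvePipeline holds boxed Rust trait objects; the names and callable-based phases below are purely illustrative.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Pipeline:
    """Toy analogue of SolvePipeline: each phase is independently
    swappable, mirroring the boxed trait impls described above."""
    decompose: Callable[[Any], list]
    analyze: Callable[[Any], Any]
    reduce: Callable[[Any], Any]
    solve: Callable[[Any], Any]
    post_process: Callable[[list], Any]

    def run(self, system):
        solutions = []
        for cluster in self.decompose(system):
            cluster = self.analyze(cluster)   # redundancy / DOF / patterns
            cluster = self.reduce(cluster)    # substitute / merge / eliminate
            solutions.append(self.solve(cluster))
        return self.post_process(solutions)

# A passthrough pipeline, analogous to NoopReduce/DefaultPostProcess:
identity = lambda c: c
p = Pipeline(decompose=lambda s: [s], analyze=identity,
             reduce=identity, solve=identity, post_process=lambda xs: xs)
```

Swapping any single phase (say, a custom `reduce`) leaves the rest of the orchestration untouched, which is the point of the builder-based design.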
Code Review
This pull request introduces two exceptionally detailed and well-structured design documents for the Python API of solverang. The analysis of different API designs in pyo3-api-design.md is thorough, and the final recommendation for an expression-graph-based API (Design F) is well-justified, balancing performance and flexibility. The implementation plan in expr-graph-api.md is comprehensive and provides a clear path forward. My review includes a few suggestions to clarify minor inconsistencies in the design documents and to address a potentially critical usability issue regarding equality comparisons in the expression API. Overall, this is excellent work that lays a solid foundation for the Python bindings.
```rust
// ─── Comparison operators (return Expr, not bool) ───
// Python's __gt__ etc. must return a type that Python can use.
// We return PyExpr wrapping a Compare node.
// IMPORTANT: __eq__ and __ne__ returning non-bool breaks hashing;
// we use named methods instead of overloading __eq__/__ne__.

fn __gt__(&self, other: ExprOrFloat) -> Self {
    PyExpr {
        inner: RuntimeExpr::Compare {
            a: Box::new(self.inner.clone()),
            b: Box::new(other.into_expr()),
            cond: CmpCondition::Gt,
        },
        name: None,
    }
}
```
The plan correctly identifies that overloading __eq__ and __ne__ to return a non-boolean is problematic. However, not overloading them at all can lead to subtle bugs. For example, a user might write sr.where(x == y, ...) which would use Python's default object identity comparison and likely always evaluate to False, instead of building a comparison node in the expression graph. This can be very confusing.
To prevent this, consider one of two approaches:
- Overload `__eq__` and `__ne__` to return a `PyExpr` (like `__gt__`), and strongly document that the resulting expression object cannot be used in a standard Python `if` statement. This is the approach taken by libraries like SymPy.
- Overload `__eq__` and `__ne__` to raise a `TypeError` with a helpful message guiding the user to use the explicit `sr.eq(a, b)` function. This prevents silent failures and makes the API safer.
The current plan leaves this ambiguous and could lead to a poor user experience.
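The second option can be sketched with a toy Python class. `Expr` here is an illustrative stand-in, not the actual PyO3 binding:

```python
class Expr:
    """Minimal stand-in for PyExpr, showing the 'raise on ==' option."""
    def __gt__(self, other):
        # The real binding returns a Compare expression node here.
        return ("gt", self, other)
    def __eq__(self, other):
        raise TypeError(
            "Expr does not support ==; use sr.eq(a, b) to build an "
            "equality constraint in the expression graph"
        )
    __ne__ = __eq__
    __hash__ = None  # overriding __eq__ disables inherited hashing anyway

x, y = Expr(), Expr()
try:
    x == y  # would silently be identity-False with the default __eq__
except TypeError as exc:
    msg = str(exc)
```

With this shape, `sr.where(x == y, ...)` fails loudly at graph-construction time instead of silently building a constant-`False` condition.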
docs/notes/pyo3-api-design.md
Outdated
```toml
pyo3 = { version = "0.23", features = ["extension-module", "abi3-py39"] }
numpy = "0.23"
```
```rust
.call1(py, (array,))
.unwrap()
.extract::<Vec<f64>>(py)
.unwrap()
```
The Rust implementation sketches use .unwrap() for brevity, which is understandable. However, to avoid promoting a risky practice, it would be beneficial to add a small note here or in the introduction to the sketches, mentioning that these would be replaced with proper error handling (e.g., propagating PyErr) in the actual implementation, as detailed in the later error handling strategy section.
docs/notes/pyo3-api-design.md
Outdated
> - **Imperative geometry API loses Rust builder's fluent chaining** (methods return
>   `None` in Python, not `self`)
There's a minor inconsistency in the description of Design C's imperative geometry API. The 'Cons' section states that methods return None, losing fluent chaining. However, the add_point method on line 606 is shown to return a usize index, which is a very useful pattern. It would be clearer to state that builder methods like constrain_* return None, while entity creation methods like add_point return a handle/index.
### Both-Branches-Safe Requirement

Because both branches are always evaluated, they must not produce NaN/Inf
for any input. For example:

```python
# DANGEROUS: division by zero in false branch when x > 0
r = sr.where(x > 0, x, 1.0 / x)  # 1/x is computed even when x > 0

# SAFE: guard the denominator
r = sr.where(x > 0, x, 1.0 / sr.where(x != 0, x, 1.0))
```
The 'Both-Branches-Safe Requirement' section provides a good example of a dangerous division and a safe alternative. The safe version 1.0 / sr.where(x != 0, x, 1.0) is clever but relies on an __ne__ overload (which has issues as noted in another comment) and might be complex for users to discover.
To improve usability and safety, consider providing a safe_div(a, b) helper function in the library. This function could encapsulate the logic of checking for a zero denominator, making user code both cleaner and less error-prone. For example: r = sr.where(x > 0, x, sr.safe_div(1.0, x)).
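One possible shape for the suggested helper, sketched with plain floats. The real library version would build Select nodes in the expression graph rather than evaluate immediately, and the `fill` default is an assumption taken from the later fix-up commit.

```python
def safe_div(a, b, fill=0.0):
    """Guarded division for both-branches-evaluated code paths:
    when the denominator is zero, return `fill` instead of
    producing inf/nan that would pollute the residual."""
    return a / b if b != 0.0 else fill

# Usage in the pattern from the review comment:
#   r = sr.where(x > 0, x, sr.safe_div(1.0, x))
```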
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e3cc320f77
docs/plans/python/expr-graph-api.md
Outdated
```rust
/// Branchless conditional select: dst = condition != 0 ? true_val : false_val.
/// Both operands are always evaluated. Lowers to Cranelift select (cmov/csel).
```
Avoid evaluating inactive Select branches
Defining Select as always evaluating both operands breaks guarded piecewise expressions, e.g. where(x > 0, sqrt(x), 0) or where(den != 0, num/den, 0): the non-selected branch can still emit NaN/Inf, which then pollutes residual/Jacobian evaluation and can make solves diverge. Since this design is intended to power Python control-flow expressions, Select needs short-circuit/lazy branch semantics (or an equivalent masking strategy) to preserve correctness for domain-restricted branches.
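The failure mode and the input-masking mitigation can be shown in a few lines of plain Python. The function name is made up; the masking trick (clamp the argument so the inactive branch stays in-domain) is one of the strategies this comment alludes to.

```python
import math

def sqrt_or_zero_eager(x):
    """Eager both-branches evaluation, as a branchless Select does.
    Without the max() mask, the true branch would compute sqrt of a
    negative number for x < 0 -- even though the select then picks
    the other branch -- and the NaN/error would pollute the result."""
    true_val = math.sqrt(max(x, 0.0))  # input masked: safe for any x
    false_val = 0.0
    return true_val if x > 0 else false_val
```

Masking the *input* of the domain-restricted branch is equivalent in effect to lazy branch evaluation for these guarded-piecewise cases, without giving up the branchless lowering.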
Pull request overview
Adds two design/implementation planning documents outlining multiple PyO3 API approaches for solverang, with a detailed “Design F” expression-graph plan (operator overloading → symbolic diff → opcode lowering → Cranelift JIT → GIL-free solve).
Changes:
- Added an in-depth implementation plan for an expression-graph-based Python API (Design F), including proposed Rust modules, opcodes, and PyO3 surface API.
- Added a broader exploration document comparing five Python API design options with tradeoffs and example sketches.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| docs/plans/python/expr-graph-api.md | Detailed step-by-step implementation plan for the expression-graph API, including proposed Rust/PyO3 interfaces and phases. |
| docs/notes/pyo3-api-design.md | Comparative exploration of multiple PyO3 API designs, culminating in a recommendation and implementation sketches. |
```rust
#[pyfunction]
#[pyo3(signature = (names, *, count=None))]
fn variables(names: &str, count: Option<usize>) -> Vec<PyExpr> {
    match count {
        Some(n) => (0..n).map(|i| PyExpr {
            inner: RuntimeExpr::Var(i as u32),
            name: Some(format!("{}_{}", names.trim(), i)),
        }).collect(),
        None => names.split_whitespace().enumerate().map(|(i, name)| PyExpr {
            inner: RuntimeExpr::Var(i as u32),
            name: Some(name.to_string()),
        }).collect(),
```
variables() assigns variable indices starting at 0 on every call. This makes expressions from separate variables() calls collide (e.g., x = variables("x"); y = variables("y") would create two Var(0) nodes), which then cannot be disambiguated at solve time. Consider introducing a variable allocator/context (e.g., VarSet/Model that hands out unique indices) or allow passing a starting offset / returning both the variables and the next free index.
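A Python sketch of the suggested allocator/context fix; `VarContext` is a hypothetical name, and the Rust suggestion uses an `AtomicU32` for the same purpose.

```python
import itertools

class VarContext:
    """Hands out globally unique variable indices, so expressions
    built from separate variables() calls never collide."""
    def __init__(self):
        self._next = itertools.count()

    def variables(self, names):
        """Return (index, name) pairs for whitespace-separated names."""
        return [(next(self._next), n) for n in names.split()]

ctx = VarContext()
(xi, _), = ctx.variables("x")
(yi, _), = ctx.variables("y")
# xi == 0, yi == 1: the two calls no longer both produce Var(0)
```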
Suggested change:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Global allocator for unique variable indices across all `variables` calls.
static NEXT_VAR_INDEX: AtomicU32 = AtomicU32::new(0);

#[pyfunction]
#[pyo3(signature = (names, *, count=None))]
fn variables(names: &str, count: Option<usize>) -> Vec<PyExpr> {
    match count {
        Some(n) => (0..n)
            .map(|i| {
                let idx = NEXT_VAR_INDEX.fetch_add(1, Ordering::Relaxed);
                PyExpr {
                    inner: RuntimeExpr::Var(idx),
                    name: Some(format!("{}_{}", names.trim(), i)),
                }
            })
            .collect(),
        None => names
            .split_whitespace()
            .map(|name| {
                let idx = NEXT_VAR_INDEX.fetch_add(1, Ordering::Relaxed);
                PyExpr {
                    inner: RuntimeExpr::Var(idx),
                    name: Some(name.to_string()),
                }
            })
            .collect(),
```
```rust
#[pyo3(signature = (*, residuals=None, equations=None, x0,
                    solver=None, tolerance=None, max_iterations=None))]
fn solve(
    py: Python<'_>,
    residuals: Option<Vec<PyExpr>>,
    equations: Option<Vec<PyExpr>>,
    x0: PyReadonlyArray1<'_, f64>,
    solver: Option<&str>,
```
The solve() sketch types x0 as PyReadonlyArray1<f64>, which requires a NumPy array, but the earlier examples call sr.solve(..., x0=[0.5, 0.5]) (a Python list). Either adjust the API to accept Vec<f64>/&Bound<PyAny> and convert from any sequence, or update the documentation/examples to require np.array([...], dtype=np.float64).
docs/plans/python/expr-graph-api.md
Outdated
```rust
let exprs: Vec<RuntimeExpr> = residuals.or(equations)
    .ok_or_else(|| PyValueError::new_err(
        "must provide 'residuals' or 'equations'"
    ))?
```
let exprs = residuals.or(equations) will silently ignore equations when both are provided. If both keyword args are allowed, it’s better to error when both are Some (and also when both are None) so users don’t get surprising behavior.
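The exactly-one-of validation reads the same way in Python; the helper name is illustrative.

```python
def pick_exprs(residuals=None, equations=None):
    """Require exactly one of the two keyword arguments, mirroring
    the match on (residuals, equations) suggested for the Rust code."""
    if (residuals is None) == (equations is None):
        raise ValueError(
            "must provide exactly one of 'residuals' or 'equations'"
        )
    return residuals if residuals is not None else equations
```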
Suggested change:

```rust
let expr_vec: Vec<PyExpr> = match (residuals, equations) {
    (Some(res), None) => res,
    (None, Some(eq)) => eq,
    (Some(_), Some(_)) => {
        return Err(PyValueError::new_err(
            "must provide exactly one of 'residuals' or 'equations'",
        ));
    }
    (None, None) => {
        return Err(PyValueError::new_err(
            "must provide 'residuals' or 'equations'",
        ));
    }
};
let exprs: Vec<RuntimeExpr> = expr_vec
```
docs/plans/python/expr-graph-api.md
Outdated
```rust
"lm" | "levenberg-marquardt" => {
    let mut config = LMConfig::default();
    if let Some(tol) = tolerance { config = config.with_tol(tol); }
    if let Some(max) = max_iterations { config.patience = max; }
```
In the LM branch, max_iterations is mapped to LMConfig.patience, but patience is a factor for max function evaluations (max_fev = patience * (n + 1)), not an iteration limit. This mapping will not behave like Newton’s max_iterations; consider exposing LM’s knobs explicitly (e.g., max_evaluations / patience) or translating max_iterations into patience based on num_vars and documenting the difference.
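The suggested translation is simple arithmetic, shown here in Python (function name is illustrative), assuming `max_fev = patience * (num_vars + 1)`:

```python
def patience_from_max_iterations(max_iterations, num_vars):
    """Derive an LM patience value from a user-facing max_iterations
    cap, given that max_fev = patience * (num_vars + 1).
    Always returns at least one 'unit' of patience."""
    return max(1, max_iterations // (num_vars + 1))

# e.g. 100 "iterations" with 4 variables -> patience 20 -> max_fev 100
```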
Suggested change:

```rust
if let Some(max) = max_iterations {
    // LMConfig.patience controls max function evaluations via:
    //   max_fev = patience * (num_vars + 1)
    // Interpret the Python-level `max_iterations` as an approximate
    // cap on function evaluations and derive a patience value
    // accordingly, ensuring at least one "unit" of patience.
    let evals_per_unit = num_vars + 1;
    let patience = std::cmp::max(1, max / evals_per_unit);
    config.patience = patience;
}
```
docs/plans/python/expr-graph-api.md
Outdated
```python
    sqrt, sin, cos, tan, atan2,
    max, min,
    smooth_abs, clamp,
    SolverError, ConvergenceError, DimensionError,
)

# Re-export where (reserved keyword in Python, aliased in Rust)
from ._solverang import where as where_

__all__ = [
    "Expr", "SolveResult",
    "variables", "solve", "eq", "where_",
```
The __init__.py sketch claims where is a reserved Python keyword and re-exports it as where_, but where is not a Python keyword. As written, solverang.where would not exist even though the rest of the document uses sr.where(...). Consider exporting where directly (and optionally also an alias) to keep the public API consistent with the examples.
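This claim is easy to verify with the standard library:

```python
import keyword

# "where" is an ordinary identifier, so it can be exported directly:
print(keyword.iskeyword("where"))   # False
# Actual keywords, for contrast:
print(keyword.iskeyword("if"))      # True
print(keyword.iskeyword("lambda"))  # True
```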
Suggested change:

```python
    where,
    sqrt, sin, cos, tan, atan2,
    max, min,
    smooth_abs, clamp,
    SolverError, ConvergenceError, DimensionError,
)

# Re-export where; where_ is kept as an alias for compatibility
from ._solverang import where as where_

__all__ = [
    "Expr", "SolveResult",
    "variables", "solve", "eq", "where", "where_",
```
docs/notes/pyo3-api-design.md
Outdated
```rust
let residual_ops = emitter.take_ops();

// Emit jacobian opcodes
let mut emitter = OpcodeEmitter::new();
let mut pattern = Vec::new();
for (row, jac_row) in self.jacobian_exprs.iter().enumerate() {
    for (col, deriv_expr) in jac_row {
        let reg = deriv_expr.emit(&mut emitter);
        let idx = pattern.len() as u32;
        emitter.store_jacobian_indexed(idx, reg);
        pattern.push(JacobianEntry { row: row as u32, col: *col });
    }
}
let jacobian_ops = emitter.take_ops();

let compiled = CompiledConstraints {
    residual_ops,
    jacobian_ops,
    n_residuals: self.residual_exprs.len(),
    n_vars: self.num_vars,
    jacobian_nnz: pattern.len(),
    jacobian_pattern: pattern,
```
This section uses OpcodeEmitter::take_ops() and OpcodeEmitter::store_jacobian_indexed(), but the current OpcodeEmitter API in crates/solverang/src/jit/lower.rs exposes into_ops() and store_jacobian(...)/store_jacobian_current(...) (and internally manages the Jacobian pattern). Updating the sketch to match the existing emitter API will reduce implementation friction and avoid copy/paste errors.
Suggested change:

```rust
let residual_ops = emitter.into_ops();

// Emit jacobian opcodes
let mut emitter = OpcodeEmitter::new();
for (row, jac_row) in self.jacobian_exprs.iter().enumerate() {
    for (col, deriv_expr) in jac_row {
        let reg = deriv_expr.emit(&mut emitter);
        // The emitter manages the Jacobian pattern internally.
        emitter.store_jacobian(row as u32, *col, reg);
    }
}
let jacobian_ops = emitter.into_ops();

let compiled = CompiledConstraints {
    residual_ops,
    jacobian_ops,
    n_residuals: self.residual_exprs.len(),
    n_vars: self.num_vars,
```
pipeline/incremental_tests.rs (927 lines):
- Incremental solving lifecycle (skip unchanged clusters, 3-round lifecycle)
- Warm start effectiveness verification
- Structural change invalidation
- Reduce effectiveness (trivial elimination)
- Diagnostics: redundant, conflicting, under-constrained detection

pipeline/minpack_bridge_tests.rs (672 lines):
- ProblemConstraint adapter: wraps Problem trait as Constraint
- Rosenbrock, Powell Singular, Helical Valley through pipeline
- Pipeline vs direct LM solve comparison
- Multiple test problems as separate clusters

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
…-api-design-xvQgU
…-DVoX1' into claude/explore-pyo3-api-design-xvQgU
- Thread 1 (high): Overload __eq__/__ne__ to raise TypeError with a helpful message guiding users to sr.eq()/sr.ne(). Prevents silent bugs from Python's default identity comparison.
- Thread 2: Fix PyO3/numpy versions from 0.23 to 0.22 (current releases).
- Thread 3: Add note that .unwrap() in sketches is a placeholder; the real implementation uses PyResult/PyErr propagation.
- Thread 4: Clarify that constrain_* methods return None while entity creation methods (add_point) return handles/indices.
- Thread 5: Add safe_div(a, b, fill=0.0) helper for guarded division in both-branches-evaluated Select nodes. Add sr.eq()/sr.ne() functions.
- Thread 6 (P1): Document that Select can optionally lower to basic blocks with conditional jumps for domain-restricted branches. Branchless (cmov/csel) remains the default for solver workloads. Add safe_div/safe_sqrt helpers as practical mitigation.
- Thread 7: Fix variables() index collision with a global AtomicU32 allocator. Separate variables() calls now get unique indices. Add reset_variables().
- Thread 8: Accept x0 as either a Python list or numpy array via an InitialPoint FromPyObject enum, matching the documented examples.
- Thread 9: Error explicitly when both residuals and equations are provided, instead of silently ignoring equations.
- Thread 10: Translate max_iterations to LM patience correctly: patience = max(1, max_iterations / (num_vars + 1)).
- Thread 11: Fix incorrect claim that 'where' is a Python keyword. Export it directly as sr.where() with where_ as an optional alias.
- Thread 12: Fix take_ops() → into_ops() and store_jacobian_indexed() → store_jacobian() to match the actual OpcodeEmitter API.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
Re-evaluates all testing strategies in light of the V3 architecture changes. Identifies critical gaps (no Sketch2D/3D/Assembly constraint correctness tests, no Jacobian verification, no cross-module integration) and proposes a 4-phase plan:

- Phase 1: Constraint correctness + Jacobian verification (~2,700 LOC)
- Phase 2: Integration tests across Sketch2D/3D/Assembly/Pipeline (~2,400 LOC)
- Phase 3: Internals + property-based tests (~3,650 LOC)
- Phase 4: Performance benchmarks + regression snapshots (~1,600 LOC)

Total: ~10,350 new test LOC covering ~432 test cases, bringing the test-to-production ratio from 0.08:1 to 0.59:1 for V3 modules.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
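The Jacobian verification this plan calls for is typically done by comparing analytic entries against central finite differences; a minimal sketch (function names are illustrative, not the plan's actual harness):

```python
def numeric_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m, the standard
    cross-check for hand-written analytic Jacobians."""
    n, fx = len(x), f(x)
    jac = [[0.0] * n for _ in range(len(fx))]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(len(fx)):
            jac[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return jac

# Example: distance-squared style residual d(x) = (x0 - x1)^2,
# whose analytic Jacobian row is [2*(x0 - x1), -2*(x0 - x1)].
f = lambda x: [(x[0] - x[1]) ** 2]
J = numeric_jacobian(f, [3.0, 1.0])  # analytic row: [4.0, -4.0]
```

A constraint's analytic Jacobian passes when every entry agrees with `numeric_jacobian` to within a tolerance scaled by `eps`.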