Explore PyO3 Python API designs for solverang bindings #5

Open

akiselev wants to merge 15 commits into master from
claude/explore-pyo3-api-design-xvQgU
Conversation


@akiselev akiselev commented Feb 7, 2026

Analyzes five distinct API design approaches (callback-based, pre-built types,
hybrid, protocol-based, decorator-based) with concrete code examples, Rust
implementation sketches, and a comparison matrix evaluating performance,
flexibility, and pythonic-ness tradeoffs.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS

claude added 10 commits February 7, 2026 03:49
Implements the complete architecture from docs/plans/solver-first-v3.md:

Tier 1 - Core Infrastructure:
- id.rs: Generational index types (ParamId, EntityId, ConstraintId, ClusterId)
- param/store.rs: Central ParamStore with alloc/free/fix/unfix and SolverMapping
- entity/mod.rs: Entity trait (named groups of parameters)
- constraint/mod.rs: Constraint trait (residuals + Jacobian via ParamId)
- graph/: Bipartite constraint graph, RigidCluster, decompose wrapper
- solve/sub_problem.rs: ReducedSubProblem bridges new traits to existing Problem
- system.rs: ConstraintSystem coordinator with solve pipeline

Tier 2 - Differentiating Features:
- reduce/: Fixed-param substitution, coincident merging, trivial elimination
- graph/redundancy.rs: Jacobian rank analysis for redundant/conflicting constraints
- graph/dof.rs: Per-entity degrees-of-freedom via null space analysis
- dataflow/: ChangeTracker for incremental solving, SolutionCache for warm starts

Tier 3 - Advanced Solving:
- solve/drag.rs: Null-space projection for under-constrained interactive drag
- solve/branch.rs: Multi-branch selection (closest-to-previous, smallest-residual)
- graph/pattern.rs: Solvable pattern detection (scalar, two-distances, H+V, etc.)
- solve/closed_form.rs: Analytical solvers for matched patterns

Tier 4 - Geometry Plugins:
- sketch2d/: Point2D, LineSegment2D, Circle2D, Arc2D, InfiniteLine2D entities;
  15 constraint types with squared formulations; Sketch2DBuilder
- sketch3d/: Point3D, LineSegment3D, Plane, Axis3D; 7 constraint types
- assembly/: RigidBody with quaternion orientation; Mate, Coaxial, Insert, Gear

All 875 tests pass. Existing solver infrastructure preserved and reused via
the ReducedSubProblem adapter pattern.

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
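As an aside on the Tier 1 design above, the generational-index idea behind ParamId and friends can be sketched in a few lines of Python. The `Store` API here is invented for illustration; only the (slot, generation) pattern is from the commit message.

```python
# Hypothetical sketch of a generational index: an id is (slot, generation);
# freeing a slot bumps its generation, so stale ids taken before the free
# are detected instead of silently aliasing reused storage.

class Store:
    def __init__(self):
        self.values, self.gens, self.free = [], [], []

    def alloc(self, value):
        if self.free:
            slot = self.free.pop()
            self.values[slot] = value
        else:
            slot = len(self.values)
            self.values.append(value)
            self.gens.append(0)
        return (slot, self.gens[slot])   # the "ParamId"

    def free_id(self, pid):
        slot, gen = pid
        assert self.gens[slot] == gen, "double free / stale id"
        self.gens[slot] += 1             # invalidate outstanding ids
        self.free.append(slot)

    def get(self, pid):
        slot, gen = pid
        if self.gens[slot] != gen:
            raise KeyError("stale id")
        return self.values[slot]

s = Store()
a = s.alloc(1.0)
s.free_id(a)
b = s.alloc(2.0)          # reuses slot 0 with a new generation
print(s.get(b))           # 2.0
try:
    s.get(a)              # same slot, old generation: rejected
except KeyError as e:
    print("stale:", e)
```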
Analyzes five distinct API design approaches (callback-based, pre-built types,
hybrid, protocol-based, decorator-based) with concrete code examples, Rust
implementation sketches, and a comparison matrix evaluating performance,
flexibility, and pythonic-ness tradeoffs.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
Explores using Python operator overloading (__add__, __mul__, __pow__, etc.)
to build Rust-side expression trees that are symbolically differentiated,
lowered to ConstraintOp opcodes, JIT-compiled via Cranelift, and solved
entirely in Rust with the GIL released. This is the only design that achieves
both custom user-defined math and full GIL-free native performance.

Updates recommendation from Design C (hybrid callbacks) to Design F + B
(expression graphs + pre-built geometry) as the primary API direction.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
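The tracing idea in this commit can be illustrated in plain Python: operator overloads build an expression tree, which can then be evaluated or symbolically differentiated. This is a hedged toy sketch (the real design builds the tree on the Rust side); all class names here are invented.

```python
# Toy expression graph built purely by operator overloading.
class Expr:
    def __add__(self, other): return Add(self, wrap(other))
    def __radd__(self, other): return Add(wrap(other), self)
    def __mul__(self, other): return Mul(self, wrap(other))
    def __rmul__(self, other): return Mul(wrap(other), self)

class Const(Expr):
    def __init__(self, v): self.v = v
    def eval(self, env): return self.v
    def diff(self, var): return Const(0.0)

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]
    def diff(self, var): return Const(1.0 if var == self.name else 0.0)

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)
    def diff(self, var): return Add(self.a.diff(var), self.b.diff(var))

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)
    def diff(self, var):   # product rule
        return Add(Mul(self.a.diff(var), self.b),
                   Mul(self.a, self.b.diff(var)))

def wrap(x):
    return x if isinstance(x, Expr) else Const(float(x))

x = Var("x")
f = x * x + 3 * x + 1                 # tree built by __mul__/__add__
print(f.eval({"x": 2.0}))             # 11.0
print(f.diff("x").eval({"x": 2.0}))   # 2*x + 3 at x=2 -> 7.0
```

In the real design the equivalent tree would be lowered to ConstraintOp opcodes and JIT-compiled, so nothing Python-side runs during the solve.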
Detailed 4-phase plan for Design F: operator overloading builds Rust-side
expression trees that are symbolically differentiated, JIT-compiled via
Cranelift, and solved with GIL released.

Key addition: control flow via branchless FCmp + Select opcodes (hardware
cmov/csel). Both branches are always evaluated; the result is selected
without branches. Derivatives route through Select nodes, avoiding the
NaN-contamination problem that PyTorch's torch.where has.

Phases:
1. RuntimeExpr in core crate (~1,255 LOC)
2. PyO3 bindings with maturin (~895 LOC)
3. Geometry integration (~600 LOC)
4. Extended capabilities (Exp/Ln, Parameters, batch solve)

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
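The FCmp + Select scheme can be sketched in scalar Python. This is an illustrative stand-in (`fcmp_gt`, `select`, and the leaky-ReLU-like example are invented here): both branch values are computed eagerly, then one is picked without a branch, and the derivative picks the derivative of the selected branch the same way.

```python
def fcmp_gt(a, b):
    # FCmp: produce a 0.0/1.0 mask instead of a Python bool.
    return 1.0 if a > b else 0.0

def select(mask, t, f):
    # Select: both t and f were already evaluated by the caller,
    # mirroring the both-branches-evaluated cmov/csel semantics.
    return t if mask == 1.0 else f

def leaky(x):
    m = fcmp_gt(x, 0.0)
    return select(m, x, 0.1 * x)    # both x and 0.1*x are computed

def d_leaky(x):
    # Derivatives route through Select too: take the derivative of the
    # selected branch rather than mixing both branch derivatives.
    m = fcmp_gt(x, 0.0)
    return select(m, 1.0, 0.1)

print(leaky(2.0), leaky(-2.0))      # 2.0 -0.2
print(d_leaky(2.0), d_leaky(-2.0))  # 1.0 0.1
```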
Comprehensive comparison of Python AST extraction (@sr.compile decorator) vs
operator overloading (Design F) for the solverang Python API, drawing on lessons
from JAX, PyTorch, Numba, Taichi, and SymPy. Concludes operator overloading is
the clear winner due to Jupyter/REPL/lambda support, composability, simplicity,
and industry precedent (Google abandoned AST approach with Tangent in favor of
JAX's tracing).

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
Foundation for the pluggable solve pipeline architecture:
- pipeline/types.rs: ClusterData, ClusterAnalysis, ReducedCluster, ClusterSolution
- pipeline/traits.rs: Decompose, Analyze, Reduce, SolveCluster, PostProcess traits

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
- pipeline/decompose.rs: DefaultDecompose with union-find (extracted from system.rs)
- pipeline/post_process.rs: DefaultPostProcess + DiagnosticPostProcess
- pipeline/mod.rs: module declarations
- id.rs: add Default derive to ClusterId
- lib.rs: declare pipeline module

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
- pipeline/analyze.rs: DefaultAnalyze wrapping redundancy/DOF/pattern modules
- pipeline/solve_phase.rs: DefaultSolve with closed-form + LM fallback
- Updated mod.rs with new module declarations

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
- SubstituteReducer: wraps reduce/substitute for fixed-param elimination
- MergeReducer: wraps reduce/merge for coincident-param merging
- EliminateReducer: wraps reduce/eliminate for trivial constraints
- ChainedReducer: composes reducers sequentially
- DefaultReduce: chains all three in order
- NoopReduce: passthrough for benchmarking

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
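The reducer composition described above can be sketched in Python. The names come from the commit message, but this `reduce` API and the toy cluster representation are hypothetical.

```python
class NoopReduce:
    def reduce(self, cluster):
        return cluster                      # passthrough, for benchmarking

class DropFixed:
    """Stand-in for SubstituteReducer: eliminate params marked fixed."""
    def reduce(self, cluster):
        return [p for p in cluster if not p.get("fixed")]

class ChainedReducer:
    def __init__(self, *reducers):
        self.reducers = reducers

    def reduce(self, cluster):
        for r in self.reducers:             # apply each reducer in order
            cluster = r.reduce(cluster)
        return cluster

chain = ChainedReducer(DropFixed(), NoopReduce())
cluster = [{"name": "x"}, {"name": "y", "fixed": True}]
print(chain.reduce(cluster))   # [{'name': 'x'}]
```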
Major refactor: system.rs now delegates to SolvePipeline instead of
hardcoding the solve loop. All previously orphaned V3 modules are now
wired in through the pipeline phases:

Pipeline orchestrator (pipeline/mod.rs):
- SolvePipeline: holds boxed trait impls for each phase
- PipelineBuilder: builder pattern for custom pipelines
- Orchestrator: decompose → analyze → reduce → solve → postprocess
- Integrated ChangeTracker for incremental solving
- Integrated SolutionCache for warm starts

System.rs refactored:
- Removed inline UnionFind + decompose_into_clusters (moved to pipeline)
- solve() delegates to pipeline.run()
- Added solve_incremental(), drag(), analyze_redundancy(), analyze_dof(), diagnose()
- Mutation methods notify ChangeTracker + pipeline.invalidate()

919 tests pass, 0 failures (44 new tests from pipeline modules).

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
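The orchestrator shape in this commit can be sketched as a pipeline of swappable phase objects. The phase names mirror the commit message; the API and the toy "system" are invented for illustration.

```python
class Pipeline:
    """decompose -> analyze -> reduce -> solve (per cluster) -> postprocess."""
    def __init__(self, decompose, analyze, reduce, solve, post):
        self.phases = (decompose, analyze, reduce, solve, post)

    def run(self, system):
        clusters = self.phases[0](system)
        results = []
        for cluster in clusters:
            analysis = self.phases[1](cluster)
            reduced = self.phases[2](cluster, analysis)
            results.append(self.phases[3](reduced))
        return self.phases[4](system, results)

# Toy phases: the "system" is a list of already-decomposed number clusters.
decompose = lambda sys_: sys_
analyze   = lambda c: {"size": len(c)}
reduce_   = lambda c, a: c                  # no-op reducer
solve     = lambda c: sum(c)                # stand-in "solve"
post      = lambda sys_, rs: rs

pipe = Pipeline(decompose, analyze, reduce_, solve, post)
print(pipe.run([[1, 2], [3, 4, 5]]))   # [3, 12]
```

Swapping any phase (e.g. `NoopReduce` for benchmarking, or a diagnostic post-processor) leaves the orchestration untouched, which is the point of the boxed-trait design.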
Copilot AI review requested due to automatic review settings February 7, 2026 05:56
@gemini-code-assist

Summary of Changes

Hello @akiselev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request provides an in-depth exploration of various Python API designs for the solverang Rust solver, focusing on creating a binding that is both highly performant and feels natural to Python users. It meticulously compares different architectural choices, highlighting their respective advantages and disadvantages in terms of flexibility, speed, and ease of use. The ultimate goal is to establish a robust and efficient interface that allows Python users to define and solve complex mathematical problems with the underlying power of Rust, without sacrificing Python's ergonomic benefits.

Highlights

  • Comprehensive API Design Exploration: The pull request introduces a detailed analysis of five distinct Python API design approaches for solverang's Rust bindings, evaluating each for flexibility, performance, and pythonic-ness.
  • Recommended Approach: Expression Graph + Pre-built Geometry: The core recommendation is Design F (Expression Graph via Operator Overloading) combined with Design B (Pre-built Problem Types). This hybrid approach aims to provide both custom math capabilities and high-performance, GIL-free, JIT-compiled execution with automatic Jacobian computation.
  • Detailed Implementation Plan: A phased implementation plan is provided for the recommended Expression Graph API, outlining new Rust modules, PyO3 bindings, geometry integration, and future extended capabilities, including estimated lines of code and testing strategies.
  • Automatic Symbolic Differentiation and JIT Compilation: The proposed Expression Graph API leverages Rust-side expression trees to automatically compute Jacobians via symbolic differentiation and JIT-compile the problem to native code using Cranelift, ensuring maximum performance without Python callbacks during the solve process.
Changelog
  • docs/notes/pyo3-api-design.md
    • Added a new document detailing the exploration of five distinct PyO3 Python API designs for solverang bindings.
    • Introduced key design tensions related to Rust traits, const generics, builder patterns, GIL, and copy overhead.
    • Presented five design options (Callback-Based, Pre-built Problem Types, Hybrid, Protocol-Based, Dataclass + Decorator) with Python API examples, Rust implementation sketches, pros, cons, and performance characteristics for each.
    • Included a comprehensive comparison matrix evaluating criteria such as performance, flexibility, auto Jacobian, JIT compilation, and API surface size.
    • Recommended Design F (Expression Graph via Operator Overloading) combined with Design B (Pre-built Geometry) as the optimal approach, emphasizing GIL-free, JIT-compiled performance with automatic Jacobians.
    • Detailed technical implementation aspects including error handling, const generic handling, Numpy integration, thread safety, publishing, and type stubs.
    • Provided an appendix analyzing the tradeoffs between AST extraction and operator overloading, justifying the choice of operator overloading.
  • docs/plans/python/expr-graph-api.md
    • Added a new implementation plan document for the recommended Expression Graph Python API (Design F).
    • Outlined the architecture, detailing the flow from Python operator overloads to Rust-side expression trees, symbolic differentiation, JIT compilation, and solving.
    • Defined Phase 1: RuntimeExpr in the core solverang crate, covering enum definition, symbolic differentiation, simplification, opcode emission, direct evaluation, display, and ExprProblem implementation.
    • Specified new FCmp and Select opcodes and their Cranelift translation for branchless control flow.
    • Defined Phase 2: PyO3 bindings in a new solverang-python crate, including PyExpr with operator overloads, module-level functions (variables, solve, where, math functions), SolveResult, and Python package structure.
    • Defined Phase 3: Geometry integration, enabling ConstraintSystem2D/3D and composition with expression-based custom constraints.
    • Outlined future extended capabilities (Phase 4) such as Exp/Ln opcodes, Parameter type, callback fallback, batch solve, and expression caching.
    • Detailed the control flow design using the where() function, illustrating its differentiation and opcode emission.
    • Provided a comprehensive testing strategy covering Rust unit tests, Python integration tests, and property-based tests.
    • Summarized file layout and estimated lines of code for new and modified components.
    • Identified risks and proposed mitigations for the implementation.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces two exceptionally detailed and well-structured design documents for the Python API of solverang. The analysis of different API designs in pyo3-api-design.md is thorough, and the final recommendation for an expression-graph-based API (Design F) is well-justified, balancing performance and flexibility. The implementation plan in expr-graph-api.md is comprehensive and provides a clear path forward. My review includes a few suggestions to clarify minor inconsistencies in the design documents and to address a potentially critical usability issue regarding equality comparisons in the expression API. Overall, this is excellent work that lays a solid foundation for the Python bindings.

Comment on lines +769 to +784

```rust
// ─── Comparison operators (return Expr, not bool) ───
// Python's __gt__ etc. must return a type that Python can use.
// We return PyExpr wrapping a Compare node.
// IMPORTANT: __eq__ and __ne__ returning non-bool breaks hashing;
// we use named methods instead of overloading __eq__/__ne__.

fn __gt__(&self, other: ExprOrFloat) -> Self {
    PyExpr {
        inner: RuntimeExpr::Compare {
            a: Box::new(self.inner.clone()),
            b: Box::new(other.into_expr()),
            cond: CmpCondition::Gt,
        },
        name: None,
    }
}
```

Severity: high

The plan correctly identifies that overloading __eq__ and __ne__ to return a non-boolean is problematic. However, not overloading them at all can lead to subtle bugs. For example, a user might write sr.where(x == y, ...) which would use Python's default object identity comparison and likely always evaluate to False, instead of building a comparison node in the expression graph. This can be very confusing.

To prevent this, consider one of two approaches:

  1. Overload __eq__ and __ne__ to return a PyExpr (like __gt__), and strongly document that the resulting expression object cannot be used in a standard Python if statement. This is the approach taken by libraries like SymPy.
  2. Overload __eq__ and __ne__ to raise a TypeError with a helpful message guiding the user to use the explicit sr.eq(a, b) function. This prevents silent failures and makes the API safer.

The current plan leaves this ambiguous and could lead to a poor user experience.
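The second approach (raising with guidance) can be sketched in a few lines. This is a hypothetical illustration, not the planned PyO3 code: the `Expr` stub and the error message are invented.

```python
class Expr:
    """Minimal stand-in for the planned PyExpr wrapper (names assumed)."""
    def __init__(self, name):
        self.name = name

    def __gt__(self, other):
        # The real implementation would build a Compare node; stubbed here.
        return Expr(f"({self.name} > {other})")

    def __eq__(self, other):
        # Fail loudly instead of falling back to identity comparison.
        raise TypeError(
            "Expr does not support ==; use sr.eq(a, b) to build an "
            "equality constraint in the expression graph."
        )

    __ne__ = __eq__
    __hash__ = None  # unhashable, consistent with overriding __eq__

x = Expr("x")
try:
    x == 0                       # silent identity comparison is impossible
except TypeError as e:
    print("raised:", e)
```

The cost is that `Expr` instances can no longer live in sets or dict keys, which is worth flagging in the API docs.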

Comment on lines +51 to +52

```toml
pyo3 = { version = "0.23", features = ["extension-module", "abi3-py39"] }
numpy = "0.23"
```

Severity: medium

The versions for pyo3 and numpy are specified as 0.23, but the latest released versions are 0.21.x. While this might be a placeholder for a future target, it could be confusing. Consider using the current latest versions (e.g., 0.21) or adding a comment to clarify that this is a future target.

Comment on lines +163 to +166

```rust
.call1(py, (array,))
    .unwrap()
    .extract::<Vec<f64>>(py)
    .unwrap()
```

Severity: medium

The Rust implementation sketches use .unwrap() for brevity, which is understandable. However, to avoid promoting a risky practice, it would be beneficial to add a small note here or in the introduction to the sketches, mentioning that these would be replaced with proper error handling (e.g., propagating PyErr) in the actual implementation, as detailed in the later error handling strategy section.

Comment on lines +725 to +726

- **Imperative geometry API loses Rust builder's fluent chaining** (methods return
  `None` in Python, not `self`)

Severity: medium

There's a minor inconsistency in the description of Design C's imperative geometry API. The 'Cons' section states that methods return None, losing fluent chaining. However, the add_point method on line 606 is shown to return a usize index, which is a very useful pattern. It would be clearer to state that builder methods like constrain_* return None, while entity creation methods like add_point return a handle/index.

Comment on lines +1462 to +1473
### Both-Branches-Safe Requirement

Because both branches are always evaluated, they must not produce NaN/Inf
for any input. For example:

```python
# DANGEROUS: division by zero in false branch when x > 0
r = sr.where(x > 0, x, 1.0 / x) # 1/x is computed even when x > 0

# SAFE: guard the denominator
r = sr.where(x > 0, x, 1.0 / sr.where(x != 0, x, 1.0))
```

Severity: medium

The 'Both-Branches-Safe Requirement' section provides a good example of a dangerous division and a safe alternative. The safe version 1.0 / sr.where(x != 0, x, 1.0) is clever but relies on an __ne__ overload (which has issues as noted in another comment) and might be complex for users to discover.

To improve usability and safety, consider providing a safe_div(a, b) helper function in the library. This function could encapsulate the logic of checking for a zero denominator, making user code both cleaner and less error-prone. For example: r = sr.where(x > 0, x, sr.safe_div(1.0, x)).
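The suggested helper can be sketched against plain floats. This is a hypothetical illustration of `safe_div` under the both-branches-evaluated Select semantics; the `fill` parameter and zero guard are assumptions, not part of the current plan.

```python
def where(cond, a, b):
    # Stand-in for sr.where: both a and b are already evaluated by the
    # caller (Python evaluates arguments eagerly), mirroring the Select
    # opcode's both-branches-evaluated semantics.
    return a if cond else b

def safe_div(a, b, fill=0.0):
    # Guard the denominator so the division itself never sees zero,
    # then select the fill value whenever b == 0.
    guarded = where(b != 0.0, b, 1.0)
    return where(b != 0.0, a / guarded, fill)

print(safe_div(1.0, 2.0))   # 0.5
print(safe_div(1.0, 0.0))   # 0.0 (fill), no ZeroDivisionError
```

A `safe_sqrt` with the same shape (guard the argument, then select) would cover the other common domain restriction.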

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e3cc320f77


```rust
},

/// Branchless conditional select: dst = condition != 0 ? true_val : false_val.
/// Both operands are always evaluated. Lowers to Cranelift select (cmov/csel).
```

P1: Avoid evaluating inactive Select branches

Defining Select as always evaluating both operands breaks guarded piecewise expressions, e.g. where(x > 0, sqrt(x), 0) or where(den != 0, num/den, 0): the non-selected branch can still emit NaN/Inf, which then pollutes residual/Jacobian evaluation and can make solves diverge. Since this design is intended to power Python control-flow expressions, Select needs short-circuit/lazy branch semantics (or an equivalent masking strategy) to preserve correctness for domain-restricted branches.
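The failure mode Codex describes can be reproduced in a few lines of plain Python. `select` and `sqrt_or_nan` are illustrative stand-ins; the point is that the unselected branch's value can be masked out, but a naively mixed derivative cannot, because 0 * NaN is still NaN.

```python
import math

def select(cond, a, b):
    # Branchless select: both a and b were eagerly evaluated by the caller.
    return a if cond else b

def sqrt_or_nan(x):
    return math.sqrt(x) if x >= 0 else float("nan")

x = -4.0
# Eager (both-branches) semantics: the sqrt branch runs even though it
# is not selected. The selected VALUE is still fine:
r = select(x > 0, sqrt_or_nan(x), 0.0)
print(r)   # 0.0

# The danger is the derivative: d/dx sqrt(x) = 1/(2*sqrt(x)) is NaN for
# x < 0, and a chain rule that linearly combines both branch derivatives
# (as torch.where's backward does) contaminates the gradient:
d_sqrt = 1.0 / (2.0 * sqrt_or_nan(x))   # nan
grad = 0.0 * d_sqrt + 1.0 * 0.0         # 0 * nan = nan
print(grad)   # nan
```

This is why the plan's choice to route derivatives through Select nodes (picking one branch derivative, not mixing both) matters, and why guarded helpers like safe_div remain necessary for the residual values themselves.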


Copilot AI left a comment

Pull request overview

Adds two design/implementation planning documents outlining multiple PyO3 API approaches for solverang, with a detailed “Design F” expression-graph plan (operator overloading → symbolic diff → opcode lowering → Cranelift JIT → GIL-free solve).

Changes:

  • Added an in-depth implementation plan for an expression-graph-based Python API (Design F), including proposed Rust modules, opcodes, and PyO3 surface API.
  • Added a broader exploration document comparing five Python API design options with tradeoffs and example sketches.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.

File Description
docs/plans/python/expr-graph-api.md Detailed step-by-step implementation plan for the expression-graph API, including proposed Rust/PyO3 interfaces and phases.
docs/notes/pyo3-api-design.md Comparative exploration of multiple PyO3 API designs, culminating in a recommendation and implementation sketches.


Comment on lines +845 to +856

```rust
#[pyfunction]
#[pyo3(signature = (names, *, count=None))]
fn variables(names: &str, count: Option<usize>) -> Vec<PyExpr> {
    match count {
        Some(n) => (0..n).map(|i| PyExpr {
            inner: RuntimeExpr::Var(i as u32),
            name: Some(format!("{}_{}", names.trim(), i)),
        }).collect(),
        None => names.split_whitespace().enumerate().map(|(i, name)| PyExpr {
            inner: RuntimeExpr::Var(i as u32),
            name: Some(name.to_string()),
        }).collect(),
```
Copilot AI Feb 7, 2026

variables() assigns variable indices starting at 0 on every call. This makes expressions from separate variables() calls collide (e.g., x = variables("x"); y = variables("y") would create two Var(0) nodes), which then cannot be disambiguated at solve time. Consider introducing a variable allocator/context (e.g., VarSet/Model that hands out unique indices) or allow passing a starting offset / returning both the variables and the next free index.

Suggested change

Before:

```rust
#[pyfunction]
#[pyo3(signature = (names, *, count=None))]
fn variables(names: &str, count: Option<usize>) -> Vec<PyExpr> {
    match count {
        Some(n) => (0..n).map(|i| PyExpr {
            inner: RuntimeExpr::Var(i as u32),
            name: Some(format!("{}_{}", names.trim(), i)),
        }).collect(),
        None => names.split_whitespace().enumerate().map(|(i, name)| PyExpr {
            inner: RuntimeExpr::Var(i as u32),
            name: Some(name.to_string()),
        }).collect(),
```

After:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Global allocator for unique variable indices across all `variables` calls.
static NEXT_VAR_INDEX: AtomicU32 = AtomicU32::new(0);

#[pyfunction]
#[pyo3(signature = (names, *, count=None))]
fn variables(names: &str, count: Option<usize>) -> Vec<PyExpr> {
    match count {
        Some(n) => (0..n)
            .map(|i| {
                let idx = NEXT_VAR_INDEX.fetch_add(1, Ordering::Relaxed);
                PyExpr {
                    inner: RuntimeExpr::Var(idx),
                    name: Some(format!("{}_{}", names.trim(), i)),
                }
            })
            .collect(),
        None => names
            .split_whitespace()
            .map(|name| {
                let idx = NEXT_VAR_INDEX.fetch_add(1, Ordering::Relaxed);
                PyExpr {
                    inner: RuntimeExpr::Var(idx),
                    name: Some(name.to_string()),
                }
            })
            .collect(),
```

Comment on lines +951 to +958

```rust
#[pyo3(signature = (*, residuals=None, equations=None, x0,
    solver=None, tolerance=None, max_iterations=None))]
fn solve(
    py: Python<'_>,
    residuals: Option<Vec<PyExpr>>,
    equations: Option<Vec<PyExpr>>,
    x0: PyReadonlyArray1<'_, f64>,
    solver: Option<&str>,
```

Copilot AI Feb 7, 2026

The solve() sketch types x0 as PyReadonlyArray1<f64>, which requires a NumPy array, but the earlier examples call sr.solve(..., x0=[0.5, 0.5]) (a Python list). Either adjust the API to accept Vec<f64>/&Bound<PyAny> and convert from any sequence, or update the documentation/examples to require np.array([...], dtype=np.float64).

Comment on lines +962 to +965

```rust
let exprs: Vec<RuntimeExpr> = residuals.or(equations)
    .ok_or_else(|| PyValueError::new_err(
        "must provide 'residuals' or 'equations'"
    ))?
```

Copilot AI Feb 7, 2026

let exprs = residuals.or(equations) will silently ignore equations when both are provided. If both keyword args are allowed, it’s better to error when both are Some (and also when both are None) so users don’t get surprising behavior.

Suggested change

Before:

```rust
let exprs: Vec<RuntimeExpr> = residuals.or(equations)
    .ok_or_else(|| PyValueError::new_err(
        "must provide 'residuals' or 'equations'"
    ))?
```

After:

```rust
let expr_vec: Vec<PyExpr> = match (residuals, equations) {
    (Some(res), None) => res,
    (None, Some(eq)) => eq,
    (Some(_), Some(_)) => {
        return Err(PyValueError::new_err(
            "must provide exactly one of 'residuals' or 'equations'",
        ));
    }
    (None, None) => {
        return Err(PyValueError::new_err(
            "must provide 'residuals' or 'equations'",
        ));
    }
};
let exprs: Vec<RuntimeExpr> = expr_vec
```

```rust
"lm" | "levenberg-marquardt" => {
    let mut config = LMConfig::default();
    if let Some(tol) = tolerance { config = config.with_tol(tol); }
    if let Some(max) = max_iterations { config.patience = max; }
```

Copilot AI Feb 7, 2026

In the LM branch, max_iterations is mapped to LMConfig.patience, but patience is a factor for max function evaluations (max_fev = patience * (n + 1)), not an iteration limit. This mapping will not behave like Newton’s max_iterations; consider exposing LM’s knobs explicitly (e.g., max_evaluations / patience) or translating max_iterations into patience based on num_vars and documenting the difference.

Suggested change

Before:

```rust
if let Some(max) = max_iterations { config.patience = max; }
```

After:

```rust
if let Some(max) = max_iterations {
    // LMConfig.patience controls max function evaluations via:
    //   max_fev = patience * (num_vars + 1)
    // Interpret the Python-level `max_iterations` as an approximate
    // cap on function evaluations and derive a patience value
    // accordingly, ensuring at least one "unit" of patience.
    let evals_per_unit = num_vars + 1;
    let patience = std::cmp::max(1, max / evals_per_unit);
    config.patience = patience;
}
```

Comment on lines +1126 to +1137

```python
    sqrt, sin, cos, tan, atan2,
    max, min,
    smooth_abs, clamp,
    SolverError, ConvergenceError, DimensionError,
)

# Re-export where (reserved keyword in Python, aliased in Rust)
from ._solverang import where as where_

__all__ = [
    "Expr", "SolveResult",
    "variables", "solve", "eq", "where_",
```
Copilot AI Feb 7, 2026

The __init__.py sketch claims where is a reserved Python keyword and re-exports it as where_, but where is not a Python keyword. As written, solverang.where would not exist even though the rest of the document uses sr.where(...). Consider exporting where directly (and optionally also an alias) to keep the public API consistent with the examples.

Suggested change

Before:

```python
    sqrt, sin, cos, tan, atan2,
    max, min,
    smooth_abs, clamp,
    SolverError, ConvergenceError, DimensionError,
)

# Re-export where (reserved keyword in Python, aliased in Rust)
from ._solverang import where as where_

__all__ = [
    "Expr", "SolveResult",
    "variables", "solve", "eq", "where_",
```

After:

```python
    where,
    sqrt, sin, cos, tan, atan2,
    max, min,
    smooth_abs, clamp,
    SolverError, ConvergenceError, DimensionError,
)

# Re-export where; where_ is kept as an alias for compatibility
from ._solverang import where as where_

__all__ = [
    "Expr", "SolveResult",
    "variables", "solve", "eq", "where", "where_",
```
Comment on lines +1326 to +1347

```rust
let residual_ops = emitter.take_ops();

// Emit jacobian opcodes
let mut emitter = OpcodeEmitter::new();
let mut pattern = Vec::new();
for (row, jac_row) in self.jacobian_exprs.iter().enumerate() {
    for (col, deriv_expr) in jac_row {
        let reg = deriv_expr.emit(&mut emitter);
        let idx = pattern.len() as u32;
        emitter.store_jacobian_indexed(idx, reg);
        pattern.push(JacobianEntry { row: row as u32, col: *col });
    }
}
let jacobian_ops = emitter.take_ops();

let compiled = CompiledConstraints {
    residual_ops,
    jacobian_ops,
    n_residuals: self.residual_exprs.len(),
    n_vars: self.num_vars,
    jacobian_nnz: pattern.len(),
    jacobian_pattern: pattern,
```

Copilot AI Feb 7, 2026

This section uses OpcodeEmitter::take_ops() and OpcodeEmitter::store_jacobian_indexed(), but the current OpcodeEmitter API in crates/solverang/src/jit/lower.rs exposes into_ops() and store_jacobian(...)/store_jacobian_current(...) (and internally manages the Jacobian pattern). Updating the sketch to match the existing emitter API will reduce implementation friction and avoid copy/paste errors.

Suggested change

Before:

```rust
let residual_ops = emitter.take_ops();

// Emit jacobian opcodes
let mut emitter = OpcodeEmitter::new();
let mut pattern = Vec::new();
for (row, jac_row) in self.jacobian_exprs.iter().enumerate() {
    for (col, deriv_expr) in jac_row {
        let reg = deriv_expr.emit(&mut emitter);
        let idx = pattern.len() as u32;
        emitter.store_jacobian_indexed(idx, reg);
        pattern.push(JacobianEntry { row: row as u32, col: *col });
    }
}
let jacobian_ops = emitter.take_ops();

let compiled = CompiledConstraints {
    residual_ops,
    jacobian_ops,
    n_residuals: self.residual_exprs.len(),
    n_vars: self.num_vars,
    jacobian_nnz: pattern.len(),
    jacobian_pattern: pattern,
```

After:

```rust
let residual_ops = emitter.into_ops();

// Emit jacobian opcodes
let mut emitter = OpcodeEmitter::new();
for (row, jac_row) in self.jacobian_exprs.iter().enumerate() {
    for (col, deriv_expr) in jac_row {
        let reg = deriv_expr.emit(&mut emitter);
        // The emitter manages the Jacobian pattern internally.
        emitter.store_jacobian(row as u32, *col, reg);
    }
}
let jacobian_ops = emitter.into_ops();

let compiled = CompiledConstraints {
    residual_ops,
    jacobian_ops,
    n_residuals: self.residual_exprs.len(),
    n_vars: self.num_vars,
```
pipeline/incremental_tests.rs (927 lines):
- Incremental solving lifecycle (skip unchanged clusters, 3-round lifecycle)
- Warm start effectiveness verification
- Structural change invalidation
- Reduce effectiveness (trivial elimination)
- Diagnostics: redundant, conflicting, under-constrained detection

pipeline/minpack_bridge_tests.rs (672 lines):
- ProblemConstraint adapter: wraps Problem trait as Constraint
- Rosenbrock, Powell Singular, Helical Valley through pipeline
- Pipeline vs direct LM solve comparison
- Multiple test problems as separate clusters

https://claude.ai/code/session_01Xk2mVT751ZY4kbeuZ6iynw
…-DVoX1' into claude/explore-pyo3-api-design-xvQgU
Thread 1 (high): Overload __eq__/__ne__ to raise TypeError with helpful
message guiding users to sr.eq()/sr.ne(). Prevents silent bugs from
Python's default identity comparison.

Thread 2: Fix PyO3/numpy versions from 0.23 to 0.22 (current releases).

Thread 3: Add note that .unwrap() in sketches is placeholder; real
implementation uses PyResult/PyErr propagation.

Thread 4: Clarify that constrain_* methods return None while entity
creation methods (add_point) return handles/indices.

Thread 5: Add safe_div(a, b, fill=0.0) helper for guarded division in
both-branches-evaluated Select nodes. Add sr.eq()/sr.ne() functions.

Thread 6 (P1): Document that Select can optionally lower to basic blocks
with conditional jumps for domain-restricted branches. Branchless
(cmov/csel) remains default for solver workloads. Add safe_div/safe_sqrt
helpers as practical mitigation.

Thread 7: Fix variables() index collision with global AtomicU32 allocator.
Separate variables() calls now get unique indices. Add reset_variables().

Thread 8: Accept x0 as either Python list or numpy array via
InitialPoint FromPyObject enum, matching documented examples.

Thread 9: Error explicitly when both residuals and equations are provided,
instead of silently ignoring equations.

Thread 10: Translate max_iterations to LM patience correctly:
patience = max(1, max_iterations / (num_vars + 1)).

Thread 11: Fix incorrect claim that 'where' is a Python keyword. Export
it directly as sr.where() with where_ as optional alias.

Thread 12: Fix take_ops() → into_ops() and store_jacobian_indexed() →
store_jacobian() to match actual OpcodeEmitter API.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS
Re-evaluates all testing strategies in light of the V3 architecture
changes. Identifies critical gaps (no Sketch2D/3D/Assembly constraint
correctness tests, no Jacobian verification, no cross-module integration)
and proposes a 4-phase plan:

Phase 1: Constraint correctness + Jacobian verification (~2,700 LOC)
Phase 2: Integration tests across Sketch2D/3D/Assembly/Pipeline (~2,400 LOC)
Phase 3: Internals + property-based tests (~3,650 LOC)
Phase 4: Performance benchmarks + regression snapshots (~1,600 LOC)

Total: ~10,350 new test LOC covering ~432 test cases, bringing
test-to-production ratio from 0.08:1 to 0.59:1 for V3 modules.

https://claude.ai/code/session_01N4SoMnzNhwP9wdz9Gy3jFS