Merged: changes from all commits (28 commits)
- 453fcc9 Revert "fix Proximal extension" (alyst, Mar 22, 2025)
- abc2847 Revert "fix NLopt extension" (alyst, Mar 22, 2025)
- 56cdef1 Revert "fix exporting structs from package extensions" (alyst, Mar 22, 2025)
- 421927e types.jl: move SemOptimizer API into abstract.jl (alyst, Mar 22, 2025)
- 84bd7bd NLoptResult should not be mutable (alyst, Mar 22, 2025)
- 930e0e5 SemNLOpt: use f or f => tol pair for constraints (alyst, Mar 22, 2025)
- 230af39 NLopt: cleanup docstring (Feb 5, 2026)
- d1355a0 NLopt: update/simplify docs (Feb 5, 2026)
- 6f3ccd5 Optim.md: SemOptimizerOptim => SemOptimizer (alyst, Mar 22, 2025)
- 242c602 regulariz.md: SemOptimProx => SemOptimizer (alyst, Mar 22, 2025)
- 3e5c9ac optimizer_engine(): rename and fix signature (Jan 27, 2026)
- 9471fbe optimizer_engines(): new method (Jan 27, 2026)
- f4f9280 export optmizer_engine() (Feb 3, 2026)
- 41c1308 sem_optimizer_subtype(engine) (Feb 3, 2026)
- a1cf400 streamline engine error throwing (Feb 5, 2026)
- efd4911 SemOptimizer{E}: remove docstring (Feb 5, 2026)
- 3410853 SemOptimizer: cleanup docstrings (Jan 27, 2026)
- 6ba91f4 optimizer_engine_doc() (Feb 5, 2026)
- 2e5c9b3 fix proximal extension (Maximilian-Stefan-Ernst, Jan 30, 2026)
- e47a94f don't export SemOptimizerOptim (Feb 5, 2026)
- 309c578 SemFit: add opt_engine() to the output (Feb 5, 2026)
- 38939b7 SemOptimizerResult: streamline optim results (Feb 5, 2026)
- ccbf55f docs: fix optimizer engine docs (Jan 27, 2026)
- 1327236 docs/make.jl: disable doctest (alyst, Feb 4, 2026)
- 2f2293c optimizer.md: rename to SemOptimizerMyopt (Feb 3, 2026)
- 5cdcbb1 docs: apply suggestions (Feb 3, 2026)
- f1f453c dovs/optimizer.md: more updates for the new API (Feb 5, 2026)
- 1f8d2a9 .gh/FormatCheck: run on pull_request (Feb 6, 2026)
5 changes: 2 additions & 3 deletions .github/workflows/FormatCheck.yml
@@ -1,8 +1,7 @@
 name: Format suggestions
 on:
-  pull_request_target:
-    # this argument is not required if you don't use the `suggestion-label` input
-    types: [ opened, reopened, synchronize, labeled, unlabeled ]
+  push: # Runs on all pushes to any branch
+  pull_request: # Runs on all PR events (open, sync, reopen)
 jobs:
   code-style:
     runs-on: ubuntu-latest
2 changes: 2 additions & 0 deletions Project.toml
@@ -8,6 +8,7 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41"
+InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
 LazyArtifacts = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
 LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
@@ -30,6 +31,7 @@ StenoGraphs = "0.2 - 0.3, 0.4.1 - 0.5"
 DataFrames = "1"
 Distributions = "0.25"
 FiniteDiff = "2"
+InteractiveUtils = "1.11.0"
 LineSearches = "7"
 NLSolversBase = "7"
 NLopt = "0.6, 1"
8 changes: 7 additions & 1 deletion docs/make.jl
@@ -1,7 +1,12 @@
-using Documenter, StructuralEquationModels
+using Documenter, StructuralEquationModels, NLopt, ProximalAlgorithms, ProximalOperators

 SEM = StructuralEquationModels
+SEMNLOptExt = Base.get_extension(StructuralEquationModels, :SEMNLOptExt)
+SEMProximalOptExt = Base.get_extension(StructuralEquationModels, :SEMProximalOptExt)

 makedocs(
     sitename = "StructuralEquationModels.jl",
+    modules = [SEM, SEMNLOptExt, SEMProximalOptExt],
     pages = [
         "index.md",
         "Tutorials" => [
@@ -60,6 +65,7 @@ makedocs(
         collapselevel = 1,
     ),
     doctest = false,
+    checkdocs = :none,
 )

 # doctest(StructuralEquationModels, fix=true)
71 changes: 40 additions & 31 deletions docs/src/developer/optimizer.md
@@ -1,74 +1,83 @@
 # Custom optimizer types

-The optimizer part of a model connects it to the optimization backend.
-Let's say we want to implement a new optimizer as `SemOptimizerName`. The first part of the implementation is very similar to loss functions, so we just show the implementation of `SemOptimizerOptim` here as a reference:
+The optimizer part of a model connects it to the optimization engine.
+Let's say we want to implement a new optimizer as `SemOptimizerMyopt`.

 ```julia
 ############################################################################################
 ### Types and Constructor
 ############################################################################################
-mutable struct SemOptimizerName{A, B} <: SemOptimizer{:Name}
+struct SemOptimizerMyopt{A, B} <: SemOptimizer{:Myopt}
     algorithm::A
     options::B
 end

-SemOptimizer{:Name}(args...; kwargs...) = SemOptimizerName(args...; kwargs...)
+SEM.sem_optimizer_subtype(::Val{:Myopt}) = SemOptimizerMyopt

-SemOptimizerName(;
-    algorithm = LBFGS(),
-    options = Optim.Options(; f_reltol = 1e-10, x_abstol = 1.5e-8),
+SemOptimizerMyopt(;
+    algorithm = ...,
+    options = ...,
     kwargs...,
-) = SemOptimizerName(algorithm, options)
+) = SemOptimizerMyopt(algorithm, options)
+
+struct MyoptResult{O <: SemOptimizerMyopt} <: SEM.SemOptimizerResult{O}
+    optimizer::O
+    ...
+end

 ############################################################################################
 ### Recommended methods
 ############################################################################################

-update_observed(optimizer::SemOptimizerName, observed::SemObserved; kwargs...) = optimizer
+update_observed(optimizer::SemOptimizerMyopt, observed::SemObserved; kwargs...) = optimizer

 ############################################################################################
 ### additional methods
 ############################################################################################

-algorithm(optimizer::SemOptimizerName) = optimizer.algorithm
-options(optimizer::SemOptimizerName) = optimizer.options
+options(optimizer::SemOptimizerMyopt) = optimizer.options
 ```

-Note that your optimizer is a subtype of `SemOptimizer{:Name}`, where you can choose a `:Name` that can later be used as a keyword argument to `fit(engine = :Name)`.
-Similarly, `SemOptimizer{:Name}(args...; kwargs...) = SemOptimizerName(args...; kwargs...)` should be defined as well as a constructor that uses only keyword arguments:
+Note that `SemOptimizerMyopt` is defined as a subtype of [`SemOptimizer{:Myopt}`](@ref SEM.SemOptimizer),
+and `SEM.sem_optimizer_subtype(::Val{:Myopt})` returns `SemOptimizerMyopt`.
+This instructs *SEM.jl* to use `SemOptimizerMyopt` when `:Myopt` is specified as the engine for
+model fitting: `fit(..., engine = :Myopt)`.

-```julia
-SemOptimizerName(;
-    algorithm = LBFGS(),
-    options = Optim.Options(; f_reltol = 1e-10, x_abstol = 1.5e-8),
-    kwargs...,
-) = SemOptimizerName(algorithm, options)
-```
 A method for `update_observed` and additional methods might be useful, but are not necessary.

-Now comes the substantive part: We need to provide a method for `fit`:
+Now comes the essential part: we need to provide the [`fit`](@ref) method with `SemOptimizerMyopt`
+as the first positional argument.

 ```julia
 function fit(
-    optim::SemOptimizerName,
+    optim::SemOptimizerMyopt,
     model::AbstractSem,
     start_params::AbstractVector;
     kwargs...,
 )
-    optimization_result = ...
+    # ... prepare the Myopt optimization problem

-    ...
+    myopt_res = ...   # fit the problem with the Myopt engine
+    minimum = ...     # extract the minimum from myopt_res
+    minimizer = ...   # extract the solution (parameter estimates)
+    optim_result = MyoptResult(optim, myopt_res, ...) # store the original Myopt result and params

-    return SemFit(minimum, minimizer, start_params, model, optimization_result)
+    return SemFit(minimum, minimizer, start_params, model, optim_result)
 end
 ```

-The method has to return a `SemFit` object that consists of the minimum of the objective at the solution, the minimizer (aka parameter estimates), the starting values, the model and the optimization result (which may be anything you desire for your specific backend).
+This method is responsible for converting the SEM into the format required by your optimization engine,
+running the optimization, extracting the solution and returning the `SemFit` object, which should package:
+* the minimum of the objective at the solution
+* the minimizer (the vector of the SEM parameter estimates)
+* the starting values
+* the SEM model
+* the `MyoptResult` object with any relevant engine-specific details you want to preserve

-In addition, you might want to provide methods to access properties of your optimization result:
+In addition, you might want to provide methods to access engine-specific properties stored in `MyoptResult`:

 ```julia
-optimizer(res::MyOptimizationResult) = ...
-n_iterations(res::MyOptimizationResult) = ...
-convergence(res::MyOptimizationResult) = ...
-```
+algorithm_name(res::MyoptResult) = ...
+n_iterations(res::MyoptResult) = ...
+convergence(res::MyoptResult) = ...
+```
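Editor's note: a short usage sketch of how the pieces above fit together. `model` is a placeholder for any `AbstractSem` built elsewhere, and the two-argument `fit(optimizer, model)` call assumes default starting values are available; treat both as assumptions rather than documented API.

```julia
using StructuralEquationModels

# sketch only: assumes the SemOptimizerMyopt methods above are defined
# and `model` is an AbstractSem built elsewhere (hypothetical placeholder)
opt = SemOptimizer(engine = :Myopt)  # resolved via SEM.sem_optimizer_subtype(Val(:Myopt))
myfit = fit(opt, model)              # dispatches to the fit() method defined above

# equivalently, select the engine by keyword:
myfit = fit(model, engine = :Myopt)
```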
2 changes: 1 addition & 1 deletion docs/src/performance/simulation.md
@@ -67,7 +67,7 @@ For example,

 new_observed = SemObservedData(;data = data_2, specification = partable)

-my_optimizer = SemOptimizerOptim()
+my_optimizer = SemOptimizer()

 new_optimizer = update_observed(my_optimizer, new_observed)
 ```
41 changes: 15 additions & 26 deletions docs/src/tutorials/backends/nlopt.md
@@ -1,47 +1,36 @@
 # Using NLopt.jl

-[`SemOptimizerNLopt`](@ref) implements the connection to `NLopt.jl`.
-It is only available if the `NLopt` package is loaded alongside `StructuralEquationModels.jl` in the running Julia session.
-It takes a bunch of arguments:
+When [`NLopt.jl`](https://github.com/jump-dev/NLopt.jl) is loaded in the running Julia session,
+it can be used by the [`SemOptimizer`](@ref) by specifying `engine = :NLopt`
+(see [NLopt-specific options](@ref SEMNLOptExt.SemOptimizerNLopt)).
+Among other things, `NLopt` enables constrained optimization of SEMs, which is
+explained in the [Constrained optimization](@ref) section.

-```julia
-• algorithm: optimization algorithm
-• options::Dict{Symbol, Any}: options for the optimization algorithm
-• local_algorithm: local optimization algorithm
-• local_options::Dict{Symbol, Any}: options for the local optimization algorithm
-• equality_constraints::Vector{NLoptConstraint}: vector of equality constraints
-• inequality_constraints::Vector{NLoptConstraint}: vector of inequality constraints
-```
-Constraints are explained in the section on [Constrained optimization](@ref).
-
-The defaults are LBFGS as the optimization algorithm and the standard options from `NLopt.jl`.
-We can choose something different:
+We can override the default *NLopt* algorithm (LBFGS) and instead use
+the *augmented Lagrangian* method with LBFGS as the *local* optimization algorithm,
+stop at a maximum of 200 evaluations and use a relative tolerance of
+the objective value of `1e-6` as the stopping criterion for the local algorithm:

 ```julia
 using NLopt

-my_optimizer = SemOptimizerNLopt(;
+my_optimizer = SemOptimizer(;
+    engine = :NLopt,
     algorithm = :AUGLAG,
     options = Dict(:maxeval => 200),
     local_algorithm = :LD_LBFGS,
     local_options = Dict(:ftol_rel => 1e-6)
 )
 ```

-This uses an augmented lagrangian method with LBFGS as the local optimization algorithm, stops at a maximum of 200 evaluations and uses a relative tolerance of the objective value of `1e-6` as the stopping criterion for the local algorithm.
-
 To see how to use the optimizer to actually fit a model now, check out the [Model fitting](@ref) section.

-In the NLopt docs, you can find explanations about the different [algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/) and a [tutorial](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/) that also explains the different options.
+In the *NLopt* docs, you can find details about the [optimization algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/),
+and the [tutorial](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/) that demonstrates how to tweak their behavior.

-To choose an algorithm, just pass its name without the 'NLOPT\_' prefix (for example, 'NLOPT\_LD\_SLSQP' can be used by passing `algorithm = :LD_SLSQP`).
+To choose an algorithm, just pass its name without the `NLOPT_` prefix (for example, `NLOPT_LD_SLSQP` can be used by passing `algorithm = :LD_SLSQP`).

-The README of the [julia package](https://github.com/JuliaOpt/NLopt.jl) may also be helpful, and provides a list of options:
+The *README* of [*NLopt.jl*](https://github.com/JuliaOpt/NLopt.jl) may also be helpful, and provides a list of options:

 - `algorithm`
 - `stopval`
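Editor's note: to connect this to model fitting, a minimal hedged sketch. `model` is assumed to be an `AbstractSem` built as in the model-construction tutorial, and the two-argument `fit` call assumes default starting values.

```julia
using StructuralEquationModels, NLopt

# sketch: plain LBFGS via the NLopt engine, no constraints
my_optimizer = SemOptimizer(engine = :NLopt, algorithm = :LD_LBFGS)
model_fit = fit(my_optimizer, model)  # `model` defined elsewhere
```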
20 changes: 10 additions & 10 deletions docs/src/tutorials/backends/optim.md
@@ -1,23 +1,23 @@
 # Using Optim.jl

-[`SemOptimizerOptim`](@ref) implements the connection to `Optim.jl`.
-It takes two arguments, `algorithm` and `options`.
-The defaults are LBFGS as the optimization algorithm and the standard options from `Optim.jl`.
-We can load the `Optim` and `LineSearches` packages to choose something different:
+[*Optim.jl*](https://github.com/JuliaNLSolvers/Optim.jl) is the default optimization engine of *SEM.jl*,
+see [`SEM.SemOptimizerOptim`](@ref) for a full list of its parameters.
+It defaults to the LBFGS optimization, but we can load the `Optim` and `LineSearches` packages
+and specify BFGS (not L-BFGS!) with a backtracking linesearch and a Hager-Zhang initial step length guess:

 ```julia
 using Optim, LineSearches

-my_optimizer = SemOptimizerOptim(
+my_optimizer = SemOptimizer(
     algorithm = BFGS(
         linesearch = BackTracking(order=3),
         alphaguess = InitialHagerZhang()
     ),
     options = Optim.Options(show_trace = true)
 )
 ```

-This optimizer will use BFGS (!not L-BFGS) with a back tracking linesearch and a certain initial step length guess. Also, the trace of the optimization will be printed to the console.
+Note that we used `options` to print the optimization progress to the console.

 To see how to use the optimizer to actually fit a model now, check out the [Model fitting](@ref) section.
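Editor's note: as a second illustration (our addition, not part of the tutorial), a hedged sketch of a Newton configuration. `Newton()` and the `Optim.Options` fields shown are standard Optim.jl API; whether a given SEM loss supplies the Hessian that Newton needs depends on the model setup.

```julia
using Optim

# sketch: second-order Newton method with explicit stopping criteria,
# using the default :Optim engine of SemOptimizer
newton_optimizer = SemOptimizer(
    algorithm = Newton(),
    options = Optim.Options(iterations = 500, g_tol = 1e-8),
)
```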
25 changes: 14 additions & 11 deletions docs/src/tutorials/concept.md
@@ -21,13 +21,13 @@ So everything that can be used as the 'observed' part has to be of type `SemObserved`

 Here is an overview on the available building blocks:

-|[`SemObserved`](@ref) | [`SemImplied`](@ref) | [`SemLossFunction`](@ref) | [`SemOptimizer`](@ref) |
-|---------------------------------|-----------------------|---------------------------|-------------------------------|
-| [`SemObservedData`](@ref) | [`RAM`](@ref) | [`SemML`](@ref) | [`SemOptimizerOptim`](@ref) |
-| [`SemObservedCovariance`](@ref) | [`RAMSymbolic`](@ref) | [`SemWLS`](@ref) | [`SemOptimizerNLopt`](@ref) |
-| [`SemObservedMissing`](@ref) | [`ImpliedEmpty`](@ref)| [`SemFIML`](@ref) | |
-| | | [`SemRidge`](@ref) | |
-| | | [`SemConstant`](@ref) | |
+|[`SemObserved`](@ref) | [`SemImplied`](@ref) | [`SemLossFunction`](@ref) | [`SemOptimizer`](@ref) |
+|---------------------------------|-----------------------|---------------------------|----------------------------|
+| [`SemObservedData`](@ref) | [`RAM`](@ref) | [`SemML`](@ref) | [:Optim](@ref StructuralEquationModels.SemOptimizerOptim) |
+| [`SemObservedCovariance`](@ref) | [`RAMSymbolic`](@ref) | [`SemWLS`](@ref) | [:NLopt](@ref SEMNLOptExt.SemOptimizerNLopt) |
+| [`SemObservedMissing`](@ref) | [`ImpliedEmpty`](@ref)| [`SemFIML`](@ref) | [:Proximal](@ref SEMProximalOptExt.SemOptimizerProximal) |
+| | | [`SemRidge`](@ref) | |
+| | | [`SemConstant`](@ref) | |

 The rest of this page explains the building blocks for each part. First, we explain every part and give an overview on the different options that are available. After that, the [API - model parts](@ref) section serves as a reference for detailed explanations about the different options.
 (How to stick them together to a final model is explained in the section on [Model Construction](@ref).)
@@ -52,7 +52,7 @@ Available loss functions are
 ## The optimizer part aka `SemOptimizer`
 The optimizer part of a model connects to the numerical optimization backend used to fit the model.
 It can be used to control options like the optimization algorithm, linesearch, stopping criteria, etc.
-There are currently three available backends, [`SemOptimizerOptim`](@ref) connecting to the [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) backend, [`SemOptimizerNLopt`](@ref) connecting to the [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) backend and [`SemOptimizerProximal`](@ref) connecting to [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
+There are currently three available engines (i.e., backends used to carry out the numerical optimization): [`:Optim`](@ref StructuralEquationModels.SemOptimizerOptim) connecting to [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl), [`:NLopt`](@ref SEMNLOptExt.SemOptimizerNLopt) connecting to [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl), and [`:Proximal`](@ref SEMProximalOptExt.SemOptimizerProximal) connecting to [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
 For more information about the available options see also the tutorials about [Using Optim.jl](@ref) and [Using NLopt.jl](@ref), as well as [Constrained optimization](@ref) and [Regularization](@ref).

 # What to do next
@@ -101,8 +101,11 @@ SemConstant
 ## optimizer

 ```@docs
+optimizer_engines
+optimizer_engine
+optimizer_engine_doc
 SemOptimizer
-SemOptimizerOptim
-SemOptimizerNLopt
-SemOptimizerProximal
+SEM.SemOptimizerOptim
+SEMNLOptExt.SemOptimizerNLopt
+SEMProximalOptExt.SemOptimizerProximal
 ```
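Editor's note: since `optimizer_engines` is newly exported by this PR, a hedged REPL sketch of engine discovery. The exact return value is an assumption based on the function name and the extension mechanism, not documented output.

```julia
using StructuralEquationModels

optimizer_engines()  # assumed to list the registered engine symbols, e.g. :Optim

using NLopt          # loading the package activates the SEMNLOptExt extension
optimizer_engines()  # now expected to include :NLopt as well
```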