Releases: SimplexLab/TorchJD
v0.3.1
Performance improvement patch
This patch improves the performance of the function that finds the default tensors with respect to which `backward` and `mtl_backward` should differentiate. We thank @austen260 for finding the source of the performance issue and for proposing a working solution.
Changelog
Changed
- Improved the performance of the graph traversal function called by `backward` and `mtl_backward` to find the tensors with respect to which differentiation should be done. It now visits every node at most once.
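As an illustration only (this is not TorchJD's actual implementation), here is a minimal sketch of such a traversal: it walks the autograd graph behind some output tensors, keeps a set of visited nodes so that each node is expanded at most once, and collects the leaf tensors it reaches. The helper name is hypothetical, and it relies on the `next_functions` and `variable` attributes of autograd nodes.

```python
import torch

def find_leaf_tensors(outputs: list[torch.Tensor]) -> set[torch.Tensor]:
    """Collect the leaf tensors reachable from `outputs`, visiting each autograd node at most once."""
    leaves: set[torch.Tensor] = set()
    visited = set()
    stack = [t.grad_fn for t in outputs if t.grad_fn is not None]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        if hasattr(node, "variable"):
            # AccumulateGrad nodes hold a reference to their leaf tensor.
            leaves.add(node.variable)
        else:
            stack.extend(fn for fn, _ in node.next_functions if fn is not None)
    return leaves
```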
Contributors
- @austen260
- @PierreQuinton
- @ValerianRey
v0.3.0
The interface update
This version greatly improves the interface of `backward` and `mtl_backward`, at the cost of some easy-to-fix breaking changes (some parameters of these functions have been renamed, or their order has been swapped due to becoming optional).
Downstream changes to make to keep using `backward` and `mtl_backward`:
- Rename `A` to `aggregator` or pass it as a positional argument.
- For `backward`, unless you specifically want to avoid differentiating with respect to some parameters, you can now simply use the default value of the `inputs` argument.
- For `mtl_backward`, unless you want to customize which params should be updated with a step of JD and which should be updated with a step of GD, you can now simply use the default value of the `shared_params` and of the `tasks_params` arguments.
- If you keep providing the `inputs`, `shared_params`, or `tasks_params` arguments as positional arguments, you should provide them after the aggregator.
For instance, `backward(tensors, inputs, A=aggregator)` should become `backward(tensors, aggregator)`, and `mtl_backward(losses, features, tasks_params, shared_params, A=aggregator)` should become `mtl_backward(losses, features, aggregator)`.

We thank @raeudigerRaeffi for sharing his idea of having default values for the tensors with respect to which the differentiation should be made in `backward` and `mtl_backward`, and for implementing the first working version of the function that automatically finds these parameters from the autograd graph.
Changelog
Added
- Added a default value to the `inputs` parameter of `backward`. If not provided, the `inputs` will default to all leaf tensors that were used to compute the `tensors` parameter. This is in line with the behavior of `torch.autograd.backward`.
- Added a default value to the `shared_params` and to the `tasks_params` arguments of `mtl_backward`. If not provided, the `shared_params` will default to all leaf tensors that were used to compute the `features`, and the `tasks_params` will default to all leaf tensors that were used to compute each of the `losses`, excluding those used to compute the `features` (see the sketch after this list).
- Note in the documentation about the incompatibility of `backward` and `mtl_backward` with tensors that retain grad.
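A minimal sketch of what these defaults resolve to in a small two-task setup (the modules, losses, and the `UPGrad` aggregator import are illustrative assumptions):

```python
import torch
from torchjd import mtl_backward
from torchjd.aggregation import UPGrad  # assumed import path for an aggregator

shared = torch.nn.Linear(10, 5)  # its parameters become the default shared_params
head1 = torch.nn.Linear(5, 1)    # its parameters become the default tasks_params for task 1
head2 = torch.nn.Linear(5, 1)    # its parameters become the default tasks_params for task 2

x = torch.randn(16, 10)
features = shared(x)
loss1 = head1(features).mean()
loss2 = head2(features).mean()

# shared_params defaults to the leaf tensors used to compute `features`;
# tasks_params defaults, for each loss, to the leaf tensors used to compute
# that loss, excluding those used to compute `features`.
mtl_backward([loss1, loss2], features, UPGrad())
```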
Changed
- BREAKING: Changed the name of the parameter `A` to `aggregator` in `backward` and `mtl_backward`.
- BREAKING: Changed the order of the parameters of `backward` and `mtl_backward` to make it possible to have a default value for `inputs` and for `shared_params` and `tasks_params`, respectively. Usages of `backward` and `mtl_backward` that rely on the order between arguments must be updated.
- Switched to the PEP 735 dependency groups format in `pyproject.toml` (from a `[tool.pdm.dev-dependencies]` to a `[dependency-groups]` section). This should only affect development dependencies.
Fixed
- BREAKING: Added a check in `mtl_backward` to ensure that `tasks_params` and `shared_params` have no overlap. Previously, the behavior in this scenario was quite arbitrary.
v0.2.2
This version fixes a dependency-related bug and improves the documentation.
Changelog:
Added
- PyTorch Lightning integration example.
- Explanation about Jacobian descent in the README.
Fixed
- Made the dependency on `ecos` explicit in `pyproject.toml` (before `cvxpy` 1.16.0, it was installed automatically when installing `cvxpy`).
v0.2.1
This version fixes some bugs and inconveniences.
Changelog:
Changed
- Removed upper cap on `numpy` version in the dependencies. This makes `torchjd` compatible with the most recent `numpy` versions too.
Fixed
- Prevented `IMTLG` from dividing by zero during its weight rescaling step. If the input matrix consists only of zeros, it will now return a vector of zeros instead of a vector of `nan`.
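A small sketch of the fixed behavior, assuming `IMTLG` is importable from `torchjd.aggregation` and that aggregators can be called directly on a Jacobian matrix:

```python
import torch
from torchjd.aggregation import IMTLG  # assumed import path

aggregator = IMTLG()
jacobian = torch.zeros(3, 5)  # 3 objectives x 5 parameters, every gradient is zero

# Since v0.2.1, an all-zero matrix aggregates to a vector of zeros instead of NaNs.
print(aggregator(jacobian))  # tensor([0., 0., 0., 0., 0.])
```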
v0.2.0
The multi-task learning update
This version mainly introduces `mtl_backward`, enabling multi-task learning with Jacobian descent. See this new example to get started!
It also brings many improvements to the documentation, to the unit tests and to the internal code structure. Lastly, it fixes a few bugs and invalid behaviors.
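A minimal sketch of a multi-task backward pass as it looked in the v0.2.0 interface (the modules, losses, and the `UPGrad` aggregator import are illustrative assumptions; v0.3.0, above, later renamed `A` to `aggregator` and made the parameter lists optional):

```python
import torch
from torchjd import mtl_backward
from torchjd.aggregation import UPGrad  # assumed import path for an aggregator

shared = torch.nn.Linear(10, 5)
head1, head2 = torch.nn.Linear(5, 1), torch.nn.Linear(5, 1)

x = torch.randn(16, 10)
features = shared(x)
losses = [head1(features).mean(), head2(features).mean()]

# v0.2.0-era call: task and shared parameters are passed explicitly, aggregator via `A`.
# Populates .grad of the shared parameters with an aggregation of the task Jacobians (JD),
# and .grad of each head with its own task gradient (GD).
mtl_backward(
    losses,
    features,
    [list(head1.parameters()), list(head2.parameters())],
    list(shared.parameters()),
    A=UPGrad(),
)
```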
Changelog:
Added
- `autojac` package containing the backward pass functions and their dependencies.
- `mtl_backward` function to make a backward pass for multi-task learning.
- Multi-task learning example.
Changed
- BREAKING: Moved the `backward` module to the `autojac` package. Some imports may have to be adapted.
- Improved documentation of `backward`.
Fixed
- Fixed wrong tensor device with `IMTLG` in some rare cases.
- BREAKING: Removed the possibility of populating the `.grad` field of a tensor that does not expect it when calling `backward`. If an input `t` provided to `backward` does not satisfy `t.requires_grad and (t.is_leaf or t.retains_grad)`, an error is now raised.
- BREAKING: When using `backward`, aggregations are now accumulated into the `.grad` fields of the inputs rather than replacing those fields if they already existed. This is in line with the behavior of `torch.autograd.backward`.
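A small sketch of the accumulation behavior, using the v0.2.0-era signature quoted in the v0.3.0 notes above (the losses and the `UPGrad` aggregator import are illustrative assumptions):

```python
import torch
from torchjd import backward
from torchjd.aggregation import UPGrad  # assumed import path for an aggregator

param = torch.randn(5, requires_grad=True)  # a leaf tensor, hence a valid input for backward

def compute_losses():
    return [(param ** 2).sum(), param.sum()]

# v0.2.0-era call: inputs passed explicitly, aggregator via `A`.
backward(compute_losses(), [param], A=UPGrad())
first = param.grad.clone()

# A second call accumulates into .grad instead of overwriting it,
# in line with torch.autograd.backward.
backward(compute_losses(), [param], A=UPGrad())
assert torch.allclose(param.grad, 2 * first)
```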