# Retro.jl

*REflective-bounds Trust-Region Optimizer* · v0.0.1 · Julia

A high-performance, (nearly) allocation-free trust-region optimizer for bound-constrained nonlinear problems in Julia. Works with any automatic-differentiation backend via DifferentiationInterface.jl.

## A taste of Retro

```julia
using Retro, ForwardDiff

f(x) = 100*(x[2] - x[1]^2)^2 + (1 - x[1])^2   # Rosenbrock

prob = RetroProblem(f, [-1.2, 1.0], AutoForwardDiff();
                    lb = [-5.0, -5.0], ub = [5.0, 5.0])

result = optimize(prob)

result.x               # ≈ [1.0, 1.0]
is_successful(result)  # true
```

## Features at a glance

| Feature | Details |
|---|---|
| Bound constraints | Coleman–Li reflective step with multiple reflections |
| Hessian strategies | BFGS (damped), SR1, exact Hessian via AD |
| Subspace solvers | 2-D eigenvalue, Steihaug–Toint CG, full-space |
| TR solvers | Eigenvalue or Cauchy |
| AD backends | Any backend via DifferentiationInterface (ForwardDiff, Enzyme, …) |
| Zero-allocation loops | Pre-allocated `RetroCache` workspace |
| Display modes | `Silent()`, `Iteration()`, `Final()`, `Verbose()` (with ProgressMeter) |
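To give a feel for the reflective-step idea behind the bound-constraint handling, here is a minimal, self-contained sketch of reflecting a trial step back into a box, including the repeated reflections the table mentions. This is an illustration of the general Coleman–Li reflection concept only, not Retro's internal implementation:

```julia
# Illustrative sketch (not Retro's internal code): fold a trial point
# x + d back into the box [lb, ub] by reflecting at each violated bound,
# repeating until the coordinate lands inside the box.
function reflect_step(x, d, lb, ub)
    y = x .+ d
    for i in eachindex(y)
        while y[i] < lb[i] || y[i] > ub[i]
            if y[i] < lb[i]
                y[i] = 2lb[i] - y[i]   # reflect at the lower bound
            else
                y[i] = 2ub[i] - y[i]   # reflect at the upper bound
            end
        end
    end
    return y
end

x  = [0.5, 0.5]
d  = [1.0, -2.0]              # trial step leaves the unit box on both sides
reflect_step(x, d, [0.0, 0.0], [1.0, 1.0])  # returns [0.5, 0.5]
```

Note how the second coordinate is reflected twice (first off the lower bound, then off the upper one) before it lands inside the box.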

## Acknowledgements

Retro.jl is heavily inspired by the `fides` optimizer in Python, as well as MATLAB's `lsqnonlin` solver.