From 461afdf825aff66fee0ac7f537f12afba94585b5 Mon Sep 17 00:00:00 2001
From: Amiel
Date: Sat, 28 Feb 2026 22:20:38 +0100
Subject: [PATCH 1/9] add brachistochrone

---
 ext/Descriptions/brachistochrone.md         |  55 ++++++++
 ext/JuMPModels/brachistochrone.jl           | 128 ++++++++++++++++++
 ext/MetaData/brachistochrone.jl             |  12 ++
 ext/OptimalControlModels/brachistochrone.jl |  93 +++++++++++++
 .../brachistochrone_s.jl                    |  96 +++++++++++++
 5 files changed, 384 insertions(+)
 create mode 100644 ext/Descriptions/brachistochrone.md
 create mode 100644 ext/JuMPModels/brachistochrone.jl
 create mode 100644 ext/MetaData/brachistochrone.jl
 create mode 100644 ext/OptimalControlModels/brachistochrone.jl
 create mode 100644 ext/OptimalControlModels_s/brachistochrone_s.jl

diff --git a/ext/Descriptions/brachistochrone.md b/ext/Descriptions/brachistochrone.md
new file mode 100644
index 00000000..da388cf9
--- /dev/null
+++ b/ext/Descriptions/brachistochrone.md
@@ -0,0 +1,55 @@
+The **Brachistochrone problem** is a classical benchmark in the history of the calculus of variations and optimal control.  
+It seeks the shape of a curve (or wire) connecting two points $A$ and $B$ in a vertical plane, such that a bead sliding frictionlessly under uniform gravity travels from $A$ to $B$ in the shortest possible time.  
+Posed as a challenge by Johann Bernoulli in 1696 (Bernoulli 1696, *Acta Eruditorum*), it is widely regarded as the founding problem of the calculus of variations and a forerunner of modern optimal control theory.  
+The dynamics are nonlinear: the state comprises position and velocity, the control is the path's tangent angle, and the final time $t_f$ is a decision variable to be minimised.
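Since the optimal curve is known in closed form (a cycloid, discussed in the qualitative behaviour section), it provides a convenient sanity check for any numerical solver. The following standalone sketch (Python, purely for illustration; the benchmark itself is in Julia) computes the cycloid travel time for this benchmark's endpoints, a drop of 5 m over a 10 m horizontal run, and confirms it beats a straight-line ramp:

```python
import math

g = 9.80665         # gravitational acceleration, as in the benchmark metadata
dx, dy = 10.0, 5.0  # horizontal run and vertical drop, e.g. (0, 10) -> (10, 5)

# Cycloid through the endpoints: x = R*(th - sin th), drop = R*(1 - cos th).
# Find the final parameter th* with (th - sin th)/(1 - cos th) = dx/dy by bisection.
f = lambda th: (th - math.sin(th)) * dy - (1.0 - math.cos(th)) * dx
lo, hi = 1e-6, 2.0 * math.pi   # f < 0 near 0, f > 0 at 2*pi: single crossing
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid
th = 0.5 * (lo + hi)
R = dy / (1.0 - math.cos(th))
t_cycloid = th * math.sqrt(R / g)   # travel time along the cycloid, starting from rest

# Straight-line ramp for comparison: constant acceleration g*dy/L along the chord.
L = math.hypot(dx, dy)
t_line = math.sqrt(2.0 * L**2 / (g * dy))

print(f"cycloid: {t_cycloid:.3f} s, straight line: {t_line:.3f} s")
```

The cycloid time (about 1.8 s here) also explains the initial guess `tf = 2.0` used in the models below.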
+
+### Mathematical formulation
+
+The problem can be stated as a free final time optimal control problem:
+
+```math
+\begin{aligned}
+\min_{u(\cdot), t_f} \quad & J = t_f = \int_0^{t_f} 1 \,\mathrm{d}t \\[1em]
+\text{s.t.} \quad & \dot{x}(t) = v(t) \sin u(t), \\[0.5em]
+& \dot{y}(t) = -v(t) \cos u(t), \\[0.5em]
+& \dot{v}(t) = g \cos u(t), \\[0.5em]
+& x(0) = x_0, \; y(0) = y_0, \; v(0) = v_0, \\[0.5em]
+& x(t_f) = x_f, \; y(t_f) = y_f,
+\end{aligned}
+```
+
+where $u(t)$ is the angle of the tangent to the curve measured from the vertical ($u = 0$ pointing straight down), $y$ is measured upward (the bead descends as $y$ decreases, matching the implementation below), and $g$ is the gravitational acceleration.
+
+### Qualitative behaviour
+
+This problem is a classic example of **minimum time control** with nonlinear dynamics.  
+Unlike the shortest path problem (a straight line), the optimal trajectory balances the need to minimize path length against the need to build up velocity.
+
+The analytical solution is a **cycloid**: the curve traced by a point on the rim of a circular wheel as the wheel rolls along a straight line without slipping.  
+Specifically:
+
+- **Energy conservation**: Since the system is conservative and frictionless, the speed after a vertical drop $h$ is $v = \sqrt{2gh}$ (assuming start from rest). Dropping steeply early in the trajectory therefore buys velocity for the remainder of the path.
+- **Concavity**: The optimal curve is concave up. It starts steeply (vertical tangent if $v_0 = 0$) to gain speed quickly, then flattens out near the bottom before potentially rising again to reach the target.
+- **Singularity**: At the initial point, if $v_0 = 0$, the control is degenerate, as the bead has no velocity to direct. Numerical solvers often require a small non-zero initial velocity or a robust initial guess to handle this start.
+
+The control $u(t)$ is continuous and varies smoothly to trace the cycloidal arc.
+
+### Characteristics
+
+- **Nonlinear dynamics** involving trigonometric functions of the control.
+- **Free final time** problem (time-optimal). +- Analytical solution is a **Cycloid**. +- Historically significant as the first problem solved using techniques that evolved into the Calculus of Variations. + +### References + +- Bernoulli, J. (1696). *Problema novum ad cujus solutionem Mathematici invitantur*.   +  Acta Eruditorum.   +  The original publication posing the challenge to the mathematicians of Europe, solved by Newton, Leibniz, L'Hôpital, and the Bernoulli brothers. + +- Sussmann, H. J., & Willems, J. C. (1997). *300 years of optimal control: from the brachystochrone to the maximum principle*.   +  IEEE Control Systems Magazine. [doi.org/10.1109/37.588108](https://doi.org/10.1109/37.588108)   +  A comprehensive historical review linking the classical Brachistochrone problem to modern optimal control theory and Pontryagin's Maximum Principle. + +- Dymos Examples: Brachistochrone. [OpenMDAO/Dymos](https://openmdao.github.io/dymos/examples/brachistochrone/brachistochrone.html)   +  The numerical implementation serving as the basis for the current benchmark formulation. \ No newline at end of file diff --git a/ext/JuMPModels/brachistochrone.jl b/ext/JuMPModels/brachistochrone.jl new file mode 100644 index 00000000..71319d34 --- /dev/null +++ b/ext/JuMPModels/brachistochrone.jl @@ -0,0 +1,128 @@ +""" +$(TYPEDSIGNATURES) + +Constructs a JuMP model representing the **Brachistochrone optimal control problem**. +The objective is to minimize the final time `tf` to travel between two points under gravity. +The problem uses a direct transcription (trapezoidal rule) manually implemented in JuMP. + +# Arguments + +- `::JuMPBackend`: Placeholder type to specify the JuMP backend or solver interface. +- `grid_size::Int=50`: (Keyword) Number of discretisation steps for the time grid. + +# Returns + +- `model::JuMP.Model`: A JuMP model containing the decision variables (including final time), dynamics constraints, and boundary conditions. 
+ +# Example + +```julia-repl +julia> using OptimalControlProblems +julia> using JuMP +julia> model = OptimalControlProblems.brachistochrone(JuMPBackend(); N=100) +``` + +# References + +- Dymos Brachistochrone: [https://openmdao.github.io/dymos/examples/brachistochrone/brachistochrone.html](https://openmdao.github.io/dymos/examples/brachistochrone/brachistochrone.html) +""" +function OptimalControlProblems.brachistochrone( + ::JuMPBackend, + args...; + grid_size::Int=grid_size_data(:brachistochrone), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + + # parameters + params = parameters_data(:brachistochrone, parameters) + g = params[:g] + t0 = params[:t0] + x0 = params[:x0] + y0 = params[:y0] + v0 = params[:v0] + xf = params[:xf] + yf = params[:yf] + + # model + model = JuMP.Model(args...; kwargs...) + + # N = grid_size + @expression(model, N, grid_size) + + @variables( + model, + begin + 0.1 <= tf <= 20.0, (start = 2.0) + px[0:N] + py[0:N] + v[0:N] + u[0:N] + end + ) + + + model[:time_grid] = () -> range(t0, value(tf), N+1) + model[:state_components] = ["px", "py", "v"] + model[:costate_components] = ["∂px", "∂py", "∂v"] + model[:control_components] = ["u"] + model[:variable_components] = ["tf"] + + for i in 0:N + alpha = i / N + set_start_value(px[i], x0 + alpha * (xf - x0)) + set_start_value(py[i], y0 + alpha * (yf - y0)) + set_start_value(v[i], v0 + alpha * 10.0) # Estimate speed + set_start_value(u[i], 1.57) # ~90 degrees + end + + # boundary constraints + @constraints( + model, + begin + # Start + px[0] == x0 + py[0] == y0 + v[0] == v0 + + # End + px[N] == xf + py[N] == yf + # v[N] is free + end + ) + + # dynamics + @expressions( + model, + begin + # Time step is variable: dt = (tf - t0) / N + dt, (tf - t0) / N + + # Dynamics expressions (Dymos formulation) + # dpx/dt = v * sin(u) + dpx[i = 0:N], v[i] * sin(u[i]) + + # dpy/dt = -v * cos(u) + dpy[i = 0:N], -v[i] * cos(u[i]) + + # dv/dt = g * cos(u) + dv[i = 0:N], g * cos(u[i]) + end + ) + + # 
Trapezoidal rule integration (Implicit) + @constraints( + model, + begin + ∂px[i = 1:N], px[i] == px[i - 1] + 0.5 * dt * (dpx[i] + dpx[i - 1]) + ∂py[i = 1:N], py[i] == py[i - 1] + 0.5 * dt * (dpy[i] + dpy[i - 1]) + ∂v[i = 1:N], v[i] == v[i - 1] + 0.5 * dt * (dv[i] + dv[i - 1]) + end + ) + + # objective: Minimize final time + @objective(model, Min, tf) + + return model +end \ No newline at end of file diff --git a/ext/MetaData/brachistochrone.jl b/ext/MetaData/brachistochrone.jl new file mode 100644 index 00000000..17d700d0 --- /dev/null +++ b/ext/MetaData/brachistochrone.jl @@ -0,0 +1,12 @@ +brachistochrone_meta = OrderedDict( + :grid_size => 500, + :parameters => ( + g = 9.80665, + t0 = 0.0, + x0 = 0.0, + y0 = 10.0, + v0 = 0.0, + xf = 10.0, + yf = 5.0, + ), +) \ No newline at end of file diff --git a/ext/OptimalControlModels/brachistochrone.jl b/ext/OptimalControlModels/brachistochrone.jl new file mode 100644 index 00000000..860ea7eb --- /dev/null +++ b/ext/OptimalControlModels/brachistochrone.jl @@ -0,0 +1,93 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** representing the Brachistochrone problem using the OptimalControl backend. +The function sets up the state and control variables, boundary conditions, dynamics, and the objective functional. +It then performs direct transcription to generate a discrete optimal control problem (DOCP). + +# Arguments + +- `::OptimalControlBackend`: Placeholder type to specify the OptimalControl backend or solver interface. +- `grid_size::Int=grid_size_data(:brachistochrone)`: (Keyword) Number of discretisation points for the direct transcription grid. +- `parameters::Union{Nothing,NamedTuple}=nothing`: (Keyword) Custom parameters to override defaults. + +# Returns + +- `docp`: The direct optimal control problem object, representing the discretised problem. 
+ +# Example +```julia-repl +julia> using OptimalControlProblems + +julia> docp = OptimalControlProblems.brachistochrone(OptimalControlBackend(); N=100); +``` + +# References + +- Dymos Brachistochrone: https://openmdao.github.io/dymos/examples/brachistochrone/brachistochrone.html +""" +function OptimalControlProblems.brachistochrone( + ::OptimalControlBackend, + description::Symbol...; + grid_size::Int=grid_size_data(:brachistochrone), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + + params = parameters_data(:brachistochrone, parameters) + g = params[:g] + t0 = params[:t0] + x0 = params[:x0] + y0 = params[:y0] + v0 = params[:v0] + xf = params[:xf] + yf = params[:yf] + + ocp = @def begin + tf ∈ R, variable + t ∈ [t0, tf], time + + z = (px, py, v) ∈ R³, state + u ∈ R, control + + 0.1 ≤ tf ≤ 20.0 + + px(t0) == x0 + py(t0) == y0 + v(t0) == v0 + + px(tf) == xf + py(tf) == yf + + ∂(px)(t) == v(t) * sin(u(t)) + ∂(py)(t) == -v(t) * cos(u(t)) + ∂(v)(t) == g * cos(u(t)) + + # Objectif + tf → min + end + + # initial guess: linear interpolation to match JuMP + tf_guess = 2.0 + init = ( + state = t -> [ + x0 + (t - t0) / (tf_guess - t0) * (xf - x0), + y0 + (t - t0) / (tf_guess - t0) * (yf - y0), + v0 + (t - t0) / (tf_guess - t0) * 10.0 + ], + control = 1.57, + variable = tf_guess + ) + + docp = direct_transcription( + ocp, + description...; + lagrange_to_mayer=false, + init = init, + grid_size = grid_size, + disc_method=:trapeze, + kwargs... + ) + + return docp +end \ No newline at end of file diff --git a/ext/OptimalControlModels_s/brachistochrone_s.jl b/ext/OptimalControlModels_s/brachistochrone_s.jl new file mode 100644 index 00000000..8d46e45f --- /dev/null +++ b/ext/OptimalControlModels_s/brachistochrone_s.jl @@ -0,0 +1,96 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** representing the Brachistochrone problem using the OptimalControl backend. 
+The function sets up the state and control variables, boundary conditions, dynamics, and the objective functional. +It then performs direct transcription to generate a discrete optimal control problem (DOCP). + +# Arguments + +- `::OptimalControlBackend`: Placeholder type to specify the OptimalControl backend or solver interface. +- `grid_size::Int=grid_size_data(:brachistochrone)`: (Keyword) Number of discretisation points for the direct transcription grid. +- `parameters::Union{Nothing,NamedTuple}=nothing`: (Keyword) Custom parameters to override defaults. + +# Returns + +- `docp`: The direct optimal control problem object, representing the discretised problem. + +# Example +```julia-repl +julia> using OptimalControlProblems + +julia> docp = OptimalControlProblems.brachistochrone(OptimalControlBackend(); N=100); +``` + +# References + +- Dymos Brachistochrone: https://openmdao.github.io/dymos/examples/brachistochrone/brachistochrone.html +""" +function OptimalControlProblems.brachistochrone_s( + ::OptimalControlBackend, + description::Symbol...; + grid_size::Int=grid_size_data(:brachistochrone), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + + params = parameters_data(:brachistochrone, parameters) + g = params[:g] + t0 = params[:t0] + x0 = params[:x0] + y0 = params[:y0] + v0 = params[:v0] + xf = params[:xf] + yf = params[:yf] + + # model + ocp = @def begin + + tf ∈ R, variable + t ∈ [t0, tf], time + + + z = (px, py, v) ∈ R³, state + u ∈ R, control + + + z(t0) == [x0, y0, v0] + + + px(tf) == xf + py(tf) == yf + + 0.1 ≤ tf ≤ 20.0 + + ∂(px)(t) == v(t) * sin(u(t)) + ∂(py)(t) == -v(t) * cos(u(t)) + ∂(v)(t) == g * cos(u(t)) + + tf → min + end + + # initial guess: linear interpolation to match JuMP + tf_guess = 2.0 + init = ( + state = t -> [ + x0 + (t - t0) / (tf_guess - t0) * (xf - x0), + y0 + (t - t0) / (tf_guess - t0) * (yf - y0), + v0 + (t - t0) / (tf_guess - t0) * 10.0 + ], + control = 1.57, + variable = tf_guess + ) + + # discretise the 
optimal control problem + docp = direct_transcription( + ocp, + description...; + lagrange_to_mayer=false, + init=init, + grid_size=grid_size, + disc_method=:trapeze, + kwargs..., + ) + + return docp +end \ No newline at end of file From ad28d77214a3f880e021902eef3ef094ef5e26ed Mon Sep 17 00:00:00 2001 From: Amiel Date: Sat, 28 Feb 2026 23:00:49 +0100 Subject: [PATCH 2/9] fix bug brachistochrone --- ext/JuMPModels/brachistochrone.jl | 4 +++- ext/MetaData/brachistochrone.jl | 4 +++- ext/OptimalControlModels/brachistochrone.jl | 4 +++- ext/OptimalControlModels_s/brachistochrone_s.jl | 4 +++- 4 files changed, 12 insertions(+), 4 deletions(-) diff --git a/ext/JuMPModels/brachistochrone.jl b/ext/JuMPModels/brachistochrone.jl index 71319d34..42ee02dc 100644 --- a/ext/JuMPModels/brachistochrone.jl +++ b/ext/JuMPModels/brachistochrone.jl @@ -43,6 +43,8 @@ function OptimalControlProblems.brachistochrone( v0 = params[:v0] xf = params[:xf] yf = params[:yf] + u_min = params[:u_min] + u_max = params[:u_max] # model model = JuMP.Model(args...; kwargs...) 
@@ -57,7 +59,7 @@ function OptimalControlProblems.brachistochrone( px[0:N] py[0:N] v[0:N] - u[0:N] + u_min <= u[0:N] <= u_max end ) diff --git a/ext/MetaData/brachistochrone.jl b/ext/MetaData/brachistochrone.jl index 17d700d0..23937070 100644 --- a/ext/MetaData/brachistochrone.jl +++ b/ext/MetaData/brachistochrone.jl @@ -7,6 +7,8 @@ brachistochrone_meta = OrderedDict( y0 = 10.0, v0 = 0.0, xf = 10.0, - yf = 5.0, + yf = 5.0, + u_min = -pi, + u_max = pi, ), ) \ No newline at end of file diff --git a/ext/OptimalControlModels/brachistochrone.jl b/ext/OptimalControlModels/brachistochrone.jl index 860ea7eb..cb2d2eb8 100644 --- a/ext/OptimalControlModels/brachistochrone.jl +++ b/ext/OptimalControlModels/brachistochrone.jl @@ -42,13 +42,15 @@ function OptimalControlProblems.brachistochrone( v0 = params[:v0] xf = params[:xf] yf = params[:yf] + u_min = params[:u_min] + u_max = params[:u_max] ocp = @def begin tf ∈ R, variable t ∈ [t0, tf], time z = (px, py, v) ∈ R³, state - u ∈ R, control + u ∈ [u_min, u_max], control 0.1 ≤ tf ≤ 20.0 diff --git a/ext/OptimalControlModels_s/brachistochrone_s.jl b/ext/OptimalControlModels_s/brachistochrone_s.jl index 8d46e45f..f57d2004 100644 --- a/ext/OptimalControlModels_s/brachistochrone_s.jl +++ b/ext/OptimalControlModels_s/brachistochrone_s.jl @@ -42,6 +42,8 @@ function OptimalControlProblems.brachistochrone_s( v0 = params[:v0] xf = params[:xf] yf = params[:yf] + u_min = params[:u_min] + u_max = params[:u_max] # model ocp = @def begin @@ -51,7 +53,7 @@ function OptimalControlProblems.brachistochrone_s( z = (px, py, v) ∈ R³, state - u ∈ R, control + u ∈ [u_min, u_max], control z(t0) == [x0, y0, v0] From dbdc30a41b405ec9284c976a1699832e34239dc1 Mon Sep 17 00:00:00 2001 From: Amiel Date: Sat, 28 Feb 2026 23:09:56 +0100 Subject: [PATCH 3/9] fix bug again --- ext/OptimalControlModels/brachistochrone.jl | 3 ++- ext/OptimalControlModels_s/brachistochrone_s.jl | 12 ++++++------ 2 files changed, 8 insertions(+), 7 deletions(-) diff --git 
a/ext/OptimalControlModels/brachistochrone.jl b/ext/OptimalControlModels/brachistochrone.jl index cb2d2eb8..c9309a9f 100644 --- a/ext/OptimalControlModels/brachistochrone.jl +++ b/ext/OptimalControlModels/brachistochrone.jl @@ -50,8 +50,9 @@ function OptimalControlProblems.brachistochrone( t ∈ [t0, tf], time z = (px, py, v) ∈ R³, state - u ∈ [u_min, u_max], control + u ∈ R, control + u_min ≤ u(t) ≤ u_max 0.1 ≤ tf ≤ 20.0 px(t0) == x0 diff --git a/ext/OptimalControlModels_s/brachistochrone_s.jl b/ext/OptimalControlModels_s/brachistochrone_s.jl index f57d2004..fdc8452f 100644 --- a/ext/OptimalControlModels_s/brachistochrone_s.jl +++ b/ext/OptimalControlModels_s/brachistochrone_s.jl @@ -47,23 +47,23 @@ function OptimalControlProblems.brachistochrone_s( # model ocp = @def begin - + tf ∈ R, variable t ∈ [t0, tf], time - + z = (px, py, v) ∈ R³, state - u ∈ [u_min, u_max], control + u ∈ R, control + + u_min ≤ u(t) ≤ u_max + 0.1 ≤ tf ≤ 20.0 - z(t0) == [x0, y0, v0] px(tf) == xf py(tf) == yf - 0.1 ≤ tf ≤ 20.0 - ∂(px)(t) == v(t) * sin(u(t)) ∂(py)(t) == -v(t) * cos(u(t)) ∂(v)(t) == g * cos(u(t)) From 38d1fdfc0a4b8ca38d843452ddd5615326624ddd Mon Sep 17 00:00:00 2001 From: Amiel Date: Sat, 28 Feb 2026 23:32:39 +0100 Subject: [PATCH 4/9] fix --- ext/JuMPModels/brachistochrone.jl | 2 +- ext/OptimalControlModels/brachistochrone.jl | 2 +- ext/OptimalControlModels_s/brachistochrone_s.jl | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/ext/JuMPModels/brachistochrone.jl b/ext/JuMPModels/brachistochrone.jl index 42ee02dc..85fa7edc 100644 --- a/ext/JuMPModels/brachistochrone.jl +++ b/ext/JuMPModels/brachistochrone.jl @@ -59,7 +59,7 @@ function OptimalControlProblems.brachistochrone( px[0:N] py[0:N] v[0:N] - u_min <= u[0:N] <= u_max + -1.57 <= u[0:N] <= 1.57 end ) diff --git a/ext/OptimalControlModels/brachistochrone.jl b/ext/OptimalControlModels/brachistochrone.jl index c9309a9f..8bd59b8e 100644 --- a/ext/OptimalControlModels/brachistochrone.jl +++ 
b/ext/OptimalControlModels/brachistochrone.jl @@ -52,7 +52,7 @@ function OptimalControlProblems.brachistochrone( z = (px, py, v) ∈ R³, state u ∈ R, control - u_min ≤ u(t) ≤ u_max + -1.57 ≤ u(t) ≤ 1.57 0.1 ≤ tf ≤ 20.0 px(t0) == x0 diff --git a/ext/OptimalControlModels_s/brachistochrone_s.jl b/ext/OptimalControlModels_s/brachistochrone_s.jl index fdc8452f..96f2cf31 100644 --- a/ext/OptimalControlModels_s/brachistochrone_s.jl +++ b/ext/OptimalControlModels_s/brachistochrone_s.jl @@ -55,7 +55,7 @@ function OptimalControlProblems.brachistochrone_s( z = (px, py, v) ∈ R³, state u ∈ R, control - u_min ≤ u(t) ≤ u_max + -1.57 ≤ u(t) ≤ 1.57 0.1 ≤ tf ≤ 20.0 z(t0) == [x0, y0, v0] From 02dde16ee07e60475decd887b9ba21cb253c62ea Mon Sep 17 00:00:00 2001 From: Amiel Date: Sun, 1 Mar 2026 19:16:58 +0100 Subject: [PATCH 5/9] add bryson problem --- ext/Descriptions/bryson_denham.md | 56 +++++++++++++++ ext/JuMPModels/bryson_denham.jl | 68 ++++++++++++++++++ ext/MetaData/bryson_denham.jl | 12 ++++ ext/OptimalControlModels/bryson_denham.jl | 70 ++++++++++++++++++ ext/OptimalControlModels_s/bryson_denham_s.jl | 71 +++++++++++++++++++ 5 files changed, 277 insertions(+) create mode 100644 ext/Descriptions/bryson_denham.md create mode 100644 ext/JuMPModels/bryson_denham.jl create mode 100644 ext/MetaData/bryson_denham.jl create mode 100644 ext/OptimalControlModels/bryson_denham.jl create mode 100644 ext/OptimalControlModels_s/bryson_denham_s.jl diff --git a/ext/Descriptions/bryson_denham.md b/ext/Descriptions/bryson_denham.md new file mode 100644 index 00000000..2181b366 --- /dev/null +++ b/ext/Descriptions/bryson_denham.md @@ -0,0 +1,56 @@ +The **Bryson–Denham problem** is a classic optimal control benchmark involving a state inequality constraint. +It models the movement of a unit mass on a frictionless plane, where the goal is to relocate the mass over a fixed time interval with minimum control effort, while ensuring the position never exceeds a certain limit. 
+Originally introduced in [Bryson et al. 1963](https://doi.org/10.2514/3.2107), it is a standard test case for evaluating the ability of numerical solvers to handle state-constrained trajectories.
+
+### Mathematical formulation
+
+The problem can be stated as
+
+```math
+\begin{aligned}
+\min_{x,u} \quad & J(x,u) = \int_0^1 \frac{1}{2} u^2(t) \,\mathrm{d}t \\[1em]
+\text{s.t.} \quad & \dot{x}_1(t) = x_2(t), \quad \dot{x}_2(t) = u(t), \\[0.5em]
+& x_1(0) = 0, \; x_1(1) = 0, \; x_2(0) = 1, \; x_2(1) = -1, \\[0.5em]
+& x_1(t) \le a,
+\end{aligned}
+```
+
+where $x_1$ represents position, $x_2$ velocity, $u$ acceleration (control), and $a$ is the maximum allowable displacement (e.g., $a = 1/9$).
+
+### Qualitative behaviour
+
+The problem features a **second-order state constraint** because the control $u$ only appears in the second derivative of the constrained state $x_1$:
+
+```math
+\ddot{x}_1(t) = u(t).
+```
+
+When the trajectory is on a **boundary arc** ($x_1(t) = a$), the derivatives $\dot{x}_1$ and $\ddot{x}_1$ must be zero. This implies that while the constraint is active, the velocity $x_2$ is zero and so is the control:
+
+```math
+u(t) = 0.
+```
+
+The solution structure changes based on the value of the constraint parameter $a$:
+
+* **Unconstrained case**: If $a$ is sufficiently large ($a \ge 1/4$), the state constraint is never active.
+* **Touch point**: For a critical value of $a$, the trajectory just touches the boundary $x_1 = a$ at the single point $t = 0.5$.
+* **Boundary arc**: For smaller values (e.g., $a = 1/9$), the trajectory remains on the boundary for a finite time interval. During this interval, the control vanishes.
+
+### Characteristics
+
+* Linear–quadratic dynamics with a second-order state inequality constraint.
+* Demonstrates a "bang-off-bang" control structure where the control is zero on the constraint boundary.
+* Used to test the accuracy of state constraint handling and the detection of switching times.
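For the boundary-arc regime the solution is known in closed form (see, e.g., Bryson & Ho, *Applied Optimal Control*): on the entry arc $t \in [0, 3a]$ one has $u(t) = -\tfrac{2}{3a}\,(1 - t/(3a))$, $x_2(t) = (1 - t/(3a))^2$, $x_1(t) = a\,[1 - (1 - t/(3a))^3]$, the exit arc is the mirror image, and the optimal cost is $J = \tfrac{4}{9a}$, i.e. $J = 4$ for $a = 1/9$. A quick standalone check of these formulas (Python, for illustration only):

```python
a = 1.0 / 9.0   # state bound x1 <= a
T = 3.0 * a     # entry-arc duration; the exit arc is its mirror image

# Closed-form entry arc (u = 0 on the boundary arc in between).
u  = lambda t: -(2.0 / T) * (1.0 - t / T)
x2 = lambda t: (1.0 - t / T) ** 2
x1 = lambda t: a * (1.0 - (1.0 - t / T) ** 3)

# Boundary conditions and junction with the boundary arc.
assert x1(0.0) == 0.0 and x2(0.0) == 1.0
assert abs(x1(T) - a) < 1e-12 and abs(x2(T)) < 1e-12

# Dynamics check by central finite differences: x1' = x2 and x2' = u.
h = 1e-6
for t in (0.1 * T, 0.5 * T, 0.9 * T):
    assert abs((x1(t + h) - x1(t - h)) / (2 * h) - x2(t)) < 1e-6
    assert abs((x2(t + h) - x2(t - h)) / (2 * h) - u(t)) < 1e-6

# Cost: two symmetric arcs, each worth 2/(9a); total J = 4/(9a).
N = 200_000
J_entry = sum(0.5 * u(i * T / N) ** 2 for i in range(N)) * (T / N)
J = 2.0 * J_entry
print(f"J = {J:.4f}")
```

This value ($J = 4$ for $a = 1/9$) is a useful reference when validating the discretised models below.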
+ +### References + +* Bryson, A.E., Denham, W.F., & Dreyfus, S.E. (1963). *Optimal programming problems with inequality constraints I: necessary conditions for extremal solutions*. + AIAA Journal. [doi.org/10.2514/3.2107](https://doi.org/10.2514/3.2107) +* Dymos Documentation: Bryson-Denham Problem. [dymos/examples/bryson_denham](https://openmdao.github.io/dymos/examples/bryson_denham/bryson_denham.html) \ No newline at end of file diff --git a/ext/JuMPModels/bryson_denham.jl b/ext/JuMPModels/bryson_denham.jl new file mode 100644 index 00000000..562b327a --- /dev/null +++ b/ext/JuMPModels/bryson_denham.jl @@ -0,0 +1,68 @@ +""" +$(TYPEDSIGNATURES) + +Constructs a JuMP model representing the **Bryson-Denham optimal control problem**. +The objective is to minimize the control effort while satisfying state constraints and double integrator dynamics. + +# Arguments +- `::JuMPBackend`: Placeholder type to specify the JuMP backend or solver interface. +- `grid_size::Int=500`: (Keyword) Number of discretization steps for the time grid. + +# Returns + +- `model::JuMP.Model`: A JuMP model containing the decision variables, dynamics constraints, boundary conditions, and the quadratic objective function. + +# Example + +```julia-repl +julia> using OptimalControlProblems +julia> using JuMP + +julia> model = OptimalControlProblems.bryson_denham(JuMPBackend(); N=100) +``` + +# References + +""" + +function OptimalControlProblems.bryson_denham( + ::JuMPBackend, + args...; + grid_size::Int=grid_size_data(:bryson_denham), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + params = parameters_data(:bryson_denham, parameters) + t0, tf = params[:t0], params[:tf] + x1_t0, x2_t0 = params[:x1_t0], params[:x2_t0] + x1_tf, x2_tf = params[:x1_tf], params[:x2_tf] + x1_max = params[:x1_max] + + model = JuMP.Model(args...; kwargs...) 
+ + model[:time_grid] = () -> range(t0, tf, grid_size+1) + model[:state_components] = ["x1", "x2"] + model[:control_components] = ["u"] + model[:costate_components] = ["p1", "p2"] + model[:variable_components] = [] + + N = grid_size + Δt = (tf- t0) /N + + @variable(model, x1[0:N] <= x1_max, start = 0.0) + @variable(model, x2[0:N], start = 0.0) + @variable(model, u[0:N], start = 0.0) + + @constraints(model, begin + x1[0] == x1_t0 + x2[0] == x2_t0 + x1[N] == x1_tf + x2[N] == x2_tf + p1[i = 1:N], x1[i] == x1[i-1] + 0.5* Δt * (x2[i] + x2[i-1]) + p2[i = 1:N], x2[i] == x2[i-1] + 0.5 * Δt * (u[i] + u[i-1]) + end) + + @objective(model, Min, 0.5 * Δt * sum(0.5 * (u[i]^2 + u[i-1]^2) for i in 1:N)) + + return model +end \ No newline at end of file diff --git a/ext/MetaData/bryson_denham.jl b/ext/MetaData/bryson_denham.jl new file mode 100644 index 00000000..e072ef89 --- /dev/null +++ b/ext/MetaData/bryson_denham.jl @@ -0,0 +1,12 @@ +bryson_denham_meta = OrderedDict( + :grid_size => 500, + :parameters => ( + t0 = 0.0, # Initial time + tf = 1.0, # Final time + x1_t0 = 0.0, # Initial position + x2_t0 = 1.0, # Initial velocity + x1_tf = 0.0, # Final position + x2_tf = -1.0, # Final velocity + x1_max = 1/9 # State constraint: x1(t) <= x1_max + ), +) \ No newline at end of file diff --git a/ext/OptimalControlModels/bryson_denham.jl b/ext/OptimalControlModels/bryson_denham.jl new file mode 100644 index 00000000..95ee822e --- /dev/null +++ b/ext/OptimalControlModels/bryson_denham.jl @@ -0,0 +1,70 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** representing the Bryson-Denham problem using the OptimalControl backend. +The function sets up the state and control variables, boundary conditions, dynamics, path constraints, and the objective functional. +It then performs direct transcription to generate a discrete optimal control problem (DOCP). + +# Arguments + +- `::OptimalControlBackend`: Placeholder type to specify the OptimalControl backend or solver interface. 
+- `grid_size::Int=500`: (Keyword) Number of discretisation points for the direct transcription grid. + +# Returns + +- `docp`: The direct optimal control problem object, representing the discretised problem. + +# Example + +```julia-repl +julia> using OptimalControlProblems + +julia> docp = OptimalControlProblems.bryson_denham(OptimalControlBackend(); N=100); +``` + +# References + +Bryson, A. E. and Denham, W. F., "A Steering Program for Optimal Transfer of a Thrust-Limited Rocket Between Neighboring Circular Orbits", 1962. +- Formulation inspired by OptimalControl approach for swing-up control problems. +""" + +function OptimalControlProblems.bryson_denham( + ::OptimalControlBackend, + description::Symbol...; + grid_size::Int=grid_size_data(:bryson_denham), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + params = parameters_data(:bryson_denham, parameters) + t0, tf = params[:t0], params[:tf] + x1_t0, x2_t0 = params[:x1_t0], params[:x2_t0] + x1_tf, x2_tf = params[:x1_tf], params[:x2_tf] + x1_max = params[:x1_max] + + ocp = @def begin + t ∈ [t0, tf], time + x ∈ R², state + u ∈ R, control + + x(t0) == [x1_t0, x2_t0] + x(tf) == [x1_tf, x2_tf] + + x₁(t) ≤ x1_max + + ẋ(t) == [x₂(t), u(t)] + + ∫(0.5 * u(t)^2) → min + end + + init = (state=[0.0, 0.0], control=0.0) + + return direct_transcription( + ocp, + description...; + lagrange_to_mayer=false, + init=init, + grid_size=grid_size, + disc_method=:trapeze, + kwargs..., + ) +end \ No newline at end of file diff --git a/ext/OptimalControlModels_s/bryson_denham_s.jl b/ext/OptimalControlModels_s/bryson_denham_s.jl new file mode 100644 index 00000000..248a6385 --- /dev/null +++ b/ext/OptimalControlModels_s/bryson_denham_s.jl @@ -0,0 +1,71 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** representing the Bryson-Denham problem. 
+This problem consists of minimizing the integral of the squared acceleration for a particle +required to travel a given distance within a set time, subject to a state constraint on position. + +# Arguments + +- `::OptimalControlBackend`: Placeholder type to specify the OptimalControl backend or solver interface. +- `grid_size::Int=500`: (Keyword) Number of discretization points for the direct transcription grid. + +# Returns + +- `docp`: The discretized optimal control problem (DOCP) object. + +# Example + +```julia-repl +julia> using OptimalControlProblems + +julia> docp = OptimalControlProblems.bryson_denham(OptimalControlBackend(); N=100); +``` + +# References + +- Bryson, A. E. and Denham, W. F., "A Steering Program for Optimal Transfer of a Thrust-Limited Rocket Between Neighboring Circular Orbits", 1962. +- Dymos Examples: https://openmdao.github.io/dymos/examples/bryson_denham/bryson_denham.html +""" + +function OptimalControlProblems.bryson_denham_s( + ::OptimalControlBackend, + description::Symbol...; + grid_size::Int=grid_size_data(:bryson_denham), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + params = parameters_data(:bryson_denham, parameters) + t0, tf = params[:t0], params[:tf] + x1_t0, x2_t0 = params[:x1_t0], params[:x2_t0] + x1_tf, x2_tf = params[:x1_tf], params[:x2_tf] + x1_max = params[:x1_max] + + ocp = @def begin + t ∈ [t0, tf], time + x ∈ R², state # x1 (position) et x2(speed) + u ∈ R, control + + x(t0) == [x1_t0, x2_t0] + x(tf) == [x1_tf, x2_tf] + + x₁(t) ≤ x1_max + + ∂(x₁)(t) == x₂(t) + ∂(x₂)(t) == u(t) + + ∫(0.5 * u(t)^2) → min + end + + init = (state=[0.0, 0.0], control=0.0) + + return direct_transcription( + ocp, + description...; + lagrange_to_mayer=false, + init=init, + grid_size=grid_size, + disc_method=:trapeze, + kwargs..., + ) +end From 20c5624ce9d7eb356a96808f5e01862476435026 Mon Sep 17 00:00:00 2001 From: Amiel Date: Sun, 1 Mar 2026 19:19:16 +0100 Subject: [PATCH 6/9] add balanced field problem --- 
 ext/Descriptions/balanced_field.md         |  48 +++++++
 ext/JuMPModels/balanced_field.jl           | 130 ++++++++++++++++++
 ext/MetaData/balanced_field.jl             |  29 ++++
 ext/OptimalControlModels/balanced_field.jl | 128 +++++++++++++++++
 .../balanced_field_s.jl                    | 106 ++++++++++++++
 5 files changed, 441 insertions(+)
 create mode 100644 ext/Descriptions/balanced_field.md
 create mode 100644 ext/JuMPModels/balanced_field.jl
 create mode 100644 ext/MetaData/balanced_field.jl
 create mode 100644 ext/OptimalControlModels/balanced_field.jl
 create mode 100644 ext/OptimalControlModels_s/balanced_field_s.jl

diff --git a/ext/Descriptions/balanced_field.md b/ext/Descriptions/balanced_field.md
new file mode 100644
index 00000000..d9fae05e
--- /dev/null
+++ b/ext/Descriptions/balanced_field.md
@@ -0,0 +1,48 @@
+The **Aircraft Balanced Field Length Calculation** is a classic aeronautical engineering problem used to determine the minimum runway length required for a safe takeoff or a safe stop in the event of an engine failure.
+This implementation focuses on the **takeoff climb** phase with one engine out, starting from the critical engine failure speed $V_1$.
+
+### Mathematical formulation
+
+The problem is to minimise the final range $r(t_f)$ required to reach the screen height (usually 35 ft).
+The state vector is $x(t) = [r(t), v(t), h(t), \gamma(t)]^\top$ and the control is the angle of attack $\alpha(t)$.
+
+```math
+\begin{aligned}
+\min_{\alpha, t_f} \quad & r(t_f) \\[0.5em]
+\text{s.t.} \quad & \dot{r}(t) = v(t) \cos \gamma(t), \\[0.5em]
+& \dot{v}(t) = \frac{T \cos \alpha(t) - D}{m} - g \sin \gamma(t), \\[0.5em]
+& \dot{h}(t) = v(t) \sin \gamma(t), \\[0.5em]
+& \dot{\gamma}(t) = \frac{T \sin \alpha(t) + L}{m v(t)} - \frac{g \cos \gamma(t)}{v(t)}, \\[0.5em]
+& h(t_f) = 10.668 \text{ m (35 ft)}, \\[0.5em]
+& \gamma(t_f) = 5^\circ.
+\end{aligned}
+```
+
+The aerodynamic forces $L$ and $D$ depend on the angle of attack $\alpha$, the velocity $v$, and the altitude $h$ (ground effect).
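The ground effect enters through the induced-drag factor $K$, which is reduced near the runway and recovers its nominal value $K_{\mathrm{nom}} = 1/(\pi \, AR \, e)$ at altitude. A minimal sketch of this correction, following the factor used in the implementation (Python, for illustration; `AR`, `e`, `span`, and `h_w` below are placeholder values, the benchmark reads the real ones from `ext/MetaData/balanced_field.jl`):

```python
import math

# Placeholder aerodynamic parameters (illustrative only, NOT the benchmark values).
AR, e = 9.45, 0.801   # aspect ratio and Oswald efficiency (assumed)
span = 35.7           # wingspan [m] (assumed)
h_w = 1.0             # wing height above the ground at h = 0 [m] (assumed)

b = span / 2.0
K_nom = 1.0 / (math.pi * AR * e)   # out-of-ground-effect induced-drag factor

def K(h):
    """Ground-effect-corrected induced-drag factor at altitude h [m]."""
    term = 33.0 * ((h + h_w) / b) ** 1.5
    return K_nom * term / (1.0 + term)

# Near the ground the factor is strongly reduced; at altitude it recovers K_nom.
print(K(0.0), K(10.668), K(300.0), K_nom)
```

Because $K$ multiplies $C_L^2$ in the drag model, this reduction is what makes staying low immediately after $V_1$ aerodynamically cheap.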
+ +### Parameters + +The parameters are based on a mid-size jet aircraft (nominal values from Dymos): + +| Parameter | Symbol | Value | +|-----------|--------|-------| +| Mass | $m$ | 79015.8 kg | +| Surface | $S$ | 124.7 m² | +| Thrust (1 engine) | $T$ | 120102.0 N | +| Screen height | $h_{tf}$ | 10.668 m | + +### Characteristics + +- Multi-state nonlinear dynamics, +- Free final time $t_f$, +- Ground effect inclusion in the drag model ($K$ coefficient), +- Control bounds on the angle of attack $\alpha$, +- Boundary conditions representing $V_1$ conditions and final climb requirements. + +### References + +- **Dymos Documentation**. [*Aircraft Balanced Field Length Calculation*.](https://openmdao.github.io/dymos/examples/balanced_field/balanced_field.html). + Illustrates the full multi-phase BFL calculation using Dymos and OpenMDAO. + +- **Betts, J. T. (2010)**. *Practical Methods for Optimal Control and Estimation Using Nonlinear Programming*. SIAM. + Discusses takeoff and landing trajectory optimization in Chapter 6. diff --git a/ext/JuMPModels/balanced_field.jl b/ext/JuMPModels/balanced_field.jl new file mode 100644 index 00000000..5fbb5dc0 --- /dev/null +++ b/ext/JuMPModels/balanced_field.jl @@ -0,0 +1,130 @@ +""" +$(TYPEDSIGNATURES) + +Constructs and returns a JuMP model for the **Aircraft Balanced Field Length Calculation (Takeoff phase)**. +The model represents the dynamics of the aircraft during takeoff and seeks to minimise the final range. + +# Arguments + +- `::JuMPBackend`: Specifies the backend for building the JuMP model. +- `grid_size::Int=200`: (Keyword) Number of discretisation steps for the time horizon. + +# Returns + +- `model::JuMP.Model`: A JuMP model representing the problem. 
+""" +function OptimalControlProblems.balanced_field( + ::JuMPBackend, + args...; + grid_size::Int=grid_size_data(:balanced_field), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + + # parameters + params = parameters_data(:balanced_field, parameters) + t0 = params[:t0] + m = params[:m] + g = params[:g] + ρ = params[:ρ] + S = params[:S] + CD0 = params[:CD0] + AR = params[:AR] + e = params[:e] + CL0 = params[:CL0] + CL_max = params[:CL_max] + h_w = params[:h_w] + span = params[:span] + α_max = params[:α_max] + T = params[:T] + + r_t0 = params[:r_t0] + v_t0 = params[:v_t0] + h_t0 = params[:h_t0] + γ_t0 = params[:γ_t0] + h_tf = params[:h_tf] + γ_tf = params[:γ_tf] + + α_min = params[:α_min] + α_max_ctrl = params[:α_max_ctrl] + + # Constants + b = span / 2.0 + K_nom = 1.0 / (pi * AR * e) + + # model + model = JuMP.Model(args...; kwargs...) + + # metadata + model[:time_grid] = () -> range(t0, value(model[:tf]), grid_size + 1) + model[:state_components] = ["r", "v", "h", "γ"] + model[:costate_components] = ["∂r", "∂v", "∂h", "∂γ"] + model[:control_components] = ["α"] + model[:variable_components] = ["tf"] + + @expression(model, N, grid_size) + + # variables + @variables( + model, + begin + r[0:N], (start = r_t0) + v[0:N] ≥ 1.0, (start = v_t0 + 10.0) + h[0:N] ≥ 0.0, (start = h_tf / 2.0) + γ[0:N], (start = γ_tf) + 0 ≤ α[0:N] ≤ α_max_ctrl, (start = 0.1) + tf ≥ 0.1, (start = 10.0) + end + ) + + # boundary constraints + @constraints( + model, + begin + r[0] == r_t0 + v[0] == v_t0 + h[0] == h_t0 + γ[0] == γ_t0 + + h[N] == h_tf + γ[N] == γ_tf + end + ) + + # dynamics + @expressions( + model, + begin + Δt, (tf - t0) / N + + q[i = 0:N], 0.5 * ρ * v[i]^2 + CL[i = 0:N], CL0 + (α[i] / α_max) * (CL_max - CL0) + L[i = 0:N], q[i] * S * CL[i] + + h_eff[i = 0:N], h[i] + h_w + term_h[i = 0:N], 33.0 * abs(h_eff[i] / b)^1.5 + K[i = 0:N], K_nom * term_h[i] / (1.0 + term_h[i]) + D[i = 0:N], q[i] * S * (CD0 + K[i] * CL[i]^2) + + dr[i = 0:N], v[i] * cos(γ[i]) + dv[i = 0:N], (T 
* cos(α[i]) - D[i]) / m - g * sin(γ[i]) + dh[i = 0:N], v[i] * sin(γ[i]) + dγ[i = 0:N], (T * sin(α[i]) + L[i]) / (m * v[i]) - (g * cos(γ[i])) / v[i] + end + ) + + @constraints( + model, + begin + ∂r[i = 1:N], r[i] == r[i - 1] + 0.5 * Δt * (dr[i] + dr[i - 1]) + ∂v[i = 1:N], v[i] == v[i - 1] + 0.5 * Δt * (dv[i] + dv[i - 1]) + ∂h[i = 1:N], h[i] == h[i - 1] + 0.5 * Δt * (dh[i] + dh[i - 1]) + ∂γ[i = 1:N], γ[i] == γ[i - 1] + 0.5 * Δt * (dγ[i] + dγ[i - 1]) + end + ) + + # objective + @objective(model, Min, r[N]) + + return model +end diff --git a/ext/MetaData/balanced_field.jl b/ext/MetaData/balanced_field.jl new file mode 100644 index 00000000..f69bd6de --- /dev/null +++ b/ext/MetaData/balanced_field.jl @@ -0,0 +1,29 @@ +balanced_field_meta = OrderedDict( + :grid_size => 200, + :parameters => ( + t0 = 0.0, + m = 79015.8, # mass (kg) + g = 9.80665, # gravity (m/s^2) + ρ = 1.225, # air density (kg/m^3) + S = 124.7, # surface (m^2) + CD0 = 0.03, # zero-lift drag coefficient + AR = 9.45, # aspect ratio + e = 0.801, # Oswald efficiency + CL0 = 0.5, # zero-alpha lift coefficient + CL_max = 2.0, # max lift coefficient + h_w = 1.0, # height of wing above CoG (m) + span = 35.7, # wingspan (m) + α_max = 0.1745, # alpha at CL_max (10 degrees in rad) + T = 120102.0, # thrust (N) - one engine out + r_t0 = 600.0, # initial range (m) at V1 + v_t0 = 70.0, # initial velocity (m/s) at V1 + h_t0 = 0.0, # initial altitude (m) + γ_t0 = 0.0, # initial flight path angle (rad) + h_tf = 10.668, # final altitude (35 ft in m) + γ_tf = 0.0873, # final flight path angle (5 degrees in rad) + v_min = 0.0, + v_max = 100.0, + α_min = -0.1745, # -10 degrees + α_max_ctrl = 0.2618, # 15 degrees + ), +) diff --git a/ext/OptimalControlModels/balanced_field.jl b/ext/OptimalControlModels/balanced_field.jl new file mode 100644 index 00000000..1615645e --- /dev/null +++ b/ext/OptimalControlModels/balanced_field.jl @@ -0,0 +1,128 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** for the 
Aircraft Balanced Field Length Calculation (Takeoff phase). +The objective is to minimise the final range to reach a specified altitude (35 ft) with one engine out. +The problem formulation is based on the Dymos balanced field example. + +# Arguments + +- `::OptimalControlBackend`: Placeholder type specifying the OptimalControl backend or solver interface. +- `grid_size::Int=200`: (Keyword) Number of discretisation points for the direct transcription grid. + +# Returns + +- `docp`: The direct optimal control problem object representing the problem. +- `nlp`: The corresponding nonlinear programming model. + +# Example + +```julia-repl +julia> using OptimalControlProblems +julia> docp = OptimalControlProblems.balanced_field(OptimalControlBackend()); +``` +""" +function OptimalControlProblems.balanced_field( + ::OptimalControlBackend, + description::Symbol...; + grid_size::Int=grid_size_data(:balanced_field), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + + # parameters + params = parameters_data(:balanced_field, parameters) + t0 = params[:t0] + m = params[:m] + g = params[:g] + ρ = params[:ρ] + S = params[:S] + CD0 = params[:CD0] + AR = params[:AR] + e = params[:e] + CL0 = params[:CL0] + CL_max = params[:CL_max] + h_w = params[:h_w] + span = params[:span] + α_max = params[:α_max] + T = params[:T] + + r_t0 = params[:r_t0] + v_t0 = params[:v_t0] + h_t0 = params[:h_t0] + γ_t0 = params[:γ_t0] + h_tf = params[:h_tf] + γ_tf = params[:γ_tf] + + α_min = params[:α_min] + α_max_ctrl = params[:α_max_ctrl] + + # Constants + b = span / 2.0 + K_nom = 1.0 / (pi * AR * e) + + # model + ocp = @def begin + tf ∈ R, variable + t ∈ [t0, tf], time + x = (r, v, h, γ) ∈ R⁴, state + α ∈ R, control + + r(t0) == r_t0 + v(t0) == v_t0 + h(t0) == h_t0 + γ(t0) == γ_t0 + + h(tf) == h_tf + γ(tf) == γ_tf + + 0 ≤ α(t) ≤ α_max_ctrl # alpha + tf ≥ 0.1 + h(t) ≥ 0 + v(t) ≥ 1.0 + + ẋ(t) == dynamics(x(t), α(t), m, g, ρ, S, CD0, CL0, CL_max, α_max, T, b, K_nom, h_w) + + r(tf) → min + end 
+ + function dynamics(x, α, m, g, ρ, S, CD0, CL0, CL_max, α_max, T, b, K_nom, h_w) + r, v, h, γ = x + + q = 0.5 * ρ * v^2 + CL = CL0 + (α / α_max) * (CL_max - CL0) + L = q * S * CL + + h_eff = h + h_w + term_h = 33.0 * abs(h_eff / b)^1.5 + K = K_nom * term_h / (1.0 + term_h) + D = q * S * (CD0 + K * CL^2) + + rdot = v * cos(γ) + vdot = (T * cos(α) - D) / m - g * sin(γ) + hdot = v * sin(γ) + γdot = (T * sin(α) + L) / (m * v) - (g * cos(γ)) / v + + return [rdot, vdot, hdot, γdot] + end + + # initial guess + tf_guess = 10.0 + xinit = [r_t0, v_t0 + 10.0, h_tf / 2.0, γ_tf] + uinit = [0.1] + vinit = [tf_guess] + init = (state=xinit, control=uinit, variable=vinit) + + # discretise + docp = direct_transcription( + ocp, + description...; + lagrange_to_mayer=false, + init=init, + grid_size=grid_size, + disc_method=:trapeze, + kwargs..., + ) + + return docp +end diff --git a/ext/OptimalControlModels_s/balanced_field_s.jl b/ext/OptimalControlModels_s/balanced_field_s.jl new file mode 100644 index 00000000..c7ed9b1e --- /dev/null +++ b/ext/OptimalControlModels_s/balanced_field_s.jl @@ -0,0 +1,106 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** for the Aircraft Balanced Field Length Calculation (Takeoff phase) using symbolic syntax. +The objective is to minimise the final range to reach a specified altitude (35 ft) with one engine out. + +# Arguments + +- `::OptimalControlBackend`: Placeholder type specifying the OptimalControl backend or solver interface. +- `grid_size::Int=200`: (Keyword) Number of discretisation points for the direct transcription grid. + +# Returns + +- `docp`: The direct optimal control problem object representing the problem. +- `nlp`: The corresponding nonlinear programming model. 
+""" +function OptimalControlProblems.balanced_field_s( + ::OptimalControlBackend, + description::Symbol...; + grid_size::Int=grid_size_data(:balanced_field), + parameters::Union{Nothing,NamedTuple}=nothing, + kwargs..., +) + + # parameters + params = parameters_data(:balanced_field, parameters) + t0 = params[:t0] + m = params[:m] + g = params[:g] + ρ = params[:ρ] + S = params[:S] + CD0 = params[:CD0] + AR = params[:AR] + e = params[:e] + CL0 = params[:CL0] + CL_max = params[:CL_max] + h_w = params[:h_w] + span = params[:span] + α_max = params[:α_max] + T = params[:T] + + r_t0 = params[:r_t0] + v_t0 = params[:v_t0] + h_t0 = params[:h_t0] + γ_t0 = params[:γ_t0] + h_tf = params[:h_tf] + γ_tf = params[:γ_tf] + + α_min = params[:α_min] + α_max_ctrl = params[:α_max_ctrl] + + # Constants + b = span / 2.0 + K_nom = 1.0 / (pi * AR * e) + + # model + ocp = @def begin + tf ∈ R, variable + t ∈ [t0, tf], time + x = (r, v, h, γ) ∈ R⁴, state + α ∈ R, control + + x(t0) == [r_t0, v_t0, h_t0, γ_t0] + + h(tf) == h_tf + γ(tf) == γ_tf + + 0 ≤ α(t) ≤ α_max_ctrl + tf ≥ 0.1 + h(t) ≥ 0 + v(t) ≥ 1.0 + + # dynamics + q = 0.5 * ρ * v(t)^2 + CL = CL0 + (α(t) / α_max) * (CL_max - CL0) + L = q * S * CL + h_eff = h(t) + h_w + term_h = 33.0 * abs(h_eff / b)^1.5 + K = K_nom * term_h / (1.0 + term_h) + D = q * S * (CD0 + K * CL^2) + + ∂(r)(t) == v(t) * cos(γ(t)) + ∂(v)(t) == (T * cos(α(t)) - D) / m - g * sin(γ(t)) + ∂(h)(t) == v(t) * sin(γ(t)) + ∂(γ)(t) == (T * sin(α(t)) + L) / (m * v(t)) - (g * cos(γ(t))) / v(t) + + r(tf) → min + end + + # initial guess + tf_guess = 10.0 + init = (state=[r_t0, v_t0 + 10.0, h_tf / 2.0, γ_tf], control=[0.1], variable=[tf_guess]) + + # discretise + docp = direct_transcription( + ocp, + description...; + lagrange_to_mayer=false, + init=init, + grid_size=grid_size, + disc_method=:trapeze, + kwargs..., + ) + + return docp +end From a022661c148cb5f5a29728dbf062f2e381173119 Mon Sep 17 00:00:00 2001 From: Amiel Date: Sun, 1 Mar 2026 19:21:04 +0100 Subject: [PATCH 7/9] add 
mountain car problem

---
 ext/Descriptions/mountain_car.md             |  57 ++++++++++
 ext/JuMPModels/mountain_car.jl               | 114 +++++++++++++++++++
 ext/MetaData/mountain_car.jl                 |  21 ++++
 ext/OptimalControlModels/mountain_car.jl     |  96 ++++++++++++++
 ext/OptimalControlModels_s/mountain_car_s.jl |  97 ++++++++++++++
 5 files changed, 385 insertions(+)
 create mode 100644 ext/Descriptions/mountain_car.md
 create mode 100644 ext/JuMPModels/mountain_car.jl
 create mode 100644 ext/MetaData/mountain_car.jl
 create mode 100644 ext/OptimalControlModels/mountain_car.jl
 create mode 100644 ext/OptimalControlModels_s/mountain_car_s.jl

diff --git a/ext/Descriptions/mountain_car.md b/ext/Descriptions/mountain_car.md
new file mode 100644
index 00000000..138bba72
--- /dev/null
+++ b/ext/Descriptions/mountain_car.md
@@ -0,0 +1,57 @@
+The **Mountain Car problem** is a classic benchmark in control and reinforcement learning.
+It involves an underpowered car that must climb a steep hill to reach a target position.
+Because the car's engine is too weak to climb the hill directly, it must first drive away from the goal to gain momentum by oscillating in a valley.
+The goal is to reach the target position in **minimum time**.
+
+### Mathematical formulation
+
+The problem can be stated as
+
+```math
+\begin{aligned}
+\min_{x,v,u,t_f} \quad & J = t_f \\[0.5em]
+\text{s.t.} \quad & \dot{x}(t) = v(t), \\[0.5em]
+& \dot{v}(t) = a \, u(t) - b \, \cos(c \, x(t)), \\[0.5em]
+& x(0) = -0.5, \quad v(0) = 0.0, \\[0.5em]
+& x(t_f) = 0.5, \quad v(t_f) \ge 0.0, \\[0.5em]
+& -1.2 \le x(t) \le 0.5, \\[0.5em]
+& -0.07 \le v(t) \le 0.07, \\[0.5em]
+& -1 \le u(t) \le 1, \\[0.5em]
+& t_f \ge 0.
+\end{aligned} +``` + +### Parameters + +| Parameter | Symbol | Value | +|-----------|--------|-------| +| Power coefficient | $a$ | 0.001 | +| Gravity coefficient | $b$ | 0.0025 | +| Frequency coefficient | $c$ | 3.0 | +| Initial position | $x_0$ | -0.5 | +| Target position | $x_f$ | 0.5 | + +### Qualitative behaviour + +- The car must oscillate back and forth in the valley to build enough kinetic energy to climb the right hill. +- The optimal strategy typically involves a sequence of full-throttle accelerations in alternating directions (**bang-bang control**). +- The minimum time objective makes the problem sensitive to the initial guess for the final time. + +### Characteristics + +- Two-dimensional state space (position and velocity), +- Scalar control (effort/acceleration), +- **Free final time** with minimum time objective, +- State constraints (position and velocity bounds), +- Control constraints (bounds on acceleration). + +### References + +- **Dymos Documentation**. [*The Mountain Car Problem*](https://openmdao.github.io/dymos/examples/mountain_car/mountain_car.html). + Provides a detailed description and implementation of the Mountain Car problem using the Dymos library. + +- **OpenAI Gym**. [*MountainCar-v0*](https://gym.openai.com/envs/MountainCar-v0/). + A standard reinforcement learning environment for the Mountain Car problem. + +- Moore, A. W. (1990). *Efficient Memory-based Learning for Robot Control*. PhD thesis, University of Cambridge. + Introduces the Mountain Car problem as a benchmark for reinforcement learning and control. diff --git a/ext/JuMPModels/mountain_car.jl b/ext/JuMPModels/mountain_car.jl new file mode 100644 index 00000000..3f379e89 --- /dev/null +++ b/ext/JuMPModels/mountain_car.jl @@ -0,0 +1,114 @@ +""" +$(TYPEDSIGNATURES) + +Constructs and returns a JuMP model for the **Mountain Car Problem**. +The model represents the dynamics of an underpowered car climbing a hill. 
+The objective is to minimize the final time `tf` to reach the target position.
+
+# Arguments
+
+- `::JuMPBackend`: Specifies the backend for building the JuMP model.
+- `grid_size::Int=200`: (Keyword) Number of discretisation steps for the time horizon.
+
+# Returns
+
+- `model::JuMP.Model`: A JuMP model representing the Mountain Car optimal control problem.
+
+# Example
+
+```julia-repl
+julia> using OptimalControlProblems
+julia> using JuMP
+
+julia> model = OptimalControlProblems.mountain_car(JuMPBackend(); grid_size=100)
+```
+
+# References
+
+- Problem formulation: https://openmdao.github.io/dymos/examples/mountain_car/mountain_car.html
+"""
+function OptimalControlProblems.mountain_car(
+    ::JuMPBackend,
+    args...;
+    grid_size::Int=grid_size_data(:mountain_car),
+    parameters::Union{Nothing,NamedTuple}=nothing,
+    kwargs...,
+)
+
+    # parameters
+    params = parameters_data(:mountain_car, parameters)
+    t0 = params[:t0]
+    tf_start = params[:tf_start]
+    tf_min = params[:tf_min]
+    pos_t0 = params[:pos_t0]
+    vel_t0 = params[:vel_t0]
+    pos_tf = params[:pos_tf]
+    vel_tf_min = params[:vel_tf_min]
+    pos_min = params[:pos_min]
+    pos_max = params[:pos_max]
+    vel_min = params[:vel_min]
+    vel_max = params[:vel_max]
+    u_min = params[:u_min]
+    u_max = params[:u_max]
+    a = params[:a]
+    b = params[:b]
+    c = params[:c]
+
+    # model
+    model = JuMP.Model(args...; kwargs...)
+ + # metadata + model[:time_grid] = () -> range(t0, value(model[:tf]), grid_size + 1) + model[:state_components] = ["pos", "vel"] + model[:costate_components] = ["∂pos", "∂vel"] + model[:control_components] = ["u"] + model[:variable_components] = ["tf"] + + # N = grid_size + @expression(model, N, grid_size) + + # variables + @variables( + model, + begin + pos_min ≤ pos[0:N] ≤ pos_max, (start = pos_t0) + vel_min ≤ vel[0:N] ≤ vel_max, (start = vel_t0) + u_min ≤ u[0:N] ≤ u_max, (start = 0.0) + tf ≥ tf_min, (start = tf_start) + end + ) + + # boundary constraints + @constraints( + model, + begin + pos[0] == pos_t0 + vel[0] == vel_t0 + pos[N] == pos_tf + vel[N] ≥ vel_tf_min + end + ) + + # dynamics + @expressions( + model, + begin + Δt, (tf - t0) / N + dpos[i = 0:N], vel[i] + dvel[i = 0:N], a * u[i] - b * cos(c * pos[i]) + end + ) + + @constraints( + model, + begin + ∂pos[i = 1:N], pos[i] == pos[i - 1] + 0.5 * Δt * (dpos[i] + dpos[i - 1]) + ∂vel[i = 1:N], vel[i] == vel[i - 1] + 0.5 * Δt * (dvel[i] + dvel[i - 1]) + end + ) + + # objective + @objective(model, Min, tf) + + return model +end diff --git a/ext/MetaData/mountain_car.jl b/ext/MetaData/mountain_car.jl new file mode 100644 index 00000000..cc96b7b4 --- /dev/null +++ b/ext/MetaData/mountain_car.jl @@ -0,0 +1,21 @@ +mountain_car_meta = OrderedDict( + :grid_size => 200, + :parameters => ( + t0 = 0.0, + tf_start = 50.0, + tf_min = 0.1, + pos_t0 = -0.5, + vel_t0 = 0.0, + pos_tf = 0.5, + vel_tf_min = 0.0, + pos_min = -1.2, + pos_max = 0.5, + vel_min = -0.07, + vel_max = 0.07, + u_min = -1.0, + u_max = 1.0, + a = 0.001, + b = 0.0025, + c = 3.0, + ), +) diff --git a/ext/OptimalControlModels/mountain_car.jl b/ext/OptimalControlModels/mountain_car.jl new file mode 100644 index 00000000..984675bc --- /dev/null +++ b/ext/OptimalControlModels/mountain_car.jl @@ -0,0 +1,96 @@ +""" +$(TYPEDSIGNATURES) + +Constructs an **OptimalControl problem** for the Mountain Car problem. 
+The goal is to drive an underpowered car up a steep hill.
+The objective is to minimize the time required to reach the target position.
+The problem formulation can be found [here](https://openmdao.github.io/dymos/examples/mountain_car/mountain_car.html).
+
+# Arguments
+
+- `::OptimalControlBackend`: Placeholder type specifying the OptimalControl backend or solver interface.
+- `grid_size::Int=200`: (Keyword) Number of discretisation points for the direct transcription grid.
+
+# Returns
+
+- `docp`: The direct optimal control problem object representing the Mountain Car problem.
+
+# Example
+
+```julia-repl
+julia> using OptimalControlProblems
+
+julia> docp = OptimalControlProblems.mountain_car(OptimalControlBackend(); grid_size=100);
+```
+"""
+function OptimalControlProblems.mountain_car(
+    ::OptimalControlBackend,
+    description::Symbol...;
+    grid_size::Int=grid_size_data(:mountain_car),
+    parameters::Union{Nothing,NamedTuple}=nothing,
+    kwargs...,
+)
+
+    # parameters
+    params = parameters_data(:mountain_car, parameters)
+    t0 = params[:t0]
+    tf_start = params[:tf_start]
+    tf_min = params[:tf_min]
+    pos_t0 = params[:pos_t0]
+    vel_t0 = params[:vel_t0]
+    pos_tf = params[:pos_tf]
+    vel_tf_min = params[:vel_tf_min]
+    pos_min = params[:pos_min]
+    pos_max = params[:pos_max]
+    vel_min = params[:vel_min]
+    vel_max = params[:vel_max]
+    u_min = params[:u_min]
+    u_max = params[:u_max]
+    a = params[:a]
+    b = params[:b]
+    c = params[:c]
+
+    # model
+    ocp = @def begin
+        tf ∈ R, variable
+        t ∈ [t0, tf], time
+        x = (pos, vel) ∈ R², state
+        u ∈ R, control
+
+        # constraints
+        tf ≥ tf_min
+        pos_min ≤ pos(t) ≤ pos_max
+        vel_min ≤ vel(t) ≤ vel_max
+        u_min ≤ u(t) ≤ u_max
+
+        # initial conditions
+        pos(t0) == pos_t0
+        vel(t0) == vel_t0
+
+        # final conditions
+        pos(tf) == pos_tf
+        vel(tf) ≥ vel_tf_min
+
+        # dynamics
+        ẋ(t) == [vel(t), a * u(t) - b * cos(c * pos(t))]
+
+        # objective
+        tf → min
+    end
+
+    # initial guess
+    init = (state=[pos_t0, vel_t0], control=0.0, variable=tf_start)
+
+    # 
discretise the optimal control problem
+    docp = direct_transcription(
+        ocp,
+        description...;
+        lagrange_to_mayer=false,
+        init=init,
+        grid_size=grid_size,
+        disc_method=:trapeze,
+        kwargs...,
+    )
+
+    return docp
+end
diff --git a/ext/OptimalControlModels_s/mountain_car_s.jl b/ext/OptimalControlModels_s/mountain_car_s.jl
new file mode 100644
index 00000000..cd770838
--- /dev/null
+++ b/ext/OptimalControlModels_s/mountain_car_s.jl
@@ -0,0 +1,97 @@
+"""
+$(TYPEDSIGNATURES)
+
+Constructs an **OptimalControl problem** for the Mountain Car problem (symbolic version).
+The goal is to drive an underpowered car up a steep hill.
+The objective is to minimize the time required to reach the target position.
+The problem formulation can be found [here](https://openmdao.github.io/dymos/examples/mountain_car/mountain_car.html).
+
+# Arguments
+
+- `::OptimalControlBackend`: Placeholder type specifying the OptimalControl backend or solver interface.
+- `grid_size::Int=200`: (Keyword) Number of discretisation points for the direct transcription grid.
+
+# Returns
+
+- `docp`: The direct optimal control problem object representing the Mountain Car problem.
+
+# Example
+
+```julia-repl
+julia> using OptimalControlProblems
+
+julia> docp = OptimalControlProblems.mountain_car_s(OptimalControlBackend(); grid_size=100);
+```
+"""
+function OptimalControlProblems.mountain_car_s(
+    ::OptimalControlBackend,
+    description::Symbol...;
+    grid_size::Int=grid_size_data(:mountain_car),
+    parameters::Union{Nothing,NamedTuple}=nothing,
+    kwargs...,
+)
+
+    # parameters
+    params = parameters_data(:mountain_car, parameters)
+    t0 = params[:t0]
+    tf_start = params[:tf_start]
+    tf_min = params[:tf_min]
+    pos_t0 = params[:pos_t0]
+    vel_t0 = params[:vel_t0]
+    pos_tf = params[:pos_tf]
+    vel_tf_min = params[:vel_tf_min]
+    pos_min = params[:pos_min]
+    pos_max = params[:pos_max]
+    vel_min = params[:vel_min]
+    vel_max = params[:vel_max]
+    u_min = params[:u_min]
+    u_max = params[:u_max]
+    a = params[:a]
+    b = params[:b]
+    c = params[:c]
+
+    # model
+    ocp = @def begin
+        tf ∈ R, variable
+        t ∈ [t0, tf], time
+        x = (pos, vel) ∈ R², state
+        u ∈ R, control
+
+        # constraints
+        tf ≥ tf_min
+        pos_min ≤ pos(t) ≤ pos_max
+        vel_min ≤ vel(t) ≤ vel_max
+        u_min ≤ u(t) ≤ u_max
+
+        # initial conditions
+        pos(t0) == pos_t0
+        vel(t0) == vel_t0
+
+        # final conditions
+        pos(tf) == pos_tf
+        vel(tf) ≥ vel_tf_min
+
+        # dynamics
+        ∂(pos)(t) == vel(t)
+        ∂(vel)(t) == a * u(t) - b * cos(c * pos(t))
+
+        # objective
+        tf → min
+    end
+
+    # initial guess
+    init = (state=[pos_t0, vel_t0], control=0.0, variable=tf_start)
+
+    # discretise the optimal control problem
+    docp = direct_transcription(
+        ocp,
+        description...;
+        lagrange_to_mayer=false,
+        init=init,
+        grid_size=grid_size,
+        disc_method=:trapeze,
+        kwargs...,
+    )
+
+    return docp
+end

From e98695147c73b59bb225db992c06b53df9cea3e0 Mon Sep 17 00:00:00 2001
From: Amiel
Date: Sun, 1 Mar 2026 19:25:34 +0100
Subject: [PATCH 8/9] add glider, robot, space shuttle, steering, insurance problems

---
 ext/JuMPModels/glider.jl                      |  6 +--
 ext/JuMPModels/robot.jl                       | 54 ++++++++++++-----
 ext/JuMPModels/space_shuttle.jl               |  2 +-
 ext/JuMPModels/steering.jl                    |  2 +-
 ext/OptimalControlModels/glider.jl            | 14 +++--
 ext/OptimalControlModels/robot.jl             | 40 +++++++-----
 ext/OptimalControlModels/space_shuttle.jl     |  6 +--
 ext/OptimalControlModels/steering.jl          |  2 +-
 ext/OptimalControlModels_s/glider_s.jl        | 12 +++--
 ext/OptimalControlModels_s/robot_s.jl         | 35 +++++++-----
 ext/OptimalControlModels_s/space_shuttle_s.jl |  6 +--
 ext/OptimalControlModels_s/steering_s.jl      |  2 +-
 12 files changed, 121 insertions(+), 60 deletions(-)

diff --git a/ext/JuMPModels/glider.jl b/ext/JuMPModels/glider.jl
index 164600a7..52331893 100644
--- a/ext/JuMPModels/glider.jl
+++ b/ext/JuMPModels/glider.jl
@@ -76,12 +76,12 @@ function OptimalControlProblems.glider(
     @variables(
         model,
         begin
-            tf ≥ tf_l, (start = 1)
-            x[k = 0:N] ≥ x_l, (start = x_t0 + vx_t0 * k / N)
+            tf ≥ tf_l, (start = 100.0)
+            x[k = 0:N] ≥ x_l, (start = x_t0 + (k / N) * 1248.0)
             y[k = 0:N], (start = y_t0 + (k / N) * (y_tf - y_t0))
             vx[k = 0:N] ≥ vx_l, (start = vx_t0)
             vy[k = 0:N], (start = vy_t0)
-            cL_min ≤ cL[k = 0:N] ≤ cL_max, (start = cL_max / 2)
+            cL_min ≤ cL[k = 0:N] ≤ cL_max, (start = 1.0)
         end
     )
 
diff --git a/ext/JuMPModels/robot.jl b/ext/JuMPModels/robot.jl
index 79bfadc0..bc109fba 100644
--- a/ext/JuMPModels/robot.jl
+++ b/ext/JuMPModels/robot.jl
@@ -92,22 +92,53 @@ function OptimalControlProblems.robot(
     @variables(
         model,
         begin
-            ρ_l ≤ ρ[k = 0:N] ≤ ρ_u, (start = ρ_t0)
-            θ_l ≤ θ[k = 0:N] ≤ θ_u, (start = 2π/3 * (k / N)^2)
-            ϕ_l ≤ ϕ[k = 0:N] ≤ ϕ_u, (start = ϕ_t0)
+            ρ_l ≤ ρ[k = 0:N] ≤ ρ_u
+            θ_l ≤ θ[k = 0:N] ≤ θ_u
+            ϕ_l ≤ ϕ[k = 0:N] ≤ ϕ_u
 
-            dρ[k = 0:N], (start = 0)
-            dθ[k = 0:N], (start = 4π/3 * (k / N))
-            dϕ[k = 0:N], (start = 0)
+            dρ[k = 0:N]
+            dθ[k = 0:N]
+            dϕ[k = 0:N]
 
-            uρ_l ≤ uρ[0:N] ≤ uρ_u, (start = 0)
-            uθ_l ≤ uθ[0:N] ≤ uθ_u, (start = 0)
-            uϕ_l ≤ uϕ[0:N] ≤ uϕ_u, (start = 0)
+            uρ_l ≤ uρ[0:N] ≤ uρ_u
+            uθ_l ≤ uθ[0:N] ≤ uθ_u
+            uϕ_l ≤ uϕ[0:N] ≤ uϕ_u
 
-            tf ≥ tf_l, (start = 1)
+            tf ≥ tf_l
         end
     )
 
+    # Initial guess
+    set_start_value(tf, 9.1)
+    for k in 
0:grid_size
+        # Interpolation coefficient between 0 and 1
+        alpha = k / grid_size
+
+        # Linear interpolation between the initial and final states
+        ρ_val = ρ_t0 + alpha * (ρ_tf - ρ_t0)
+        θ_val = θ_t0 + alpha * (θ_tf - θ_t0)
+        ϕ_val = ϕ_t0 + alpha * (ϕ_tf - ϕ_t0)
+
+        # Guard against division by zero when ϕ ≈ 0
+        if abs(ϕ_val) < 1e-6
+            ϕ_val = 1e-6
+        end
+
+        # Assign the start values to the JuMP variables
+        set_start_value(ρ[k], ρ_val)
+        set_start_value(θ[k], θ_val)
+        set_start_value(ϕ[k], ϕ_val)
+
+        # Velocities and controls default to 0
+        set_start_value(dρ[k], 0.0)
+        set_start_value(dθ[k], 0.0)
+        set_start_value(dϕ[k], 0.0)
+
+        set_start_value(uρ[k], 0.0)
+        set_start_value(uθ[k], 0.0)
+        set_start_value(uϕ[k], 0.0)
+    end
+
     # Boundary condition
     @constraints(
         model,
@@ -137,8 +168,7 @@ function OptimalControlProblems.robot(
         #
         Δt, (tf - t0) / N
 
-        #
-        I_θ[i = 0:N], ((L - ρ[i])^3 + ρ[i]^3) * (sin(ϕ[i]))^2
+        I_θ[i = 0:N], ((L - ρ[i])^3 + ρ[i]^3) * ((sin(ϕ[i]))^2 + 1e-9)
         I_ϕ[i = 0:N], (L - ρ[i])^3 + ρ[i]^3
 
         #
diff --git a/ext/JuMPModels/space_shuttle.jl b/ext/JuMPModels/space_shuttle.jl
index 0479eebf..42d50df7 100644
--- a/ext/JuMPModels/space_shuttle.jl
+++ b/ext/JuMPModels/space_shuttle.jl
@@ -184,7 +184,7 @@ function OptimalControlProblems.space_shuttle(
     set_start_value.(model[:ψ], vec(initial_guess[:, 6]))
     set_start_value.(model[:α], vec(initial_guess[:, 7]))
     set_start_value.(model[:β], vec(initial_guess[:, 8]))
-    set_start_value.(model[:tf], (tf_l+tf_u)/2)
+    set_start_value.(model[:tf], 2000.0)
 
     # Functions to restore `h` and `v` to their true scale
     @expression(model, h[j = 0:N], scaled_h[j] * scaling_h)
diff --git a/ext/JuMPModels/steering.jl b/ext/JuMPModels/steering.jl
index cd40af69..8554cc42 100644
--- a/ext/JuMPModels/steering.jl
+++ b/ext/JuMPModels/steering.jl
@@ -51,7 +51,7 @@ function OptimalControlProblems.steering(
     x₄_tf = params[:x₄_tf]
 
     #
-    tf_start = 1
+    tf_start = 0.6
     function gen_x0(k, i)
         if i == 1 || i == 4
             return 0.0
diff --git a/ext/OptimalControlModels/glider.jl 
b/ext/OptimalControlModels/glider.jl index d91059be..cbec4a64 100644 --- a/ext/OptimalControlModels/glider.jl +++ b/ext/OptimalControlModels/glider.jl @@ -110,11 +110,15 @@ function OptimalControlProblems.glider( end # initial guess - tfinit = 1 - xinit = - t -> [x_t0 + vx_t0 * t / tfinit, y_t0 + t / tfinit * (y_tf - y_t0), vx_t0, vy_t0] - uinit = cL_max / 2 - init = (state=xinit, control=uinit, variable=tfinit) + tfinit = 100.0 + xinit = t -> [ + x_t0 + (t / tfinit) * 1248.0, + y_t0 + (t / tfinit) * (y_tf - y_t0), + vx_t0 + (t / tfinit) * (vx_tf - vx_t0), + vy_t0 + (t / tfinit) * (vy_tf - vy_t0) + ] + uinit = 1.0 + init = (state=xinit, control=uinit, variable=tfinit) # discretise the optimal control problem docp = direct_transcription( diff --git a/ext/OptimalControlModels/robot.jl b/ext/OptimalControlModels/robot.jl index 1bfe831a..454ab55b 100644 --- a/ext/OptimalControlModels/robot.jl +++ b/ext/OptimalControlModels/robot.jl @@ -105,30 +105,42 @@ function OptimalControlProblems.robot( dθ(tf) == dθ_tf, (dθ_tf) dϕ(tf) == dϕ_tf, (dϕ_tf) - # aliases - I_θ = ((L - ρ(t))^3 + ρ(t)^3) * sin(ϕ(t))^2 + + I_θ = ((L - ρ(t))^3 + ρ(t)^3) * (sin(ϕ(t))^2 + 1e-9) I_ϕ = (L - ρ(t))^3 + ρ(t)^3 # dynamics - ẋ(t) == [dρ(t), uρ(t) / L, dθ(t), 3 * uθ(t) / I_θ, dϕ(t), 3 * uϕ(t) / I_ϕ] + ẋ(t) == [dρ(t), uρ(t) / L, dθ(t), 3 * uθ(t) / I_θ, dϕ(t), 3 * uϕ(t) / I_ϕ] # objective tf → min end # initial guess - tf = 1 - xinit = - t -> [ - ρ_t0, - 0, - 2π/3 * ((t - t0) / (tf - t0))^2, - 4π/3 * ((t - t0) / (tf - t0)), - ϕ_t0, - 0, + tf_guess = 9.1 + xinit = t -> begin + alpha = clamp((t - t0) / (tf_guess - t0), 0.0, 1.0) + + ρ_val = ρ_t0 + alpha * (ρ_tf - ρ_t0) + θ_val = θ_t0 + alpha * (θ_tf - θ_t0) + ϕ_val = ϕ_t0 + alpha * (ϕ_tf - ϕ_t0) + + if abs(ϕ_val) < 1e-6 + ϕ_val = 1e-6 + end + + return [ + ρ_val, + 0.0, + θ_val, + 0.0, + ϕ_val, + 0.0, ] - uinit = [0, 0, 0] - init = (state=xinit, control=uinit, variable=tf) + end + + uinit = [0.0, 0.0, 0.0] + init = (state=xinit, control=uinit, 
variable=tf_guess) # discretise the optimal control problem docp = direct_transcription( diff --git a/ext/OptimalControlModels/space_shuttle.jl b/ext/OptimalControlModels/space_shuttle.jl index 2d6e6d25..bdccc605 100644 --- a/ext/OptimalControlModels/space_shuttle.jl +++ b/ext/OptimalControlModels/space_shuttle.jl @@ -56,8 +56,8 @@ function OptimalControlProblems.space_shuttle( # Δt_min = params[:Δt_min] Δt_max = params[:Δt_max] - tf_l = grid_size*Δt_min - tf_u = grid_size*Δt_max + tf_l = 1500.0 # + tf_u = 2500.0 # ## Initial conditions h_t0 = params[:h_t0] @@ -175,7 +175,7 @@ function OptimalControlProblems.space_shuttle( # initial guess: linear interpolation for h, v, gamma (NB. t0 = 0), constant for the rest # variable time step seems to be initialized at 1 in jump # note that ipopt will project the initial guess inside the bounds anyway. - tf_init = (tf_l+tf_u)/2 + tf_init = 2000 x_init = t -> [ h_t0 + (t - t0) / (tf_init - t0) * (h_tf - h_t0), diff --git a/ext/OptimalControlModels/steering.jl b/ext/OptimalControlModels/steering.jl index 36aeabff..a1539e56 100644 --- a/ext/OptimalControlModels/steering.jl +++ b/ext/OptimalControlModels/steering.jl @@ -85,7 +85,7 @@ function OptimalControlProblems.steering( end end xinit = t -> [gen_x0(t, i) for i in 1:4] - init = (state=xinit, control=0, variable=1) + init = (state=xinit, control=0, variable=0.6) # discretise the optimal control problem docp = direct_transcription( diff --git a/ext/OptimalControlModels_s/glider_s.jl b/ext/OptimalControlModels_s/glider_s.jl index 59cd87e7..7e73c295 100644 --- a/ext/OptimalControlModels_s/glider_s.jl +++ b/ext/OptimalControlModels_s/glider_s.jl @@ -104,10 +104,14 @@ function OptimalControlProblems.glider_s( end # initial guess - tfinit = 1 - xinit = - t -> [x_t0 + vx_t0 * t / tfinit, y_t0 + t / tfinit * (y_tf - y_t0), vx_t0, vy_t0] - uinit = cL_max / 2 + tfinit = 100.0 + xinit = t -> [ + x_t0 + (t / tfinit) * 1248.0, + y_t0 + (t / tfinit) * (y_tf - y_t0), + vx_t0 + (t / tfinit) * 
(vx_tf - vx_t0), + vy_t0 + (t / tfinit) * (vy_tf - vy_t0) + ] + uinit = 1.0 init = (state=xinit, control=uinit, variable=tfinit) # discretise the optimal control problem diff --git a/ext/OptimalControlModels_s/robot_s.jl b/ext/OptimalControlModels_s/robot_s.jl index 2756c18b..b9f245fe 100644 --- a/ext/OptimalControlModels_s/robot_s.jl +++ b/ext/OptimalControlModels_s/robot_s.jl @@ -106,7 +106,7 @@ function OptimalControlProblems.robot_s( dϕ(tf) == dϕ_tf, (dϕ_tf) # aliases - I_θ = ((L - ρ(t))^3 + ρ(t)^3) * sin(ϕ(t))^2 + I_θ = ((L - ρ(t))^3 + ρ(t)^3) * (sin(ϕ(t))^2 + 1e-9) I_ϕ = (L - ρ(t))^3 + ρ(t)^3 # dynamics @@ -122,18 +122,29 @@ function OptimalControlProblems.robot_s( end # initial guess - tf = 1 - xinit = - t -> [ - ρ_t0, - 0, - 2π/3 * ((t - t0) / (tf - t0))^2, - 4π/3 * ((t - t0) / (tf - t0)), - ϕ_t0, - 0, + tf_guess = 9.1 + xinit = t -> begin + alpha = clamp((t - t0) / (tf_guess - t0), 0.0, 1.0) + + ρ_val = ρ_t0 + alpha * (ρ_tf - ρ_t0) + θ_val = θ_t0 + alpha * (θ_tf - θ_t0) + ϕ_val = ϕ_t0 + alpha * (ϕ_tf - ϕ_t0) + + if abs(ϕ_val) < 1e-6 + ϕ_val = 1e-6 + end + + return [ + ρ_val, + 0.0, + θ_val, + 0.0, + ϕ_val, + 0.0, ] - uinit = [0, 0, 0] - init = (state=xinit, control=uinit, variable=tf) + end + uinit = [0.0, 0.0, 0.0] + init = (state=xinit, control=uinit, variable=tf_guess) # discretise the optimal control problem docp = direct_transcription( diff --git a/ext/OptimalControlModels_s/space_shuttle_s.jl b/ext/OptimalControlModels_s/space_shuttle_s.jl index f1469e0a..77b4df0e 100644 --- a/ext/OptimalControlModels_s/space_shuttle_s.jl +++ b/ext/OptimalControlModels_s/space_shuttle_s.jl @@ -56,8 +56,8 @@ function OptimalControlProblems.space_shuttle_s( # Δt_min = params[:Δt_min] Δt_max = params[:Δt_max] - tf_l = grid_size*Δt_min - tf_u = grid_size*Δt_max + tf_l = 1500.0 # + tf_u = 2500.0 # ## Initial conditions h_t0 = params[:h_t0] @@ -164,7 +164,7 @@ function OptimalControlProblems.space_shuttle_s( # initial guess: linear interpolation for h, v, gamma (NB. 
t0 = 0), constant for the rest
     # variable time step seems to be initialized at 1 in jump
     # note that ipopt will project the initial guess inside the bounds anyway.
-    tf_init = (tf_l+tf_u)/2
+    tf_init = 2000
     x_init =
         t -> [
             h_t0 + (t - t0) / (tf_init - t0) * (h_tf - h_t0),
diff --git a/ext/OptimalControlModels_s/steering_s.jl b/ext/OptimalControlModels_s/steering_s.jl
index c78da787..047c2efe 100644
--- a/ext/OptimalControlModels_s/steering_s.jl
+++ b/ext/OptimalControlModels_s/steering_s.jl
@@ -85,7 +85,7 @@ function OptimalControlProblems.steering_s(
         end
     end
     xinit = t -> [gen_x0(t, i) for i in 1:4]
-    init = (state=xinit, control=0, variable=1)
+    init = (state=xinit, control=0, variable=0.6)
 
     # discretise the optimal control problem
     docp = direct_transcription(

From 6375ddc9b6d97d5472e240e4db46821827f17ef9 Mon Sep 17 00:00:00 2001
From: Amiel
Date: Sun, 1 Mar 2026 19:54:15 +0100
Subject: [PATCH 9/9] remove problems from the exclusion list

---
 src/OptimalControlProblems.jl | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/src/OptimalControlProblems.jl b/src/OptimalControlProblems.jl
index 9729407d..0ea8a857 100644
--- a/src/OptimalControlProblems.jl
+++ b/src/OptimalControlProblems.jl
@@ -68,10 +68,6 @@ function make_list_of_problems()
         :dielectrophoretic_particle,
         :moonlander,
         :ducted_fan,
-        :insurance,
-        :robot,
-        :space_shuttle,
-        :steering,
     ]
     list_of_problems = setdiff(list_of_problems, problems_to_exclude)
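
The mountain-car description added in this series claims that the car cannot climb directly and must oscillate to pump energy (bang-bang control). As a quick sanity check outside the patch, the following plain-Python sketch simulates the dynamics `v̇ = a·u − b·cos(c·x)` with the parameter values from `ext/MetaData/mountain_car.jl`. The integrator, step size, and the two policies are illustrative choices, and the OCP's position/velocity bounds are deliberately ignored; this only demonstrates the energy argument, not the optimal solution.

```python
import math

# Parameter values from ext/MetaData/mountain_car.jl
a, b, c = 0.001, 0.0025, 3.0
x0, v0, x_goal = -0.5, 0.0, 0.5

def simulate(policy, dt=0.05, steps=200_000):
    """Semi-implicit Euler integration of the mountain-car dynamics.

    Returns (reached_goal, max_position). The OCP's state bounds are
    ignored here; this is only an energy argument.
    """
    x, v = x0, v0
    x_max = x
    for _ in range(steps):
        u = policy(x, v)
        v += (a * u - b * math.cos(c * x)) * dt  # acceleration
        x += v * dt
        x_max = max(x_max, x)
        if x >= x_goal and v >= 0.0:
            return True, x_max
    return False, x_max

# Full throttle toward the goal: the engine (|a*u| <= 0.001) cannot beat
# gravity (up to 0.0025) on the slope, so the car stalls below the target.
direct, x_direct = simulate(lambda x, v: 1.0)

# Energy pumping: push in the direction of motion on every swing
# (bang-bang), so each oscillation adds energy until the car escapes.
pumped, _ = simulate(lambda x, v: 1.0 if v >= 0.0 else -1.0)

print(direct, pumped)
```

Semi-implicit Euler is used because its energy error stays bounded on oscillatory systems, so the "stalls forever" outcome of the constant-throttle policy is not an artifact of numerical energy drift.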