Conversation

@SebastianM-C (Member) commented Nov 12, 2025

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API

Additional context

This fixes #3939 by adding the tune_parameters option for all dynamic optimization extensions. I also fixed some existing issues with the interface, such as the fact that it ignored any defaults in the model, and added error handling for JuMPDynamicOptProblem for the case where the cost function evaluates variables at points that are not in the support (in that case InfiniteOpt automatically adds new points, which breaks the way we set the constraints for U, since we end up with completely different time steps).
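
For illustration, here's a minimal sketch of the intended usage. The model is a throwaway toy, the problem construction follows the existing dynamic optimization docs, and the exact form of the tune_parameters keyword (Bool vs. a list of parameters) should be treated as an assumption:

```julia
using ModelingToolkit, InfiniteOpt, Ipopt
using ModelingToolkit: t_nounits as t, D_nounits as D

# Toy parameter estimation setup: the decay rate k is the quantity we tune.
@parameters k = 1.0          # numeric parameters are tunable by default
@variables x(..)             # declared callable so the cost can reference it at a fixed time

eqs = [D(x(t)) ~ -k * x(t)]
costs = [(x(1.0) - 0.5)^2]   # match a "measured" value at the final time

@named sys = System(eqs, t; costs)
sys = mtkcompile(sys)

# tune_parameters is the option added in this PR; the exact form may differ.
iprob = InfiniteOptDynamicOptProblem(sys, [x(t) => 1.0], (0.0, 1.0);
                                     dt = 0.01, tune_parameters = true)
isol = solve(iprob, InfiniteOptCollocation(Ipopt.Optimizer))
```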

In terms of support, I'd say the best extension is InfiniteOptDynamicOptProblem, since it directly uses the InfiniteOpt representation and can automatically handle non-uniform data evaluation. JuMPDynamicOptProblem & CasADiDynamicOptProblem would be next, with CasADi ranked lower because, for some reason, it doesn't work when used in the VS Code REPL. The same happens for PyomoDynamicOptProblem, which additionally crashes Julia if it tries to evaluate a variable at a point that's not in the collocation points.

Another issue I noticed is that solve(jprob, JuMPCollocation(Ipopt.Optimizer, constructTsitouras5())) takes a very long time if we increase the number of collocation points by decreasing dt. I haven't profiled it yet, but I suspect the cost is in how we add the constraints based on the tableau & the RGF tracing.
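
The slowdown shows up with something as simple as the loop below (a rough sketch reusing sys and x from the snippet above; constructTsitouras5 is just the tableau constructor quoted here, imported from wherever your environment provides it):

```julia
using JuMP, Ipopt

# Rough timing loop: build + solve time grows quickly as dt shrinks.
for dt in (0.1, 0.01, 0.001)
    jprob = JuMPDynamicOptProblem(sys, [x(t) => 1.0], (0.0, 1.0); dt = dt)
    @time solve(jprob, JuMPCollocation(Ipopt.Optimizer, constructTsitouras5()))
end
```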

One thing that's missing from the interface is the ability to set box constraints on the parameters that we tune, but I think that should go in a separate PR, as I have some more interface questions related to it. In particular, how does one set the bounds after model creation? In most cases, if you want to reuse a model, you only know the bounds on the parameters to optimize when you are building the dynamic optimization problem. We can change the bounds of a parameter in an out-of-place way, but as far as I can tell, there's no way to put the parameter back into the system after changing its bounds. We can't use extend either, since the base system has priority.
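
To make the out-of-place issue concrete, a small sketch (this assumes bounds live in symbolic metadata under ModelingToolkit.VariableBounds and uses setmetadata directly; that's my reading of the internals, not a public API):

```julia
using ModelingToolkit, Symbolics

@parameters k [bounds = (0.0, 10.0)]
ModelingToolkit.getbounds(k)    # (0.0, 10.0)

# Out of place: this returns a *new* symbolic k with tighter bounds; a system
# already built with the old k is unaffected, and there's no obvious way to
# swap the updated parameter back into that system (extend doesn't help,
# since the base system wins).
k_tight = Symbolics.setmetadata(Symbolics.unwrap(k),
                                ModelingToolkit.VariableBounds, (0.0, 2.0))
```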

On a similar note, we also only know the tunable parameters & the cost function after system construction, so setting up the dynamic optimization problem is a bit involved. Would it make sense to have some utility functions that set up the search space from a vector of pairs? I.e. something like [sys.x => (1, 2), sys.y => (2, 3)] would set the tunables to [x, y] and set the bounds to the appropriate values.
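
Something along these lines is what I have in mind; search_space is a hypothetical helper, not an existing API:

```julia
# Hypothetical helper (name and return shape made up): split
# `parameter => (lb, ub)` pairs into tunables and matching bound vectors.
function search_space(pairs::AbstractVector{<:Pair})
    tunables = [first(p) for p in pairs]
    lb = [last(p)[1] for p in pairs]
    ub = [last(p)[2] for p in pairs]
    return (; tunables, lb, ub)
end

# search_space([sys.x => (1, 2), sys.y => (2, 3)])
# -> tunables = [sys.x, sys.y], lb = [1, 2], ub = [2, 3]
```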

Also, regarding the interface, it's a bit strange that the solution does not look like any other SciML solution: it contains 2 separate solutions, and it doesn't have retcodes or parameters directly accessible at the top level.

Other missing pieces include more solver options that can be passed to the underlying optimizer, but those can be added once the more important aspects are clarified.

cc @baggepinnen

SebastianM-C and others added 2 commits November 18, 2025 05:48
When we have more collocation points than expected, the constraints are wrong
@SebastianM-C marked this pull request as ready for review November 18, 2025 04:29
