 

scipy.optimize.minimize('SLSQP') too slow when given 2000 dim variable

I have a non-linear optimization problem with a constraint and upper/lower bounds, so with scipy I have to use SLSQP. The problem is clearly not convex. I got the Jacobians of both the objective and constraint functions working correctly (results are good and fast for input vectors of up to 300 dimensions). All functions are vectorized and tuned to run very fast. The problem is that with a 1000+ dimensional input vector it takes ages, even though I can see the minimizer is not calling my functions much (objective/constraint/gradients) and seems to spend most of its processing time internally. I read somewhere that the performance of SLSQP is O(n^3).

Is there a better/faster SLSQP implementation, or another method for this type of problem in Python? I tried nlopt, but it somehow returns wrong results given the exact same functions I use in scipy (with a wrapper to adapt to its method signature). I also failed to use IPOPT via the pyipopt package; I cannot get working IPOPT binaries to work with the Python wrapper.

UPDATE: if it helps, my input variable is basically a vector of (x,y) tuples, i.e. points on a 2D surface representing coordinates. With 1000 points, I end up with a 2000-dimensional input vector. The function I want to optimize calculates the optimum positions of the points relative to each other, taking into consideration their relationships and other constraints. So the problem is not sparse.
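For reference, the call structure being described looks roughly like the sketch below. The objective here (pull every point toward the origin, keep points inside the unit disk) is a hypothetical stand-in, not the actual relationship-based function from the question; only the way the flattened 2N-vector, bounds, and analytic Jacobians are wired into SLSQP is the point.

```python
import numpy as np
from scipy.optimize import minimize

N = 50  # 50 points -> 100-dimensional flattened variable

def objective(x):
    # stand-in objective: sum of squared coordinates
    return np.sum(x**2)

def objective_jac(x):
    return 2.0 * x

def constraint(x):
    # one inequality per point: 1 - (xi^2 + yi^2) >= 0 (inside unit disk)
    pts = x.reshape(-1, 2)
    return 1.0 - np.sum(pts**2, axis=1)

def constraint_jac(x):
    # Jacobian of the vector constraint: shape (N, 2N)
    pts = x.reshape(-1, 2)
    J = np.zeros((N, 2 * N))
    for i in range(N):
        J[i, 2 * i:2 * i + 2] = -2.0 * pts[i]
    return J

x0 = np.random.default_rng(0).uniform(-0.5, 0.5, size=2 * N)
res = minimize(objective, x0, jac=objective_jac, method="SLSQP",
               bounds=[(-1.0, 1.0)] * (2 * N),
               constraints=[{"type": "ineq", "fun": constraint,
                             "jac": constraint_jac}])
print(res.success, res.fun)
```

Even with everything vectorized like this, the per-iteration cost inside SLSQP itself (dense QP subproblems over the full 2N-dimensional variable) dominates as N grows, which matches the behavior described above.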

Thanks...

Riad Souissi asked Dec 07 '25


1 Answer

We don't know much about the model, but here are some notes:

  1. SLSQP is really designed for small (dense), well-scaled models.
  2. SLSQP is a local solver. It will accept non-convex problems but will only provide local solutions.
  3. I doubt there is such a complexity bound for SLSQP. In any case, an asymptotic bound does not say much about performance on a particular problem.
  4. IPOPT is a large-scale, sparse interior point solver. It will find local solutions. It can solve really large models.
  5. Global solvers like BARON, ANTIGONE and COUENNE find global solutions (or bounds on the quality of the solution if you are impatient and stop prematurely). These solvers are (most of the time) slower than local solvers. I am not aware of direct Python links.
  6. If you have a good starting point, a local solver may be just what you need. Using a multistart strategy can help find better solutions (not proven globally optimal, but you gain some confidence that you have not found a really bad local optimum).
  7. The Python framework PYOMO offers access to many solvers. However, you will need to rewrite the model. PYOMO has automatic differentiation: no need to provide gradients.
  8. For testing you can try to transcribe the model into AMPL or GAMS and solve online via NEOS. This will allow you to try out a number of solvers. Both AMPL and GAMS have automatic differentiation.
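The multistart idea in point 6 can be sketched in a few lines: run the local solver from several random starting points within the bounds and keep the best successful result. The test function (Styblinski-Tang, a standard non-convex function with many local minima) and the number of starts are illustrative choices, not part of the answer above.

```python
import numpy as np
from scipy.optimize import minimize

def styblinski_tang(x):
    # non-convex, separable; one local and one global basin per coordinate
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

def multistart(fun, bounds, n_starts=20, seed=0):
    """Run SLSQP from n_starts random points; return the best success."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(fun, x0, method="SLSQP", bounds=bounds)
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

bounds = [(-5.0, 5.0)] * 4
best = multistart(styblinski_tang, bounds)
print(best.fun)  # global minimum is about -156.66; multistart usually finds it
```

For the 2000-dimensional problem in the question, each local solve is expensive, so the restarts would typically be run in parallel or seeded from a handful of structured starting layouts rather than purely random ones.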
Erwin Kalvelagen answered Dec 09 '25