
How to choose proper method for scipy.optimize.minimize?

How do I choose the best minimization method for scipy.optimize.minimize, and how different can the results from the various methods be?

I am trying to minimize the following expression (solve for g):

|a1*g*x + a2*g*x^3 - K|

asked Dec 03 '25 by Shannon

1 Answer

The Scipy Lecture Notes have a chapter on Mathematical Optimization with a section on choosing a minimization method. A snippet from that section:

Without knowledge of the gradient:

  • In general, prefer BFGS or L-BFGS, even if you have to approximate the gradient numerically. These are also the defaults when you omit the method parameter, depending on whether the problem has constraints or bounds.
  • On well-conditioned problems, Powell and Nelder-Mead, both gradient-free methods, work well in high dimensions, but they collapse on ill-conditioned problems.

With knowledge of the gradient:

  • BFGS or L-BFGS.
  • The computational overhead of BFGS is larger than that of L-BFGS, which is itself larger than that of conjugate gradient. On the other hand, BFGS usually needs fewer function evaluations than CG. Thus the conjugate-gradient method is better than BFGS for optimizing computationally cheap functions.

With the Hessian:

  • If you can compute the Hessian, prefer the Newton method (Newton-CG or TCG).

If you have noisy measurements:

  • Use Nelder-Mead or Powell.
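As a quick illustration of the guidance above, here is a minimal sketch (using scipy's built-in Rosenbrock test function, not the asker's problem) showing that minimize picks BFGS by default for an unconstrained problem with no supplied gradient, and how to request a method explicitly:

```python
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

# Unconstrained, no gradient supplied: the default method is BFGS,
# which approximates the gradient by finite differences.
res_default = minimize(rosen, x0)

# The same problem with explicit method choices:
res_bfgs = minimize(rosen, x0, method="BFGS")
res_nm = minimize(rosen, x0, method="Nelder-Mead")

# All approaches should end up near the Rosenbrock minimum at (1, 1, ..., 1),
# though the gradient-free Nelder-Mead may need more iterations to get there.
print(res_default.x)
```

Comparing res.fun and res.nfev across methods on your own objective is a cheap way to see how different the results (and costs) really are.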

If I have interpreted your equation correctly, I think that either BFGS or L-BFGS might work for you.
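For your expression specifically, one practical wrinkle: the absolute value has a kink at its minimum, which can slow gradient-based methods like BFGS. Minimizing the squared residual instead gives a smooth objective with the same minimizer. A minimal sketch, with made-up values for a1, a2, x, and K (substitute your own):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical constants; replace with your actual values.
a1, a2, x, K = 1.0, 0.5, 2.0, 3.0

def objective(g):
    g = g[0]  # minimize passes a 1-d array even for a scalar unknown
    # Squared residual of a1*g*x + a2*g*x**3 - K: smooth, same minimizer
    # as the absolute value.
    return (a1 * g * x + a2 * g * x**3 - K) ** 2

res = minimize(objective, x0=[0.0], method="BFGS")
print(res.x)  # the g that makes the expression (nearly) zero
```

Since the expression is linear in g, the exact answer here is g = K / (a1*x + a2*x^3), which is handy for checking that the solver converged.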

answered Dec 06 '25 by Avi Vajpeyi


