API reference

class simplenlopt.OptimizeResult

Bases: dict

Represents the optimization result.

x (ndarray) – The solution of the optimization.

success (bool) – Whether or not the optimizer exited successfully.

message (str) – Description of the cause of the termination.

fun (float) – Value of the objective function.

nfev (int) – Number of evaluations of the objective function.
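
Because OptimizeResult subclasses dict, its fields can be read either as attributes or as dictionary keys. A minimal sketch (the quadratic objective is only a placeholder):

>>> import numpy as np
>>> import simplenlopt
>>> res = simplenlopt.minimize(lambda x: (x[0] - 1.0)**2, np.array([0.0]))
>>> res.x          # attribute-style access to the solution
>>> res['fun']     # equivalent dict-style access to the objective value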

simplenlopt.auglag(fun, x0, args=(), method='auto', jac=None, bounds=None, constraints=(), penalize_inequalities=True, ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, solver_options={})

Constrained local minimization via the augmented Lagrangian method

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • x0 (ndarray) – Starting guess for the decision variable

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • method (string, optional, default 'auto') –

    Optimization algorithm to use. Should be one of

    • ’lbfgs’: Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm

    • ’slsqp’: Sequential least squares programming

    • ’mma’: Method of moving asymptotes

    • ’ccsaq’: Conservative convex separable approximation

    • ’tnewton’: Truncated Newton

    • ’tnewton_restart’: Truncated Newton with restarting

    • ’tnewton_precond’: Truncated Newton with preconditioning

    • ’tnewton_precond_restart’: Truncated Newton with preconditioning and restarting

    • ’var1’: Shifted limited-memory variable metric with rank-1 method

    • ’var2’: Shifted limited-memory variable metric with rank-2 method

    • ’bobyqa’: Bounded optimization by quadratic approximation

    • ’cobyla’: Constrained optimization by linear approximation

    • ’neldermead’: Nelder-Mead optimization

    • ’sbplx’: Subplex algorithm

    • ’praxis’: Principal Axis algorithm

    • ’auto’

    See NLopt documentation for a detailed description of these methods.

    If ‘auto’, a suitable solver is chosen based on the availability of gradient information and on whether inequality constraints should also be penalized:

    jac != None and penalize_inequalities=True -> ‘lbfgs’

    jac != None and penalize_inequalities=False -> ‘mma’

    jac == None and penalize_inequalities=True -> ‘bobyqa’

    jac == None and penalize_inequalities=False -> ‘cobyla’

  • jac ({callable, '2-point', '3-point', 'NLOpt', bool}, optional, default None) –

    If callable, must be in the form jac(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function.

    If ‘2-point’, will use forward differences to approximate the gradient.

    If ‘3-point’, will use central differences to approximate the gradient.

    If ‘NLOpt’, must be in the form jac(x, grad, *args), where x is the argument in the form of a 1-D array, grad a 1-D array containing the gradient and args is a tuple of any additional fixed parameters needed to completely specify the function.

  • bounds (tuple of array-like, optional, default None) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x).

  • constraints (list, optional, default ()) – List of constraint functions. Each constraint must be given as a function f(x) encoding the condition f(x) <= 0.

  • penalize_inequalities (bool, optional, default True) –

    If True, also penalizes violation of inequality constraints (NLopt code: AUGLAG).

    If False, only penalizes violation of equality constraints (NLopt code: AUGLAG_EQ).

    In this case, the chosen method must be able to handle inequality constraints itself.

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations.

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

References:

Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint, “A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,” SIAM J. Numer. Anal. vol. 28, no. 2, p. 545-572 (1991)

E. G. Birgin and J. M. Martínez, “Improving ultimate convergence of an augmented Lagrangian method,” Optimization Methods and Software vol. 23, no. 2, p. 177-195 (2008)

The augmented Lagrangian method transforms a constrained optimization problem into an unconstrained one by adding penalty terms for constraint violations.
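
As an illustration, a minimal sketch of a constrained minimization with auglag, assuming the same dictionary-based constraint format demonstrated in the minimize example below; the objective and constraint are only placeholders:

>>> import numpy as np
>>> import simplenlopt
>>> fun = lambda x: x[0]**2 + x[1]**2
>>> cons = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1}]  # encodes x0 + x1 >= 1
>>> res = simplenlopt.auglag(fun, x0=np.array([5.0, 5.0]), constraints=cons)
>>> res.x.round(2)  # expected to lie near [0.5, 0.5]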

simplenlopt.crs(fun, bounds, args=(), x0='random', population=None, ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, solver_options={})

Global optimization via Controlled Random Search with local mutation

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • bounds (tuple of array-like) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x). len(bounds) is used to determine the number of parameters in x.

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • x0 ({ndarray, 'random'}, optional, default 'random') – Initial parameter vector guess. If ndarray, must be a 1-D array. If ‘random’, picks a random initial guess in the feasible region.

  • population (int, optional, default None) – Population size. If None, will use NLopt’s default population size.

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations. If ‘auto’, set to 1,000 * dimensions

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

References:

P. Kaelo and M. M. Ali, “Some variants of the controlled random search algorithm for global optimization,” J. Optim. Theory Appl. 130 (2), 253-264 (2006)

W. L. Price, “Global optimization by controlled random search,” J. Optim. Theory Appl. 40 (3), p. 333-348 (1983)

W. L. Price, “A controlled random search procedure for global optimization,” in Towards Global Optimization 2, p. 71-84 edited by L. C. W. Dixon and G. P. Szego (North-Holland Press, Amsterdam, 1978)
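
A minimal sketch of a bound-constrained global search with crs; the Styblinski-Tang test function is only a placeholder objective:

>>> import numpy as np
>>> import simplenlopt
>>> def styblinski_tang(x):
...     return 0.5 * np.sum(x**4 - 16 * x**2 + 5 * x)
>>> bounds = [(-5.0, 5.0), (-5.0, 5.0)]
>>> res = simplenlopt.crs(styblinski_tang, bounds)
>>> res.x  # the global minimum lies near [-2.90, -2.90]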

simplenlopt.curve_fit(f, xdata, ydata, p0=None, sigma=None, bounds=None, loss='squared', method='auto', jac=None, f_scale=1, **minimize_kwargs)

Curve fitting using NLopt’s local optimizers in SciPy style

Parameters
  • f (callable) – Must be of the form f(xdata, *params)

  • xdata (ndarray (n, )) – Predictor variable values

  • ydata (ndarray (n, )) – Response variable values

  • p0 (ndarray (m, ), optional) – Initial guess for the m fit parameters. If None, defaults to 1 for all parameters

  • sigma (ndarray (n, ), optional) – Typically uncertainties for each data point. The objective will be multiplied by 1/sigma

  • bounds (two-tuple of array-like, optional) – Determines the bounds on the fitting parameters ([lower_bounds], [upper_bounds])

  • loss (string, optional, default 'squared') –

    Should be one of

    • ’squared’: minimizes sum(residual**2)

    • ’absolute’: minimizes sum(abs(residual))

    • ’cauchy’: minimizes sum(f_scale**2 * ln(1 + residual**2/f_scale**2))

  • method (string or 'auto', optional, default 'auto') –

    Optimization algorithm to use.

    Local optimizers:

    • ’lbfgs’

    • ’slsqp’

    • ’mma’

    • ’ccsaq’

    • ’tnewton’

    • ’tnewton_restart’

    • ’tnewton_precond’

    • ’tnewton_precond_restart’

    • ’var1’

    • ’var2’

    • ’bobyqa’

    • ’cobyla’

    • ’neldermead’

    • ’sbplx’

    • ’praxis’

    Global optimizers require bounds != None. Possible algorithms:

    • ’crs’

    • ’direct’

    • ’esch’

    • ’isres’

    • ’stogo’

    • ’mlsl’

    If ‘auto’, defaults to ‘slsqp’ if jac != None and ‘bobyqa’ if jac == None

  • jac (callable, optional) – Must be of the form jac(xdata) and return an N x m numpy array for N data points and m fitting parameters

Returns

  • popt (ndarray (m, )) – Array of best fit parameters.

  • pcov (ndarray (m, m)) – Approximated covariance matrix of the fit parameters. To compute one standard deviation errors on the parameters use perr = np.sqrt(np.diag(pcov)). In general, these are not very accurate estimates of the parameter uncertainty as the method relies on approximations.

Raises

ValueError – if either ydata or xdata contain NaNs, or if incompatible options are used.
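
A minimal sketch of a SciPy-style fit with curve_fit; the exponential model and noise level are only placeholders:

>>> import numpy as np
>>> import simplenlopt
>>> def model(x, a, b):
...     return a * np.exp(-b * x)
>>> xdata = np.linspace(0, 4, 50)
>>> rng = np.random.default_rng(seed=0)
>>> ydata = model(xdata, 2.5, 1.3) + 0.05 * rng.normal(size=xdata.size)
>>> popt, pcov = simplenlopt.curve_fit(model, xdata, ydata, p0=np.array([1.0, 1.0]))
>>> perr = np.sqrt(np.diag(pcov))  # one-standard-deviation parameter errors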

simplenlopt.direct(fun, bounds, args=(), locally_biased=True, scale=True, randomize=False, ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval=None, maxtime=None, solver_options={})

Global optimization via variants of the DIviding RECTangles (DIRECT) algorithm

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • bounds (tuple of array-like) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x). len(bounds) is used to determine the number of parameters in x.

  • args (list, optional, default ()) – Further arguments to describe the objective function

  • locally_biased (boolean, optional, default True) – If True, uses the locally biased variant of DIRECT known as DIRECT_L

  • scale (boolean, optional, default True) – If True, scales the parameter space to a hypercube of length 1 in all dimensions

  • randomize (boolean, optional, default False) – If True, randomizes the algorithm by partly randomizing which side of the hyperrectangle is halved

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default None) – Maximal number of function evaluations. If ‘auto’, set to 1,000 * dimensions

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

References:

D. R. Jones, C. D. Perttunen, and B. E. Stuckmann, “Lipschitzian optimization without the Lipschitz constant,” J. Optimization Theory and Applications, vol. 79, p. 157 (1993)

J. M. Gablonsky and C. T. Kelley, “A locally-biased form of the DIRECT algorithm,” J. Global Optimization, vol. 21 (1), p. 27-37 (2001)

By default, the locally biased variant (NLopt code “DIRECT_L”) is used. For objective functions with many local minima, setting locally_biased=False, which calls the DIRECT algorithm without local bias, is advisable.
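
A minimal sketch using direct with the local bias switched off, as recommended above for functions with many minima; Himmelblau's function is only a placeholder objective:

>>> import simplenlopt
>>> def himmelblau(x):
...     return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
>>> bounds = [(-5.0, 5.0), (-5.0, 5.0)]
>>> res = simplenlopt.direct(himmelblau, bounds, locally_biased=False)
>>> res.fun  # Himmelblau's function has four global minima, all with value 0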

simplenlopt.esch(fun, bounds, args=(), x0='random', population=None, ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, solver_options={})

Global optimization via a Differential Evolution variant

Note

ESCH does not seem to respect the relative and absolute convergence criteria. By default, it will always run for the maximal number of iterations.

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • bounds (tuple of array-like) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x). len(bounds) is used to determine the number of parameters in x.

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • x0 ({ndarray, 'random'}, optional, default 'random') –

    Initial parameter vector guess.

    If ndarray, must be a 1-D array.

    if ‘random’, picks a random initial guess in the feasible region.

  • population (int, optional, default None) – Population size. If None, will use NLopt’s default population size.

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations. If ‘auto’, set to 1,000 * dimensions

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

Reference: C. H. da Silva Santos, “Parallel and Bio-Inspired Computing Applied to Analyze Microwave and Photonic Metamaterial Structures,” Ph.D. thesis, University of Campinas (2010)
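
Since ESCH effectively runs until maxeval is exhausted (see the note above), the evaluation budget is the main control knob. A minimal sketch with the Rastrigin test function as a placeholder objective:

>>> import numpy as np
>>> import simplenlopt
>>> def rastrigin(x):
...     return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
>>> bounds = [(-5.12, 5.12), (-5.12, 5.12)]
>>> res = simplenlopt.esch(rastrigin, bounds, maxeval=10000)
>>> res.x  # the global minimum is at the origin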

simplenlopt.global_optimizers_info()

Prints a short summary of NLopt’s global optimizers

simplenlopt.isres(fun, bounds, args=(), constraints=[], x0='random', population=None, ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, solver_options={})

Global optimization via the Improved Stochastic Ranking Evolution Strategy

Note

ISRES does not seem to respect the relative and absolute convergence criteria. By default, it will always run for the maximal number of iterations.

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • bounds (tuple of array-like) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x). len(bounds) is used to determine the number of parameters in x.

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • constraints (list, optional, default []) – List of constraint functions. Each constraint must be given as a function f(x) encoding the condition f(x) <= 0.

  • x0 ({ndarray, 'random'}, optional, default 'random') – Initial parameter vector guess. If ndarray, must be a 1-D array. If ‘random’, picks a random initial guess in the feasible region.

  • population (int, optional, default None) – Population size. If None, will use NLopt’s default population size.

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations. If ‘auto’, set to 1,000 * dimensions

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

References:

Thomas Philip Runarsson and Xin Yao, “Search biases in constrained evolutionary optimization,” IEEE Trans. on Systems, Man, and Cybernetics Part C: Applications and Reviews, vol. 35 (no. 2), pp. 233-243 (2005)

Thomas P. Runarsson and Xin Yao, “Stochastic ranking for constrained evolutionary optimization,” IEEE Trans. Evolutionary Computation, vol. 4 (no. 3), pp. 284-294 (2000)
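
A minimal constrained sketch with isres, again assuming the dictionary-based constraint format demonstrated in the minimize example below; objective, constraint, and budget are placeholders:

>>> import simplenlopt
>>> fun = lambda x: x[0]**2 + x[1]**2
>>> cons = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1}]  # encodes x0 + x1 >= 1
>>> bounds = [(-2.0, 2.0), (-2.0, 2.0)]
>>> res = simplenlopt.isres(fun, bounds, constraints=cons, maxeval=20000)
>>> res.x.round(2)  # expected to lie near [0.5, 0.5]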

simplenlopt.local_optimizers_info()

Prints a short summary of NLopt’s local optimizers

simplenlopt.minimize(fun, x0, args=(), method='auto', jac=None, bounds=None, constraints=[], ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, solver_options={})

Local minimization function for NLopt’s algorithms in SciPy style

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • x0 (ndarray) – Starting guess for the decision variable

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • method (string, optional, default 'auto') –

    Optimization algorithm to use. Should be one of

    • ’lbfgs’: Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm

    • ’slsqp’: Sequential least squares programming

    • ’mma’: Method of moving asymptotes

    • ’ccsaq’: Conservative convex separable approximation

    • ’tnewton’: Truncated Newton

    • ’tnewton_restart’: Truncated Newton with restarting

    • ’tnewton_precond’: Truncated Newton with preconditioning

    • ’tnewton_precond_restart’: Truncated Newton with preconditioning and restarting

    • ’var1’: Shifted limited-memory variable metric with rank-1 method

    • ’var2’: Shifted limited-memory variable metric with rank-2 method

    • ’bobyqa’: Bounded optimization by quadratic approximation

    • ’cobyla’: Constrained optimization by linear approximation

    • ’neldermead’: Nelder-Mead optimization

    • ’sbplx’: Subplex algorithm

    • ’praxis’: Principal Axis algorithm

    • ’auto’

    See NLopt documentation for a detailed description of these methods.

    If ‘auto’, will be set to ‘bobyqa’/’cobyla’ if jac == None (‘cobyla’ if constraints are set) or ‘lbfgs’/’slsqp’ if jac != None (‘slsqp’ if constraints are set).

    If the chosen method does not support the required constraints, the augmented Lagrangian method is called to handle them.

  • jac ({callable, '2-point', '3-point', 'NLOpt', bool}, optional, default None) –

    If callable, must be in the form jac(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function.

    If ‘2-point’, will use forward differences to approximate the gradient.

    If ‘3-point’, will use central differences to approximate the gradient.

    If ‘NLOpt’, must be in the form jac(x, grad, *args), where x is the argument in the form of a 1-D array, grad a 1-D array containing the gradient and args is a tuple of any additional fixed parameters needed to completely specify the function.

  • bounds (tuple of array-like, optional, default None) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x).

  • constraints (list, optional, default []) – List of constraint functions. Each constraint must be given as a function f(x) encoding the condition f(x) <= 0.

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations.

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Examples

>>> import numpy as np
>>> from simplenlopt import minimize
>>> from scipy.optimize import rosen, rosen_der
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='lbfgs', jac=rosen_der)
>>> res.success
True
>>> res.message
'Success'
>>> np.isclose(res.fun, 0)
True
>>> res.x
array([ 1.,  1.,  1.,  1.,  1.])
>>> res = minimize(rosen, x0, method='lbfgs', jac=rosen_der,
...                ftol_abs=1e-5)
>>> res.success
True
>>> res.message
'Optimization stopped because ftol_rel or ftol_abs (above) was reached.'
>>> res = minimize(rosen, x0, method='ld_lbfgs', jac=rosen_der, foo=3)
Traceback (most recent call last):
    ...
ValueError: Parameter foo could not be recognized.
>>> x0 = np.array([-1., 1.])
>>> fun = lambda x: - 2*x[0]*x[1] - 2*x[0] + x[0]**2 + 2*x[1]**2
>>> dfun = lambda x: np.array([2*x[0] - 2*x[1] - 2, - 2*x[0] + 4*x[1]])
>>> cons = [{'type': 'eq',
...           'fun': lambda x: x[0]**3 - x[1],
...           'jac': lambda x: np.array([3.*(x[0]**2.), -1.])},
...         {'type': 'ineq',
...           'fun': lambda x: x[1] - 1,
...           'jac': lambda x: np.array([0., 1.])}]
>>> res = minimize(fun, x0, jac=dfun, method='slsqp', constraints=cons)
>>> res.success
False
>>> res.message
'Halted because roundoff errors limited progress. (In this case, the optimization still typically returns a useful result.)'
>>> res.x.round(2)
array([ 0.84,  0.6 ])

simplenlopt.mlsl(fun, bounds, args=(), jac=None, x0='random', sobol_sampling=True, population=4, local_minimizer='auto', ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, local_minimizer_options={}, solver_options={})

Global optimization via MultiLevel Single Linkage (MLSL)

Note

MLSL does not seem to respect the relative and absolute convergence criteria. By default, it will always run for the maximal number of iterations.

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • bounds (tuple of array-like) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x). len(bounds) is used to determine the number of parameters in x.

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • jac ({callable, '2-point', '3-point', 'NLOpt', bool}, optional, default None) –

    If callable, must be in the form jac(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function.

    If ‘2-point’, will use forward differences to approximate the gradient.

    If ‘3-point’, will use central differences to approximate the gradient.

    If ‘NLOpt’, must be in the form jac(x, grad, *args), where x is the argument in the form of a 1-D array, grad a 1-D array containing the gradient and args is a tuple of any additional fixed parameters needed to completely specify the function.

  • x0 ({ndarray, 'random'}, optional, default 'random') – Initial parameter vector guess. If ndarray, must be a 1-D array. If ‘random’, picks a random initial guess in the feasible region.

  • sobol_sampling (bool, optional, default True) – If True, starting points for local minimizations are sampled from a Sobol sequence.

  • population (int, optional, default 4) – Number of local searches per iteration.

  • local_minimizer (string, optional, default 'auto') –

    Local optimization algorithm to use. Should be one of

    • ’lbfgs’: Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm

    • ’slsqp’: Sequential least squares programming

    • ’mma’: Method of moving asymptotes

    • ’ccsaq’: Conservative convex separable approximation

    • ’tnewton’: Truncated Newton

    • ’tnewton_restart’: Truncated Newton with restarting

    • ’tnewton_precond’: Truncated Newton with preconditioning

    • ’tnewton_precond_restart’: Truncated Newton with preconditioning and restarting

    • ’var1’: Shifted limited-memory variable metric with rank-1 method

    • ’var2’: Shifted limited-memory variable metric with rank-2 method

    • ’bobyqa’: Bounded optimization by quadratic approximation

    • ’cobyla’: Constrained optimization by linear approximation

    • ’neldermead’: Nelder-Mead optimization

    • ’sbplx’: Subplex algorithm

    • ’praxis’: Principal Axis algorithm

    • ’auto’

    See NLopt documentation for a detailed description of these methods.

    If ‘auto’, will default to ‘lbfgs’ if jac != None and ‘bobyqa’ if jac == None

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations. If ‘auto’, set to 1,000 * dimensions for gradient-based local optimizers and 10,000 * dimensions for gradient-free local optimizers

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated

  • local_minimizer_options (dict) – Further options supplied to the local minimizer

  • solver_options (dict) – Further options supplied to the global MLSL minimizer

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

References:

A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic global optimization methods,” Mathematical Programming, vol. 39, p. 27-78 (1987)

Sergei Kucherenko and Yury Sytsko, “Application of deterministic low-discrepancy sequences in global optimization,” Computational Optimization and Applications, vol. 30, p. 297-318 (2005)
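
A minimal sketch of mlsl with a gradient-based local minimizer, reusing SciPy's Rosenbrock helpers from the minimize example above as a placeholder problem:

>>> import simplenlopt
>>> from scipy.optimize import rosen, rosen_der
>>> bounds = [(-2.0, 2.0), (-2.0, 2.0), (-2.0, 2.0)]
>>> res = simplenlopt.mlsl(rosen, bounds, jac=rosen_der, local_minimizer='lbfgs')
>>> res.x  # the Rosenbrock minimum is at [1, 1, 1]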

simplenlopt.stogo(fun, bounds, args=(), jac=None, x0='random', randomize=False, ftol_rel=1e-08, xtol_rel=1e-06, ftol_abs=1e-14, xtol_abs=1e-08, maxeval='auto', maxtime=None, solver_options={})

Global optimization via STOchastic Global Optimization (STOGO)

Note

STOGO does not seem to respect the relative and absolute convergence criteria. By default, it will always run for the maximal number of iterations.

Parameters
  • fun (callable) – The objective function to be minimized. Must be in the form fun(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function

  • bounds (tuple of array-like) – Bounds for variables. (min, max) pairs for each element in x, defining the finite lower and upper bounds for the optimizing argument of fun. It is required to have len(bounds) == len(x). len(bounds) is used to determine the number of parameters in x.

  • args (tuple, optional, default ()) – Further arguments to describe the objective function

  • jac ({callable, '2-point', '3-point', 'NLOpt', bool}, optional, default None) –

    If callable, must be in the form jac(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function.

    If ‘2-point’, will use forward differences to approximate the gradient.

    If ‘3-point’, will use central differences to approximate the gradient.

    If ‘NLOpt’, must be in the form jac(x, grad, *args), where x is the argument in the form of a 1-D array, grad a 1-D array containing the gradient and args is a tuple of any additional fixed parameters needed to completely specify the function.

  • x0 ({ndarray, 'random'}, optional, default 'random') –

    Initial parameter vector guess.

    If ndarray, must be a 1-D array.

    If ‘random’, picks a random initial guess in the feasible region.

  • randomize (bool, optional, default False) – If True, randomizes the branching process

  • ftol_rel (float, optional, default 1e-8) – Relative function tolerance to signal convergence

  • xtol_rel (float, optional, default 1e-6) – Relative parameter vector tolerance to signal convergence

  • ftol_abs (float, optional, default 1e-14) – Absolute function tolerance to signal convergence

  • xtol_abs (float, optional, default 1e-8) – Absolute parameter vector tolerance to signal convergence

  • maxeval ({int, 'auto'}, optional, default 'auto') – Maximal number of function evaluations. If ‘auto’, set to 1,000 * dimensions

  • maxtime (float, optional, default None) – Maximum absolute time until the optimization is terminated.

  • solver_options (dict, optional, default {}) – Dictionary of additional options supplied to the solver.

Returns

result – The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, fun the value of the function at the solution, and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Return type

OptimizeResult

Notes

References:

S. Zertchaninov and K. Madsen, “A C++ Programme for Global Optimization,” IMM-REP-1998-04, Department of Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark, 1998

S. Gudmundsson, “Parallel Global Optimization,” M.Sc. Thesis, IMM, Technical University of Denmark, 1998
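
A minimal sketch of stogo supplying an analytical gradient via jac, again reusing SciPy's Rosenbrock helpers as a placeholder problem:

>>> import simplenlopt
>>> from scipy.optimize import rosen, rosen_der
>>> bounds = [(-2.0, 2.0), (-2.0, 2.0)]
>>> res = simplenlopt.stogo(rosen, bounds, jac=rosen_der)
>>> res.x  # expected to lie near [1, 1]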