import warnings

import numpy as np
from scipy.optimize import (
    Bounds,
    LinearConstraint,
    NonlinearConstraint,
    OptimizeResult,
)

from .framework import TrustRegion
from .problem import (
    ObjectiveFunction,
    BoundConstraints,
    LinearConstraints,
    NonlinearConstraints,
    Problem,
)
from .utils import (
    MaxEvalError,
    TargetSuccess,
    CallbackSuccess,
    FeasibleSuccess,
    exact_1d_array,
)
from .settings import (
    ExitStatus,
    Options,
    Constants,
    DEFAULT_OPTIONS,
    DEFAULT_CONSTANTS,
    PRINT_OPTIONS,
)


def minimize(
    fun,
    x0,
    args=(),
    bounds=None,
    constraints=(),
    callback=None,
    options=None,
    **kwargs,
):
    r"""
    Minimize a scalar function using the COBYQA method.

    The Constrained Optimization BY Quadratic Approximations (COBYQA) method
    is a derivative-free optimization method designed to solve general
    nonlinear optimization problems. A complete description of COBYQA is
    given in [3]_.

    Parameters
    ----------
    fun : {callable, None}
        Objective function to be minimized.

            ``fun(x, *args) -> float``

        where ``x`` is an array with shape (n,) and `args` is a tuple. If
        `fun` is ``None``, the objective function is assumed to be the zero
        function, resulting in a feasibility problem.
    x0 : array_like, shape (n,)
        Initial guess.
    args : tuple, optional
        Extra arguments passed to the objective function.
    bounds : {`scipy.optimize.Bounds`, array_like, shape (n, 2)}, optional
        Bound constraints of the problem. It can be one of the cases below.

        #. An instance of `scipy.optimize.Bounds`. For the time being, the
           argument ``keep_feasible`` is disregarded, and all the constraints
           are considered unrelaxable and will be enforced.
        #. An array with shape (n, 2). The bound constraints for ``x[i]`` are
           ``bounds[i][0] <= x[i] <= bounds[i][1]``. Set ``bounds[i][0]`` to
           :math:`-\infty` if there is no lower bound, and set
           ``bounds[i][1]`` to :math:`\infty` if there is no upper bound.

        The COBYQA method always respects the bound constraints.
    constraints : {Constraint, list}, optional
        General constraints of the problem. It can be one of the cases below.

        #. An instance of `scipy.optimize.LinearConstraint`. The argument
           ``keep_feasible`` is disregarded.
        #. An instance of `scipy.optimize.NonlinearConstraint`. The arguments
           ``jac``, ``hess``, ``keep_feasible``, ``finite_diff_rel_step``,
           and ``finite_diff_jac_sparsity`` are disregarded.
        #. A list, each of whose elements is described in the cases above.
    callback : callable, optional
        A callback executed at each objective function evaluation. The
        method terminates if a ``StopIteration`` exception is raised by the
        callback function. Its signature can be one of the following:

            ``callback(intermediate_result)``

        where ``intermediate_result`` is a keyword parameter that contains
        an instance of `scipy.optimize.OptimizeResult`, with attributes
        ``x`` and ``fun``, being the point at which the objective function
        is evaluated and the value of the objective function, respectively.
        The name of the parameter must be ``intermediate_result`` for the
        callback to be passed an instance of
        `scipy.optimize.OptimizeResult`.

        Alternatively, the callback function can have the signature:

            ``callback(xk)``

        where ``xk`` is the point at which the objective function is
        evaluated. Introspection is used to determine which of the
        signatures to invoke.
    options : dict, optional
        Options passed to the solver. Accepted keys are:

            disp : bool, optional
                Whether to print information about the optimization
                procedure. Default is ``False``.
            maxfev : int, optional
                Maximum number of function evaluations. Default is
                ``500 * n``.
            maxiter : int, optional
                Maximum number of iterations. Default is ``1000 * n``.
            target : float, optional
                Target on the objective function value. The optimization
                procedure is terminated when the objective function value of
                a feasible point is less than or equal to this target.
                Default is ``-numpy.inf``.
            feasibility_tol : float, optional
                Tolerance on the constraint violation. If the maximum
                constraint violation at a point is less than or equal to
                this tolerance, the point is considered feasible. Default is
                ``numpy.sqrt(numpy.finfo(float).eps)``.
            radius_init : float, optional
                Initial trust-region radius. Typically, this value should be
                in the order of one tenth of the greatest expected change to
                `x0`. Default is ``1.0``.
            radius_final : float, optional
                Final trust-region radius. It should indicate the accuracy
                required in the final values of the variables. Default is
                ``1e-6``.
            nb_points : int, optional
                Number of interpolation points used to build the quadratic
                models of the objective and constraint functions. Default is
                ``2 * n + 1``.
            scale : bool, optional
                Whether to scale the variables according to the bounds.
                Default is ``False``.
            filter_size : int, optional
                Maximum number of points in the filter. The filter is used
                to select the best point returned by the optimization
                procedure. Default is ``sys.maxsize``.
            store_history : bool, optional
                Whether to store the history of the function evaluations.
                Default is ``False``.
            history_size : int, optional
                Maximum number of function evaluations to store in the
                history. Default is ``sys.maxsize``.
            debug : bool, optional
                Whether to perform additional checks during the optimization
                procedure. This option should be used only for debugging
                purposes and is highly discouraged for general users.
                Default is ``False``.

    Other constants (from the keyword arguments) are described below. They
    are not intended to be changed by general users. They should only be
    changed by users with a deep understanding of the algorithm, who want to
    experiment with different settings.
    Returns
    -------
    `scipy.optimize.OptimizeResult`
        Result of the optimization procedure, with the following fields:

            message : str
                Description of the cause of the termination.
            success : bool
                Whether the optimization procedure terminated successfully.
            status : int
                Termination status of the optimization procedure.
            x : `numpy.ndarray`, shape (n,)
                Solution point.
            fun : float
                Objective function value at the solution point.
            maxcv : float
                Maximum constraint violation at the solution point.
            nfev : int
                Number of function evaluations.
            nit : int
                Number of iterations.

        If ``store_history`` is True, the result also has the following
        fields:

            fun_history : `numpy.ndarray`, shape (nfev,)
                History of the objective function values.
            maxcv_history : `numpy.ndarray`, shape (nfev,)
                History of the maximum constraint violations.

        A description of the termination statuses is given below.

        .. list-table::
            :widths: 25 75
            :header-rows: 1

            * - Exit status
              - Description
            * - 0
              - The lower bound for the trust-region radius has been
                reached.
            * - 1
              - The target objective function value has been reached.
            * - 2
              - All variables are fixed by the bound constraints.
            * - 3
              - The callback requested to stop the optimization procedure.
            * - 4
              - The feasibility problem received has been solved
                successfully.
            * - 5
              - The maximum number of function evaluations has been
                exceeded.
            * - 6
              - The maximum number of iterations has been exceeded.
            * - -1
              - The bound constraints are infeasible.
            * - -2
              - A linear algebra error occurred.

    Other Parameters
    ----------------
    decrease_radius_factor : float, optional
        Factor by which the trust-region radius is reduced when the
        reduction ratio is low or negative. Default is ``0.5``.
    increase_radius_factor : float, optional
        Factor by which the trust-region radius is increased when the
        reduction ratio is large. Default is ``numpy.sqrt(2.0)``.
    increase_radius_threshold : float, optional
        Threshold that controls the increase of the trust-region radius when
        the reduction ratio is large. Default is ``2.0``.
    decrease_radius_threshold : float, optional
        Threshold used to determine whether the trust-region radius should
        be reduced to the resolution. Default is ``1.4``.
    decrease_resolution_factor : float, optional
        Factor by which the resolution is reduced when the current value is
        far from its final value. Default is ``0.1``.
    large_resolution_threshold : float, optional
        Threshold used to determine whether the resolution is far from its
        final value. Default is ``250.0``.
    moderate_resolution_threshold : float, optional
        Threshold used to determine whether the resolution is close to its
        final value. Default is ``16.0``.
    low_ratio : float, optional
        Threshold used to determine whether the reduction ratio is low.
        Default is ``0.1``.
    high_ratio : float, optional
        Threshold used to determine whether the reduction ratio is high.
        Default is ``0.7``.
    very_low_ratio : float, optional
        Threshold used to determine whether the reduction ratio is very low.
        This is used to determine whether the models should be reset.
        Default is ``0.01``.
    penalty_increase_threshold : float, optional
        Threshold used to determine whether the penalty parameter should be
        increased. Default is ``1.5``.
    penalty_increase_factor : float, optional
        Factor by which the penalty parameter is increased. Default is
        ``2.0``.
    short_step_threshold : float, optional
        Factor used to determine whether the trial step is too short.
        Default is ``0.5``.
    low_radius_factor : float, optional
        Factor used to determine which interpolation point should be removed
        from the interpolation set at each iteration. Default is ``0.1``.
    byrd_omojokun_factor : float, optional
        Factor by which the trust-region radius is reduced for the
        computations of the normal step in the Byrd-Omojokun composite-step
        approach. Default is ``0.8``.
    threshold_ratio_constraints : float, optional
        Threshold used to determine which constraints should be taken into
        account when decreasing the penalty parameter. Default is ``2.0``.
    large_shift_factor : float, optional
        Factor used to determine whether the point around which the
        quadratic models are built should be updated. Default is ``10.0``.
    large_gradient_factor : float, optional
        Factor used to determine whether the models should be reset. Default
        is ``10.0``.
    resolution_factor : float, optional
        Factor by which the resolution is decreased. Default is ``2.0``.
    improve_tcg : bool, optional
        Whether to improve the steps computed by the truncated conjugate
        gradient method when the trust-region boundary is reached. Default
        is ``True``.

    References
    ----------
    .. [1] J. Nocedal and S. J. Wright. *Numerical Optimization*. Springer
       Ser. Oper. Res. Financ. Eng. Springer, New York, NY, USA, second
       edition, 2006. `doi:10.1007/978-0-387-40065-5
       <https://doi.org/10.1007/978-0-387-40065-5>`_.
    .. [2] M. J. D. Powell. A direct search optimization method that models
       the objective and constraint functions by linear interpolation. In
       S. Gomez and J.-P. Hennart, editors, *Advances in Optimization and
       Numerical Analysis*, volume 275 of Math. Appl., pages 51--67.
       Springer, Dordrecht, Netherlands, 1994.
       `doi:10.1007/978-94-015-8330-5_4
       <https://doi.org/10.1007/978-94-015-8330-5_4>`_.
    .. [3] T. M. Ragonneau. *Model-Based Derivative-Free Optimization
       Methods and Software*. PhD thesis, Department of Applied Mathematics,
       The Hong Kong Polytechnic University, Hong Kong, China, 2022. URL:
       https://theses.lib.polyu.edu.hk/handle/200/12294.

    Examples
    --------
    To demonstrate how to use `minimize`, we first minimize the Rosenbrock
    function implemented in `scipy.optimize` in an unconstrained setting.

    .. testsetup::

        import numpy as np
        np.set_printoptions(precision=3, suppress=True)

    >>> from cobyqa import minimize
    >>> from scipy.optimize import rosen

    To solve the problem using COBYQA, run:

    >>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
    >>> res = minimize(rosen, x0)
    >>> res.x
    array([1., 1., 1., 1., 1.])

    To see how bounds and constraints are handled using `minimize`, we solve
    Example 16.4 of [1]_, defined as

    .. math::

        \begin{aligned}
            \min_{x \in \mathbb{R}^2}   & \quad (x_1 - 1)^2 + (x_2 - 2.5)^2\\
            \text{s.t.}                 & \quad -x_1 + 2x_2 \le 2,\\
                                        & \quad x_1 + 2x_2 \le 6,\\
                                        & \quad x_1 - 2x_2 \le 2,\\
                                        & \quad x_1 \ge 0,\\
                                        & \quad x_2 \ge 0.
        \end{aligned}

    >>> import numpy as np
    >>> from scipy.optimize import Bounds, LinearConstraint

    Its objective function can be implemented as:

    >>> def fun(x):
    ...     return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

    This problem can be solved using `minimize` as:

    >>> x0 = [2.0, 0.0]
    >>> bounds = Bounds([0.0, 0.0], np.inf)
    >>> constraints = LinearConstraint([
    ...     [-1.0, 2.0],
    ...     [1.0, 2.0],
    ...     [1.0, -2.0],
    ... ], -np.inf, [2.0, 6.0, 2.0])
    >>> res = minimize(fun, x0, bounds=bounds, constraints=constraints)
    >>> res.x
    array([1.4, 1.7])
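
    Solver options and a callback can be supplied through the ``options``
    and ``callback`` arguments. The sketch below reuses the objective,
    bounds, and constraints defined above; the names ``history`` and
    ``callback`` are purely illustrative, and since the exact trajectory
    depends on the run, no output is checked:

    >>> history = []
    >>> def callback(intermediate_result):
    ...     history.append(intermediate_result.x)
    >>> res = minimize(
    ...     fun,
    ...     x0,
    ...     bounds=bounds,
    ...     constraints=constraints,
    ...     callback=callback,
    ...     options={"maxfev": 200, "radius_final": 1e-8},
    ... )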

    To see how nonlinear constraints are handled, we solve Problem (F) of
    [2]_, defined as

    .. math::

        \begin{aligned}
            \min_{x \in \mathbb{R}^2}   & \quad -x_1 - x_2\\
            \text{s.t.}                 & \quad x_1^2 - x_2 \le 0,\\
                                        & \quad x_1^2 + x_2^2 \le 1.
        \end{aligned}

    >>> from scipy.optimize import NonlinearConstraint

    Its objective and constraint functions can be implemented as:

    >>> def fun(x):
    ...     return -x[0] - x[1]
    >>>
    >>> def cub(x):
    ...     return [x[0]**2 - x[1], x[0]**2 + x[1]**2]

    This problem can be solved using `minimize` as:

    >>> x0 = [1.0, 1.0]
    >>> constraints = NonlinearConstraint(cub, -np.inf, [0.0, 1.0])
    >>> res = minimize(fun, x0, constraints=constraints)
    >>> res.x
    array([0.707, 0.707])

    Finally, to see how to supply linear and nonlinear constraints
    simultaneously, we solve Problem (G) of [2]_, defined as

    .. math::

        \begin{aligned}
            \min_{x \in \mathbb{R}^3}   & \quad x_3\\
            \text{s.t.}                 & \quad 5x_1 - x_2 + x_3 \ge 0,\\
                                        & \quad -5x_1 - x_2 + x_3 \ge 0,\\
                                        & \quad x_1^2 + x_2^2 + 4x_2 \le x_3.
        \end{aligned}

    Its objective and nonlinear constraint functions can be implemented as:

    >>> def fun(x):
    ...     return x[2]
    >>>
    >>> def cub(x):
    ...     return x[0]**2 + x[1]**2 + 4.0*x[1] - x[2]

    This problem can be solved using `minimize` as:

    >>> x0 = [1.0, 1.0, 1.0]
    >>> constraints = [
    ...     LinearConstraint(
    ...         [[5.0, -1.0, 1.0], [-5.0, -1.0, 1.0]],
    ...         [0.0, 0.0],
    ...         np.inf,
    ...     ),
    ...     NonlinearConstraint(cub, -np.inf, 0.0),
    ... ]
    >>> res = minimize(fun, x0, constraints=constraints)
    >>> res.x
    array([ 0., -3., -3.])
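
    Since passing ``fun=None`` turns the problem into a feasibility problem,
    the constraint data of Problem (G) can be reused to search for a
    feasible point only. This is a sketch: the returned point is one
    feasible point among possibly many, so no output is checked.

    >>> res = minimize(None, x0, constraints=constraints)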
    """


def _get_bounds(bounds, n):
    """
    Uniformize the bounds.
    """
    if bounds is None:
        return Bounds(np.full(n, -np.inf), np.full(n, np.inf))
    elif isinstance(bounds, Bounds):
        if bounds.lb.shape != (n,) or bounds.ub.shape != (n,):
            raise ValueError(f"The bounds must have {n} elements.")
        return Bounds(bounds.lb, bounds.ub)
    elif hasattr(bounds, "__len__"):
        bounds = np.asarray(bounds)
        if bounds.shape != (n, 2):
            raise ValueError(
                "The shape of the bounds is not compatible with the number "
                "of variables."
            )
        return Bounds(bounds[:, 0], bounds[:, 1])
    else:
        raise TypeError(
            "The bounds must be an instance of scipy.optimize.Bounds or an "
            "array-like object."
        )


def _get_constraints(constraints):
    """
    Extract the linear and nonlinear constraints.

    Each constraint must be an instance of
    `scipy.optimize.LinearConstraint`, an instance of
    `scipy.optimize.NonlinearConstraint`, or a dict whose ``"type"`` is
    either ``"eq"`` or ``"ineq"`` and whose ``"fun"`` is callable. A
    `ValueError` is raised if the lower or upper bound of a nonlinear
    constraint is not a vector, if the constraint type is neither ``"eq"``
    nor ``"ineq"``, or if the constraint function is not callable, and a
    `TypeError` is raised for any other constraint object.
    """


def _set_default_options(options, n):
    """
    Set the default options.

    Options that are not supplied receive the defaults documented in
    `minimize`. A `ValueError` is raised if the initial trust-region radius
    is not positive, if the final trust-region radius is negative, if the
    initial trust-region radius is smaller than the final one, or if the
    number of interpolation points is not positive or exceeds
    ``(n + 1) * (n + 2) // 2``.
    """


def _set_default_constants(**kwargs):
    """
    Set the default constants.

    The constants are the keyword arguments of `minimize` described in its
    "Other Parameters" section. Missing constants receive their documented
    default values, and a `ValueError` is raised when a supplied value lies
    outside its admissible range.
    """


def _eval(pb, framework, step, options):
    """
    Evaluate the objective and constraint functions.
    """


def _build_result(pb, penalty, success, status, n_iter, options):
    """
    Build the result of the optimization process.
    """


def _print_step(message, pb, x, fun_val, maxcv_val, n_eval, n_iter):
    """
    Print information about the current state of the optimization process.

    The printed information comprises the number of function evaluations,
    the number of iterations, the least value of the objective function, the
    maximum constraint violation, and the corresponding point.
    """
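

if __name__ == "__main__":
    # Illustrative sketch (not part of the documented API); run it with
    # ``python -m`` from the parent package so the relative imports resolve.
    # It shows how the bound formats accepted by `minimize` (None, a
    # scipy.optimize.Bounds instance, or an (n, 2) array-like) are
    # uniformized by `_get_bounds` into a single scipy.optimize.Bounds
    # instance.
    print(_get_bounds(None, 2))
    print(_get_bounds(Bounds(np.array([0.0, 0.0]), np.array([4.0, 4.0])), 2))
    print(_get_bounds([[0.0, 4.0], [0.0, 4.0]], 2))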