"""Dogleg algorithm with rectangular trust regions for least-squares
minimization.

The description of the algorithm can be found in [Voglis]_. The algorithm does
trust-region iterations, but the shape of trust regions is rectangular as
opposed to conventional elliptical. The intersection of a trust region and an
initial feasible region is again some rectangle. Thus, on each iteration a
bound-constrained quadratic optimization problem is solved.

The quadratic problem is solved by the well-known dogleg approach, where the
function is minimized along a piecewise-linear "dogleg" path [NumOpt]_,
Chapter 4. If the Jacobian is not rank-deficient then the function is
decreasing along this path, and optimization amounts to simply following the
path as long as the point stays within the bounds. A constrained Cauchy step
(along the anti-gradient) is considered for safety in rank-deficient cases;
in these situations convergence might be slow.

If during the iterations some variable hits the initial bound and the
component of the anti-gradient points outside the feasible region, then the
next dogleg step won't make any progress. At this state such variables satisfy
first-order optimality conditions and are excluded before computing the next
dogleg step.

The Gauss-Newton step can be computed exactly by `numpy.linalg.lstsq` (for
dense Jacobian matrices) or by the iterative procedure
`scipy.sparse.linalg.lsmr` (for dense and sparse matrices, or a Jacobian given
as a LinearOperator). The second option allows solving very large problems (up
to a couple of million residuals on a regular PC), provided the Jacobian
matrix is sufficiently sparse. Note, however, that dogbox is not very good at
solving problems with a large number of constraints, because of the variable
exclusion-inclusion on each iteration (the required number of function
evaluations might be high, or the accuracy of the solution will be poor);
thus its large-scale usage is probably limited to unconstrained problems.

References
----------
.. [Voglis] C. Voglis and I. E. Lagaris, "A Rectangular Trust Region Dogleg
            Approach for Unconstrained and Bound Constrained Nonlinear
            Optimization", WSEAS International Conference on Applied
            Mathematics, Corfu, Greece, 2004.
.. [NumOpt] J. Nocedal and S. J. Wright, "Numerical optimization, 2nd edition".
"""
import numpy as np
from numpy.linalg import lstsq, norm

from scipy.sparse.linalg import LinearOperator, aslinearoperator, lsmr
from scipy.optimize import OptimizeResult
from scipy._lib._util import _call_callback_maybe_halt

from .common import (
    step_size_to_bound, in_bounds, update_tr_radius, evaluate_quadratic,
    build_quadratic_1d, minimize_quadratic_1d, compute_grad,
    compute_jac_scale, check_termination, scale_for_robust_loss_function,
    print_header_nonlinear, print_iteration_nonlinear)


def lsmr_operator(Jop, d, active_set):
    """Compute LinearOperator to use in LSMR by dogbox algorithm.

    `active_set` mask is used to exclude active variables from computations
    of matrix-vector products.
    """
    m, n = Jop.shape

    def matvec(x):
        # Zero out active variables before applying the scaled Jacobian.
        x_free = x.ravel().copy()
        x_free[active_set] = 0
        return Jop.matvec(x_free * d)

    def rmatvec(x):
        r = d * Jop.rmatvec(x)
        r[active_set] = 0
        return r

    return LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec, dtype=float)
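
# A minimal sketch (hypothetical helper with assumed inputs) of how the
# operator above can be combined with `lsmr`: solve the Gauss-Newton
# subproblem for a small dense Jacobian while keeping a variable that sits on
# its bound fixed. The matrix `J`, residuals `f`, scaling `d` and `active_set`
# below are made up for illustration and are not part of the solver.
def _lsmr_operator_example():
    J = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0],
                  [2.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
    f = np.array([1.0, -2.0, 0.5, 1.0])           # current residuals
    d = np.ones(3)                                # no variable scaling
    active_set = np.array([False, True, False])   # x[1] sits on its bound

    op = lsmr_operator(aslinearoperator(J), d, active_set)
    gn_h = -lsmr(op, f)[0]       # Gauss-Newton step in the scaled variables
    gn_h[active_set] = 0         # active variables do not move
    return d * gn_h              # step in the original variables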

def find_intersection(x, tr_bounds, lb, ub):
    """Find intersection of trust-region bounds and initial bounds.

    Returns
    -------
    lb_total, ub_total : ndarray with shape of x
        Lower and upper bounds of the intersection region.
    orig_l, orig_u : ndarray of bool with shape of x
        True means that an original bound is taken as a corresponding bound
        in the intersection region.
    tr_l, tr_u : ndarray of bool with shape of x
        True means that a trust-region bound is taken as a corresponding bound
        in the intersection region.
    """
    # Work in coordinates centered at the current point x.
    lb_centered = lb - x
    ub_centered = ub - x

    lb_total = np.maximum(lb_centered, -tr_bounds)
    ub_total = np.minimum(ub_centered, tr_bounds)

    orig_l = np.equal(lb_total, lb_centered)
    orig_u = np.equal(ub_total, ub_centered)

    tr_l = np.equal(lb_total, -tr_bounds)
    tr_u = np.equal(ub_total, tr_bounds)

    return lb_total, ub_total, orig_l, orig_u, tr_l, tr_u


def dogleg_step(x, newton_step, g, a, b, tr_bounds, lb, ub):
    """Find dogleg step in a rectangular region.

    Returns
    -------
    step : ndarray, shape (n,)
        Computed dogleg step.
    bound_hits : ndarray of int, shape (n,)
        Each component shows whether a corresponding variable hits the
        initial bound after the step is taken:

            *  0 - a variable doesn't hit the bound.
            * -1 - lower bound is hit.
            *  1 - upper bound is hit.
    tr_hit : bool
        Whether the step hit the boundary of the trust region.
    """
    lb_total, ub_total, orig_l, orig_u, tr_l, tr_u = find_intersection(
        x, tr_bounds, lb, ub)
    bound_hits = np.zeros_like(x, dtype=int)

    # If the Gauss-Newton step stays inside the intersection region, take it.
    if in_bounds(newton_step, lb_total, ub_total):
        return newton_step, bound_hits, False

    # Constrained Cauchy step along the anti-gradient, then move from it
    # towards the Gauss-Newton step as far as the bounds allow.
    to_bounds, _ = step_size_to_bound(np.zeros_like(x), -g, lb_total, ub_total)
    cauchy_step = -minimize_quadratic_1d(a, b, 0, to_bounds)[0] * g

    step_diff = newton_step - cauchy_step
    step_size, hits = step_size_to_bound(cauchy_step, step_diff,
                                         lb_total, ub_total)
    bound_hits[(hits < 0) & orig_l] = -1
    bound_hits[(hits > 0) & orig_u] = 1
    tr_hit = np.any((hits < 0) & tr_l | (hits > 0) & tr_u)

    return cauchy_step + step_size * step_diff, bound_hits, tr_hit


def dogbox(fun, jac, x0, f0, J0, lb, ub, ftol, xtol, gtol, max_nfev,
           x_scale, loss_function, tr_solver, tr_options, verbose,
           callback=None):
    # Main dogbox trust-region loop. Each iteration excludes variables that
    # satisfy first-order optimality on a bound, computes a Gauss-Newton step
    # for the remaining free variables (dense `lstsq` when tr_solver='exact',
    # `lsmr` through `lsmr_operator` when tr_solver='lsmr'), takes a dogleg
    # step within the rectangular trust region via `dogleg_step`, and updates
    # the radius with `update_tr_radius`. Iterations stop on the
    # ftol/xtol/gtol tests, on `max_nfev`, or when `callback` (invoked through
    # `_call_callback_maybe_halt`) requests termination. The result is an
    # OptimizeResult with fields x, cost, fun, jac, grad, optimality,
    # active_mask, nfev, njev and status.
    ...
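
# A minimal usage sketch (assumed example, not part of the solver): dogbox is
# normally reached through the public `scipy.optimize.least_squares` API with
# method='dogbox'. Here it fits a two-parameter exponential decay subject to
# box constraints on the parameters.
if __name__ == "__main__":
    from scipy.optimize import least_squares

    t = np.linspace(0, 1, 25)
    y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.sin(40 * t)  # synthetic data

    def residuals(p):
        # Residuals of the model p[0] * exp(-p[1] * t) against the data y.
        return p[0] * np.exp(-p[1] * t) - y

    res = least_squares(residuals, [1.0, 1.0],
                        bounds=([0.0, 0.0], [10.0, 10.0]), method='dogbox')
    print(res.x, res.status, res.nfev)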