"""GraphicalLasso: sparse inverse covariance estimation with an l1-penalized
estimator.
"""

import operator
import sys
import time
import warnings
from numbers import Integral, Real

import numpy as np
from scipy import linalg

from ..base import _fit_context
from ..exceptions import ConvergenceWarning
from ..linear_model import _cd_fast as cd_fast
from ..linear_model import lars_path_gram
from ..model_selection import check_cv, cross_val_score
from ..utils import Bunch
from ..utils._param_validation import Interval, StrOptions, validate_params
from ..utils.metadata_routing import (
    MetadataRouter,
    MethodMapping,
    _raise_for_params,
    _routing_enabled,
    process_routing,
)
from ..utils.parallel import Parallel, delayed
from ..utils.validation import (
    _is_arraylike_not_scalar,
    check_random_state,
    check_scalar,
    validate_data,
)
from . import EmpiricalCovariance, empirical_covariance, log_likelihood


def _objective(mle, precision_, alpha):
    """Evaluation of the graphical-lasso objective function

    The objective function is made of a shifted, scaled version of the
    normalized log-likelihood (i.e. its empirical mean over the samples) and a
    penalisation term to promote sparsity.
    """
    p = precision_.shape[0]
    cost = -2.0 * log_likelihood(mle, precision_) + p * np.log(2 * np.pi)
    cost += alpha * (np.abs(precision_).sum() - np.abs(np.diag(precision_)).sum())
    return cost


def _dual_gap(emp_cov, precision_, alpha):
    """Expression of the dual gap convergence criterion

    The specific definition is given in Duchi "Projected Subgradient Methods
    for Learning Sparse Gaussians".
    """
    gap = np.sum(emp_cov * precision_)
    gap -= precision_.shape[0]
    gap += alpha * (np.abs(precision_).sum() - np.abs(np.diag(precision_)).sum())
    return gap
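
# Editorial sketch, not part of scikit-learn: a quick sanity check of the two
# criteria above. For `emp_cov = precision_ = np.eye(p)` the trace term equals
# p and the off-diagonal penalty vanishes, so `_dual_gap` evaluates to exactly
# zero -- the fixed point the solver iterates towards. The helper name is ours
# and the function is never called by the module.
def _demo_dual_gap_at_identity(alpha=0.1):  # pragma: no cover
    emp_cov = np.eye(3)
    precision_ = np.eye(3)
    gap = _dual_gap(emp_cov, precision_, alpha)  # == 0.0: optimum reached
    cost = _objective(emp_cov, precision_, alpha)  # penalised neg. log-likelihood
    return gap, cost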
def _graphical_lasso(
    emp_cov,
    alpha,
    *,
    cov_init=None,
    mode="cd",
    tol=1e-4,
    enet_tol=1e-4,
    max_iter=100,
    verbose=False,
    eps=np.finfo(np.float64).eps,
):
    _, n_features = emp_cov.shape
    if alpha == 0:
        # Early return without regularization
        precision_ = linalg.inv(emp_cov)
        cost = -2.0 * log_likelihood(emp_cov, precision_)
        cost += n_features * np.log(2 * np.pi)
        d_gap = np.sum(emp_cov * precision_) - n_features
        return emp_cov, precision_, (cost, d_gap), 0

    if cov_init is None:
        covariance_ = emp_cov.copy()
    else:
        covariance_ = cov_init.copy()
    # As a trivial regularization (Tikhonov like), we scale down the
    # off-diagonal coefficients of our starting point: this is needed, as in
    # the cross-validation the cov_init can easily be ill-conditioned, and the
    # CV loop blows. Besides, this takes a conservative stand-point on the
    # initial conditions, and it tends to make the convergence go faster.
    covariance_ *= 0.95
    diagonal = emp_cov.flat[:: n_features + 1]
    covariance_.flat[:: n_features + 1] = diagonal
    precision_ = linalg.pinvh(covariance_)

    indices = np.arange(n_features)
    i = 0  # initialize the counter to be robust to `max_iter=0`
    costs = list()
    # The different l1 regression solvers have different numerical errors
    if mode == "cd":
        errors = dict(over="raise", invalid="ignore")
    else:
        errors = dict(invalid="raise")
    try:
        # be robust to the max_iter=0 edge case, see:
        # https://github.com/scikit-learn/scikit-learn/issues/4134
        d_gap = np.inf
        # set a sub_covariance buffer
        sub_covariance = np.copy(covariance_[1:, 1:], order="C")
        for i in range(max_iter):
            for idx in range(n_features):
                # To keep the contiguous matrix `sub_covariance` equal to
                # covariance_[indices != idx].T[indices != idx], we only need
                # to update 1 column and 1 line when idx changes
                if idx > 0:
                    di = idx - 1
                    sub_covariance[di] = covariance_[di][indices != idx]
                    sub_covariance[:, di] = covariance_[:, di][indices != idx]
                else:
                    sub_covariance[:] = covariance_[1:, 1:]
                row = emp_cov[idx, indices != idx]
                with np.errstate(**errors):
                    if mode == "cd":
                        # Use coordinate descent
                        coefs = -(
                            precision_[indices != idx, idx]
                            / (precision_[idx, idx] + 1000 * eps)
                        )
                        coefs, _, _, _ = cd_fast.enet_coordinate_descent_gram(
                            coefs,
                            alpha,
                            0,
                            sub_covariance,
                            row,
                            row,
                            max_iter,
                            enet_tol,
                            check_random_state(None),
                            False,
                        )
                    else:  # mode == "lars"
                        _, _, coefs = lars_path_gram(
                            Xy=row,
                            Gram=sub_covariance,
                            n_samples=row.size,
                            alpha_min=alpha / (n_features - 1),
                            copy_Gram=True,
                            eps=eps,
                            method="lars",
                            return_path=False,
                        )
                # Update the precision matrix
                precision_[idx, idx] = 1.0 / (
                    covariance_[idx, idx]
                    - np.dot(covariance_[indices != idx, idx], coefs)
                )
                precision_[indices != idx, idx] = -precision_[idx, idx] * coefs
                precision_[idx, indices != idx] = -precision_[idx, idx] * coefs
                coefs = np.dot(sub_covariance, coefs)
                covariance_[idx, indices != idx] = coefs
                covariance_[indices != idx, idx] = coefs
            if not np.isfinite(precision_.sum()):
                raise FloatingPointError(
                    "The system is too ill-conditioned for this solver"
                )
            d_gap = _dual_gap(emp_cov, precision_, alpha)
            cost = _objective(emp_cov, precision_, alpha)
            if verbose:
                print(
                    "[graphical_lasso] Iteration % 3i, cost % 3.2e, dual gap %.3e"
                    % (i, cost, d_gap)
                )
            costs.append((cost, d_gap))
            if np.abs(d_gap) < tol:
                break
            if not np.isfinite(cost) and i > 0:
                raise FloatingPointError(
                    "Non SPD result: the system is too ill-conditioned"
                    " for this solver"
                )
        else:
            warnings.warn(
                "graphical_lasso: did not converge after %i iteration: dual"
                " gap: %.3e" % (max_iter, d_gap),
                ConvergenceWarning,
            )
    except FloatingPointError as e:
        e.args = (e.args[0] + ". The system is too ill-conditioned for this solver",)
        raise e

    return covariance_, precision_, costs, i + 1


def alpha_max(emp_cov):
    """Find the maximum alpha for which there are some non-zeros off-diagonal.

    Parameters
    ----------
    emp_cov : ndarray of shape (n_features, n_features)
        The sample covariance matrix.

    Notes
    -----
    This results from the bound for all the Lassos that are solved in
    GraphicalLasso: each time, the row of cov corresponds to Xy. As the
    bound for alpha is given by `max(abs(Xy))`, the result follows.
    """
    A = np.copy(emp_cov)
    A.flat[:: A.shape[0] + 1] = 0
    return np.max(np.abs(A))
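
# Editorial sketch, not part of scikit-learn: `alpha_max` returns the largest
# off-diagonal magnitude of the empirical covariance, the smallest penalty for
# which every Lasso sub-problem above has the all-zero solution. For any
# `alpha >= alpha_max(emp_cov)` the estimated precision is therefore diagonal.
# The helper name and the 2x2 matrix are ours; the function is never called.
def _demo_alpha_max():  # pragma: no cover
    emp_cov = np.array([[1.0, 0.3], [0.3, 1.0]])
    bound = alpha_max(emp_cov)  # == 0.3, the largest off-diagonal entry
    _, precision_, _, _ = _graphical_lasso(emp_cov, alpha=bound)
    assert np.allclose(precision_[0, 1], 0.0)  # fully sparse off-diagonal
    return bound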
@validate_params(
    {
        "emp_cov": ["array-like"],
        "return_costs": ["boolean"],
        "return_n_iter": ["boolean"],
    },
    prefer_skip_nested_validation=False,
)
def graphical_lasso(
    emp_cov,
    alpha,
    *,
    mode="cd",
    tol=1e-4,
    enet_tol=1e-4,
    max_iter=100,
    verbose=False,
    return_costs=False,
    eps=np.finfo(np.float64).eps,
    return_n_iter=False,
):
    """L1-penalized covariance estimator.

    Read more in the :ref:`User Guide <sparse_inverse_covariance>`.

    .. versionchanged:: v0.20
        graph_lasso has been renamed to graphical_lasso

    Parameters
    ----------
    emp_cov : array-like of shape (n_features, n_features)
        Empirical covariance from which to compute the covariance estimate.

    alpha : float
        The regularization parameter: the higher alpha, the more
        regularization, the sparser the inverse covariance.
        Range is (0, inf].

    mode : {'cd', 'lars'}, default='cd'
        The Lasso solver to use: coordinate descent or LARS. Use LARS for
        very sparse underlying graphs, where p > n. Elsewhere prefer cd
        which is more numerically stable.

    tol : float, default=1e-4
        The tolerance to declare convergence: if the dual gap goes below
        this value, iterations are stopped. Range is (0, inf].

    enet_tol : float, default=1e-4
        The tolerance for the elastic net solver used to calculate the descent
        direction. This parameter controls the accuracy of the search direction
        for a given column update, not of the overall parameter estimate. Only
        used for mode='cd'. Range is (0, inf].

    max_iter : int, default=100
        The maximum number of iterations.

    verbose : bool, default=False
        If verbose is True, the objective function and dual gap are
        printed at each iteration.

    return_costs : bool, default=False
        If return_costs is True, the objective function and dual gap
        at each iteration are returned.

    eps : float, default=eps
        The machine-precision regularization in the computation of the
        Cholesky diagonal factors. Increase this for very ill-conditioned
        systems. Default is `np.finfo(np.float64).eps`.

    return_n_iter : bool, default=False
        Whether or not to return the number of iterations.

    Returns
    -------
    covariance : ndarray of shape (n_features, n_features)
        The estimated covariance matrix.

    precision : ndarray of shape (n_features, n_features)
        The estimated (sparse) precision matrix.

    costs : list of (objective, dual_gap) pairs
        The list of values of the objective function and the dual gap at
        each iteration. Returned only if return_costs is True.

    n_iter : int
        Number of iterations. Returned only if `return_n_iter` is set to True.

    See Also
    --------
    GraphicalLasso : Sparse inverse covariance estimation
        with an l1-penalized estimator.
    GraphicalLassoCV : Sparse inverse covariance with
        cross-validated choice of the l1 penalty.

    Notes
    -----
    The algorithm employed to solve this problem is the GLasso algorithm,
    from the Friedman 2008 Biostatistics paper. It is the same algorithm
    as in the R `glasso` package.

    One possible difference with the `glasso` R package is that the
    diagonal coefficients are not penalized.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.datasets import make_sparse_spd_matrix
    >>> from sklearn.covariance import empirical_covariance, graphical_lasso
    >>> true_cov = make_sparse_spd_matrix(n_dim=3, random_state=42)
    >>> rng = np.random.RandomState(42)
    >>> X = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=3)
    >>> emp_cov = empirical_covariance(X, assume_centered=True)
    >>> emp_cov, _ = graphical_lasso(emp_cov, alpha=0.05)
    >>> emp_cov
    array([[ 1.687,  0.212, -0.209],
           [ 0.212,  0.221, -0.0817],
           [-0.209, -0.0817,  0.232]])
    """
    model = GraphicalLasso(
        alpha=alpha,
        mode=mode,
        covariance="precomputed",
        tol=tol,
        enet_tol=enet_tol,
        max_iter=max_iter,
        verbose=verbose,
        eps=eps,
        assume_centered=True,
    ).fit(emp_cov)

    output = [model.covariance_, model.precision_]
    if return_costs:
        output.append(model.costs_)
    if return_n_iter:
        output.append(model.n_iter_)
    return tuple(output)
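
# Editorial sketch, not part of scikit-learn: the ``return_costs`` and
# ``return_n_iter`` flags above are not exercised by the doctest, so their
# effect on the returned tuple is shown here. The 2x2 covariance and the
# helper name are ours; the function is never called by the module.
def _demo_graphical_lasso_costs():  # pragma: no cover
    emp_cov = np.array([[1.0, 0.25], [0.25, 1.0]])
    cov, prec, costs, n_iter = graphical_lasso(
        emp_cov, alpha=0.1, return_costs=True, return_n_iter=True
    )
    # `costs` holds one (objective, dual_gap) pair per iteration; the dual
    # gap of the last pair is below `tol` when the solver converged.
    return costs[-1], n_iter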
class BaseGraphicalLasso(EmpiricalCovariance):
    _parameter_constraints: dict = {
        **EmpiricalCovariance._parameter_constraints,
        "tol": [Interval(Real, 0, None, closed="right")],
        "enet_tol": [Interval(Real, 0, None, closed="right")],
        "max_iter": [Interval(Integral, 0, None, closed="left")],
        "mode": [StrOptions({"cd", "lars"})],
        "verbose": ["verbose"],
        "eps": [Interval(Real, 0, None, closed="both")],
    }
    _parameter_constraints.pop("store_precision")

    def __init__(
        self,
        tol=1e-4,
        enet_tol=1e-4,
        max_iter=100,
        mode="cd",
        verbose=False,
        eps=np.finfo(np.float64).eps,
        assume_centered=False,
    ):
        super().__init__(assume_centered=assume_centered)
        self.tol = tol
        self.enet_tol = enet_tol
        self.max_iter = max_iter
        self.mode = mode
        self.verbose = verbose
        self.eps = eps


class GraphicalLasso(BaseGraphicalLasso):
    """Sparse inverse covariance estimation with an l1-penalized estimator.

    For a usage example see
    :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`.

    Read more in the :ref:`User Guide <sparse_inverse_covariance>`.

    .. versionchanged:: v0.20
        GraphLasso has been renamed to GraphicalLasso

    Parameters
    ----------
    alpha : float, default=0.01
        The regularization parameter: the higher alpha, the more
        regularization, the sparser the inverse covariance.
        Range is (0, inf].

    mode : {'cd', 'lars'}, default='cd'
        The Lasso solver to use: coordinate descent or LARS. Use LARS for
        very sparse underlying graphs, where p > n. Elsewhere prefer cd
        which is more numerically stable.

    covariance : "precomputed", default=None
        If covariance is "precomputed", the input data in `fit` is assumed
        to be the covariance matrix. If `None`, the empirical covariance
        is estimated from the data `X`.

        .. versionadded:: 1.3

    tol : float, default=1e-4
        The tolerance to declare convergence: if the dual gap goes below
        this value, iterations are stopped. Range is (0, inf].

    enet_tol : float, default=1e-4
        The tolerance for the elastic net solver used to calculate the descent
        direction. This parameter controls the accuracy of the search direction
        for a given column update, not of the overall parameter estimate. Only
        used for mode='cd'. Range is (0, inf].

    max_iter : int, default=100
        The maximum number of iterations.

    verbose : bool, default=False
        If verbose is True, the objective function and dual gap are
        printed at each iteration.

    eps : float, default=eps
        The machine-precision regularization in the computation of the
        Cholesky diagonal factors. Increase this for very ill-conditioned
        systems. Default is `np.finfo(np.float64).eps`.

        .. versionadded:: 1.3

    assume_centered : bool, default=False
        If True, data are not centered before computation.
        Useful when working with data whose mean is almost, but not exactly
        zero.
        If False, data are centered before computation.

    Attributes
    ----------
    location_ : ndarray of shape (n_features,)
        Estimated location, i.e. the estimated mean.

    covariance_ : ndarray of shape (n_features, n_features)
        Estimated covariance matrix.

    precision_ : ndarray of shape (n_features, n_features)
        Estimated pseudo inverse matrix.

    n_iter_ : int
        Number of iterations run.

    costs_ : list of (objective, dual_gap) pairs
        The list of values of the objective function and the dual gap at
        each iteration.

        .. versionadded:: 1.3

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    graphical_lasso : L1-penalized covariance estimator.
    GraphicalLassoCV : Sparse inverse covariance with
        cross-validated choice of the l1 penalty.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.covariance import GraphicalLasso
    >>> true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
    ...                      [0.0, 0.4, 0.0, 0.0],
    ...                      [0.2, 0.0, 0.3, 0.1],
    ...                      [0.0, 0.0, 0.1, 0.7]])
    >>> np.random.seed(0)
    >>> X = np.random.multivariate_normal(mean=[0, 0, 0, 0],
    ...                                   cov=true_cov,
    ...                                   size=200)
    >>> cov = GraphicalLasso().fit(X)
    >>> np.around(cov.covariance_, decimals=3)
    array([[0.816, 0.049, 0.218, 0.019],
           [0.049, 0.364, 0.017, 0.034],
           [0.218, 0.017, 0.322, 0.093],
           [0.019, 0.034, 0.093, 0.69 ]])
    >>> np.around(cov.location_, decimals=3)
    array([0.073, 0.04 , 0.038, 0.143])
    """

    _parameter_constraints: dict = {
        **BaseGraphicalLasso._parameter_constraints,
        "alpha": [Interval(Real, 0, None, closed="right")],
        "covariance": [StrOptions({"precomputed"}), None],
    }

    def __init__(
        self,
        alpha=0.01,
        *,
        mode="cd",
        covariance=None,
        tol=1e-4,
        enet_tol=1e-4,
        max_iter=100,
        verbose=False,
        eps=np.finfo(np.float64).eps,
        assume_centered=False,
    ):
        super().__init__(
            tol=tol,
            enet_tol=enet_tol,
            max_iter=max_iter,
            mode=mode,
            verbose=verbose,
            eps=eps,
            assume_centered=assume_centered,
        )
        self.alpha = alpha
        self.covariance = covariance

    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X, y=None):
        """Fit the GraphicalLasso model to X.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Data from which to compute the covariance estimate.

        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        self : object
            Returns the instance itself.
        """
        # Covariance does not make sense for a single feature
        X = validate_data(self, X, ensure_min_features=2, ensure_min_samples=2)

        if self.covariance == "precomputed":
            emp_cov = X.copy()
            self.location_ = np.zeros(X.shape[1])
        else:
            emp_cov = empirical_covariance(X, assume_centered=self.assume_centered)
            if self.assume_centered:
                self.location_ = np.zeros(X.shape[1])
            else:
                self.location_ = X.mean(0)

        self.covariance_, self.precision_, self.costs_, self.n_iter_ = (
            _graphical_lasso(
                emp_cov,
                alpha=self.alpha,
                cov_init=None,
                mode=self.mode,
                tol=self.tol,
                enet_tol=self.enet_tol,
                max_iter=self.max_iter,
                verbose=self.verbose,
                eps=self.eps,
            )
        )
        return self
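
# Editorial sketch, not part of scikit-learn: the ``covariance="precomputed"``
# mode documented above is not covered by the class doctest, so the intended
# call pattern is shown here. The random data and the helper name are ours;
# this mirrors what `graphical_lasso` does internally and is never called.
def _demo_precomputed_covariance():  # pragma: no cover
    rng = np.random.RandomState(0)
    X = rng.randn(50, 3)
    emp_cov = empirical_covariance(X, assume_centered=True)
    # With "precomputed", `fit` treats its input as a covariance matrix
    # rather than as a data matrix of samples.
    model = GraphicalLasso(
        alpha=0.05, covariance="precomputed", assume_centered=True
    ).fit(emp_cov)
    return model.precision_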
def graphical_lasso_path(
    X,
    alphas,
    cov_init=None,
    X_test=None,
    mode="cd",
    tol=1e-4,
    enet_tol=1e-4,
    max_iter=100,
    verbose=False,
    eps=np.finfo(np.float64).eps,
):
    """l1-penalized covariance estimator along a path of decreasing alphas.

    Read more in the :ref:`User Guide <sparse_inverse_covariance>`.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        Data from which to compute the covariance estimate.

    alphas : array-like of shape (n_alphas,)
        The list of regularization parameters, decreasing order.

    cov_init : array of shape (n_features, n_features), default=None
        The initial guess for the covariance.

    X_test : array of shape (n_test_samples, n_features), default=None
        Optional test matrix to measure generalisation error.

    mode : {'cd', 'lars'}, default='cd'
        The Lasso solver to use: coordinate descent or LARS. Use LARS for
        very sparse underlying graphs, where p > n. Elsewhere prefer cd
        which is more numerically stable.

    tol : float, default=1e-4
        The tolerance to declare convergence: if the dual gap goes below
        this value, iterations are stopped. The tolerance must be a positive
        number.

    enet_tol : float, default=1e-4
        The tolerance for the elastic net solver used to calculate the descent
        direction. This parameter controls the accuracy of the search direction
        for a given column update, not of the overall parameter estimate. Only
        used for mode='cd'. The tolerance must be a positive number.

    max_iter : int, default=100
        The maximum number of iterations. This parameter should be a strictly
        positive integer.

    verbose : int or bool, default=False
        The higher the verbosity flag, the more information is printed
        during the fitting.

    eps : float, default=eps
        The machine-precision regularization in the computation of the
        Cholesky diagonal factors. Increase this for very ill-conditioned
        systems. Default is `np.finfo(np.float64).eps`.

        .. versionadded:: 1.3

    Returns
    -------
    covariances_ : list of shape (n_alphas,) of ndarray of shape \
            (n_features, n_features)
        The estimated covariance matrices.

    precisions_ : list of shape (n_alphas,) of ndarray of shape \
            (n_features, n_features)
        The estimated (sparse) precision matrices.

    scores_ : list of shape (n_alphas,), dtype=float
        The generalisation error (log-likelihood) on the test data.
        Returned only if test data is passed.
    """
    inner_verbose = max(0, verbose - 1)
    emp_cov = empirical_covariance(X)
    if cov_init is None:
        covariance_ = emp_cov.copy()
    else:
        covariance_ = cov_init
    covariances_ = list()
    precisions_ = list()
    scores_ = list()
    if X_test is not None:
        test_emp_cov = empirical_covariance(X_test)

    for alpha in alphas:
        try:
            # Capture the errors, and move on
            covariance_, precision_, _, _ = _graphical_lasso(
                emp_cov,
                alpha=alpha,
                cov_init=covariance_,
                mode=mode,
                tol=tol,
                enet_tol=enet_tol,
                max_iter=max_iter,
                verbose=inner_verbose,
                eps=eps,
            )
            covariances_.append(covariance_)
            precisions_.append(precision_)
            if X_test is not None:
                this_score = log_likelihood(test_emp_cov, precision_)
        except FloatingPointError:
            this_score = -np.inf
            covariances_.append(np.nan)
            precisions_.append(np.nan)
        if X_test is not None:
            if not np.isfinite(this_score):
                this_score = -np.inf
            scores_.append(this_score)
        if verbose == 1:
            sys.stderr.write(".")
        elif verbose > 1:
            if X_test is not None:
                print(
                    "[graphical_lasso_path] alpha: %.2e, score: %.2e"
                    % (alpha, this_score)
                )
            else:
                print("[graphical_lasso_path] alpha: %.2e" % alpha)
    if X_test is not None:
        return covariances_, precisions_, scores_
    return covariances_, precisions_
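
# Editorial sketch, not part of scikit-learn: `graphical_lasso_path` has no
# Examples section, so the intended call pattern is illustrated here -- a
# decreasing alpha grid, with an optional held-out matrix to obtain test
# log-likelihoods. The arrays and the helper name are ours; never called.
def _demo_graphical_lasso_path():  # pragma: no cover
    rng = np.random.RandomState(0)
    X_train, X_test = rng.randn(60, 4), rng.randn(30, 4)
    alphas = np.logspace(0, -2, num=5)  # must be in decreasing order
    covs, precs, scores = graphical_lasso_path(X_train, alphas, X_test=X_test)
    # One (covariance, precision, score) triple per alpha on the grid.
    return covs, precs, scores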
class GraphicalLassoCV(BaseGraphicalLasso):
    """Sparse inverse covariance with cross-validated choice of the l1 penalty.

    See glossary entry for :term:`cross-validation estimator`.

    Read more in the :ref:`User Guide <sparse_inverse_covariance>`.

    .. versionchanged:: v0.20
        GraphLassoCV has been renamed to GraphicalLassoCV

    Parameters
    ----------
    alphas : int or array-like of shape (n_alphas,), dtype=float, default=4
        If an integer is given, it fixes the number of points on the
        grids of alpha to be used. If a list is given, it gives the
        grid to be used. See the notes in the class docstring for
        more details. Range is [1, inf) for an integer.
        Range is (0, inf] for an array-like of floats.

    n_refinements : int, default=4
        The number of times the grid is refined. Not used if explicit
        values of alphas are passed. Range is [1, inf).

    cv : int, cross-validation generator or iterable, default=None
        Determines the cross-validation splitting strategy.
        Possible inputs for cv are:

        - None, to use the default 5-fold cross-validation,
        - integer, to specify the number of folds.
        - :term:`CV splitter`,
        - An iterable yielding (train, test) splits as arrays of indices.

        For integer/None inputs :class:`~sklearn.model_selection.KFold` is
        used.

        Refer :ref:`User Guide <cross_validation>` for the various
        cross-validation strategies that can be used here.

        .. versionchanged:: 0.20
            ``cv`` default value if None changed from 3-fold to 5-fold.

    tol : float, default=1e-4
        The tolerance to declare convergence: if the dual gap goes below
        this value, iterations are stopped. Range is (0, inf].

    enet_tol : float, default=1e-4
        The tolerance for the elastic net solver used to calculate the descent
        direction. This parameter controls the accuracy of the search direction
        for a given column update, not of the overall parameter estimate. Only
        used for mode='cd'. Range is (0, inf].

    max_iter : int, default=100
        Maximum number of iterations.

    mode : {'cd', 'lars'}, default='cd'
        The Lasso solver to use: coordinate descent or LARS. Use LARS for
        very sparse underlying graphs, where number of features is greater
        than number of samples. Elsewhere prefer cd which is more numerically
        stable.

    n_jobs : int, default=None
        Number of jobs to run in parallel.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.

        .. versionchanged:: v0.20
           `n_jobs` default changed from 1 to None

    verbose : bool, default=False
        If verbose is True, the objective function and duality gap are
        printed at each iteration.

    eps : float, default=eps
        The machine-precision regularization in the computation of the
        Cholesky diagonal factors. Increase this for very ill-conditioned
        systems. Default is `np.finfo(np.float64).eps`.

        .. versionadded:: 1.3

    assume_centered : bool, default=False
        If True, data are not centered before computation.
        Useful when working with data whose mean is almost, but not exactly
        zero.
        If False, data are centered before computation.

    Attributes
    ----------
    location_ : ndarray of shape (n_features,)
        Estimated location, i.e. the estimated mean.

    covariance_ : ndarray of shape (n_features, n_features)
        Estimated covariance matrix.

    precision_ : ndarray of shape (n_features, n_features)
        Estimated precision matrix (inverse covariance).

    costs_ : list of (objective, dual_gap) pairs
        The list of values of the objective function and the dual gap at
        each iteration of the final fit.

        .. versionadded:: 1.3

    alpha_ : float
        Penalization parameter selected.

    cv_results_ : dict of ndarrays
        A dict with keys:

        alphas : ndarray of shape (n_alphas,)
            All penalization parameters explored.

        split(k)_test_score : ndarray of shape (n_alphas,)
            Log-likelihood score on left-out data across (k)th fold.

            .. versionadded:: 1.0

        mean_test_score : ndarray of shape (n_alphas,)
            Mean of scores over the folds.

            .. versionadded:: 1.0

        std_test_score : ndarray of shape (n_alphas,)
            Standard deviation of scores over the folds.

            .. versionadded:: 1.0

    n_iter_ : int
        Number of iterations run for the optimal alpha.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    graphical_lasso : L1-penalized covariance estimator.
    GraphicalLasso : Sparse inverse covariance estimation
        with an l1-penalized estimator.

    Notes
    -----
    The search for the optimal penalization parameter (`alpha`) is done on an
    iteratively refined grid: first the cross-validated scores on a grid are
    computed, then a new refined grid is centered around the maximum, and so
    on.

    One of the challenges which is faced here is that the solvers can
    fail to converge to a well-conditioned estimate. The corresponding
    values of `alpha` then come out as missing values, but the optimum may
    be close to these missing values.

    In `fit`, once the best parameter `alpha` is found through
    cross-validation, the model is fit again using the entire training set.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.covariance import GraphicalLassoCV
    >>> true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
    ...                      [0.0, 0.4, 0.0, 0.0],
    ...                      [0.2, 0.0, 0.3, 0.1],
    ...                      [0.0, 0.0, 0.1, 0.7]])
    >>> np.random.seed(0)
    >>> X = np.random.multivariate_normal(mean=[0, 0, 0, 0],
    ...                                   cov=true_cov,
    ...                                   size=200)
    >>> cov = GraphicalLassoCV().fit(X)
    >>> np.around(cov.covariance_, decimals=3)
    array([[0.816, 0.051, 0.22 , 0.017],
           [0.051, 0.364, 0.018, 0.036],
           [0.22 , 0.018, 0.322, 0.094],
           [0.017, 0.036, 0.094, 0.69 ]])
    >>> np.around(cov.location_, decimals=3)
    array([0.073, 0.04 , 0.038, 0.143])

    For an example comparing :class:`sklearn.covariance.GraphicalLassoCV`,
    :func:`sklearn.covariance.ledoit_wolf` shrinkage and the empirical
    covariance on high-dimensional gaussian data, see
    :ref:`sphx_glr_auto_examples_covariance_plot_sparse_cov.py`.
    """

    _parameter_constraints: dict = {
        **BaseGraphicalLasso._parameter_constraints,
        "alphas": [Interval(Integral, 1, None, closed="left"), "array-like"],
        "n_refinements": [Interval(Integral, 1, None, closed="left")],
        "cv": ["cv_object"],
        "n_jobs": [Integral, None],
    }

    def __init__(
        self,
        *,
        alphas=4,
        n_refinements=4,
        cv=None,
        tol=1e-4,
        enet_tol=1e-4,
        max_iter=100,
        mode="cd",
        n_jobs=None,
        verbose=False,
        eps=np.finfo(np.float64).eps,
        assume_centered=False,
    ):
        super().__init__(
            tol=tol,
            enet_tol=enet_tol,
            max_iter=max_iter,
            mode=mode,
            verbose=verbose,
            eps=eps,
            assume_centered=assume_centered,
        )
        self.alphas = alphas
        self.n_refinements = n_refinements
        self.cv = cv
        self.n_jobs = n_jobs
rNrrrxr cv_object)r n_refinementscvn_jobsrr6r7r5F) rrrr:r;r<r9rr=r>rc jt |||||| | | ||_||_||_||_yr)rrrrrr) rrrrr:r;r<r9rr=r>rrs r.rzGraphicalLassoCV.__init__sK +   * r0Tr|c t|dtdjr(tjj d_njd_tj}tj|d}t}j}tdjdz t|rCjD]%}t!|d t"dtj$d 'jd} n_j&} t)|} d | z} tj*tj,| tj,| |d d dt/rt1dfi|} nt3t3i} t5j4} t7| D]}t9j:5t9j<dt>tAjBjfd|jD|fi| jFjDD}d d d tI\}}}tI|}tI|}|jKtI||tM|tOjPdd}tj$ }d}tS|D]\}\}}}tj|}|dtjTtjVjXz k\rtjZ}tj\|r|}||k\s|}|}dk(r|dd} |dd} ne||k(r%|t_|dz k(s||d} ||dzd} n;|t_|dz k(r||d} d ||dz} n||dz d} ||dzd} t|sEtj*tj,| tj,| |dzddjsi| dkDsptad|dz| t5j4| z fzttI|}t|d}t|djcd|jctetg|jB|tjh|}dtjhi_5t7|j dD]}|d d |fjjd|d<tj|djjd<tjl|djjd<}|_7tq||jrjtjvjxjX\_=_>_?_@S#1swYxYw) aXFit the GraphicalLasso covariance model to X. Parameters ---------- X : array-like of shape (n_samples, n_features) Data from which to compute the covariance estimate. y : Ignored Not used, present for API consistency by convention. **params : dict, default=None Parameters to be passed to the CV splitter and the cross_val_score function. .. versionadded:: 1.5 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide ` for more details. Returns ------- self : object Returns the instance itself. rr)rrrrF) classifierr+r)min_valmax_valinclude_boundariesrN)split)splitterrA)rr=c 3K|]i\}}tt||jjjt dj zj kyw)皙?)rrr9r:r;r<r=r>N)rrr9r:r;intr<r>).0traintestrrrrs r. z'GraphicalLassoCV.fit..st O$t2G01%% w!YY HH!%!$S4==%8!9 - HH   OsA/A2T)keyreverserz8[GraphicalLassoCV] Done refinement % 2i out of %i: % 3is)rrr=paramsrr _test_score)axismean_test_scorestd_test_score)r+r9r:r;r<r=r>)Arrrr#rr"rrrr rrTrrur=rrrrVrrwlogspacelog10rrr timerWracatch_warnings simplefilterrrrrrzipextendsortedoperator itemgetter enumeraterrr>rr]lenr_r`r rarray cv_results_stdalpha_rsr9r:r;r<rgr*rr)rrrrr2rpathn_alphasr+ralpha_1alpha_0 routed_paramst0rj this_pathcovsrdscores best_scorelast_finite_idxindexr best_index grid_scores best_alpharrs`` @@r.rzGraphicalLassoCV.fitsp: &$. $q 9   XXaggaj1DNVVAYDN&q$:N:NO dggqU 3v;;At||a/0 #H - FF'.  [[FM ..M(GWnG[['!2BHHW4ExPQUSUQUVF  +D%B6BM!5r?;M YY[}%K A((* %%h0BC OHDKKN O(0rxx1'U 8N8N8T8T'U O   4"9oOD!V:D&\F KKFFD1 2$H$7$7$:DID &&JO-6t_ '))vqWWV_ rxx ';'?'?!??!#J;;z*&+O+!+J!&J 'Qq'!*q'!*.zSYQR]7Rz*1-zA~.q1s4y1},z*1-j!1!!44zA~.q1zA~.q1+H5RXXg%68I8VW<X"||  1N1umTYY[2-=>?QK ZCJ47m d1g a #%{{%   hh{+ $bhhv&67{((+, IA7B1a47HD  uQC{3 4 I/1ggk.J*+-/VVKa-H)*J'   HX ]]]]! H D$/4;  g  s A4W**W4 ct|jjjt |j t jdd}|S)ajGet metadata routing of this object. Please check :ref:`User Guide ` on how the routing mechanism works. .. versionadded:: 1.5 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information. )ownerrr)calleecaller)rmethod_mapping)rrraddr rr)rrouters r.get_metadata_routingz%GraphicalLassoCV.get_metadata_routinggsQ dnn&=&=>BBdgg&(?..ge.LC  r0r)rrrrrrrrrUrr#rrr>rrrrrrs@r.rrsyv$  3 3$Haf=|L"8QVDEmT" $D    BHHRZZ $ $:5x6xtr0r);rrrrranumbersrrnumpyr#scipyrbaser exceptionsr linear_modelr rYr model_selectionr r utilsr utils._param_validationrrrutils.metadata_routingrrrrrutils.parallelrrutils.validationrrrrrrrr/r4rrr>rsrwrrrrrrNr0r.rse "+/)7KK/ HG  "      B1J& >" # #(       D,>'L       }%@n)nr0