"""Non-negative matrix factorization."""

import itertools
import time
import warnings
from abc import ABC
from math import sqrt
from numbers import Integral, Real

import numpy as np
import scipy.sparse as sp
from scipy import linalg

from .._config import config_context
from ..base import (
    BaseEstimator,
    ClassNamePrefixFeaturesOutMixin,
    TransformerMixin,
    _fit_context,
)
from ..exceptions import ConvergenceWarning
from ..utils import check_array, check_random_state, gen_batches
from ..utils._param_validation import Interval, StrOptions, validate_params
from ..utils.extmath import randomized_svd, safe_sparse_dot, squared_norm
from ..utils.validation import check_is_fitted, check_non_negative, validate_data
from ._cdnmf_fast import _update_cdnmf_fast

EPSILON = np.finfo(np.float32).eps


def norm(x):
    """Dot product-based Euclidean norm implementation.

    See: http://fa.bianp.net/blog/2011/computing-the-vector-norm/

    Parameters
    ----------
    x : array-like
        Vector for which to compute the norm.
    """
    return sqrt(squared_norm(x))


def trace_dot(X, Y):
    """Trace of np.dot(X, Y.T).

    Parameters
    ----------
    X : array-like
        First matrix.
    Y : array-like
        Second matrix.
    """
    return np.dot(X.ravel(), Y.ravel())


def _check_init(A, shape, whom):
    A = check_array(A)
    if shape[0] != "auto" and A.shape[0] != shape[0]:
        raise ValueError(
            f"Array with wrong first dimension passed to {whom}. "
            f"Expected {shape[0]}, but got {A.shape[0]}."
        )
    if shape[1] != "auto" and A.shape[1] != shape[1]:
        raise ValueError(
            f"Array with wrong second dimension passed to {whom}. "
            f"Expected {shape[1]}, but got {A.shape[1]}."
        )
    check_non_negative(A, whom)
    if np.max(A) == 0:
        raise ValueError(f"Array passed to {whom} is full of zeros.")


def _beta_divergence(X, W, H, beta, square_root=False):
    """Compute the beta-divergence of X and dot(W, H).

    Parameters
    ----------
    X : float or array-like of shape (n_samples, n_features)

    W : float or array-like of shape (n_samples, n_components)

    H : float or array-like of shape (n_components, n_features)

    beta : float or {'frobenius', 'kullback-leibler', 'itakura-saito'}
        Parameter of the beta-divergence.
        If beta == 2, this is half the Frobenius *squared* norm.
        If beta == 1, this is the generalized Kullback-Leibler divergence.
        If beta == 0, this is the Itakura-Saito divergence.
        Else, this is the general beta-divergence.

    square_root : bool, default=False
        If True, return np.sqrt(2 * res).
        For beta == 2, it corresponds to the Frobenius norm.

    Returns
    -------
    res : float
        Beta divergence of X and np.dot(W, H).
    """
    beta = _beta_loss_to_float(beta)

    # The method can be called with scalars
    if not sp.issparse(X):
        X = np.atleast_2d(X)
    W = np.atleast_2d(W)
    H = np.atleast_2d(H)

    # Frobenius norm
    if beta == 2:
        # Avoid the creation of the dense np.dot(W, H) if X is sparse.
        if sp.issparse(X):
            norm_X = np.dot(X.data, X.data)
            norm_WH = trace_dot(np.linalg.multi_dot([W.T, W, H]), H)
            cross_prod = trace_dot(safe_sparse_dot(X, H.T), W)
            res = (norm_X + norm_WH - 2.0 * cross_prod) / 2.0
        else:
            res = squared_norm(X - np.dot(W, H)) / 2.0

        if square_root:
            return np.sqrt(res * 2)
        else:
            return res

    if sp.issparse(X):
        # compute np.dot(W, H) only where X is nonzero
        WH_data = _special_sparse_dot(W, H, X).data
        X_data = X.data
    else:
        WH = np.dot(W, H)
        WH_data = WH.ravel()
        X_data = X.ravel()

    # do not affect the zeros: here 0 ** (-1) = 0 and not infinity
    indices = X_data > EPSILON
    WH_data = WH_data[indices]
    X_data = X_data[indices]

    # used to avoid division by zero
    WH_data[WH_data < EPSILON] = EPSILON

    # generalized Kullback-Leibler divergence
    if beta == 1:
        # fast and memory efficient computation of np.sum(np.dot(W, H))
        sum_WH = np.dot(np.sum(W, axis=0), np.sum(H, axis=1))
        # computes np.sum(X * log(X / WH)) only where X is nonzero
        div = X_data / WH_data
        res = np.dot(X_data, np.log(div))
        # add full np.sum(np.dot(W, H)) - np.sum(X)
        res += sum_WH - X_data.sum()

    # Itakura-Saito divergence
    elif beta == 0:
        div = X_data / WH_data
        res = np.sum(div) - np.prod(X.shape) - np.sum(np.log(div))

    # beta-divergence, beta not in (0, 1, 2)
    else:
        if sp.issparse(X):
            # slow loop, but memory efficient computation of
            # np.sum(np.dot(W, H) ** beta)
            sum_WH_beta = 0
            for i in range(X.shape[1]):
                sum_WH_beta += np.sum(np.dot(W, H[:, i]) ** beta)
        else:
            sum_WH_beta = np.sum(WH**beta)

        sum_X_WH = np.dot(X_data, WH_data ** (beta - 1))
        res = (X_data**beta).sum() - beta * sum_X_WH
        res += sum_WH_beta * (beta - 1)
        res /= beta * (beta - 1)

    if square_root:
        res = max(res, 0)  # avoid negative number due to rounding errors
        return np.sqrt(2 * res)
    else:
        return res


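# --- Illustrative sketch (editor's addition, not part of the original module).
# It checks the relation stated in the docstring above: for beta == 2 the
# beta-divergence is half the squared Frobenius norm of X - WH, and
# square_root=True recovers the Frobenius norm itself. The helper name
# `_demo_beta_divergence` is ours; the library code never calls it.
def _demo_beta_divergence():
    rng = np.random.RandomState(0)
    X = np.abs(rng.standard_normal((6, 5)))
    W = np.abs(rng.standard_normal((6, 3)))
    H = np.abs(rng.standard_normal((3, 5)))

    half_sq_frob = _beta_divergence(X, W, H, beta=2)
    assert np.isclose(half_sq_frob, 0.5 * np.sum((X - W @ H) ** 2))

    frob = _beta_divergence(X, W, H, beta=2, square_root=True)
    assert np.isclose(frob, np.linalg.norm(X - W @ H, "fro"))

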
def _special_sparse_dot(W, H, X):
    """Computes np.dot(W, H), only where X is non zero."""
    if sp.issparse(X):
        ii, jj = X.nonzero()
        n_vals = ii.shape[0]
        dot_vals = np.empty(n_vals)
        n_components = W.shape[1]

        batch_size = max(n_components, n_vals // n_components)
        for start in range(0, n_vals, batch_size):
            batch = slice(start, start + batch_size)
            dot_vals[batch] = np.multiply(W[ii[batch], :], H.T[jj[batch], :]).sum(
                axis=1
            )

        WH = sp.coo_matrix((dot_vals, (ii, jj)), shape=X.shape)
        return WH.tocsr()
    else:
        return np.dot(W, H)


def _beta_loss_to_float(beta_loss):
    """Convert string beta_loss to float."""
    beta_loss_map = {"frobenius": 2, "kullback-leibler": 1, "itakura-saito": 0}
    if isinstance(beta_loss, str):
        beta_loss = beta_loss_map[beta_loss]
    return beta_loss


def _initialize_nmf(X, n_components, init=None, eps=1e-6, random_state=None):
    """Algorithms for NMF initialization.

    Computes an initial guess for the non-negative
    rank k matrix approximation for X: X = WH.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        The data matrix to be decomposed.

    n_components : int
        The number of components desired in the approximation.

    init : {'random', 'nndsvd', 'nndsvda', 'nndsvdar'}, default=None
        Method used to initialize the procedure.
        Valid options:

        - None: 'nndsvda' if n_components <= min(n_samples, n_features),
          otherwise 'random'.

        - 'random': non-negative random matrices, scaled with:
          sqrt(X.mean() / n_components)

        - 'nndsvd': Nonnegative Double Singular Value Decomposition (NNDSVD)
          initialization (better for sparseness)

        - 'nndsvda': NNDSVD with zeros filled with the average of X
          (better when sparsity is not desired)

        - 'nndsvdar': NNDSVD with zeros filled with small random values
          (generally faster, less accurate alternative to NNDSVDa
          for when sparsity is not desired)

        - 'custom': use custom matrices W and H

        .. versionchanged:: 1.1
            When `init=None` and n_components is less than n_samples and
            n_features defaults to `nndsvda` instead of `nndsvd`.

    eps : float, default=1e-6
        Truncate all values less then this in output to zero.

    random_state : int, RandomState instance or None, default=None
        Used when ``init`` == 'nndsvdar' or 'random'. Pass an int for
        reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`.

    Returns
    -------
    W : array-like of shape (n_samples, n_components)
        Initial guesses for solving X ~= WH.

    H : array-like of shape (n_components, n_features)
        Initial guesses for solving X ~= WH.

    References
    ----------
    C. Boutsidis, E. Gallopoulos: SVD based initialization: A head start for
    nonnegative matrix factorization - Pattern Recognition, 2008
    http://tinyurl.com/nndsvd
    """
    check_non_negative(X, "NMF initialization")
    n_samples, n_features = X.shape

    if (
        init is not None
        and init != "random"
        and n_components > min(n_samples, n_features)
    ):
        raise ValueError(
            "init = '{}' can only be used when "
            "n_components <= min(n_samples, n_features)".format(init)
        )

    if init is None:
        if n_components <= min(n_samples, n_features):
            init = "nndsvda"
        else:
            init = "random"

    # Random initialization
    if init == "random":
        avg = np.sqrt(X.mean() / n_components)
        rng = check_random_state(random_state)
        H = avg * rng.standard_normal(size=(n_components, n_features)).astype(
            X.dtype, copy=False
        )
        W = avg * rng.standard_normal(size=(n_samples, n_components)).astype(
            X.dtype, copy=False
        )
        np.abs(H, out=H)
        np.abs(W, out=W)
        return W, H

    # NNDSVD initialization
    U, S, V = randomized_svd(X, n_components, random_state=random_state)
    W = np.zeros_like(U)
    H = np.zeros_like(V)

    # The leading singular triplet is non-negative
    # so it can be used as is for initialization.
    W[:, 0] = np.sqrt(S[0]) * np.abs(U[:, 0])
    H[0, :] = np.sqrt(S[0]) * np.abs(V[0, :])

    for j in range(1, n_components):
        x, y = U[:, j], V[j, :]

        # extract positive and negative parts of column vectors
        x_p, y_p = np.maximum(x, 0), np.maximum(y, 0)
        x_n, y_n = np.abs(np.minimum(x, 0)), np.abs(np.minimum(y, 0))

        # and their norms
        x_p_nrm, y_p_nrm = norm(x_p), norm(y_p)
        x_n_nrm, y_n_nrm = norm(x_n), norm(y_n)

        m_p, m_n = x_p_nrm * y_p_nrm, x_n_nrm * y_n_nrm

        # choose update
        if m_p > m_n:
            u = x_p / x_p_nrm
            v = y_p / y_p_nrm
            sigma = m_p
        else:
            u = x_n / x_n_nrm
            v = y_n / y_n_nrm
            sigma = m_n

        lbd = np.sqrt(S[j] * sigma)
        W[:, j] = lbd * u
        H[j, :] = lbd * v

    W[W < eps] = 0
    H[H < eps] = 0

    if init == "nndsvd":
        pass
    elif init == "nndsvda":
        avg = X.mean()
        W[W == 0] = avg
        H[H == 0] = avg
    elif init == "nndsvdar":
        rng = check_random_state(random_state)
        avg = X.mean()
        W[W == 0] = abs(avg * rng.standard_normal(size=len(W[W == 0])) / 100)
        H[H == 0] = abs(avg * rng.standard_normal(size=len(H[H == 0])) / 100)
    else:
        raise ValueError(
            "Invalid init parameter: got %r instead of one of %r"
            % (init, (None, "random", "nndsvd", "nndsvda", "nndsvdar"))
        )

    return W, H


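# --- Illustrative sketch (editor's addition, not part of the original module).
# It compares the 'random' and 'nndsvd' initializations documented above: the
# NNDSVD factors are built from a truncated SVD and typically start much
# closer to X than unstructured random factors. `_demo_initialize_nmf` is our
# name; the library code never calls it.
def _demo_initialize_nmf():
    rng = np.random.RandomState(42)
    X = np.abs(rng.standard_normal((20, 10)))

    W_rand, H_rand = _initialize_nmf(X, n_components=4, init="random", random_state=0)
    W_svd, H_svd = _initialize_nmf(X, n_components=4, init="nndsvd", random_state=0)

    err_rand = _beta_divergence(X, W_rand, H_rand, beta=2, square_root=True)
    err_svd = _beta_divergence(X, W_svd, H_svd, beta=2, square_root=True)
    print(f"initial error - random: {err_rand:.3f}, nndsvd: {err_svd:.3f}")

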
def _update_coordinate_descent(X, W, Ht, l1_reg, l2_reg, shuffle, random_state):
    """Helper function for _fit_coordinate_descent.

    Update W to minimize the objective function, iterating once over all
    coordinates. By symmetry, to update H, one can call
    _update_coordinate_descent(X.T, Ht, W, ...).
    """
    n_components = Ht.shape[1]

    HHt = np.dot(Ht.T, Ht)
    XHt = safe_sparse_dot(X, Ht)

    # L2 regularization corresponds to increase of the diagonal of HHt
    if l2_reg != 0.0:
        # adds l2_reg only on the diagonal
        HHt.flat[:: n_components + 1] += l2_reg
    # L1 regularization corresponds to decrease of each element of XHt
    if l1_reg != 0.0:
        XHt -= l1_reg

    if shuffle:
        permutation = random_state.permutation(n_components)
    else:
        permutation = np.arange(n_components)
    # make sure the Cython routine receives a C-contiguous intp array
    permutation = np.asarray(permutation, dtype=np.intp)
    return _update_cdnmf_fast(W, HHt, XHt, permutation)


def _fit_coordinate_descent(
    X,
    W,
    H,
    tol=1e-4,
    max_iter=200,
    l1_reg_W=0,
    l1_reg_H=0,
    l2_reg_W=0,
    l2_reg_H=0,
    update_H=True,
    verbose=0,
    shuffle=False,
    random_state=None,
):
    """Compute Non-negative Matrix Factorization (NMF) with Coordinate Descent.

    The objective function is minimized with an alternating minimization of W
    and H. Each minimization is done with a cyclic (up to a permutation of the
    features) Coordinate Descent.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Constant matrix.

    W : array-like of shape (n_samples, n_components)
        Initial guess for the solution.

    H : array-like of shape (n_components, n_features)
        Initial guess for the solution.

    tol : float, default=1e-4
        Tolerance of the stopping condition.

    max_iter : int, default=200
        Maximum number of iterations before timing out.

    l1_reg_W : float, default=0.
        L1 regularization parameter for W.

    l1_reg_H : float, default=0.
        L1 regularization parameter for H.

    l2_reg_W : float, default=0.
        L2 regularization parameter for W.

    l2_reg_H : float, default=0.
        L2 regularization parameter for H.

    update_H : bool, default=True
        Set to True, both W and H will be estimated from initial guesses.
        Set to False, only W will be estimated.

    verbose : int, default=0
        The verbosity level.

    shuffle : bool, default=False
        If true, randomize the order of coordinates in the CD solver.

    random_state : int, RandomState instance or None, default=None
        Used to randomize the coordinates in the CD solver, when
        ``shuffle`` is set to ``True``. Pass an int for reproducible
        results across multiple function calls.
        See :term:`Glossary <random_state>`.

    Returns
    -------
    W : ndarray of shape (n_samples, n_components)
        Solution to the non-negative least squares problem.

    H : ndarray of shape (n_components, n_features)
        Solution to the non-negative least squares problem.

    n_iter : int
        The number of iterations done by the algorithm.

    References
    ----------
    .. [1] :doi:`"Fast local algorithms for large scale nonnegative matrix and tensor
       factorizations" <10.1587/transfun.E92.A.708>`
       Cichocki, Andrzej, and P. H. A. N. Anh-Huy. IEICE transactions on fundamentals
       of electronics, communications and computer sciences 92.3: 708-721, 2009.
    """
    # so W and Ht are both in C order in memory
    Ht = check_array(H.T, order="C")
    X = check_array(X, accept_sparse="csr")

    rng = check_random_state(random_state)

    for n_iter in range(1, max_iter + 1):
        violation = 0.0

        # Update W
        violation += _update_coordinate_descent(
            X, W, Ht, l1_reg_W, l2_reg_W, shuffle, rng
        )
        # Update H
        if update_H:
            violation += _update_coordinate_descent(
                X.T, Ht, W, l1_reg_H, l2_reg_H, shuffle, rng
            )

        if n_iter == 1:
            violation_init = violation

        if violation_init == 0:
            break

        if verbose:
            print("violation:", violation / violation_init)

        if violation / violation_init <= tol:
            if verbose:
                print("Converged at iteration", n_iter + 1)
            break

    return W, Ht.T, n_iter


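# --- Illustrative sketch (editor's addition, not part of the original module).
# One full run of the coordinate-descent solver defined above, starting from
# an NNDSVDa initialization - roughly what NMF(solver='cd') does internally
# after `_initialize_nmf`. `_demo_fit_coordinate_descent` is our name.
def _demo_fit_coordinate_descent():
    rng = np.random.RandomState(0)
    X = np.abs(rng.standard_normal((30, 8)))

    W, H = _initialize_nmf(X, n_components=3, init="nndsvda", random_state=0)
    W, H, n_iter = _fit_coordinate_descent(X, W, H, tol=1e-4, max_iter=200)

    err = _beta_divergence(X, W, H, beta=2, square_root=True)
    print(f"stopped after {n_iter} iterations, reconstruction error {err:.3f}")

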
def _multiplicative_update_w(
    X,
    W,
    H,
    beta_loss,
    l1_reg_W,
    l2_reg_W,
    gamma,
    H_sum=None,
    HHt=None,
    XHt=None,
    update_H=True,
):
    """Update W in Multiplicative Update NMF."""
    if beta_loss == 2:
        # Numerator
        if XHt is None:
            XHt = safe_sparse_dot(X, H.T)
        if update_H:
            # avoid a copy of XHt, which will be re-computed (update_H=True)
            numerator = XHt
        else:
            # preserve the XHt, which is not re-computed (update_H=False)
            numerator = XHt.copy()

        # Denominator
        if HHt is None:
            HHt = np.dot(H, H.T)
        denominator = np.dot(W, HHt)

    else:
        # Numerator
        # if X is sparse, compute WH only where X is non zero
        WH_safe_X = _special_sparse_dot(W, H, X)
        if sp.issparse(X):
            WH_safe_X_data = WH_safe_X.data
            X_data = X.data
        else:
            WH_safe_X_data = WH_safe_X
            X_data = X
            # copy used in the Denominator
            WH = WH_safe_X.copy()
            if beta_loss - 1.0 < 0:
                WH[WH < EPSILON] = EPSILON

        # to avoid taking a negative power of zero
        if beta_loss - 2.0 < 0:
            WH_safe_X_data[WH_safe_X_data < EPSILON] = EPSILON

        if beta_loss == 1:
            np.divide(X_data, WH_safe_X_data, out=WH_safe_X_data)
        elif beta_loss == 0:
            # speeds up computation time
            WH_safe_X_data **= -1
            WH_safe_X_data **= 2
            # element-wise multiplication
            WH_safe_X_data *= X_data
        else:
            WH_safe_X_data **= beta_loss - 2
            # element-wise multiplication
            WH_safe_X_data *= X_data

        # here numerator = dot(X * (dot(W, H) ** (beta_loss - 2)), H.T)
        numerator = safe_sparse_dot(WH_safe_X, H.T)

        # Denominator
        if beta_loss == 1:
            if H_sum is None:
                H_sum = np.sum(H, axis=1)  # shape(n_components, )
            denominator = H_sum[np.newaxis, :]

        else:
            # computation of denominator: dot(dot(W, H) ** (beta_loss - 1), H.T)
            if sp.issparse(X):
                # memory efficient computation
                # (compute row by row, avoiding the dense matrix WH)
                WHHt = np.empty(W.shape)
                for i in range(X.shape[0]):
                    WHi = np.dot(W[i, :], H)
                    if beta_loss - 1 < 0:
                        WHi[WHi < EPSILON] = EPSILON
                    WHi **= beta_loss - 1
                    WHHt[i, :] = np.dot(WHi, H.T)
            else:
                WH **= beta_loss - 1
                WHHt = np.dot(WH, H.T)
            denominator = WHHt

    # Add L1 and L2 regularization
    if l1_reg_W > 0:
        denominator += l1_reg_W
    if l2_reg_W > 0:
        denominator = denominator + l2_reg_W * W
    denominator[denominator == 0] = EPSILON

    numerator /= denominator
    delta_W = numerator

    # gamma is in ]0, 1]
    if gamma != 1:
        delta_W **= gamma

    W *= delta_W

    return W, H_sum, HHt, XHt


def _multiplicative_update_h(
    X, W, H, beta_loss, l1_reg_H, l2_reg_H, gamma, A=None, B=None, rho=None
):
    """update H in Multiplicative Update NMF."""
    if beta_loss == 2:
        numerator = safe_sparse_dot(W.T, X)
        denominator = np.linalg.multi_dot([W.T, W, H])

    else:
        # Numerator
        WH_safe_X = _special_sparse_dot(W, H, X)
        if sp.issparse(X):
            WH_safe_X_data = WH_safe_X.data
            X_data = X.data
        else:
            WH_safe_X_data = WH_safe_X
            X_data = X
            # copy used in the Denominator
            WH = WH_safe_X.copy()
            if beta_loss - 1.0 < 0:
                WH[WH < EPSILON] = EPSILON

        # to avoid taking a negative power of zero
        if beta_loss - 2.0 < 0:
            WH_safe_X_data[WH_safe_X_data < EPSILON] = EPSILON

        if beta_loss == 1:
            np.divide(X_data, WH_safe_X_data, out=WH_safe_X_data)
        elif beta_loss == 0:
            # speeds up computation time
            WH_safe_X_data **= -1
            WH_safe_X_data **= 2
            # element-wise multiplication
            WH_safe_X_data *= X_data
        else:
            WH_safe_X_data **= beta_loss - 2
            # element-wise multiplication
            WH_safe_X_data *= X_data

        # here numerator = dot(W.T, (dot(W, H) ** (beta_loss - 2)) * X)
        numerator = safe_sparse_dot(W.T, WH_safe_X)

        # Denominator
        if beta_loss == 1:
            W_sum = np.sum(W, axis=0)  # shape(n_components, )
            W_sum[W_sum == 0] = 1.0
            denominator = W_sum[:, np.newaxis]

        # beta_loss not in (1, 2)
        else:
            # computation of denominator: dot(W.T, dot(W, H) ** (beta_loss - 1))
            if sp.issparse(X):
                # memory efficient computation
                # (compute column by column, avoiding the dense matrix WH)
                WtWH = np.empty(H.shape)
                for i in range(X.shape[1]):
                    WHi = np.dot(W, H[:, i])
                    if beta_loss - 1 < 0:
                        WHi[WHi < EPSILON] = EPSILON
                    WHi **= beta_loss - 1
                    WtWH[:, i] = np.dot(W.T, WHi)
            else:
                WH **= beta_loss - 1
                WtWH = np.dot(W.T, WH)
            denominator = WtWH

    # Add L1 and L2 regularization
    if l1_reg_H > 0:
        denominator += l1_reg_H
    if l2_reg_H > 0:
        denominator = denominator + l2_reg_H * H
    denominator[denominator == 0] = EPSILON

    if A is not None and B is not None:
        # Updates for the online nmf
        if gamma != 1:
            H **= 1 / gamma

        numerator *= H
        A *= rho
        B *= rho
        A += numerator
        B += denominator
        H = A / B

        if gamma != 1:
            H **= gamma
    else:
        delta_H = numerator
        delta_H /= denominator
        if gamma != 1:
            delta_H **= gamma
        H *= delta_H

    return H


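# --- Illustrative sketch (editor's addition, not part of the original module).
# A single multiplicative update of W then H under the generalized
# Kullback-Leibler loss (beta_loss=1, for which the MM exponent gamma is 1).
# Such an update should not increase the beta-divergence.
# `_demo_multiplicative_update_step` is our name.
def _demo_multiplicative_update_step():
    rng = np.random.RandomState(0)
    X = np.abs(rng.standard_normal((15, 6))) + 0.1
    W, H = _initialize_nmf(X, n_components=2, init="random", random_state=0)

    before = _beta_divergence(X, W, H, beta=1)
    W, *_ = _multiplicative_update_w(
        X, W, H, beta_loss=1, l1_reg_W=0, l2_reg_W=0, gamma=1.0
    )
    H = _multiplicative_update_h(
        X, W, H, beta_loss=1, l1_reg_H=0, l2_reg_H=0, gamma=1.0
    )
    after = _beta_divergence(X, W, H, beta=1)
    print(f"KL divergence before: {before:.4f}, after one update: {after:.4f}")

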
def _fit_multiplicative_update(
    X,
    W,
    H,
    beta_loss="frobenius",
    max_iter=200,
    tol=1e-4,
    l1_reg_W=0,
    l1_reg_H=0,
    l2_reg_W=0,
    l2_reg_H=0,
    update_H=True,
    verbose=0,
):
    """Compute Non-negative Matrix Factorization with Multiplicative Update.

    The objective function is _beta_divergence(X, WH) and is minimized with an
    alternating minimization of W and H. Each minimization is done with a
    Multiplicative Update.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Constant input matrix.

    W : array-like of shape (n_samples, n_components)
        Initial guess for the solution.

    H : array-like of shape (n_components, n_features)
        Initial guess for the solution.

    beta_loss : float or {'frobenius', 'kullback-leibler', 'itakura-saito'}, default='frobenius'
        String must be in {'frobenius', 'kullback-leibler', 'itakura-saito'}.
        Beta divergence to be minimized, measuring the distance between X
        and the dot product WH. Note that values different from 'frobenius'
        (or 2) and 'kullback-leibler' (or 1) lead to significantly slower
        fits. Note that for beta_loss <= 0 (or 'itakura-saito'), the input
        matrix X cannot contain zeros.

    max_iter : int, default=200
        Number of iterations.

    tol : float, default=1e-4
        Tolerance of the stopping condition.

    l1_reg_W : float, default=0.
        L1 regularization parameter for W.

    l1_reg_H : float, default=0.
        L1 regularization parameter for H.

    l2_reg_W : float, default=0.
        L2 regularization parameter for W.

    l2_reg_H : float, default=0.
        L2 regularization parameter for H.

    update_H : bool, default=True
        Set to True, both W and H will be estimated from initial guesses.
        Set to False, only W will be estimated.

    verbose : int, default=0
        The verbosity level.

    Returns
    -------
    W : ndarray of shape (n_samples, n_components)
        Solution to the non-negative least squares problem.

    H : ndarray of shape (n_components, n_features)
        Solution to the non-negative least squares problem.

    n_iter : int
        The number of iterations done by the algorithm.

    References
    ----------
    Lee, D. D., & Seung, H., S. (2001). Algorithms for Non-negative Matrix
    Factorization. Adv. Neural Inform. Process. Syst.. 13.

    Fevotte, C., & Idier, J. (2011). Algorithms for nonnegative matrix
    factorization with the beta-divergence. Neural Computation, 23(9).
    """
    start_time = time.time()

    beta_loss = _beta_loss_to_float(beta_loss)

    # gamma for Maximization-Minimization (MM) algorithm [Fevotte 2011]
    if beta_loss < 1:
        gamma = 1.0 / (2.0 - beta_loss)
    elif beta_loss > 2:
        gamma = 1.0 / (beta_loss - 1.0)
    else:
        gamma = 1.0

    # used for the convergence criterion
    error_at_init = _beta_divergence(X, W, H, beta_loss, square_root=True)
    previous_error = error_at_init

    H_sum, HHt, XHt = None, None, None
    for n_iter in range(1, max_iter + 1):
        # update W
        # H_sum, HHt and XHt are saved and reused if not update_H
        W, H_sum, HHt, XHt = _multiplicative_update_w(
            X,
            W,
            H,
            beta_loss=beta_loss,
            l1_reg_W=l1_reg_W,
            l2_reg_W=l2_reg_W,
            gamma=gamma,
            H_sum=H_sum,
            HHt=HHt,
            XHt=XHt,
            update_H=update_H,
        )

        # necessary for stability with beta_loss < 1
        if beta_loss < 1:
            W[W < np.finfo(np.float64).eps] = 0.0

        # update H (only at fit or fit_transform)
        if update_H:
            H = _multiplicative_update_h(
                X,
                W,
                H,
                beta_loss=beta_loss,
                l1_reg_H=l1_reg_H,
                l2_reg_H=l2_reg_H,
                gamma=gamma,
            )

            # These values will be recomputed since H changed
            H_sum, HHt, XHt = None, None, None

            # necessary for stability with beta_loss < 1
            if beta_loss <= 1:
                H[H < np.finfo(np.float64).eps] = 0.0

        # test convergence criterion every 10 iterations
        if tol > 0 and n_iter % 10 == 0:
            error = _beta_divergence(X, W, H, beta_loss, square_root=True)

            if verbose:
                iter_time = time.time()
                print(
                    "Epoch %02d reached after %.3f seconds, error: %f"
                    % (n_iter, iter_time - start_time, error)
                )

            if (previous_error - error) / error_at_init < tol:
                break
            previous_error = error

    # do not print if we have already printed in the convergence test
    if verbose and (tol == 0 or n_iter % 10 != 0):
        end_time = time.time()
        print(
            "Epoch %02d reached after %.3f seconds." % (n_iter, end_time - start_time)
        )

    return W, H, n_iter


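# --- Illustrative sketch (editor's addition, not part of the original module).
# End-to-end use of the multiplicative-update solver above with the
# Kullback-Leibler loss, the same code path NMF(solver='mu',
# beta_loss='kullback-leibler') takes internally.
# `_demo_fit_multiplicative_update` is our name.
def _demo_fit_multiplicative_update():
    rng = np.random.RandomState(0)
    X = np.abs(rng.standard_normal((25, 10))) + 0.1

    # 'nndsvda' (not 'nndsvd') so the starting factors contain no zeros,
    # which multiplicative updates could never move away from.
    W, H = _initialize_nmf(X, n_components=3, init="nndsvda", random_state=0)
    W, H, n_iter = _fit_multiplicative_update(
        X, W, H, beta_loss="kullback-leibler", max_iter=200, tol=1e-4
    )
    print(f"stopped after {n_iter} iterations")

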
@validate_params(
    {
        "X": ["array-like", "sparse matrix"],
        "W": ["array-like", None],
        "H": ["array-like", None],
        "update_H": ["boolean"],
    },
    prefer_skip_nested_validation=False,
)
def non_negative_factorization(
    X,
    W=None,
    H=None,
    n_components="auto",
    *,
    init=None,
    update_H=True,
    solver="cd",
    beta_loss="frobenius",
    tol=1e-4,
    max_iter=200,
    alpha_W=0.0,
    alpha_H="same",
    l1_ratio=0.0,
    random_state=None,
    verbose=0,
    shuffle=False,
):
    """Compute Non-negative Matrix Factorization (NMF).

    Find two non-negative matrices (W, H) whose product approximates the non-
    negative matrix X. This factorization can be used for example for
    dimensionality reduction, source separation or topic extraction.

    The objective function is:

        .. math::

            L(W, H) &= 0.5 * ||X - WH||_{loss}^2

            &+ alpha\_W * l1\_ratio * n\_features * ||vec(W)||_1

            &+ alpha\_H * l1\_ratio * n\_samples * ||vec(H)||_1

            &+ 0.5 * alpha\_W * (1 - l1\_ratio) * n\_features * ||W||_{Fro}^2

            &+ 0.5 * alpha\_H * (1 - l1\_ratio) * n\_samples * ||H||_{Fro}^2,

    where :math:`||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2` (Frobenius norm) and
    :math:`||vec(A)||_1 = \sum_{i,j} abs(A_{ij})` (Elementwise L1 norm)

    The generic norm :math:`||X - WH||_{loss}^2` may represent
    the Frobenius norm or another supported beta-divergence loss.
    The choice between options is controlled by the `beta_loss` parameter.

    The regularization terms are scaled by `n_features` for `W` and by `n_samples` for
    `H` to keep their impact balanced with respect to one another and to the data fit
    term as independent as possible of the size `n_samples` of the training set.

    The objective function is minimized with an alternating minimization of W
    and H. If H is given and update_H=False, it solves for W only.

    Note that the transformed data is named W and the components matrix is
    named H. In the NMF literature, the naming convention is usually the
    opposite since the data matrix X is transposed.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Constant matrix.

    W : array-like of shape (n_samples, n_components), default=None
        If `init='custom'`, it is used as initial guess for the solution.
        If `update_H=False`, it is initialised as an array of zeros, unless
        `solver='mu'`, then it is filled with values calculated by
        `np.sqrt(X.mean() / self._n_components)`.
        If `None`, uses the initialisation method specified in `init`.

    H : array-like of shape (n_components, n_features), default=None
        If `init='custom'`, it is used as initial guess for the solution.
        If `update_H=False`, it is used as a constant, to solve for W only.
        If `None`, uses the initialisation method specified in `init`.

    n_components : int or {'auto'} or None, default='auto'
        Number of components. If `None`, all features are kept.
        If `n_components='auto'`, the number of components is automatically inferred
        from `W` or `H` shapes.

        .. versionchanged:: 1.4
            Added `'auto'` value.

        .. versionchanged:: 1.6
            Default value changed from `None` to `'auto'`.

    init : {'random', 'nndsvd', 'nndsvda', 'nndsvdar', 'custom'}, default=None
        Method used to initialize the procedure.

        Valid options:

        - None: 'nndsvda' if n_components < n_features, otherwise 'random'.
        - 'random': non-negative random matrices, scaled with:
          `sqrt(X.mean() / n_components)`
        - 'nndsvd': Nonnegative Double Singular Value Decomposition (NNDSVD)
          initialization (better for sparseness)
        - 'nndsvda': NNDSVD with zeros filled with the average of X
          (better when sparsity is not desired)
        - 'nndsvdar': NNDSVD with zeros filled with small random values
          (generally faster, less accurate alternative to NNDSVDa
          for when sparsity is not desired)
        - 'custom': If `update_H=True`, use custom matrices W and H which must both
          be provided. If `update_H=False`, then only custom matrix H is used.

        .. versionchanged:: 0.23
            The default value of `init` changed from 'random' to None in 0.23.

        .. versionchanged:: 1.1
            When `init=None` and n_components is less than n_samples and n_features
            defaults to `nndsvda` instead of `nndsvd`.

    update_H : bool, default=True
        Set to True, both W and H will be estimated from initial guesses.
        Set to False, only W will be estimated.

    solver : {'cd', 'mu'}, default='cd'
        Numerical solver to use:

        - 'cd' is a Coordinate Descent solver that uses Fast Hierarchical
          Alternating Least Squares (Fast HALS).
        - 'mu' is a Multiplicative Update solver.

        .. versionadded:: 0.17
           Coordinate Descent solver.

        .. versionadded:: 0.19
           Multiplicative Update solver.

    beta_loss : float or {'frobenius', 'kullback-leibler', 'itakura-saito'}, default='frobenius'
        Beta divergence to be minimized, measuring the distance between X
        and the dot product WH. Note that values different from 'frobenius'
        (or 2) and 'kullback-leibler' (or 1) lead to significantly slower
        fits. Note that for beta_loss <= 0 (or 'itakura-saito'), the input
        matrix X cannot contain zeros. Used only in 'mu' solver.

        .. versionadded:: 0.19

    tol : float, default=1e-4
        Tolerance of the stopping condition.

    max_iter : int, default=200
        Maximum number of iterations before timing out.

    alpha_W : float, default=0.0
        Constant that multiplies the regularization terms of `W`. Set it to zero
        (default) to have no regularization on `W`.

        .. versionadded:: 1.0

    alpha_H : float or "same", default="same"
        Constant that multiplies the regularization terms of `H`. Set it to zero to
        have no regularization on `H`. If "same" (default), it takes the same value as
        `alpha_W`.

        .. versionadded:: 1.0

    l1_ratio : float, default=0.0
        The regularization mixing parameter, with 0 <= l1_ratio <= 1.
        For l1_ratio = 0 the penalty is an elementwise L2 penalty
        (aka Frobenius Norm).
        For l1_ratio = 1 it is an elementwise L1 penalty.
        For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.

    random_state : int, RandomState instance or None, default=None
        Used for NMF initialisation (when ``init`` == 'nndsvdar' or
        'random'), and in Coordinate Descent. Pass an int for reproducible
        results across multiple function calls.
        See :term:`Glossary <random_state>`.

    verbose : int, default=0
        The verbosity level.

    shuffle : bool, default=False
        If true, randomize the order of coordinates in the CD solver.

    Returns
    -------
    W : ndarray of shape (n_samples, n_components)
        Solution to the non-negative least squares problem.

    H : ndarray of shape (n_components, n_features)
        Solution to the non-negative least squares problem.

    n_iter : int
        Actual number of iterations.

    References
    ----------
    .. [1] :doi:`"Fast local algorithms for large scale nonnegative matrix and tensor
       factorizations" <10.1587/transfun.E92.A.708>`
       Cichocki, Andrzej, and P. H. A. N. Anh-Huy. IEICE transactions on fundamentals
       of electronics, communications and computer sciences 92.3: 708-721, 2009.
    .. [2] :doi:`"Algorithms for nonnegative matrix factorization with the
       beta-divergence" <10.1162/NECO_a_00168>`
       Fevotte, C., & Idier, J. (2011). Neural Computation, 23(9).

    Examples
    --------
    >>> import numpy as np
    >>> X = np.array([[1,1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
    >>> from sklearn.decomposition import non_negative_factorization
    >>> W, H, n_iter = non_negative_factorization(
    ...     X, n_components=2, init='random', random_state=0)
    """
    est = NMF(
        n_components=n_components,
        init=init,
        solver=solver,
        beta_loss=beta_loss,
        tol=tol,
        max_iter=max_iter,
        random_state=random_state,
        alpha_W=alpha_W,
        alpha_H=alpha_H,
        l1_ratio=l1_ratio,
        verbose=verbose,
        shuffle=shuffle,
    )
    est._validate_params()

    X = check_array(X, accept_sparse=("csr", "csc"), dtype=[np.float64, np.float32])

    with config_context(assume_finite=True):
        W, H, n_iter = est._fit_transform(X, W=W, H=H, update_H=update_H)

    return W, H, n_iter


class _BaseNMF(ClassNamePrefixFeaturesOutMixin, TransformerMixin, BaseEstimator, ABC):
    """Base class for NMF and MiniBatchNMF."""

    _parameter_constraints: dict = {
        "n_components": [
            Interval(Integral, 1, None, closed="left"),
            None,
            StrOptions({"auto"}),
        ],
        "init": [
            StrOptions({"custom", "random", "nndsvd", "nndsvda", "nndsvdar"}),
            None,
        ],
        "beta_loss": [
            StrOptions({"frobenius", "kullback-leibler", "itakura-saito"}),
            Real,
        ],
        "tol": [Interval(Real, 0, None, closed="left")],
        "max_iter": [Interval(Integral, 1, None, closed="left")],
        "random_state": ["random_state"],
        "alpha_W": [Interval(Real, 0, None, closed="left")],
        "alpha_H": [Interval(Real, 0, None, closed="left"), StrOptions({"same"})],
        "l1_ratio": [Interval(Real, 0, 1, closed="both")],
        "verbose": ["verbose"],
    }

    def __init__(
        self,
        n_components="auto",
        *,
        init=None,
        beta_loss="frobenius",
        tol=1e-4,
        max_iter=200,
        random_state=None,
        alpha_W=0.0,
        alpha_H="same",
        l1_ratio=0.0,
        verbose=0,
    ):
        self.n_components = n_components
        self.init = init
        self.beta_loss = beta_loss
        self.tol = tol
        self.max_iter = max_iter
        self.random_state = random_state
        self.alpha_W = alpha_W
        self.alpha_H = alpha_H
        self.l1_ratio = l1_ratio
        self.verbose = verbose

    def _check_params(self, X):
        # n_components
        self._n_components = self.n_components
        if self._n_components is None:
            self._n_components = X.shape[1]

        # beta_loss
        self._beta_loss = _beta_loss_to_float(self.beta_loss)

    def _check_w_h(self, X, W, H, update_H):
        """Check W and H, or initialize them."""
        n_samples, n_features = X.shape

        if self.init == "custom" and update_H:
            _check_init(H, (self._n_components, n_features), "NMF (input H)")
            _check_init(W, (n_samples, self._n_components), "NMF (input W)")
            if self._n_components == "auto":
                self._n_components = H.shape[0]

            if H.dtype != X.dtype or W.dtype != X.dtype:
                raise TypeError(
                    "H and W should have the same dtype as X. Got "
                    "H.dtype = {} and W.dtype = {}.".format(H.dtype, W.dtype)
                )

        elif not update_H:
            if W is not None:
                warnings.warn(
                    "When update_H=False, the provided initial W is not used.",
                    RuntimeWarning,
                )

            _check_init(H, (self._n_components, n_features), "NMF (input H)")
            if self._n_components == "auto":
                self._n_components = H.shape[0]

            if H.dtype != X.dtype:
                raise TypeError(
                    "H should have the same dtype as X. Got H.dtype = {}.".format(
                        H.dtype
                    )
                )

            # 'mu' solver should not be initialized by zeros
            if self.solver == "mu":
                avg = np.sqrt(X.mean() / self._n_components)
                W = np.full((n_samples, self._n_components), avg, dtype=X.dtype)
            else:
                W = np.zeros((n_samples, self._n_components), dtype=X.dtype)

        else:
            if W is not None or H is not None:
                warnings.warn(
                    "When init!='custom', provided W or H are ignored. Set "
                    " init='custom' to use them as initialization.",
                    RuntimeWarning,
                )

            if self._n_components == "auto":
                self._n_components = X.shape[1]

            W, H = _initialize_nmf(
                X, self._n_components, init=self.init, random_state=self.random_state
            )

        return W, H

    def _compute_regularization(self, X):
        """Compute scaled regularization terms."""
        n_samples, n_features = X.shape
        alpha_W = self.alpha_W
        alpha_H = self.alpha_W if self.alpha_H == "same" else self.alpha_H

        l1_reg_W = n_features * alpha_W * self.l1_ratio
        l1_reg_H = n_samples * alpha_H * self.l1_ratio
        l2_reg_W = n_features * alpha_W * (1.0 - self.l1_ratio)
        l2_reg_H = n_samples * alpha_H * (1.0 - self.l1_ratio)

        return l1_reg_W, l1_reg_H, l2_reg_W, l2_reg_H

    def fit(self, X, y=None, **params):
        """Learn a NMF model for the data X.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples
            and `n_features` is the number of features.

        y : Ignored
            Not used, present for API consistency by convention.

        **params : kwargs
            Parameters (keyword arguments) and values passed to
            the fit_transform instance.

        Returns
        -------
        self : object
            Returns the instance itself.
        """
        # param validation is done in fit_transform
        self.fit_transform(X, **params)
        return self

    def inverse_transform(self, X):
        """Transform data back to its original space.

        .. versionadded:: 0.18

        Parameters
        ----------
        X : {ndarray, sparse matrix} of shape (n_samples, n_components)
            Transformed data matrix.

        Returns
        -------
        X_original : ndarray of shape (n_samples, n_features)
            Returns a data matrix of the original shape.
        """
        check_is_fitted(self)
        return X @ self.components_

    @property
    def _n_features_out(self):
        """Number of transformed output features."""
        return self.components_.shape[0]

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.input_tags.positive_only = True
        tags.input_tags.sparse = True
        tags.transformer_tags.preserves_dtype = ["float64", "float32"]
        return tags


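# --- Illustrative sketch (editor's addition, not part of the original module).
# With update_H=False, `non_negative_factorization` keeps the dictionary H
# fixed and solves only for the code W - a simple non-negative least-squares
# use of the function. `_demo_fixed_dictionary` is our name.
def _demo_fixed_dictionary():
    rng = np.random.RandomState(0)
    H_fixed = np.abs(rng.standard_normal((3, 8)))
    X = np.abs(rng.standard_normal((12, 3))) @ H_fixed

    W, H, n_iter = non_negative_factorization(
        X, H=H_fixed, n_components=3, update_H=False, max_iter=500
    )
    assert np.allclose(H, H_fixed)  # H is returned unchanged
    print(f"solved for W in {n_iter} iterations")

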
This factorization can be used for example for dimensionality reduction, source separation or topic extraction. The objective function is: .. math:: L(W, H) &= 0.5 * ||X - WH||_{loss}^2 &+ alpha\_W * l1\_ratio * n\_features * ||vec(W)||_1 &+ alpha\_H * l1\_ratio * n\_samples * ||vec(H)||_1 &+ 0.5 * alpha\_W * (1 - l1\_ratio) * n\_features * ||W||_{Fro}^2 &+ 0.5 * alpha\_H * (1 - l1\_ratio) * n\_samples * ||H||_{Fro}^2, where :math:`||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2` (Frobenius norm) and :math:`||vec(A)||_1 = \sum_{i,j} abs(A_{ij})` (Elementwise L1 norm). The generic norm :math:`||X - WH||_{loss}` may represent the Frobenius norm or another supported beta-divergence loss. The choice between options is controlled by the `beta_loss` parameter. The regularization terms are scaled by `n_features` for `W` and by `n_samples` for `H` to keep their impact balanced with respect to one another and to the data fit term as independent as possible of the size `n_samples` of the training set. The objective function is minimized with an alternating minimization of W and H. Note that the transformed data is named W and the components matrix is named H. In the NMF literature, the naming convention is usually the opposite since the data matrix X is transposed. Read more in the :ref:`User Guide `. Parameters ---------- n_components : int or {'auto'} or None, default='auto' Number of components. If `None`, all features are kept. If `n_components='auto'`, the number of components is automatically inferred from W or H shapes. .. versionchanged:: 1.4 Added `'auto'` value. .. versionchanged:: 1.6 Default value changed from `None` to `'auto'`. init : {'random', 'nndsvd', 'nndsvda', 'nndsvdar', 'custom'}, default=None Method used to initialize the procedure. Valid options: - `None`: 'nndsvda' if n_components <= min(n_samples, n_features), otherwise random. - `'random'`: non-negative random matrices, scaled with: `sqrt(X.mean() / n_components)` - `'nndsvd'`: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness) - `'nndsvda'`: NNDSVD with zeros filled with the average of X (better when sparsity is not desired) - `'nndsvdar'` NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired) - `'custom'`: Use custom matrices `W` and `H` which must both be provided. .. versionchanged:: 1.1 When `init=None` and n_components is less than n_samples and n_features defaults to `nndsvda` instead of `nndsvd`. solver : {'cd', 'mu'}, default='cd' Numerical solver to use: - 'cd' is a Coordinate Descent solver. - 'mu' is a Multiplicative Update solver. .. versionadded:: 0.17 Coordinate Descent solver. .. versionadded:: 0.19 Multiplicative Update solver. beta_loss : float or {'frobenius', 'kullback-leibler', 'itakura-saito'}, default='frobenius' Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from 'frobenius' (or 2) and 'kullback-leibler' (or 1) lead to significantly slower fits. Note that for beta_loss <= 0 (or 'itakura-saito'), the input matrix X cannot contain zeros. Used only in 'mu' solver. .. versionadded:: 0.19 tol : float, default=1e-4 Tolerance of the stopping condition. max_iter : int, default=200 Maximum number of iterations before timing out. random_state : int, RandomState instance or None, default=None Used for initialisation (when ``init`` == 'nndsvdar' or 'random'), and in Coordinate Descent. 
        Pass an int for reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`.

    alpha_W : float, default=0.0
        Constant that multiplies the regularization terms of `W`. Set it to zero
        (default) to have no regularization on `W`.

        .. versionadded:: 1.0

    alpha_H : float or "same", default="same"
        Constant that multiplies the regularization terms of `H`. Set it to zero to
        have no regularization on `H`. If "same" (default), it takes the same value as
        `alpha_W`.

        .. versionadded:: 1.0

    l1_ratio : float, default=0.0
        The regularization mixing parameter, with 0 <= l1_ratio <= 1.
        For l1_ratio = 0 the penalty is an elementwise L2 penalty
        (aka Frobenius Norm).
        For l1_ratio = 1 it is an elementwise L1 penalty.
        For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.

        .. versionadded:: 0.17
           Regularization parameter *l1_ratio* used in the Coordinate Descent
           solver.

    verbose : int, default=0
        Whether to be verbose.

    shuffle : bool, default=False
        If true, randomize the order of coordinates in the CD solver.

        .. versionadded:: 0.17
           *shuffle* parameter used in the Coordinate Descent solver.

    Attributes
    ----------
    components_ : ndarray of shape (n_components, n_features)
        Factorization matrix, sometimes called 'dictionary'.

    n_components_ : int
        The number of components. It is same as the `n_components` parameter
        if it was given. Otherwise, it will be same as the number of
        features.

    reconstruction_err_ : float
        Frobenius norm of the matrix difference, or beta-divergence, between
        the training data ``X`` and the reconstructed data ``WH`` from
        the fitted model.

    n_iter_ : int
        Actual number of iterations.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    DictionaryLearning : Find a dictionary that sparsely encodes data.
    MiniBatchSparsePCA : Mini-batch Sparse Principal Components Analysis.
    PCA : Principal component analysis.
    SparseCoder : Find a sparse representation of data from a fixed,
        precomputed dictionary.
    SparsePCA : Sparse Principal Components Analysis.
    TruncatedSVD : Dimensionality reduction using truncated SVD.

    References
    ----------
    .. [1] :doi:`"Fast local algorithms for large scale nonnegative matrix and tensor
       factorizations" <10.1587/transfun.E92.A.708>`
       Cichocki, Andrzej, and P. H. A. N. Anh-Huy. IEICE transactions on fundamentals
       of electronics, communications and computer sciences 92.3: 708-721, 2009.

    .. [2] :doi:`"Algorithms for nonnegative matrix factorization with the
       beta-divergence" <10.1162/NECO_a_00168>`
       Fevotte, C., & Idier, J. (2011). Neural Computation, 23(9).

    Examples
    --------
    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
    >>> from sklearn.decomposition import NMF
    >>> model = NMF(n_components=2, init='random', random_state=0)
    >>> W = model.fit_transform(X)
    >>> H = model.components_
    """
r r rrjr-rrr UserWarningrr&r(s rr zNMF._check_params"s a  ;;$ 4>>9I#I6t{{oF#~~02  ;;$ 499#8 MMM   r!TrcTt||dtjtjg}t d5|j |||\}}}dddt ||||jd|_|jd|_ ||_ |_ |S#1swYLxYw) aLearn a NMF model for the data X and returns the transformed data. This is more efficient than calling fit followed by transform. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : Ignored Not used, present for API consistency by convention. W : array-like of shape (n_samples, n_components), default=None If `init='custom'`, it is used as initial guess for the solution. If `None`, uses the initialisation method specified in `init`. H : array-like of shape (n_components, n_features), default=None If `init='custom'`, it is used as initial guess for the solution. If `None`, uses the initialisation method specified in `init`. Returns ------- W : ndarray of shape (n_samples, n_components) Transformed data. rrTrrCrDNrr) rr#rrr rrTrreconstruction_err_r, n_components_rn_iter_)rr&rrCrDrs rrzNMF.fit_transform9s8  !>"**bjj9Q $ / <..qA.;LAq& <$4 q!T__$$  WWQZ  < `. Parameters ---------- n_components : int or {'auto'} or None, default='auto' Number of components. If `None`, all features are kept. If `n_components='auto'`, the number of components is automatically inferred from W or H shapes. .. versionchanged:: 1.4 Added `'auto'` value. .. versionchanged:: 1.6 Default value changed from `None` to `'auto'`. init : {'random', 'nndsvd', 'nndsvda', 'nndsvdar', 'custom'}, default=None Method used to initialize the procedure. Valid options: - `None`: 'nndsvda' if `n_components <= min(n_samples, n_features)`, otherwise random. - `'random'`: non-negative random matrices, scaled with: `sqrt(X.mean() / n_components)` - `'nndsvd'`: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness). - `'nndsvda'`: NNDSVD with zeros filled with the average of X (better when sparsity is not desired). - `'nndsvdar'` NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired). - `'custom'`: Use custom matrices `W` and `H` which must both be provided. batch_size : int, default=1024 Number of samples in each mini-batch. Large batch sizes give better long-term convergence at the cost of a slower start. beta_loss : float or {'frobenius', 'kullback-leibler', 'itakura-saito'}, default='frobenius' Beta divergence to be minimized, measuring the distance between `X` and the dot product `WH`. Note that values different from 'frobenius' (or 2) and 'kullback-leibler' (or 1) lead to significantly slower fits. Note that for `beta_loss <= 0` (or 'itakura-saito'), the input matrix `X` cannot contain zeros. tol : float, default=1e-4 Control early stopping based on the norm of the differences in `H` between 2 steps. To disable early stopping based on changes in `H`, set `tol` to 0.0. max_no_improvement : int, default=10 Control early stopping based on the consecutive number of mini batches that does not yield an improvement on the smoothed cost function. To disable convergence detection based on cost function, set `max_no_improvement` to None. max_iter : int, default=200 Maximum number of iterations over the complete dataset before timing out. alpha_W : float, default=0.0 Constant that multiplies the regularization terms of `W`. Set it to zero (default) to have no regularization on `W`. 
class MiniBatchNMF(_BaseNMF):
    """Mini-Batch Non-Negative Matrix Factorization (NMF).

    .. versionadded:: 1.1

    Find two non-negative matrices, i.e. matrices with all non-negative elements,
    (`W`, `H`) whose product approximates the non-negative matrix `X`. This
    factorization can be used for example for dimensionality reduction, source
    separation or topic extraction.

    The objective function is:

        .. math::

            L(W, H) &= 0.5 * ||X - WH||_{loss}^2

            &+ alpha\_W * l1\_ratio * n\_features * ||vec(W)||_1

            &+ alpha\_H * l1\_ratio * n\_samples * ||vec(H)||_1

            &+ 0.5 * alpha\_W * (1 - l1\_ratio) * n\_features * ||W||_{Fro}^2

            &+ 0.5 * alpha\_H * (1 - l1\_ratio) * n\_samples * ||H||_{Fro}^2,

    where :math:`||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2` (Frobenius norm) and
    :math:`||vec(A)||_1 = \sum_{i,j} abs(A_{ij})` (Elementwise L1 norm).

    The generic norm :math:`||X - WH||_{loss}^2` may represent
    the Frobenius norm or another supported beta-divergence loss.
    The choice between options is controlled by the `beta_loss` parameter.

    The objective function is minimized with an alternating minimization of `W`
    and `H`.

    Note that the transformed data is named `W` and the components matrix is
    named `H`. In the NMF literature, the naming convention is usually the opposite
    since the data matrix `X` is transposed.

    Read more in the :ref:`User Guide <MiniBatchNMF>`.

    Parameters
    ----------
    n_components : int or {'auto'} or None, default='auto'
        Number of components. If `None`, all features are kept.
        If `n_components='auto'`, the number of components is automatically inferred
        from W or H shapes.

        .. versionchanged:: 1.4
            Added `'auto'` value.

        .. versionchanged:: 1.6
            Default value changed from `None` to `'auto'`.

    init : {'random', 'nndsvd', 'nndsvda', 'nndsvdar', 'custom'}, default=None
        Method used to initialize the procedure.
        Valid options:

        - `None`: 'nndsvda' if `n_components <= min(n_samples, n_features)`,
          otherwise random.

        - `'random'`: non-negative random matrices, scaled with:
          `sqrt(X.mean() / n_components)`

        - `'nndsvd'`: Nonnegative Double Singular Value Decomposition (NNDSVD)
          initialization (better for sparseness).

        - `'nndsvda'`: NNDSVD with zeros filled with the average of X
          (better when sparsity is not desired).

        - `'nndsvdar'` NNDSVD with zeros filled with small random values
          (generally faster, less accurate alternative to NNDSVDa
          for when sparsity is not desired).

        - `'custom'`: Use custom matrices `W` and `H` which must both be provided.

    batch_size : int, default=1024
        Number of samples in each mini-batch. Large batch sizes give better long-term
        convergence at the cost of a slower start.

    beta_loss : float or {'frobenius', 'kullback-leibler', 'itakura-saito'}, default='frobenius'
        Beta divergence to be minimized, measuring the distance between `X`
        and the dot product `WH`. Note that values different from 'frobenius'
        (or 2) and 'kullback-leibler' (or 1) lead to significantly slower
        fits. Note that for `beta_loss <= 0` (or 'itakura-saito'), the input
        matrix `X` cannot contain zeros.

    tol : float, default=1e-4
        Control early stopping based on the norm of the differences in `H`
        between 2 steps. To disable early stopping based on changes in `H`, set
        `tol` to 0.0.

    max_no_improvement : int, default=10
        Control early stopping based on the consecutive number of mini batches
        that does not yield an improvement on the smoothed cost function.
        To disable convergence detection based on cost function, set
        `max_no_improvement` to None.

    max_iter : int, default=200
        Maximum number of iterations over the complete dataset before
        timing out.

    alpha_W : float, default=0.0
        Constant that multiplies the regularization terms of `W`. Set it to zero
        (default) to have no regularization on `W`.

    alpha_H : float or "same", default="same"
        Constant that multiplies the regularization terms of `H`. Set it to zero to
        have no regularization on `H`. If "same" (default), it takes the same value as
        `alpha_W`.

    l1_ratio : float, default=0.0
        The regularization mixing parameter, with 0 <= l1_ratio <= 1.
        For l1_ratio = 0 the penalty is an elementwise L2 penalty
        (aka Frobenius Norm).
        For l1_ratio = 1 it is an elementwise L1 penalty.
        For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.

    forget_factor : float, default=0.7
        Amount of rescaling of past information. Its value could be 1 with
        finite datasets. Choosing values < 1 is recommended with online
        learning as more recent batches will weight more than past batches.

    fresh_restarts : bool, default=False
        Whether to completely solve for W at each step. Doing fresh restarts will likely
        lead to a better solution for a same number of iterations but it is much slower.

    fresh_restarts_max_iter : int, default=30
        Maximum number of iterations when solving for W at each step. Only used when
        doing fresh restarts. These iterations may be stopped early based on a small
        change of W controlled by `tol`.

    transform_max_iter : int, default=None
        Maximum number of iterations when solving for W at transform time.
        If None, it defaults to `max_iter`.

    random_state : int, RandomState instance or None, default=None
        Used for initialisation (when ``init`` == 'nndsvdar' or
        'random'), and in Coordinate Descent. Pass an int for reproducible
        results across multiple function calls.
        See :term:`Glossary <random_state>`.

    verbose : bool, default=False
        Whether to be verbose.

    Attributes
    ----------
    components_ : ndarray of shape (n_components, n_features)
        Factorization matrix, sometimes called 'dictionary'.

    n_components_ : int
        The number of components. It is same as the `n_components` parameter
        if it was given. Otherwise, it will be same as the number of features.

    reconstruction_err_ : float
        Frobenius norm of the matrix difference, or beta-divergence, between
        the training data `X` and the reconstructed data `WH` from
        the fitted model.

    n_iter_ : int
        Actual number of started iterations over the whole dataset.

    n_steps_ : int
        Number of mini-batches processed.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

    See Also
    --------
    NMF : Non-negative matrix factorization.
    MiniBatchDictionaryLearning : Finds a dictionary that can best be used to represent
        data using a sparse code.

    References
    ----------
    .. [1] :doi:`"Fast local algorithms for large scale nonnegative matrix and tensor
       factorizations" <10.1587/transfun.E92.A.708>`
       Cichocki, Andrzej, and P. H. A. N. Anh-Huy. IEICE transactions on fundamentals
       of electronics, communications and computer sciences 92.3: 708-721, 2009.

    .. [2] :doi:`"Algorithms for nonnegative matrix factorization with the
       beta-divergence" <10.1162/NECO_a_00168>`
       Fevotte, C., & Idier, J. (2011). Neural Computation, 23(9).

    .. [3] :doi:`"Online algorithms for nonnegative matrix factorization with the
       Itakura-Saito divergence" <10.1109/ASPAA.2011.6082314>`
       Lefevre, A., Bach, F., Fevotte, C. (2011). WASPA.
    Examples
    --------
    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
    >>> from sklearn.decomposition import MiniBatchNMF
    >>> model = MiniBatchNMF(n_components=2, init='random', random_state=0)
    >>> W = model.fit_transform(X)
    >>> H = model.components_
    """

    _parameter_constraints: dict = {
        **_BaseNMF._parameter_constraints,
        "max_no_improvement": [Interval(Integral, 1, None, closed="left"), None],
        "batch_size": [Interval(Integral, 1, None, closed="left")],
        "forget_factor": [Interval(Real, 0, 1, closed="both")],
        "fresh_restarts": ["boolean"],
        "fresh_restarts_max_iter": [Interval(Integral, 1, None, closed="left")],
        "transform_max_iter": [Interval(Integral, 1, None, closed="left"), None],
    }

    def __init__(
        self,
        n_components="auto",
        *,
        init=None,
        batch_size=1024,
        beta_loss="frobenius",
        tol=1e-4,
        max_no_improvement=10,
        max_iter=200,
        alpha_W=0.0,
        alpha_H="same",
        l1_ratio=0.0,
        forget_factor=0.7,
        fresh_restarts=False,
        fresh_restarts_max_iter=30,
        transform_max_iter=None,
        random_state=None,
        verbose=0,
    ):
        super().__init__(
            n_components=n_components,
            init=init,
            beta_loss=beta_loss,
            tol=tol,
            max_iter=max_iter,
            random_state=random_state,
            alpha_W=alpha_W,
            alpha_H=alpha_H,
            l1_ratio=l1_ratio,
            verbose=verbose,
        )

        self.max_no_improvement = max_no_improvement
        self.batch_size = batch_size
        self.forget_factor = forget_factor
        self.fresh_restarts = fresh_restarts
        self.fresh_restarts_max_iter = fresh_restarts_max_iter
        self.transform_max_iter = transform_max_iter

    def _check_params(self, X):
        super()._check_params(X)

        # batch_size
        self._batch_size = min(self.batch_size, X.shape[0])

        # forget_factor
        self._rho = self.forget_factor ** (self._batch_size / X.shape[0])

        # gamma for Maximization-Minimization (MM) algorithm [Fevotte 2011]
        if self._beta_loss < 1:
            self._gamma = 1.0 / (2.0 - self._beta_loss)
        elif self._beta_loss > 2:
            self._gamma = 1.0 / (self._beta_loss - 1.0)
        else:
            self._gamma = 1.0

        # transform_max_iter
        self._transform_max_iter = (
            self.max_iter
            if self.transform_max_iter is None
            else self.transform_max_iter
        )

        return self

    def _solve_W(self, X, H, max_iter):
        """Minimize the objective function w.r.t W.

        Update W with H being fixed, until convergence. This is the heart
        of `transform` but it's also used during `fit` when doing fresh restarts.
        """
        avg = np.sqrt(X.mean() / self._n_components)
        W = np.full((X.shape[0], self._n_components), avg, dtype=X.dtype)
        W_buffer = W.copy()

        # Get scaled regularization terms. Done for each minibatch to take into
        # account variable sizes of minibatches.
        l1_reg_W, _, l2_reg_W, _ = self._compute_regularization(X)

        for _ in range(max_iter):
            W, *_ = _multiplicative_update_w(
                X, W, H, self._beta_loss, l1_reg_W, l2_reg_W, self._gamma
            )

            W_diff = linalg.norm(W - W_buffer) / linalg.norm(W)
            if self.tol > 0 and W_diff <= self.tol:
                break

            W_buffer[:] = W

        return W

    def _minibatch_step(self, X, W, H, update_H):
        """Perform the update of W and H for one minibatch."""
        batch_size = X.shape[0]

        # get scaled regularization terms. Done for each minibatch to take into
        # account variable sizes of minibatches.
        l1_reg_W, l1_reg_H, l2_reg_W, l2_reg_H = self._compute_regularization(X)

        # update W
        if self.fresh_restarts or W is None:
            W = self._solve_W(X, H, self.fresh_restarts_max_iter)
        else:
            W, *_ = _multiplicative_update_w(
                X, W, H, self._beta_loss, l1_reg_W, l2_reg_W, self._gamma
            )

        # necessary for stability with beta_loss < 1
        if self._beta_loss < 1:
            W[W < np.finfo(np.float64).eps] = 0.0

        batch_cost = (
            _beta_divergence(X, W, H, self._beta_loss)
            + l1_reg_W * W.sum()
            + l1_reg_H * H.sum()
            + l2_reg_W * (W**2).sum()
            + l2_reg_H * (H**2).sum()
        ) / batch_size

        # update H (only at fit or fit_transform)
        if update_H:
            H[:] = _multiplicative_update_h(
                X,
                W,
                H,
                beta_loss=self._beta_loss,
                l1_reg_H=l1_reg_H,
                l2_reg_H=l2_reg_H,
                gamma=self._gamma,
                A=self._components_numerator,
                B=self._components_denominator,
                rho=self._rho,
            )

            # necessary for stability with beta_loss < 1
            if self._beta_loss <= 1:
                H[H < np.finfo(np.float64).eps] = 0.0

        return batch_cost

    def _minibatch_convergence(
        self, X, batch_cost, H, H_buffer, n_samples, step, n_steps
    ):
        """Helper function to encapsulate the early stopping logic"""
        batch_size = X.shape[0]

        # counts steps starting from 1 for user friendly verbose mode.
        step = step + 1

        # Ignore first iteration because H is not updated yet.
        if step == 1:
            if self.verbose:
                print(
                    f"Minibatch step {step}/{n_steps}: mean batch cost: {batch_cost}"
                )
            return False

        # Compute an Exponentially Weighted Average of the cost function to
        # monitor the convergence while discarding minibatch-local stochastic
        # variability: https://en.wikipedia.org/wiki/Moving_average
        if self._ewa_cost is None:
            self._ewa_cost = batch_cost
        else:
            alpha = batch_size / (n_samples + 1)
            alpha = min(alpha, 1)
            self._ewa_cost = self._ewa_cost * (1 - alpha) + batch_cost * alpha

        # Log progress to be able to monitor convergence
        if self.verbose:
            print(
                f"Minibatch step {step}/{n_steps}: mean batch cost: "
                f"{batch_cost}, ewa cost: {self._ewa_cost}"
            )

        # Early stopping based on change of H
        H_diff = linalg.norm(H - H_buffer) / linalg.norm(H)
        if self.tol > 0 and H_diff <= self.tol:
            if self.verbose:
                print(f"Converged (small H change) at step {step}/{n_steps}")
            return True

        # Early stopping heuristic due to lack of improvement on smoothed
        # cost function
        if self._ewa_cost_min is None or self._ewa_cost < self._ewa_cost_min:
            self._no_improvement = 0
            self._ewa_cost_min = self._ewa_cost
        else:
            self._no_improvement += 1

        if (
            self.max_no_improvement is not None
            and self._no_improvement >= self.max_no_improvement
        ):
            if self.verbose:
                print(
                    "Converged (lack of improvement in objective function) "
                    f"at step {step}/{n_steps}"
                )
            return True

        return False

    @_fit_context(prefer_skip_nested_validation=True)
    def fit_transform(self, X, y=None, W=None, H=None):
        """Learn a NMF model for the data X and returns the transformed data.

        This is more efficient than calling fit followed by transform.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Data matrix to be decomposed.

        y : Ignored
            Not used, present here for API consistency by convention.

        W : array-like of shape (n_samples, n_components), default=None
            If `init='custom'`, it is used as initial guess for the solution.
            If `None`, uses the initialisation method specified in `init`.

        H : array-like of shape (n_components, n_features), default=None
            If `init='custom'`, it is used as initial guess for the solution.
            If `None`, uses the initialisation method specified in `init`.

        Returns
        -------
        W : ndarray of shape (n_samples, n_components)
            Transformed data.
        """
        X = validate_data(
            self, X, accept_sparse=("csr", "csc"), dtype=[np.float64, np.float32]
        )

        with config_context(assume_finite=True):
            W, H, n_iter, n_steps = self._fit_transform(X, W=W, H=H)

        self.reconstruction_err_ = _beta_divergence(
            X, W, H, self._beta_loss, square_root=True
        )

        self.n_components_ = H.shape[0]
        self.components_ = H
        self.n_iter_ = n_iter
        self.n_steps_ = n_steps

        return W
    def _fit_transform(self, X, W=None, H=None, update_H=True):
        """Learn a NMF model for the data X and returns the transformed data.

        Parameters
        ----------
        X : {ndarray, sparse matrix} of shape (n_samples, n_features)
            Data matrix to be decomposed.

        W : array-like of shape (n_samples, n_components), default=None
            If `init='custom'`, it is used as initial guess for the solution.
            If `update_H=False`, it is initialised as an array of zeros, unless
            `solver='mu'`, then it is filled with values calculated by
            `np.sqrt(X.mean() / self._n_components)`.
            If `None`, uses the initialisation method specified in `init`.

        H : array-like of shape (n_components, n_features), default=None
            If `init='custom'`, it is used as initial guess for the solution.
            If `update_H=False`, it is used as a constant, to solve for W only.
            If `None`, uses the initialisation method specified in `init`.

        update_H : bool, default=True
            If True, both W and H will be estimated from initial guesses,
            this corresponds to a call to the `fit_transform` method.
            If False, only W will be estimated, this corresponds to a call
            to the `transform` method.

        Returns
        -------
        W : ndarray of shape (n_samples, n_components)
            Transformed data.

        H : ndarray of shape (n_components, n_features)
            Factorization matrix, sometimes called 'dictionary'.

        n_iter : int
            Actual number of started iterations over the whole dataset.

        n_steps : int
            Number of mini-batches processed.
        """
        check_non_negative(X, "MiniBatchNMF (input X)")
        self._check_params(X)

        if X.min() == 0 and self._beta_loss <= 0:
            raise ValueError(
                "When beta_loss <= 0 and X contains zeros, "
                "the solver may diverge. Please add small values "
                "to X, or use a positive beta_loss."
            )

        n_samples = X.shape[0]

        # initialize or check W and H
        W, H = self._check_w_h(X, W, H, update_H)
        H_buffer = H.copy()

        # Initialize auxiliary matrices
        self._components_numerator = H.copy()
        self._components_denominator = np.ones(H.shape, dtype=H.dtype)

        # Attributes to monitor the convergence
        self._ewa_cost = None
        self._ewa_cost_min = None
        self._no_improvement = 0

        batches = gen_batches(n_samples, self._batch_size)
        batches = itertools.cycle(batches)
        n_steps_per_iter = int(np.ceil(n_samples / self._batch_size))
        n_steps = self.max_iter * n_steps_per_iter

        for i, batch in zip(range(n_steps), batches):
            batch_cost = self._minibatch_step(X[batch], W[batch], H, update_H)

            if update_H and self._minibatch_convergence(
                X[batch], batch_cost, H, H_buffer, n_samples, i, n_steps
            ):
                break

            H_buffer[:] = H

        if self.fresh_restarts:
            W = self._solve_W(X, H, self._transform_max_iter)

        n_steps = i + 1
        n_iter = int(np.ceil(n_steps / n_steps_per_iter))

        if n_iter == self.max_iter and self.tol > 0:
            warnings.warn(
                f"Maximum number of iterations {self.max_iter} reached. "
                "Increase it to improve convergence.",
                ConvergenceWarning,
            )

        return W, H, n_iter, n_steps

    def transform(self, X):
        """Transform the data X according to the fitted MiniBatchNMF model.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Data matrix to be transformed by the model.

        Returns
        -------
        W : ndarray of shape (n_samples, n_components)
            Transformed data.
        """
        check_is_fitted(self)
        X = validate_data(
            self,
            X,
            accept_sparse=("csr", "csc"),
            dtype=[np.float64, np.float32],
            reset=False,
        )

        W = self._solve_W(X, self.components_, self._transform_max_iter)

        return W

    @_fit_context(prefer_skip_nested_validation=True)
    def partial_fit(self, X, y=None, W=None, H=None):
        """Update the model using the data in `X` as a mini-batch.

        This method is expected to be called several times consecutively
        on different chunks of a dataset so as to implement out-of-core
        or online learning.

        This is especially useful when the whole dataset is too big to fit in
        memory at once (see :ref:`scaling_strategies`).

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Data matrix to be decomposed.

        y : Ignored
            Not used, present here for API consistency by convention.

        W : array-like of shape (n_samples, n_components), default=None
            If `init='custom'`, it is used as initial guess for the solution.
            Only used for the first call to `partial_fit`.

        H : array-like of shape (n_components, n_features), default=None
            If `init='custom'`, it is used as initial guess for the solution.
            Only used for the first call to `partial_fit`.

        Returns
        -------
        self : object
            Returns the instance itself.
        """
        has_components = hasattr(self, "components_")

        X = validate_data(
            self,
            X,
            accept_sparse=("csr", "csc"),
            dtype=[np.float64, np.float32],
            reset=not has_components,
        )

        if not has_components:
            # This instance has not been fitted yet (fit or partial_fit)
            self._check_params(X)
            _, H = self._check_w_h(X, W=W, H=H, update_H=True)

            self._components_numerator = H.copy()
            self._components_denominator = np.ones(H.shape, dtype=H.dtype)
            self.n_steps_ = 0
        else:
            H = self.components_

        self._minibatch_step(X, None, H, update_H=True)

        self.n_components_ = H.shape[0]
        self.components_ = H
        self.n_steps_ += 1

        return self


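# --- Illustrative sketch (editor's addition, not part of the original module).
# Online learning with the MiniBatchNMF class above: `partial_fit` consumes
# one mini-batch at a time, so the full dataset never has to be in memory at
# once. `_demo_minibatch_nmf_online` is our name.
def _demo_minibatch_nmf_online():
    rng = np.random.RandomState(0)
    X = np.abs(rng.standard_normal((200, 16)))

    model = MiniBatchNMF(n_components=5, batch_size=50, random_state=0)
    for batch in gen_batches(X.shape[0], 50):
        model.partial_fit(X[batch])

    W = model.transform(X)
    print(W.shape, model.n_steps_)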