"""Dictionary learning."""

import itertools
import sys
import time
from numbers import Integral, Real

import numpy as np
from joblib import effective_n_jobs
from scipy import linalg

from ..base import (
    BaseEstimator,
    ClassNamePrefixFeaturesOutMixin,
    TransformerMixin,
    _fit_context,
)
from ..linear_model import Lars, Lasso, LassoLars, orthogonal_mp_gram
from ..utils import check_array, check_random_state, gen_batches, gen_even_slices
from ..utils._param_validation import Interval, StrOptions, validate_params
from ..utils.extmath import _randomized_svd, row_norms, svd_flip
from ..utils.parallel import Parallel, delayed
from ..utils.validation import check_is_fitted, validate_data


def _check_positive_coding(method, positive):
    if positive and method in ["omp", "lars"]:
        raise ValueError(
            "Positive constraint not supported for '{}' coding method.".format(method)
        )


def _sparse_encode_precomputed(
    X,
    dictionary,
    *,
    gram=None,
    cov=None,
    algorithm="lasso_lars",
    regularization=None,
    copy_cov=True,
    init=None,
    max_iter=1000,
    verbose=0,
    positive=False,
):
    """Generic sparse coding with precomputed Gram and/or covariance matrices.

    Each row of the result is the solution to a Lasso problem.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        Data matrix.
    dictionary : ndarray of shape (n_components, n_features)
        The dictionary matrix against which to solve the sparse coding of
        the data. Some of the algorithms assume normalized rows.
    gram : ndarray of shape (n_components, n_components), default=None
        Precomputed Gram matrix, `dictionary * dictionary'`. `gram` can be
        `None` if method is 'threshold'.
    cov : ndarray of shape (n_components, n_samples), default=None
        Precomputed covariance, `dictionary * X'`.
    algorithm : {'lasso_lars', 'lasso_cd', 'lars', 'omp', 'threshold'}, \
            default='lasso_lars'
        The algorithm used:

        * `'lars'`: uses the least angle regression method
          (`linear_model.lars_path`);
        * `'lasso_lars'`: uses Lars to compute the Lasso solution;
        * `'lasso_cd'`: uses the coordinate descent method to compute the
          Lasso solution (`linear_model.Lasso`). lasso_lars will be faster
          if the estimated components are sparse;
        * `'omp'`: uses orthogonal matching pursuit to estimate the sparse
          solution;
        * `'threshold'`: squashes to zero all coefficients less than
          regularization from the projection `dictionary * data'`.
    regularization : int or float, default=None
        The regularization parameter. It corresponds to alpha when
        algorithm is `'lasso_lars'`, `'lasso_cd'` or `'threshold'`.
        Otherwise it corresponds to `n_nonzero_coefs`.
    init : ndarray of shape (n_samples, n_components), default=None
        Initialization value of the sparse code. Only used if
        `algorithm='lasso_cd'`.
    max_iter : int, default=1000
        Maximum number of iterations to perform if `algorithm='lasso_cd'`
        or `'lasso_lars'`.
    copy_cov : bool, default=True
        Whether to copy the precomputed covariance matrix; if `False`, it
        may be overwritten.
    verbose : int, default=0
        Controls the verbosity; the higher, the more messages.
    positive : bool, default=False
        Whether to enforce a positivity constraint on the sparse code.

        .. versionadded:: 0.20

    Returns
    -------
    code : ndarray of shape (n_samples, n_components)
        The sparse codes.
    """
    n_samples, n_features = X.shape
    n_components = dictionary.shape[0]

    if algorithm == "lasso_lars":
        alpha = float(regularization) / n_features  # account for scaling
        try:
            err_mgt = np.seterr(all="ignore")
            lasso_lars = LassoLars(
                alpha=alpha,
                fit_intercept=False,
                verbose=verbose,
                precompute=gram,
                fit_path=False,
                positive=positive,
                max_iter=max_iter,
            )
            lasso_lars.fit(dictionary.T, X.T, Xy=cov)
            new_code = lasso_lars.coef_
        finally:
            np.seterr(**err_mgt)

    elif algorithm == "lasso_cd":
        alpha = float(regularization) / n_features  # account for scaling
        clf = Lasso(
            alpha=alpha,
            fit_intercept=False,
            precompute=gram,
            max_iter=max_iter,
            warm_start=True,
            positive=positive,
        )
        if init is not None:
            # `init` may be read-only (e.g. shared between processes); make a
            # writeable copy before warm-starting from it.
            if not init.flags["WRITEABLE"]:
                init = np.array(init)
            clf.coef_ = init
        clf.fit(dictionary.T, X.T, check_input=False)
        new_code = clf.coef_

    elif algorithm == "lars":
        try:
            err_mgt = np.seterr(all="ignore")
            lars = Lars(
                fit_intercept=False,
                verbose=verbose,
                precompute=gram,
                n_nonzero_coefs=int(regularization),
                fit_path=False,
            )
            lars.fit(dictionary.T, X.T, Xy=cov)
            new_code = lars.coef_
        finally:
            np.seterr(**err_mgt)

    elif algorithm == "threshold":
        # closed-form soft-thresholding of the projection `dictionary @ X.T`
        new_code = (np.sign(cov) * np.maximum(np.abs(cov) - regularization, 0)).T
        if positive:
            np.clip(new_code, 0, None, out=new_code)

    elif algorithm == "omp":
        new_code = orthogonal_mp_gram(
            Gram=gram,
            Xy=cov,
            n_nonzero_coefs=int(regularization),
            tol=None,
            norms_squared=row_norms(X, squared=True),
            copy_Xy=copy_cov,
        ).T

    return new_code.reshape(n_samples, n_components)
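# --- Editor's illustration (not part of the scikit-learn API) ---
# A minimal sketch of the closed-form ``threshold`` branch above: coding
# against the dictionary reduces to elementwise soft-thresholding of
# ``cov = dictionary @ X.T``. The helper name `_soft_threshold_example` is
# hypothetical and exists only for this demonstration.
def _soft_threshold_example(cov, regularization):
    # Shrink every coefficient toward zero by `regularization`; coefficients
    # whose magnitude is below the threshold are squashed to exactly zero.
    return (np.sign(cov) * np.maximum(np.abs(cov) - regularization, 0)).T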
@validate_params(
    {
        "X": ["array-like"],
        "dictionary": ["array-like"],
        "gram": ["array-like", None],
        "cov": ["array-like", None],
        "algorithm": [
            StrOptions({"lasso_lars", "lasso_cd", "lars", "omp", "threshold"})
        ],
        "n_nonzero_coefs": [Interval(Integral, 1, None, closed="left"), None],
        "alpha": [Interval(Real, 0, None, closed="left"), None],
        "copy_cov": ["boolean"],
        "init": ["array-like", None],
        "max_iter": [Interval(Integral, 0, None, closed="left")],
        "n_jobs": [Integral, None],
        "check_input": ["boolean"],
        "verbose": ["verbose"],
        "positive": ["boolean"],
    },
    prefer_skip_nested_validation=True,
)
def sparse_encode(
    X,
    dictionary,
    *,
    gram=None,
    cov=None,
    algorithm="lasso_lars",
    n_nonzero_coefs=None,
    alpha=None,
    copy_cov=True,
    init=None,
    max_iter=1000,
    n_jobs=None,
    check_input=True,
    verbose=0,
    positive=False,
):
    """Sparse coding.

    Each row of the result is the solution to a sparse coding problem.
    The goal is to find a sparse array `code` such that::

        X ~= code * dictionary

    Read more in the :ref:`User Guide <SparseCoder>`.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Data matrix.
    dictionary : array-like of shape (n_components, n_features)
        The dictionary matrix against which to solve the sparse coding of
        the data. Some of the algorithms assume normalized rows for
        meaningful output.
    gram : array-like of shape (n_components, n_components), default=None
        Precomputed Gram matrix, `dictionary * dictionary'`.
    cov : array-like of shape (n_components, n_samples), default=None
        Precomputed covariance, `dictionary' * X`.
    algorithm : {'lasso_lars', 'lasso_cd', 'lars', 'omp', 'threshold'}, \
            default='lasso_lars'
        The algorithm used:

        * `'lars'`: uses the least angle regression method
          (`linear_model.lars_path`);
        * `'lasso_lars'`: uses Lars to compute the Lasso solution;
        * `'lasso_cd'`: uses the coordinate descent method to compute the
          Lasso solution (`linear_model.Lasso`). lasso_lars will be faster
          if the estimated components are sparse;
        * `'omp'`: uses orthogonal matching pursuit to estimate the sparse
          solution;
        * `'threshold'`: squashes to zero all coefficients less than
          regularization from the projection `dictionary * data'`.
    n_nonzero_coefs : int, default=None
        Number of nonzero coefficients to target in each column of the
        solution. This is only used by `algorithm='lars'` and
        `algorithm='omp'` and is overridden by `alpha` in the `omp` case.
        If `None`, then `n_nonzero_coefs=int(n_features / 10)`.
    alpha : float, default=None
        If `algorithm='lasso_lars'` or `algorithm='lasso_cd'`, `alpha` is
        the penalty applied to the L1 norm.
        If `algorithm='threshold'`, `alpha` is the absolute value of the
        threshold below which coefficients will be squashed to zero.
        If `algorithm='omp'`, `alpha` is the tolerance parameter: the value
        of the reconstruction error targeted. In this case, it overrides
        `n_nonzero_coefs`.
        If `None`, default to 1.
    copy_cov : bool, default=True
        Whether to copy the precomputed covariance matrix; if `False`, it
        may be overwritten.
    init : ndarray of shape (n_samples, n_components), default=None
        Initialization value of the sparse codes. Only used if
        `algorithm='lasso_cd'`.
    max_iter : int, default=1000
        Maximum number of iterations to perform if `algorithm='lasso_cd'`
        or `'lasso_lars'`.
    n_jobs : int, default=None
        Number of parallel jobs to run.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    check_input : bool, default=True
        If `False`, the input arrays X and dictionary will not be checked.
    verbose : int, default=0
        Controls the verbosity; the higher, the more messages.
    positive : bool, default=False
        Whether to enforce positivity when finding the encoding.

        .. versionadded:: 0.20

    Returns
    -------
    code : ndarray of shape (n_samples, n_components)
        The sparse codes.

    See Also
    --------
    sklearn.linear_model.lars_path : Compute Least Angle Regression or Lasso
        path using LARS algorithm.
    sklearn.linear_model.orthogonal_mp : Solves Orthogonal Matching Pursuit
        problems.
    sklearn.linear_model.Lasso : Train Linear Model with L1 prior as
        regularizer.
    SparseCoder : Find a sparse representation of data from a fixed
        precomputed dictionary.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.decomposition import sparse_encode
    >>> X = np.array([[-1, -1, -1], [0, 0, 3]])
    >>> dictionary = np.array(
    ...     [[0, 1, 0],
    ...      [-1, -1, 2],
    ...      [1, 1, 1],
    ...      [0, 1, 1],
    ...      [0, 2, 1]],
    ...     dtype=np.float64
    ... )
    >>> sparse_encode(X, dictionary, alpha=1e-10)
    array([[ 0.,  0., -1.,  0.,  0.],
           [ 0.,  1.,  1.,  0.,  0.]])
    """
    if check_input:
        if algorithm == "lasso_cd":
            dictionary = check_array(
                dictionary, order="C", dtype=[np.float64, np.float32]
            )
            X = check_array(X, order="C", dtype=[np.float64, np.float32])
        else:
            dictionary = check_array(dictionary)
            X = check_array(X)

    if dictionary.shape[1] != X.shape[1]:
        raise ValueError(
            "Dictionary and X have different numbers of features: "
            "dictionary.shape: {} X.shape: {}".format(dictionary.shape, X.shape)
        )

    _check_positive_coding(algorithm, positive)

    return _sparse_encode(
        X,
        dictionary,
        gram=gram,
        cov=cov,
        algorithm=algorithm,
        n_nonzero_coefs=n_nonzero_coefs,
        alpha=alpha,
        copy_cov=copy_cov,
        init=init,
        max_iter=max_iter,
        n_jobs=n_jobs,
        verbose=verbose,
        positive=positive,
    )


def _sparse_encode(
    X,
    dictionary,
    *,
    gram=None,
    cov=None,
    algorithm="lasso_lars",
    n_nonzero_coefs=None,
    alpha=None,
    copy_cov=True,
    init=None,
    max_iter=1000,
    n_jobs=None,
    verbose=0,
    positive=False,
):
    """Sparse coding without input/parameter validation."""
    n_samples, n_features = X.shape
    n_components = dictionary.shape[0]

    if algorithm in ("lars", "omp"):
        regularization = n_nonzero_coefs
        if regularization is None:
            regularization = min(max(n_features / 10, 1), n_components)
    else:
        regularization = alpha
        if regularization is None:
            regularization = 1.0

    if gram is None and algorithm != "threshold":
        gram = np.dot(dictionary, dictionary.T)

    if cov is None and algorithm != "lasso_cd":
        copy_cov = False
        cov = np.dot(dictionary, X.T)

    if effective_n_jobs(n_jobs) == 1 or algorithm == "threshold":
        code = _sparse_encode_precomputed(
            X,
            dictionary,
            gram=gram,
            cov=cov,
            algorithm=algorithm,
            regularization=regularization,
            copy_cov=copy_cov,
            init=init,
            max_iter=max_iter,
            verbose=verbose,
            positive=positive,
        )
        return code

    # Enter parallel code block: split the samples evenly across jobs and
    # encode each slice independently.
    code = np.empty((n_samples, n_components))
    slices = list(gen_even_slices(n_samples, effective_n_jobs(n_jobs)))

    code_views = Parallel(n_jobs=n_jobs, verbose=verbose)(
        delayed(_sparse_encode_precomputed)(
            X[this_slice],
            dictionary,
            gram=gram,
            cov=cov[:, this_slice] if cov is not None else None,
            algorithm=algorithm,
            regularization=regularization,
            copy_cov=copy_cov,
            init=init[this_slice] if init is not None else None,
            max_iter=max_iter,
            verbose=verbose,
            positive=positive,
        )
        for this_slice in slices
    )
    for this_slice, this_view in zip(slices, code_views):
        code[this_slice] = this_view
    return code
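# --- Editor's illustration (not part of the scikit-learn API) ---
# A small usage sketch of `sparse_encode`, assuming unit-norm dictionary
# rows: the same signals are coded with different algorithms and the
# resulting reconstruction errors compared. The helper name
# `_compare_coding_algorithms_example` is hypothetical.
def _compare_coding_algorithms_example():
    rng = np.random.RandomState(0)
    dictionary = rng.randn(8, 10)
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    X = rng.randn(5, 10)
    errors = {}
    for algorithm in ("omp", "lasso_lars", "threshold"):
        code = sparse_encode(X, dictionary, algorithm=algorithm, alpha=0.1)
        errors[algorithm] = np.sum((X - code @ dictionary) ** 2)
    return errors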
def _update_dict(
    dictionary,
    Y,
    code,
    A=None,
    B=None,
    verbose=False,
    random_state=None,
    positive=False,
):
    """Update the dense dictionary factor in place.

    Parameters
    ----------
    dictionary : ndarray of shape (n_components, n_features)
        Value of the dictionary at the previous iteration.
    Y : ndarray of shape (n_samples, n_features)
        Data matrix.
    code : ndarray of shape (n_samples, n_components)
        Sparse coding of the data against which to optimize the dictionary.
    A : ndarray of shape (n_components, n_components), default=None
        Together with `B`, sufficient stats of the online model to update
        the dictionary.
    B : ndarray of shape (n_features, n_components), default=None
        Together with `A`, sufficient stats of the online model to update
        the dictionary.
    verbose : bool, default=False
        Degree of output the procedure will print.
    random_state : int, RandomState instance or None, default=None
        Used for randomly initializing the dictionary. Pass an int for
        reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`.
    positive : bool, default=False
        Whether to enforce positivity when finding the dictionary.

        .. versionadded:: 0.20
    """
    n_samples, n_components = code.shape
    random_state = check_random_state(random_state)

    if A is None:
        A = code.T @ code
    if B is None:
        B = Y.T @ code

    n_unused = 0

    for k in range(n_components):
        if A[k, k] > 1e-6:
            # 1e-6 is arbitrary but consistent with the spams package
            dictionary[k] += (B[:, k] - A[k] @ dictionary) / A[k, k]
        else:
            # kth atom is almost never used -> sample a new one from the data
            newd = Y[random_state.choice(n_samples)]

            # add small noise to avoid making the sparse coding ill conditioned
            noise_level = 0.01 * (newd.std() or 1)  # avoid 0 std
            noise = random_state.normal(0, noise_level, size=len(newd))

            dictionary[k] = newd + noise
            code[:, k] = 0
            n_unused += 1

        if positive:
            np.clip(dictionary[k], 0, None, out=dictionary[k])

        # Projection on the constraint set ||V_k|| <= 1
        dictionary[k] /= max(linalg.norm(dictionary[k]), 1)

    if verbose and n_unused > 0:
        print(f"{n_unused} unused atoms resampled.")


def _dict_learning(
    X,
    n_components,
    *,
    alpha,
    max_iter,
    tol,
    method,
    n_jobs,
    dict_init,
    code_init,
    callback,
    verbose,
    random_state,
    return_n_iter,
    positive_dict,
    positive_code,
    method_max_iter,
):
    """Main dictionary learning algorithm"""
    t0 = time.time()
    # Init the code and the dictionary with SVD of Y
    if code_init is not None and dict_init is not None:
        code = np.array(code_init, order="F")
        # Don't copy V, it will happen below
        dictionary = dict_init
    else:
        code, S, dictionary = linalg.svd(X, full_matrices=False)
        # flip the initial code's sign to enforce deterministic output
        code, dictionary = svd_flip(code, dictionary)
        dictionary = S[:, np.newaxis] * dictionary
    r = len(dictionary)
    if n_components <= r:  # True even if n_components=None
        code = code[:, :n_components]
        dictionary = dictionary[:n_components, :]
    else:
        code = np.c_[code, np.zeros((len(code), n_components - r))]
        dictionary = np.r_[
            dictionary, np.zeros((n_components - r, dictionary.shape[1]))
        ]

    # Fortran-order dict better suited for the sparse coding which is the
    # bottleneck of this algorithm.
    dictionary = np.asfortranarray(dictionary)

    errors = []
    current_cost = np.nan

    if verbose == 1:
        print("[dict_learning]", end=" ")

    # If max_iter is 0, the number of iterations returned should be zero
    ii = -1

    for ii in range(max_iter):
        dt = time.time() - t0
        if verbose == 1:
            sys.stdout.write(".")
            sys.stdout.flush()
        elif verbose:
            print(
                "Iteration % 3i (elapsed time: % 3is, % 4.1fmn, current cost % 7.3f)"
                % (ii, dt, dt / 60, current_cost)
            )

        # Update code
        code = sparse_encode(
            X,
            dictionary,
            algorithm=method,
            alpha=alpha,
            init=code,
            n_jobs=n_jobs,
            positive=positive_code,
            max_iter=method_max_iter,
            verbose=verbose,
        )

        # Update dictionary in place
        _update_dict(
            dictionary,
            X,
            code,
            verbose=verbose,
            random_state=random_state,
            positive=positive_dict,
        )

        # Cost function
        current_cost = 0.5 * np.sum((X - code @ dictionary) ** 2) + alpha * np.sum(
            np.abs(code)
        )
        errors.append(current_cost)

        if ii > 0:
            dE = errors[-2] - errors[-1]
            if dE < tol * errors[-1]:
                if verbose == 1:
                    # A line return
                    print("")
                elif verbose:
                    print("--- Convergence reached after %d iterations" % ii)
                break
        if ii % 5 == 0 and callback is not None:
            callback(locals())

    if return_n_iter:
        return code, dictionary, errors, ii + 1
    else:
        return code, dictionary, errors
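# --- Editor's illustration (not part of the scikit-learn API) ---
# A sketch of the single-atom block coordinate descent step performed by
# `_update_dict` above, written out for one atom `k` with explicit
# sufficient statistics A = C'C and B = Y'C (C being the code).
# `_single_atom_update_example` is a hypothetical name for this demo only.
def _single_atom_update_example(dictionary, Y, code, k):
    A = code.T @ code
    B = Y.T @ code
    atom = dictionary[k] + (B[:, k] - A[k] @ dictionary) / A[k, k]
    # project back on the constraint set ||V_k||_2 <= 1
    return atom / max(np.linalg.norm(atom), 1)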
@validate_params(
    {
        "X": ["array-like"],
        "return_code": ["boolean"],
        "method": [StrOptions({"cd", "lars"})],
        "method_max_iter": [Interval(Integral, 0, None, closed="left")],
    },
    prefer_skip_nested_validation=True,
)
def dict_learning_online(
    X,
    n_components=2,
    *,
    alpha=1,
    max_iter=100,
    return_code=True,
    dict_init=None,
    callback=None,
    batch_size=256,
    verbose=False,
    shuffle=True,
    n_jobs=None,
    method="lars",
    random_state=None,
    positive_dict=False,
    positive_code=False,
    method_max_iter=1000,
    tol=1e-3,
    max_no_improvement=10,
):
    """Solve a dictionary learning matrix factorization problem online.

    Finds the best dictionary and the corresponding sparse code for
    approximating the data matrix X by solving::

        (U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
                     (U,V)
                     with || V_k ||_2 = 1 for all  0 <= k < n_components

    where V is the dictionary and U is the sparse code. ||.||_Fro stands for
    the Frobenius norm and ||.||_1,1 stands for the entry-wise matrix norm
    which is the sum of the absolute values of all the entries in the matrix.
    This is accomplished by repeatedly iterating over mini-batches by
    slicing the input data.

    Read more in the :ref:`User Guide <DictionaryLearning>`.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Data matrix.
    n_components : int or None, default=2
        Number of dictionary atoms to extract. If None, then
        ``n_components`` is set to ``n_features``.
    alpha : float, default=1
        Sparsity controlling parameter.
    max_iter : int, default=100
        Maximum number of iterations over the complete dataset before
        stopping independently of any early stopping criterion heuristics.

        .. versionadded:: 1.1
    return_code : bool, default=True
        Whether to also return the code U or just the dictionary `V`.
    dict_init : ndarray of shape (n_components, n_features), default=None
        Initial values for the dictionary for warm restart scenarios.
        If `None`, the initial values for the dictionary are created
        with an SVD decomposition of the data via
        :func:`~sklearn.utils.extmath.randomized_svd`.
    callback : callable, default=None
        A callable that gets invoked at the end of each iteration.
    batch_size : int, default=256
        The number of samples to take in each batch.

        .. versionchanged:: 1.3
           The default value of `batch_size` changed from 3 to 256 in
           version 1.3.
    verbose : bool, default=False
        To control the verbosity of the procedure.
    shuffle : bool, default=True
        Whether to shuffle the data before splitting it in batches.
    n_jobs : int, default=None
        Number of parallel jobs to run.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    method : {'lars', 'cd'}, default='lars'
        * `'lars'`: uses the least angle regression method to solve the
          lasso problem (`linear_model.lars_path`);
        * `'cd'`: uses the coordinate descent method to compute the
          Lasso solution (`linear_model.Lasso`). Lars will be faster if
          the estimated components are sparse.
    random_state : int, RandomState instance or None, default=None
        Used for initializing the dictionary when ``dict_init`` is not
        specified, randomly shuffling the data when ``shuffle`` is set to
        ``True``, and updating the dictionary. Pass an int for reproducible
        results across multiple function calls.
        See :term:`Glossary <random_state>`.
    positive_dict : bool, default=False
        Whether to enforce positivity when finding the dictionary.

        .. versionadded:: 0.20
    positive_code : bool, default=False
        Whether to enforce positivity when finding the code.

        .. versionadded:: 0.20
    method_max_iter : int, default=1000
        Maximum number of iterations to perform when solving the lasso
        problem.

        .. versionadded:: 0.22
    tol : float, default=1e-3
        Control early stopping based on the norm of the differences in the
        dictionary between 2 steps. To disable early stopping based on
        changes in the dictionary, set `tol` to 0.0.

        .. versionadded:: 1.1
    max_no_improvement : int, default=10
        Control early stopping based on the consecutive number of mini
        batches that does not yield an improvement on the smoothed cost
        function. To disable convergence detection based on cost function,
        set `max_no_improvement` to None.

        .. versionadded:: 1.1

    Returns
    -------
    code : ndarray of shape (n_samples, n_components),
        The sparse code (only returned if `return_code=True`).
    dictionary : ndarray of shape (n_components, n_features),
        The solutions to the dictionary learning problem.

    See Also
    --------
    dict_learning : Solve a dictionary learning matrix factorization
        problem.
    DictionaryLearning : Find a dictionary that sparsely encodes data.
    MiniBatchDictionaryLearning : A faster, less accurate, version of the
        dictionary learning algorithm.
    SparsePCA : Sparse Principal Components Analysis.
    MiniBatchSparsePCA : Mini-batch Sparse Principal Components Analysis.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.datasets import make_sparse_coded_signal
    >>> from sklearn.decomposition import dict_learning_online
    >>> X, _, _ = make_sparse_coded_signal(
    ...     n_samples=30, n_components=15, n_features=20, n_nonzero_coefs=10,
    ...     random_state=42,
    ... )
    >>> U, V = dict_learning_online(
    ...     X, n_components=15, alpha=0.2, max_iter=20, batch_size=3, random_state=42
    ... )

    We can check the level of sparsity of `U`:

    >>> np.mean(U == 0)
    np.float64(0.53)

    We can compare the average squared euclidean norm of the reconstruction
    error of the sparse coded signal relative to the squared euclidean norm
    of the original signal:

    >>> X_hat = U @ V
    >>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
    np.float64(0.053)
    """
    transform_algorithm = "lasso_" + method

    est = MiniBatchDictionaryLearning(
        n_components=n_components,
        alpha=alpha,
        max_iter=max_iter,
        n_jobs=n_jobs,
        fit_algorithm=method,
        batch_size=batch_size,
        shuffle=shuffle,
        dict_init=dict_init,
        random_state=random_state,
        transform_algorithm=transform_algorithm,
        transform_alpha=alpha,
        positive_code=positive_code,
        positive_dict=positive_dict,
        transform_max_iter=method_max_iter,
        verbose=verbose,
        callback=callback,
        tol=tol,
        max_no_improvement=max_no_improvement,
    ).fit(X)

    if not return_code:
        return est.components_
    else:
        code = est.transform(X)
        return code, est.components_
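# --- Editor's illustration (not part of the scikit-learn API) ---
# `dict_learning_online` is a thin functional wrapper around the estimator,
# as its body above shows. The sketch below, with hypothetical names and the
# parameter values from the docstring example, is the equivalent
# estimator-based spelling.
def _online_wrapper_equivalence_example(X):
    est = MiniBatchDictionaryLearning(
        n_components=15, alpha=0.2, max_iter=20, batch_size=3, random_state=42
    ).fit(X)
    return est.transform(X), est.components_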
@validate_params(
    {
        "X": ["array-like"],
        "method": [StrOptions({"lars", "cd"})],
        "return_n_iter": ["boolean"],
        "method_max_iter": [Interval(Integral, 0, None, closed="left")],
    },
    prefer_skip_nested_validation=True,
)
def dict_learning(
    X,
    n_components,
    *,
    alpha,
    max_iter=100,
    tol=1e-8,
    method="lars",
    n_jobs=None,
    dict_init=None,
    code_init=None,
    callback=None,
    verbose=False,
    random_state=None,
    return_n_iter=False,
    positive_dict=False,
    positive_code=False,
    method_max_iter=1000,
):
    """Solve a dictionary learning matrix factorization problem.

    Finds the best dictionary and the corresponding sparse code for
    approximating the data matrix X by solving::

        (U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
                     (U,V)
                     with || V_k ||_2 = 1 for all  0 <= k < n_components

    where V is the dictionary and U is the sparse code. ||.||_Fro stands for
    the Frobenius norm and ||.||_1,1 stands for the entry-wise matrix norm
    which is the sum of the absolute values of all the entries in the matrix.

    Read more in the :ref:`User Guide <DictionaryLearning>`.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Data matrix.
    n_components : int
        Number of dictionary atoms to extract.
    alpha : int or float
        Sparsity controlling parameter.
    max_iter : int, default=100
        Maximum number of iterations to perform.
    tol : float, default=1e-8
        Tolerance for the stopping condition.
    method : {'lars', 'cd'}, default='lars'
        The method used:

        * `'lars'`: uses the least angle regression method to solve the
          lasso problem (`linear_model.lars_path`);
        * `'cd'`: uses the coordinate descent method to compute the
          Lasso solution (`linear_model.Lasso`). Lars will be faster if
          the estimated components are sparse.
    n_jobs : int, default=None
        Number of parallel jobs to run.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    dict_init : ndarray of shape (n_components, n_features), default=None
        Initial value for the dictionary for warm restart scenarios. Only
        used if `code_init` and `dict_init` are not None.
    code_init : ndarray of shape (n_samples, n_components), default=None
        Initial value for the sparse code for warm restart scenarios. Only
        used if `code_init` and `dict_init` are not None.
    callback : callable, default=None
        Callable that gets invoked every five iterations.
    verbose : bool, default=False
        To control the verbosity of the procedure.
    random_state : int, RandomState instance or None, default=None
        Used for randomly initializing the dictionary. Pass an int for
        reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`.
    return_n_iter : bool, default=False
        Whether or not to return the number of iterations.
    positive_dict : bool, default=False
        Whether to enforce positivity when finding the dictionary.

        .. versionadded:: 0.20
    positive_code : bool, default=False
        Whether to enforce positivity when finding the code.

        .. versionadded:: 0.20
    method_max_iter : int, default=1000
        Maximum number of iterations to perform.

        .. versionadded:: 0.22

    Returns
    -------
    code : ndarray of shape (n_samples, n_components)
        The sparse code factor in the matrix factorization.
    dictionary : ndarray of shape (n_components, n_features),
        The dictionary factor in the matrix factorization.
    errors : array
        Vector of errors at each iteration.
    n_iter : int
        Number of iterations run. Returned only if `return_n_iter` is
        set to True.

    See Also
    --------
    dict_learning_online : Solve a dictionary learning matrix factorization
        problem online.
    DictionaryLearning : Find a dictionary that sparsely encodes data.
    MiniBatchDictionaryLearning : A faster, less accurate version of the
        dictionary learning algorithm.
    SparsePCA : Sparse Principal Components Analysis.
    MiniBatchSparsePCA : Mini-batch Sparse Principal Components Analysis.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.datasets import make_sparse_coded_signal
    >>> from sklearn.decomposition import dict_learning
    >>> X, _, _ = make_sparse_coded_signal(
    ...     n_samples=30, n_components=15, n_features=20, n_nonzero_coefs=10,
    ...     random_state=42,
    ... )
    >>> U, V, errors = dict_learning(X, n_components=15, alpha=0.1, random_state=42)

    We can check the level of sparsity of `U`:

    >>> np.mean(U == 0)
    np.float64(0.62)

    We can compare the average squared euclidean norm of the reconstruction
    error of the sparse coded signal relative to the squared euclidean norm
    of the original signal:

    >>> X_hat = U @ V
    >>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
    np.float64(0.0192)
    """
    estimator = DictionaryLearning(
        n_components=n_components,
        alpha=alpha,
        max_iter=max_iter,
        tol=tol,
        fit_algorithm=method,
        n_jobs=n_jobs,
        code_init=code_init,
        dict_init=dict_init,
        callback=callback,
        verbose=verbose,
        random_state=random_state,
        positive_code=positive_code,
        positive_dict=positive_dict,
        transform_max_iter=method_max_iter,
    ).set_output(transform="default")
    code = estimator.fit_transform(X)
    if return_n_iter:
        return (
            code,
            estimator.components_,
            estimator.error_,
            estimator.n_iter_,
        )
    return code, estimator.components_, estimator.error_


class _BaseSparseCoding(ClassNamePrefixFeaturesOutMixin, TransformerMixin):
    """Base class from SparseCoder and DictionaryLearning algorithms."""

    def __init__(
        self,
        transform_algorithm,
        transform_n_nonzero_coefs,
        transform_alpha,
        split_sign,
        n_jobs,
        positive_code,
        transform_max_iter,
    ):
        self.transform_algorithm = transform_algorithm
        self.transform_n_nonzero_coefs = transform_n_nonzero_coefs
        self.transform_alpha = transform_alpha
        self.transform_max_iter = transform_max_iter
        self.split_sign = split_sign
        self.n_jobs = n_jobs
        self.positive_code = positive_code

    def _transform(self, X, dictionary):
        """Private method allowing to accommodate both DictionaryLearning and
        SparseCoder."""
        X = validate_data(self, X, reset=False)

        if hasattr(self, "alpha") and self.transform_alpha is None:
            transform_alpha = self.alpha
        else:
            transform_alpha = self.transform_alpha

        code = sparse_encode(
            X,
            dictionary,
            algorithm=self.transform_algorithm,
            n_nonzero_coefs=self.transform_n_nonzero_coefs,
            alpha=transform_alpha,
            max_iter=self.transform_max_iter,
            n_jobs=self.n_jobs,
            positive=self.positive_code,
        )

        if self.split_sign:
            # feature vector is split into a positive and a negative part
            n_samples, n_features = code.shape
            split_code = np.empty((n_samples, 2 * n_features))
            split_code[:, :n_features] = np.maximum(code, 0)
            split_code[:, n_features:] = -np.minimum(code, 0)
            code = split_code

        return code

    def transform(self, X):
        """Encode the data as a sparse combination of the dictionary atoms.

        Coding method is determined by the object parameter
        `transform_algorithm`.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            Test data to be transformed, must have the same number of
            features as the data used to train the model.

        Returns
        -------
        X_new : ndarray of shape (n_samples, n_components)
            Transformed data.
        """
        check_is_fitted(self)
        return self._transform(X, self.components_)

    def _inverse_transform(self, code, dictionary):
        """Private method allowing to accommodate both DictionaryLearning and
        SparseCoder."""
        code = check_array(code)
        if self.split_sign:
            # merge the positive and negative parts back together
            n_features = code.shape[1] // 2
            code = code[:, :n_features] - code[:, n_features:]
        return code @ dictionary

    def inverse_transform(self, X):
        """Transform data back to its original space.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_components)
            Data to be transformed back. Must have the same number of
            components as the data used to train the model.

        Returns
        -------
        X_original : ndarray of shape (n_samples, n_features)
            Transformed data.
        """
        check_is_fitted(self)
        return self._inverse_transform(X, self.components_)
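# --- Editor's illustration (not part of the scikit-learn API) ---
# What `split_sign=True` does in `_BaseSparseCoding._transform` above: the
# code matrix is split into its positive and negative parts, doubling the
# number of output features. `_split_sign_example` is a hypothetical name.
def _split_sign_example(code):
    return np.hstack([np.maximum(code, 0), -np.minimum(code, 0)])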
class SparseCoder(_BaseSparseCoding, BaseEstimator):
    """Sparse coding.

    Finds a sparse representation of data against a fixed, precomputed
    dictionary.

    Each row of the result is the solution to a sparse coding problem.
    The goal is to find a sparse array `code` such that::

        X ~= code * dictionary

    Read more in the :ref:`User Guide <SparseCoder>`.

    Parameters
    ----------
    dictionary : ndarray of shape (n_components, n_features)
        The dictionary atoms used for sparse coding. Lines are assumed to be
        normalized to unit norm.
    transform_algorithm : {'lasso_lars', 'lasso_cd', 'lars', 'omp', \
            'threshold'}, default='omp'
        Algorithm used to transform the data:

        - `'lars'`: uses the least angle regression method
          (`linear_model.lars_path`);
        - `'lasso_lars'`: uses Lars to compute the Lasso solution;
        - `'lasso_cd'`: uses the coordinate descent method to compute the
          Lasso solution (linear_model.Lasso). `'lasso_lars'` will be faster
          if the estimated components are sparse;
        - `'omp'`: uses orthogonal matching pursuit to estimate the sparse
          solution;
        - `'threshold'`: squashes to zero all coefficients less than alpha
          from the projection ``dictionary * X'``.
    transform_n_nonzero_coefs : int, default=None
        Number of nonzero coefficients to target in each column of the
        solution. This is only used by `algorithm='lars'` and
        `algorithm='omp'` and is overridden by `alpha` in the `omp` case.
        If `None`, then `transform_n_nonzero_coefs=int(n_features / 10)`.
    transform_alpha : float, default=None
        If `algorithm='lasso_lars'` or `algorithm='lasso_cd'`, `alpha` is
        the penalty applied to the L1 norm.
        If `algorithm='threshold'`, `alpha` is the absolute value of the
        threshold below which coefficients will be squashed to zero.
        If `algorithm='omp'`, `alpha` is the tolerance parameter: the value
        of the reconstruction error targeted. In this case, it overrides
        `n_nonzero_coefs`.
        If `None`, default to 1.
    split_sign : bool, default=False
        Whether to split the sparse feature vector into the concatenation of
        its negative part and its positive part. This can improve the
        performance of downstream classifiers.
    n_jobs : int, default=None
        Number of parallel jobs to run.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    positive_code : bool, default=False
        Whether to enforce positivity when finding the code.

        .. versionadded:: 0.20
    transform_max_iter : int, default=1000
        Maximum number of iterations to perform if `algorithm='lasso_cd'`
        or `lasso_lars`.

        .. versionadded:: 0.22

    Attributes
    ----------
    n_components_ : int
        Number of atoms.
    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24
    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    DictionaryLearning : Find a dictionary that sparsely encodes data.
    MiniBatchDictionaryLearning : A faster, less accurate, version of the
        dictionary learning algorithm.
    MiniBatchSparsePCA : Mini-batch Sparse Principal Components Analysis.
    SparsePCA : Sparse Principal Components Analysis.
    sparse_encode : Sparse coding where each row of the result is the
        solution to a sparse coding problem.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.decomposition import SparseCoder
    >>> X = np.array([[-1, -1, -1], [0, 0, 3]])
    >>> dictionary = np.array(
    ...     [[0, 1, 0],
    ...      [-1, -1, 2],
    ...      [1, 1, 1],
    ...      [0, 1, 1],
    ...      [0, 2, 1]],
    ...     dtype=np.float64
    ... )
    >>> coder = SparseCoder(
    ...     dictionary=dictionary, transform_algorithm='lasso_lars',
    ...     transform_alpha=1e-10,
    ... )
    >>> coder.transform(X)
    array([[ 0.,  0., -1.,  0.,  0.],
           [ 0.,  1.,  1.,  0.,  0.]])
    """

    def __init__(
        self,
        dictionary,
        *,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=None,
        transform_alpha=None,
        split_sign=False,
        n_jobs=None,
        positive_code=False,
        transform_max_iter=1000,
    ):
        super().__init__(
            transform_algorithm,
            transform_n_nonzero_coefs,
            transform_alpha,
            split_sign,
            n_jobs,
            positive_code,
            transform_max_iter,
        )
        self.dictionary = dictionary

    def fit(self, X, y=None):
        """Do nothing and return the estimator unchanged.

        This method is just there to implement the usual API and hence work
        in pipelines.

        Parameters
        ----------
        X : Ignored
            Not used, present for API consistency by convention.
        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        self : object
            Returns the instance itself.
        """
        return self

    def transform(self, X, y=None):
        """Encode the data as a sparse combination of the dictionary atoms.

        Coding method is determined by the object parameter
        `transform_algorithm`.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples
            and `n_features` is the number of features.
        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        X_new : ndarray of shape (n_samples, n_components)
            Transformed data.
        """
        return self._transform(X, self.dictionary)

    def inverse_transform(self, X):
        """Transform data back to its original space.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_components)
            Data to be transformed back. Must have the same number of
            components as the data used to train the model.

        Returns
        -------
        X_original : ndarray of shape (n_samples, n_features)
            Transformed data.
        """
        return self._inverse_transform(X, self.dictionary)

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.requires_fit = False
        tags.transformer_tags.preserves_dtype = ["float64", "float32"]
        return tags

    @property
    def n_components_(self):
        """Number of atoms."""
        return self.dictionary.shape[0]

    @property
    def n_features_in_(self):
        """Number of features seen during `fit`."""
        return self.dictionary.shape[1]

    @property
    def _n_features_out(self):
        """Number of transformed output features."""
        return self.n_components_
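# --- Editor's illustration (not part of the scikit-learn API) ---
# A sketch of the encode/decode round trip with `SparseCoder`: since the
# coder holds a fixed dictionary and has no state to fit, `transform` and
# `inverse_transform` can be used directly. Values follow the class
# docstring example above; `_sparse_coder_roundtrip_example` is hypothetical.
def _sparse_coder_roundtrip_example():
    X = np.array([[-1.0, -1.0, -1.0], [0.0, 0.0, 3.0]])
    dictionary = np.array(
        [[0, 1, 0], [-1, -1, 2], [1, 1, 1], [0, 1, 1], [0, 2, 1]], dtype=np.float64
    )
    coder = SparseCoder(
        dictionary=dictionary, transform_algorithm="lasso_lars", transform_alpha=1e-10
    )
    code = coder.transform(X)
    return coder.inverse_transform(code)  # ~= X up to the coding error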
class DictionaryLearning(_BaseSparseCoding, BaseEstimator):
    """Dictionary learning.

    Finds a dictionary (a set of atoms) that performs well at sparsely
    encoding the fitted data.

    Solves the optimization problem::

        (U^*,V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
                    (U,V)
                    with || V_k ||_2 <= 1 for all  0 <= k < n_components

    ||.||_Fro stands for the Frobenius norm and ||.||_1,1 stands for
    the entry-wise matrix norm which is the sum of the absolute values
    of all the entries in the matrix.

    Read more in the :ref:`User Guide <DictionaryLearning>`.

    Parameters
    ----------
    n_components : int, default=None
        Number of dictionary elements to extract. If None, then
        ``n_components`` is set to ``n_features``.
    alpha : float, default=1.0
        Sparsity controlling parameter.
    max_iter : int, default=1000
        Maximum number of iterations to perform.
    tol : float, default=1e-8
        Tolerance for numerical error.
    fit_algorithm : {'lars', 'cd'}, default='lars'
        * `'lars'`: uses the least angle regression method to solve the
          lasso problem (:func:`~sklearn.linear_model.lars_path`);
        * `'cd'`: uses the coordinate descent method to compute the
          Lasso solution (:class:`~sklearn.linear_model.Lasso`). Lars will
          be faster if the estimated components are sparse.

        .. versionadded:: 0.17
           *cd* coordinate descent method to improve speed.
    transform_algorithm : {'lasso_lars', 'lasso_cd', 'lars', 'omp', \
            'threshold'}, default='omp'
        Algorithm used to transform the data:

        - `'lars'`: uses the least angle regression method
          (:func:`~sklearn.linear_model.lars_path`);
        - `'lasso_lars'`: uses Lars to compute the Lasso solution.
        - `'lasso_cd'`: uses the coordinate descent method to compute the
          Lasso solution (:class:`~sklearn.linear_model.Lasso`).
          `'lasso_lars'` will be faster if the estimated components are
          sparse.
        - `'omp'`: uses orthogonal matching pursuit to estimate the sparse
          solution.
        - `'threshold'`: squashes to zero all coefficients less than alpha
          from the projection ``dictionary * X'``.

        .. versionadded:: 0.17
           *lasso_cd* coordinate descent method to improve speed.
    transform_n_nonzero_coefs : int, default=None
        Number of nonzero coefficients to target in each column of the
        solution. This is only used by `algorithm='lars'` and
        `algorithm='omp'`. If `None`, then
        `transform_n_nonzero_coefs=int(n_features / 10)`.
    transform_alpha : float, default=None
        If `algorithm='lasso_lars'` or `algorithm='lasso_cd'`, `alpha` is
        the penalty applied to the L1 norm.
        If `algorithm='threshold'`, `alpha` is the absolute value of the
        threshold below which coefficients will be squashed to zero.
        If `None`, defaults to `alpha`.

        .. versionchanged:: 1.2
           When None, default value changed from 1.0 to `alpha`.
    n_jobs : int or None, default=None
        Number of parallel jobs to run.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    code_init : ndarray of shape (n_samples, n_components), default=None
        Initial value for the code, for warm restart. Only used if
        `code_init` and `dict_init` are not None.
    dict_init : ndarray of shape (n_components, n_features), default=None
        Initial values for the dictionary, for warm restart. Only used if
        `code_init` and `dict_init` are not None.
    callback : callable, default=None
        Callable that gets invoked every five iterations.

        .. versionadded:: 1.3
    verbose : bool, default=False
        To control the verbosity of the procedure.
    split_sign : bool, default=False
        Whether to split the sparse feature vector into the concatenation of
        its negative part and its positive part. This can improve the
        performance of downstream classifiers.
    random_state : int, RandomState instance or None, default=None
        Used for initializing the dictionary when ``dict_init`` is not
        specified, randomly shuffling the data when ``shuffle`` is set to
        ``True``, and updating the dictionary. Pass an int for reproducible
        results across multiple function calls.
        See :term:`Glossary <random_state>`.
    positive_code : bool, default=False
        Whether to enforce positivity when finding the code.

        .. versionadded:: 0.20
    positive_dict : bool, default=False
        Whether to enforce positivity when finding the dictionary.

        .. versionadded:: 0.20
    transform_max_iter : int, default=1000
        Maximum number of iterations to perform if `algorithm='lasso_cd'`
        or `'lasso_lars'`.

        .. versionadded:: 0.22

    Attributes
    ----------
    components_ : ndarray of shape (n_components, n_features)
        Dictionary atoms extracted from the data.
    error_ : array
        Vector of errors at each iteration.
    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24
    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0
    n_iter_ : int
        Number of iterations run.

    See Also
    --------
    MiniBatchDictionaryLearning : A faster, less accurate, version of the
        dictionary learning algorithm.
    MiniBatchSparsePCA : Mini-batch Sparse Principal Components Analysis.
    SparseCoder : Find a sparse representation of data from a fixed,
        precomputed dictionary.
    SparsePCA : Sparse Principal Components Analysis.

    References
    ----------
    J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary
    learning for sparse coding
    (https://www.di.ens.fr/~fbach/mairal_icml09.pdf)

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.datasets import make_sparse_coded_signal
    >>> from sklearn.decomposition import DictionaryLearning
    >>> X, dictionary, code = make_sparse_coded_signal(
    ...     n_samples=30, n_components=15, n_features=20, n_nonzero_coefs=10,
    ...     random_state=42,
    ... )
    >>> dict_learner = DictionaryLearning(
    ...     n_components=15, transform_algorithm='lasso_lars',
    ...     transform_alpha=0.1, random_state=42,
    ... )
    >>> X_transformed = dict_learner.fit(X).transform(X)

    We can check the level of sparsity of `X_transformed`:

    >>> np.mean(X_transformed == 0)
    np.float64(0.527)

    We can compare the average squared euclidean norm of the reconstruction
    error of the sparse coded signal relative to the squared euclidean norm
    of the original signal:

    >>> X_hat = X_transformed @ dict_learner.components_
    >>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
    np.float64(0.056)
    """

    _parameter_constraints: dict = {
        "n_components": [Interval(Integral, 1, None, closed="left"), None],
        "alpha": [Interval(Real, 0, None, closed="left")],
        "max_iter": [Interval(Integral, 0, None, closed="left")],
        "tol": [Interval(Real, 0, None, closed="left")],
        "fit_algorithm": [StrOptions({"lars", "cd"})],
        "transform_algorithm": [
            StrOptions({"lasso_lars", "lasso_cd", "lars", "omp", "threshold"})
        ],
        "transform_n_nonzero_coefs": [
            Interval(Integral, 1, None, closed="left"),
            None,
        ],
        "transform_alpha": [Interval(Real, 0, None, closed="left"), None],
        "n_jobs": [Integral, None],
        "code_init": [np.ndarray, None],
        "dict_init": [np.ndarray, None],
        "callback": [callable, None],
        "verbose": ["verbose"],
        "split_sign": ["boolean"],
        "random_state": ["random_state"],
        "positive_code": ["boolean"],
        "positive_dict": ["boolean"],
        "transform_max_iter": [Interval(Integral, 0, None, closed="left")],
    }

    def __init__(
        self,
        n_components=None,
        *,
        alpha=1,
        max_iter=1000,
        tol=1e-8,
        fit_algorithm="lars",
        transform_algorithm="omp",
        transform_n_nonzero_coefs=None,
        transform_alpha=None,
        n_jobs=None,
        code_init=None,
        dict_init=None,
        callback=None,
        verbose=False,
        split_sign=False,
        random_state=None,
        positive_code=False,
        positive_dict=False,
        transform_max_iter=1000,
    ):
        super().__init__(
            transform_algorithm,
            transform_n_nonzero_coefs,
            transform_alpha,
            split_sign,
            n_jobs,
            positive_code,
            transform_max_iter,
        )
        self.n_components = n_components
        self.alpha = alpha
        self.max_iter = max_iter
        self.tol = tol
        self.fit_algorithm = fit_algorithm
        self.code_init = code_init
        self.dict_init = dict_init
        self.callback = callback
        self.verbose = verbose
        self.random_state = random_state
        self.positive_dict = positive_dict

    def fit(self, X, y=None):
        """Fit the model from data in X.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples
            and `n_features` is the number of features.
        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        self : object
            Returns the instance itself.
        """
        self.fit_transform(X)
        return self

    @_fit_context(prefer_skip_nested_validation=True)
    def fit_transform(self, X, y=None):
        """Fit the model from data in X and return the transformed data.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples
            and `n_features` is the number of features.
        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        V : ndarray of shape (n_samples, n_components)
            Transformed data.
        """
        _check_positive_coding(
            method=self.fit_algorithm, positive=self.positive_code
        )
        method = "lasso_" + self.fit_algorithm

        random_state = check_random_state(self.random_state)
        X = validate_data(self, X)

        if self.n_components is None:
            n_components = X.shape[1]
        else:
            n_components = self.n_components

        V, U, E, self.n_iter_ = _dict_learning(
            X,
            n_components,
            alpha=self.alpha,
            tol=self.tol,
            max_iter=self.max_iter,
            method=method,
            method_max_iter=self.transform_max_iter,
            n_jobs=self.n_jobs,
            code_init=self.code_init,
            dict_init=self.dict_init,
            callback=self.callback,
            verbose=self.verbose,
            random_state=random_state,
            return_n_iter=True,
            positive_dict=self.positive_dict,
            positive_code=self.positive_code,
        )
        self.components_ = U
        self.error_ = E
        return V

    @property
    def _n_features_out(self):
        """Number of transformed output features."""
        return self.components_.shape[0]

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.transformer_tags.preserves_dtype = ["float64", "float32"]
        return tags
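# --- Editor's illustration (not part of the scikit-learn API) ---
# A sketch of measuring reconstruction quality after fitting, mirroring the
# class docstring example above: the relative squared error of the sparse
# approximation. `_reconstruction_error_example` is a hypothetical name.
def _reconstruction_error_example(X, dict_learner):
    X_transformed = dict_learner.fit_transform(X)
    X_hat = X_transformed @ dict_learner.components_
    return np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X**2, axis=1))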
class MiniBatchDictionaryLearning(_BaseSparseCoding, BaseEstimator):
    """Mini-batch dictionary learning.

    Finds a dictionary (a set of atoms) that performs well at sparsely
    encoding the fitted data.

    Solves the optimization problem::

        (U^*,V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
                    (U,V)
                    with || V_k ||_2 <= 1 for all  0 <= k < n_components

    ||.||_Fro stands for the Frobenius norm and ||.||_1,1 stands for
    the entry-wise matrix norm which is the sum of the absolute values
    of all the entries in the matrix.

    Read more in the :ref:`User Guide <DictionaryLearning>`.

    Parameters
    ----------
    n_components : int, default=None
        Number of dictionary elements to extract.
    alpha : float, default=1
        Sparsity controlling parameter.
    max_iter : int, default=1_000
        Maximum number of iterations over the complete dataset before
        stopping independently of any early stopping criterion heuristics.

        .. versionadded:: 1.1
    fit_algorithm : {'lars', 'cd'}, default='lars'
        The algorithm used:

        - `'lars'`: uses the least angle regression method to solve the
          lasso problem (`linear_model.lars_path`)
        - `'cd'`: uses the coordinate descent method to compute the
          Lasso solution (`linear_model.Lasso`). Lars will be faster if
          the estimated components are sparse.
    n_jobs : int, default=None
        Number of parallel jobs to run.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    batch_size : int, default=256
        Number of samples in each mini-batch.

        .. versionchanged:: 1.3
           The default value of `batch_size` changed from 3 to 256 in
           version 1.3.
    shuffle : bool, default=True
        Whether to shuffle the samples before forming batches.
    dict_init : ndarray of shape (n_components, n_features), default=None
        Initial value of the dictionary for warm restart scenarios.
    transform_algorithm : {'lasso_lars', 'lasso_cd', 'lars', 'omp', \
            'threshold'}, default='omp'
        Algorithm used to transform the data:

        - `'lars'`: uses the least angle regression method
          (`linear_model.lars_path`);
        - `'lasso_lars'`: uses Lars to compute the Lasso solution.
        - `'lasso_cd'`: uses the coordinate descent method to compute the
          Lasso solution (`linear_model.Lasso`). `'lasso_lars'` will be
          faster if the estimated components are sparse.
        - `'omp'`: uses orthogonal matching pursuit to estimate the sparse
          solution.
        - `'threshold'`: squashes to zero all coefficients less than alpha
          from the projection ``dictionary * X'``.
    transform_n_nonzero_coefs : int, default=None
        Number of nonzero coefficients to target in each column of the
        solution. This is only used by `algorithm='lars'` and
        `algorithm='omp'`. If `None`, then
        `transform_n_nonzero_coefs=int(n_features / 10)`.
    transform_alpha : float, default=None
        If `algorithm='lasso_lars'` or `algorithm='lasso_cd'`, `alpha` is
        the penalty applied to the L1 norm.
        If `algorithm='threshold'`, `alpha` is the absolute value of the
        threshold below which coefficients will be squashed to zero.
        If `None`, defaults to `alpha`.

        .. versionchanged:: 1.2
           When None, default value changed from 1.0 to `alpha`.
    verbose : bool or int, default=False
        To control the verbosity of the procedure.
    split_sign : bool, default=False
        Whether to split the sparse feature vector into the concatenation of
        its negative part and its positive part. This can improve the
        performance of downstream classifiers.
    random_state : int, RandomState instance or None, default=None
        Used for initializing the dictionary when ``dict_init`` is not
        specified, randomly shuffling the data when ``shuffle`` is set to
        ``True``, and updating the dictionary. Pass an int for reproducible
        results across multiple function calls.
        See :term:`Glossary <random_state>`.
    positive_code : bool, default=False
        Whether to enforce positivity when finding the code.

        .. versionadded:: 0.20
    positive_dict : bool, default=False
        Whether to enforce positivity when finding the dictionary.

        .. versionadded:: 0.20
    transform_max_iter : int, default=1000
        Maximum number of iterations to perform if `algorithm='lasso_cd'`
        or `'lasso_lars'`.

        .. versionadded:: 0.22
    callback : callable, default=None
        A callable that gets invoked at the end of each iteration.

        .. versionadded:: 1.1
    tol : float, default=1e-3
        Control early stopping based on the norm of the differences in the
        dictionary between 2 steps. To disable early stopping based on
        changes in the dictionary, set `tol` to 0.0.

        .. versionadded:: 1.1
    max_no_improvement : int, default=10
        Control early stopping based on the consecutive number of mini
        batches that does not yield an improvement on the smoothed cost
        function. To disable convergence detection based on cost function,
        set `max_no_improvement` to None.

        .. versionadded:: 1.1

    Attributes
    ----------
    components_ : ndarray of shape (n_components, n_features)
        Components extracted from the data.
    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24
    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0
    n_iter_ : int
        Number of iterations over the full dataset.
    n_steps_ : int
        Number of mini-batches processed.

        .. versionadded:: 1.1

    See Also
    --------
    DictionaryLearning : Find a dictionary that sparsely encodes data.
    MiniBatchSparsePCA : Mini-batch Sparse Principal Components Analysis.
    SparseCoder : Find a sparse representation of data from a fixed,
        precomputed dictionary.
    SparsePCA : Sparse Principal Components Analysis.

    References
    ----------
    J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary
    learning for sparse coding
    (https://www.di.ens.fr/~fbach/mairal_icml09.pdf)

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.datasets import make_sparse_coded_signal
    >>> from sklearn.decomposition import MiniBatchDictionaryLearning
    >>> X, dictionary, code = make_sparse_coded_signal(
    ...     n_samples=30, n_components=15, n_features=20, n_nonzero_coefs=10,
    ...     random_state=42)
    >>> dict_learner = MiniBatchDictionaryLearning(
    ...     n_components=15, batch_size=3, transform_algorithm='lasso_lars',
    ...     transform_alpha=0.1, max_iter=20, random_state=42)
    >>> X_transformed = dict_learner.fit_transform(X)

    We can check the level of sparsity of `X_transformed`:

    >>> np.mean(X_transformed == 0) > 0.5
    np.True_

    We can compare the average squared euclidean norm of the reconstruction
    error of the sparse coded signal relative to the squared euclidean norm
    of the original signal:

    >>> X_hat = X_transformed @ dict_learner.components_
    >>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
    np.float64(0.052)
    """

    _parameter_constraints: dict = {
        "n_components": [Interval(Integral, 1, None, closed="left"), None],
        "alpha": [Interval(Real, 0, None, closed="left")],
        "max_iter": [Interval(Integral, 0, None, closed="left")],
        "fit_algorithm": [StrOptions({"lars", "cd"})],
        "n_jobs": [Integral, None],
        "batch_size": [Interval(Integral, 1, None, closed="left")],
        "shuffle": ["boolean"],
        "dict_init": [np.ndarray, None],
        "transform_algorithm": [
            StrOptions({"lasso_lars", "lasso_cd", "lars", "omp", "threshold"})
        ],
        "transform_n_nonzero_coefs": [
            Interval(Integral, 1, None, closed="left"),
            None,
        ],
        "transform_alpha": [Interval(Real, 0, None, closed="left"), None],
        "verbose": ["verbose"],
        "split_sign": ["boolean"],
        "random_state": ["random_state"],
        "positive_code": ["boolean"],
        "positive_dict": ["boolean"],
        "transform_max_iter": [Interval(Integral, 0, None, closed="left")],
        "callback": [callable, None],
        "tol": [Interval(Real, 0, None, closed="left")],
        "max_no_improvement": [Interval(Integral, 0, None, closed="left"), None],
    }

    def __init__(
        self,
        n_components=None,
        *,
        alpha=1,
        max_iter=1_000,
        fit_algorithm="lars",
        n_jobs=None,
        batch_size=256,
        shuffle=True,
        dict_init=None,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=None,
        transform_alpha=None,
        verbose=False,
        split_sign=False,
        random_state=None,
        positive_code=False,
        positive_dict=False,
        transform_max_iter=1000,
        callback=None,
        tol=1e-3,
        max_no_improvement=10,
    ):
        super().__init__(
            transform_algorithm,
            transform_n_nonzero_coefs,
            transform_alpha,
            split_sign,
            n_jobs,
            positive_code,
            transform_max_iter,
        )
        self.n_components = n_components
        self.alpha = alpha
        self.max_iter = max_iter
        self.fit_algorithm = fit_algorithm
        self.dict_init = dict_init
        self.verbose = verbose
        self.shuffle = shuffle
        self.batch_size = batch_size
        self.random_state = random_state
        self.positive_dict = positive_dict
        self.callback = callback
        self.tol = tol
        self.max_no_improvement = max_no_improvement

    def _check_params(self, X):
        # n_components
        self._n_components = self.n_components
        if self._n_components is None:
            self._n_components = X.shape[1]

        # fit_algorithm
        _check_positive_coding(self.fit_algorithm, self.positive_code)
        self._fit_algorithm = "lasso_" + self.fit_algorithm

        # batch_size
        self._batch_size = min(self.batch_size, X.shape[0])

    def _initialize_dict(self, X, random_state):
        """Initialization of the dictionary."""
        if self.dict_init is not None:
            dictionary = self.dict_init
        else:
            # Init V with SVD of X
            _, S, dictionary = _randomized_svd(
                X, self._n_components, random_state=random_state
            )
            dictionary = S[:, np.newaxis] * dictionary

        if self._n_components <= len(dictionary):
            dictionary = dictionary[: self._n_components, :]
        else:
            dictionary = np.concatenate(
                (
                    dictionary,
                    np.zeros(
                        (self._n_components - len(dictionary), dictionary.shape[1]),
                        dtype=dictionary.dtype,
                    ),
                )
            )

        dictionary = check_array(dictionary, order="F", dtype=X.dtype, copy=False)
        dictionary = np.require(dictionary, requirements="W")

        return dictionary

    def _update_inner_stats(self, X, code, batch_size, step):
        """Update the inner stats inplace."""
        if step < batch_size - 1:
            theta = (step + 1) * batch_size
        else:
            theta = batch_size**2 + step + 1 - batch_size
        beta = (theta + 1 - batch_size) / (theta + 1)

        self._A *= beta
        self._A += code.T @ code / batch_size
        self._B *= beta
        self._B += X.T @ code / batch_size

    def _minibatch_step(self, X, dictionary, random_state, step):
        """Perform the update on the dictionary for one minibatch."""
        batch_size = X.shape[0]

        # Compute code for this batch
        code = _sparse_encode(
            X,
            dictionary,
            algorithm=self._fit_algorithm,
            alpha=self.alpha,
            n_jobs=self.n_jobs,
            positive=self.positive_code,
            max_iter=self.transform_max_iter,
            verbose=self.verbose,
        )

        batch_cost = (
            0.5 * ((X - code @ dictionary) ** 2).sum()
            + self.alpha * np.abs(code).sum()
        ) / batch_size

        # Update inner stats
        self._update_inner_stats(X, code, batch_size, step)

        # Update dictionary in place
        _update_dict(
            dictionary,
            X,
            code,
            self._A,
            self._B,
            verbose=self.verbose,
            random_state=random_state,
            positive=self.positive_dict,
        )

        return batch_cost

    def _check_convergence(
        self, X, batch_cost, new_dict, old_dict, n_samples, step, n_steps
    ):
        """Helper function to encapsulate the early stopping logic."""
        batch_size = X.shape[0]

        # counts steps starting from 1 for user friendly verbose mode.
        step = step + 1

        # Ignore the first steps to avoid initializing the smoothed cost
        # with a too bad value.
        if step <= min(100, n_samples / batch_size):
            if self.verbose:
                print(
                    f"Minibatch step {step}/{n_steps}: mean batch cost: {batch_cost}"
                )
            return False

        # Compute an Exponentially Weighted Average of the cost function to
        # monitor the convergence while discarding minibatch-local stochastic
        # variability: https://en.wikipedia.org/wiki/Moving_average
        if self._ewa_cost is None:
            self._ewa_cost = batch_cost
        else:
            alpha = batch_size / (n_samples + 1)
            alpha = min(alpha, 1)
            self._ewa_cost = self._ewa_cost * (1 - alpha) + batch_cost * alpha

        if self.verbose:
            print(
                f"Minibatch step {step}/{n_steps}: mean batch cost: "
                f"{batch_cost}, ewa cost: {self._ewa_cost}"
            )

        # Early stopping based on change of dictionary
        dict_diff = linalg.norm(new_dict - old_dict) / self._n_components
        if self.tol > 0 and dict_diff <= self.tol:
            if self.verbose:
                print(
                    f"Converged (small dictionary change) at step {step}/{n_steps}"
                )
            return True

        # Early stopping heuristic due to lack of improvement on the
        # smoothed cost function
        if self._ewa_cost_min is None or self._ewa_cost < self._ewa_cost_min:
            self._no_improvement = 0
            self._ewa_cost_min = self._ewa_cost
        else:
            self._no_improvement += 1

        if (
            self.max_no_improvement is not None
            and self._no_improvement >= self.max_no_improvement
        ):
            if self.verbose:
                print(
                    "Converged (lack of improvement in objective function) "
                    f"at step {step}/{n_steps}"
                )
            return True

        return False

    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X, y=None):
        """Fit the model from data in X.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples
            and `n_features` is the number of features.
        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        self : object
            Returns the instance itself.
        """
        X = validate_data(
            self, X, dtype=[np.float64, np.float32], order="C", copy=False
        )

        self._check_params(X)
        self._random_state = check_random_state(self.random_state)

        dictionary = self._initialize_dict(X, self._random_state)
        old_dict = dictionary.copy()

        if self.shuffle:
            X_train = X.copy()
            self._random_state.shuffle(X_train)
        else:
            X_train = X

        n_samples, n_features = X_train.shape

        if self.verbose:
            print("[dict_learning]")

        # Inner stats
        self._A = np.zeros(
            (self._n_components, self._n_components), dtype=X_train.dtype
        )
        self._B = np.zeros((n_features, self._n_components), dtype=X_train.dtype)

        # Attributes to monitor the convergence
        self._ewa_cost = None
        self._ewa_cost_min = None
        self._no_improvement = 0

        batches = gen_batches(n_samples, self._batch_size)
        batches = itertools.cycle(batches)
        n_steps_per_iter = int(np.ceil(n_samples / self._batch_size))
        n_steps = self.max_iter * n_steps_per_iter

        i = -1  # to allow max_iter = 0

        for i, batch in zip(range(n_steps), batches):
            X_batch = X_train[batch]

            batch_cost = self._minibatch_step(
                X_batch, dictionary, self._random_state, i
            )

            if self._check_convergence(
                X_batch, batch_cost, dictionary, old_dict, n_samples, i, n_steps
            ):
                break

            if self.callback is not None:
                self.callback(locals())

            old_dict[:] = dictionary

        self.n_steps_ = i + 1
        self.n_iter_ = int(np.ceil(self.n_steps_ / n_steps_per_iter))
        self.components_ = dictionary

        return self

    @_fit_context(prefer_skip_nested_validation=True)
    def partial_fit(self, X, y=None):
        """Update the model using the data in X as a mini-batch.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples
            and `n_features` is the number of features.
        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        self : object
            Return the instance itself.
        """
        has_components = hasattr(self, "components_")

        X = validate_data(
            self,
            X,
            dtype=[np.float64, np.float32],
            order="C",
            reset=not has_components,
        )

        if not has_components:
            # This instance has not been fitted yet (fit or partial_fit)
            self._check_params(X)
            self._random_state = check_random_state(self.random_state)

            dictionary = self._initialize_dict(X, self._random_state)

            self.n_steps_ = 0

            self._A = np.zeros(
                (self._n_components, self._n_components), dtype=X.dtype
            )
            self._B = np.zeros((X.shape[1], self._n_components), dtype=X.dtype)
        else:
            dictionary = self.components_

        self._minibatch_step(X, dictionary, self._random_state, self.n_steps_)

        self.components_ = dictionary
        self.n_steps_ += 1

        return self

    @property
    def _n_features_out(self):
        """Number of transformed output features."""
        return self.components_.shape[0]

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.transformer_tags.preserves_dtype = ["float64", "float32"]
        return tags
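# --- Editor's illustration (not part of the scikit-learn API) ---
# A sketch of out-of-core usage with `partial_fit` above: the dictionary is
# updated one mini-batch at a time, e.g. while streaming data that does not
# fit in memory. `_streaming_fit_example` and `batches_of_X` are hypothetical
# names; `batches_of_X` is any iterable of 2-D arrays with the same number
# of features.
def _streaming_fit_example(batches_of_X, n_components=15):
    mbdl = MiniBatchDictionaryLearning(n_components=n_components, random_state=0)
    for X_batch in batches_of_X:
        mbdl.partial_fit(X_batch)
    return mbdl.components_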