"""Spectral biclustering algorithms."""

from abc import ABCMeta, abstractmethod
from numbers import Integral

import numpy as np
from scipy.linalg import norm
from scipy.sparse import dia_matrix, issparse
from scipy.sparse.linalg import eigsh, svds

from ..base import BaseEstimator, BiclusterMixin, _fit_context
from ..utils import check_random_state, check_scalar
from ..utils._param_validation import Interval, StrOptions
from ..utils.extmath import _randomized_svd, make_nonnegative, safe_sparse_dot
from ..utils.validation import assert_all_finite, validate_data
from ._kmeans import KMeans, MiniBatchKMeans

__all__ = ["SpectralBiclustering", "SpectralCoclustering"]


def _scale_normalize(X):
    """Normalize ``X`` by scaling rows and columns independently.

    Returns the normalized matrix and the row and column scaling
    factors.
    """
    X = make_nonnegative(X)
    row_diag = np.asarray(1.0 / np.sqrt(X.sum(axis=1))).squeeze()
    col_diag = np.asarray(1.0 / np.sqrt(X.sum(axis=0))).squeeze()
    row_diag = np.where(np.isnan(row_diag), 0, row_diag)
    col_diag = np.where(np.isnan(col_diag), 0, col_diag)
    if issparse(X):
        n_rows, n_cols = X.shape
        r = dia_matrix((row_diag, [0]), shape=(n_rows, n_rows))
        c = dia_matrix((col_diag, [0]), shape=(n_cols, n_cols))
        an = r * X * c
    else:
        an = row_diag[:, np.newaxis] * X * col_diag
    return an, row_diag, col_diag


def _bistochastic_normalize(X, max_iter=1000, tol=1e-5):
    """Normalize rows and columns of ``X`` simultaneously so that all
    rows sum to one constant and all columns sum to a different
    constant.
    """
    X = make_nonnegative(X)
    X_scaled = X
    for _ in range(max_iter):
        X_new, _, _ = _scale_normalize(X_scaled)
        if issparse(X):
            dist = norm(X_scaled.data - X_new.data)
        else:
            dist = norm(X_scaled - X_new)
        X_scaled = X_new
        if dist is not None and dist < tol:
            break
    return X_scaled


def _log_normalize(X):
    """Normalize ``X`` according to Kluger's log-interactions scheme."""
    X = make_nonnegative(X, min_value=1)
    if issparse(X):
        raise ValueError(
            "Cannot compute log of a sparse matrix,"
            " because log(x) diverges to -infinity as x"
            " goes to 0."
        )
    L = np.log(X)
    row_avg = L.mean(axis=1)[:, np.newaxis]
    col_avg = L.mean(axis=0)
    avg = L.mean()
    return L - row_avg - col_avg + avg


class BaseSpectral(BiclusterMixin, BaseEstimator, metaclass=ABCMeta):
    """Base class for spectral biclustering."""

    _parameter_constraints: dict = {
        "svd_method": [StrOptions({"randomized", "arpack"})],
        "n_svd_vecs": [Interval(Integral, 0, None, closed="left"), None],
        "mini_batch": ["boolean"],
        "init": [StrOptions({"k-means++", "random"}), np.ndarray],
        "n_init": [Interval(Integral, 1, None, closed="left")],
        "random_state": ["random_state"],
    }

    @abstractmethod
    def __init__(
        self,
        n_clusters=3,
        svd_method="randomized",
        n_svd_vecs=None,
        mini_batch=False,
        init="k-means++",
        n_init=10,
        random_state=None,
    ):
        self.n_clusters = n_clusters
        self.svd_method = svd_method
        self.n_svd_vecs = n_svd_vecs
        self.mini_batch = mini_batch
        self.init = init
        self.n_init = n_init
        self.random_state = random_state

    @abstractmethod
    def _check_parameters(self, n_samples):
        """Validate parameters depending on the input data."""

    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X, y=None):
        """Create a biclustering for X.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training data.

        y : Ignored
            Not used, present for API consistency by convention.

        Returns
        -------
        self : object
            SpectralBiclustering instance.
        """
        X = validate_data(self, X, accept_sparse="csr", dtype=np.float64)
        self._check_parameters(X.shape[0])
        self._fit(X)
        return self

    def _svd(self, array, n_components, n_discard):
        """Returns first `n_components` left and right singular
        vectors u and v, discarding the first `n_discard`.
        """
        if self.svd_method == "randomized":
            kwargs = {}
            if self.n_svd_vecs is not None:
                kwargs["n_oversamples"] = self.n_svd_vecs
            u, _, vt = _randomized_svd(
                array, n_components, random_state=self.random_state, **kwargs
            )
        elif self.svd_method == "arpack":
            u, _, vt = svds(array, k=n_components, ncv=self.n_svd_vecs)
            if np.any(np.isnan(vt)):
                # some eigenvalues of A * A.T are negative, so sqrt() can be
                # unstable. Use A.T * A instead.
                A = safe_sparse_dot(array.T, array)
                random_state = check_random_state(self.random_state)
                # initialize with [-1, 1] as in ARPACK
                v0 = random_state.uniform(-1, 1, A.shape[0])
                _, v = eigsh(A, ncv=self.n_svd_vecs, v0=v0)
                vt = v.T
            if np.any(np.isnan(u)):
                A = safe_sparse_dot(array, array.T)
                random_state = check_random_state(self.random_state)
                # initialize with [-1, 1] as in ARPACK
                v0 = random_state.uniform(-1, 1, A.shape[0])
                _, u = eigsh(A, ncv=self.n_svd_vecs, v0=v0)

        assert_all_finite(u)
        assert_all_finite(vt)
        u = u[:, n_discard:]
        vt = vt[n_discard:]
        return u, vt.T

    def _k_means(self, data, n_clusters):
        if self.mini_batch:
            model = MiniBatchKMeans(
                n_clusters,
                init=self.init,
                n_init=self.n_init,
                random_state=self.random_state,
            )
        else:
            model = KMeans(
                n_clusters,
                init=self.init,
                n_init=self.n_init,
                random_state=self.random_state,
            )
        model.fit(data)
        centroid = model.cluster_centers_
        labels = model.labels_
        return centroid, labels

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.input_tags.sparse = True
        return tags


class SpectralCoclustering(BaseSpectral):
    """Spectral Co-Clustering algorithm (Dhillon, 2001).

    Clusters rows and columns of an array `X` to solve the relaxed
    normalized cut of the bipartite graph created from `X` as follows:
    the edge between row vertex `i` and column vertex `j` has weight
    `X[i, j]`.

    The resulting bicluster structure is block-diagonal, since each
    row and each column belongs to exactly one bicluster.

    Supports sparse matrices, as long as they are nonnegative.

    Read more in the :ref:`User Guide <spectral_coclustering>`.

    Parameters
    ----------
    n_clusters : int, default=3
        The number of biclusters to find.

    svd_method : {'randomized', 'arpack'}, default='randomized'
        Selects the algorithm for finding singular vectors. May be
        'randomized' or 'arpack'. If 'randomized', use
        :func:`sklearn.utils.extmath.randomized_svd`, which may be faster
        for large matrices. If 'arpack', use
        :func:`scipy.sparse.linalg.svds`, which is more accurate, but
        possibly slower in some cases.

    n_svd_vecs : int, default=None
        Number of vectors to use in calculating the SVD. Corresponds
        to `ncv` when `svd_method=arpack` and `n_oversamples` when
        `svd_method` is 'randomized'.

    mini_batch : bool, default=False
        Whether to use mini-batch k-means, which is faster but may get
        different results.

    init : {'k-means++', 'random'}, or ndarray of shape \
            (n_clusters, n_features), default='k-means++'
        Method for initialization of k-means algorithm; defaults to
        'k-means++'.

    n_init : int, default=10
        Number of random initializations that are tried with the
        k-means algorithm.

        If mini-batch k-means is used, the best initialization is
        chosen and the algorithm runs once. Otherwise, the algorithm
        is run for each initialization and the best solution chosen.

    random_state : int, RandomState instance, default=None
        Used for randomizing the singular value decomposition and the k-means
        initialization. Use an int to make the randomness deterministic.
        See :term:`Glossary <random_state>`.

    Attributes
    ----------
    rows_ : array-like of shape (n_row_clusters, n_rows)
        Results of the clustering. `rows[i, r]` is True if
        cluster `i` contains row `r`. Available only after calling ``fit``.

    columns_ : array-like of shape (n_column_clusters, n_columns)
        Results of the clustering, like `rows`.

    row_labels_ : array-like of shape (n_rows,)
        The bicluster label of each row.

    column_labels_ : array-like of shape (n_cols,)
        The bicluster label of each column.

    biclusters_ : tuple of two ndarrays
        The tuple contains the `rows_` and `columns_` arrays.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    SpectralBiclustering : Partitions rows and columns under the assumption
        that the data has an underlying checkerboard structure.

    References
    ----------
    * :doi:`Dhillon, Inderjit S, 2001. Co-clustering documents and words using
      bipartite spectral graph partitioning. <10.1145/502512.502550>`

    Examples
    --------
    >>> from sklearn.cluster import SpectralCoclustering
    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [1, 0],
    ...               [4, 7], [3, 5], [3, 6]])
    >>> clustering = SpectralCoclustering(n_clusters=2, random_state=0).fit(X)
    >>> clustering.row_labels_ #doctest: +SKIP
    array([0, 1, 1, 0, 0, 0], dtype=int32)
    >>> clustering.column_labels_ #doctest: +SKIP
    array([0, 0], dtype=int32)
    >>> clustering
    SpectralCoclustering(n_clusters=2, random_state=0)

    For a more detailed example, see the following:
    :ref:`sphx_glr_auto_examples_bicluster_plot_spectral_coclustering.py`.
    """

    _parameter_constraints: dict = {
        **BaseSpectral._parameter_constraints,
        "n_clusters": [Interval(Integral, 1, None, closed="left")],
    }

    def __init__(
        self,
        n_clusters=3,
        *,
        svd_method="randomized",
        n_svd_vecs=None,
        mini_batch=False,
        init="k-means++",
        n_init=10,
        random_state=None,
    ):
        super().__init__(
            n_clusters, svd_method, n_svd_vecs, mini_batch, init, n_init, random_state
        )

    def _check_parameters(self, n_samples):
        if self.n_clusters > n_samples:
            raise ValueError(
                f"n_clusters should be <= n_samples={n_samples}. Got"
                f" {self.n_clusters} instead."
            )

    def _fit(self, X):
        normalized_data, row_diag, col_diag = _scale_normalize(X)
        n_sv = 1 + int(np.ceil(np.log2(self.n_clusters)))
        u, v = self._svd(normalized_data, n_sv, n_discard=1)
        z = np.vstack((row_diag[:, np.newaxis] * u, col_diag[:, np.newaxis] * v))

        _, labels = self._k_means(z, self.n_clusters)

        n_rows = X.shape[0]
        self.row_labels_ = labels[:n_rows]
        self.column_labels_ = labels[n_rows:]

        self.rows_ = np.vstack(
            [self.row_labels_ == c for c in range(self.n_clusters)]
        )
        self.columns_ = np.vstack(
            [self.column_labels_ == c for c in range(self.n_clusters)]
        )


class SpectralBiclustering(BaseSpectral):
    """Spectral biclustering (Kluger, 2003).

    Partitions rows and columns under the assumption that the data has
    an underlying checkerboard structure. For instance, if there are
    two row partitions and three column partitions, each row will
    belong to three biclusters, and each column will belong to two
    biclusters. The outer product of the corresponding row and column
    label vectors gives this checkerboard structure.

    Read more in the :ref:`User Guide <spectral_biclustering>`.

    Parameters
    ----------
    n_clusters : int or tuple (n_row_clusters, n_column_clusters), default=3
        The number of row and column clusters in the checkerboard
        structure.

    method : {'bistochastic', 'scale', 'log'}, default='bistochastic'
        Method of normalizing and converting singular vectors into
        biclusters. May be one of 'scale', 'bistochastic', or 'log'.
        The authors recommend using 'log'. If the data is sparse,
        however, log normalization will not work, which is why the
        default is 'bistochastic'.

        .. warning::
           if `method='log'`, the data must not be sparse.

    n_components : int, default=6
        Number of singular vectors to check.

    n_best : int, default=3
        Number of best singular vectors to which to project the data
        for clustering.

    svd_method : {'randomized', 'arpack'}, default='randomized'
        Selects the algorithm for finding singular vectors. May be
        'randomized' or 'arpack'. If 'randomized', uses
        :func:`~sklearn.utils.extmath.randomized_svd`, which may be faster
        for large matrices. If 'arpack', uses
        `scipy.sparse.linalg.svds`, which is more accurate, but
        possibly slower in some cases.

    n_svd_vecs : int, default=None
        Number of vectors to use in calculating the SVD. Corresponds
        to `ncv` when `svd_method=arpack` and `n_oversamples` when
        `svd_method` is 'randomized'.

    mini_batch : bool, default=False
        Whether to use mini-batch k-means, which is faster but may get
        different results.

    init : {'k-means++', 'random'} or ndarray of shape (n_clusters, n_features), \
            default='k-means++'
        Method for initialization of k-means algorithm; defaults to
        'k-means++'.

    n_init : int, default=10
        Number of random initializations that are tried with the
        k-means algorithm.

        If mini-batch k-means is used, the best initialization is
        chosen and the algorithm runs once. Otherwise, the algorithm
        is run for each initialization and the best solution chosen.

    random_state : int, RandomState instance, default=None
        Used for randomizing the singular value decomposition and the k-means
        initialization. Use an int to make the randomness deterministic.
        See :term:`Glossary <random_state>`.

    Attributes
    ----------
    rows_ : array-like of shape (n_row_clusters, n_rows)
        Results of the clustering. `rows[i, r]` is True if
        cluster `i` contains row `r`. Available only after calling ``fit``.

    columns_ : array-like of shape (n_column_clusters, n_columns)
        Results of the clustering, like `rows`.

    row_labels_ : array-like of shape (n_rows,)
        Row partition labels.

    column_labels_ : array-like of shape (n_cols,)
        Column partition labels.

    biclusters_ : tuple of two ndarrays
        The tuple contains the `rows_` and `columns_` arrays.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    SpectralCoclustering : Spectral Co-Clustering algorithm (Dhillon, 2001).

    References
    ----------
    * :doi:`Kluger, Yuval, et. al., 2003. Spectral biclustering of microarray
      data: coclustering genes and conditions. <10.1101/gr.648603>`

    Examples
    --------
    >>> from sklearn.cluster import SpectralBiclustering
    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [1, 0],
    ...               [4, 7], [3, 5], [3, 6]])
    >>> clustering = SpectralBiclustering(n_clusters=2, random_state=0).fit(X)
    >>> clustering.row_labels_
    array([1, 1, 1, 0, 0, 0], dtype=int32)
    >>> clustering.column_labels_
    array([1, 0], dtype=int32)
    >>> clustering
    SpectralBiclustering(n_clusters=2, random_state=0)

    For a more detailed example, see
    :ref:`sphx_glr_auto_examples_bicluster_plot_spectral_biclustering.py`.
    """

    _parameter_constraints: dict = {
        **BaseSpectral._parameter_constraints,
        "n_clusters": [Interval(Integral, 1, None, closed="left"), tuple],
        "method": [StrOptions({"bistochastic", "scale", "log"})],
        "n_components": [Interval(Integral, 1, None, closed="left")],
        "n_best": [Interval(Integral, 1, None, closed="left")],
    }

    def __init__(
        self,
        n_clusters=3,
        *,
        method="bistochastic",
        n_components=6,
        n_best=3,
        svd_method="randomized",
        n_svd_vecs=None,
        mini_batch=False,
        init="k-means++",
        n_init=10,
        random_state=None,
    ):
        super().__init__(
            n_clusters, svd_method, n_svd_vecs, mini_batch, init, n_init, random_state
        )
        self.method = method
        self.n_components = n_components
        self.n_best = n_best

    def _check_parameters(self, n_samples):
        if isinstance(self.n_clusters, Integral):
            if self.n_clusters > n_samples:
                raise ValueError(
                    f"n_clusters should be <= n_samples={n_samples}. Got"
                    f" {self.n_clusters} instead."
                )
        else:  # tuple
            try:
                n_row_clusters, n_column_clusters = self.n_clusters
                check_scalar(
                    n_row_clusters,
                    "n_row_clusters",
                    target_type=Integral,
                    min_val=1,
                    max_val=n_samples,
                )
                check_scalar(
                    n_column_clusters,
                    "n_column_clusters",
                    target_type=Integral,
                    min_val=1,
                    max_val=n_samples,
                )
            except (ValueError, TypeError) as e:
                raise ValueError(
                    "Incorrect parameter n_clusters has value:"
                    f" {self.n_clusters}. It should either be a single integer"
                    " or an iterable with two integers:"
                    " (n_row_clusters, n_column_clusters)"
                    " and the values should be in the range (1, n_samples)."
                ) from e

        if self.n_best > self.n_components:
            raise ValueError(
                f"n_best={self.n_best} must be <= n_components={self.n_components}."
            )

    def _fit(self, X):
        n_sv = self.n_components
        if self.method == "bistochastic":
            normalized_data = _bistochastic_normalize(X)
            n_sv += 1
        elif self.method == "scale":
            normalized_data, _, _ = _scale_normalize(X)
            n_sv += 1
        elif self.method == "log":
            normalized_data = _log_normalize(X)
        n_discard = 0 if self.method == "log" else 1
        u, v = self._svd(normalized_data, n_sv, n_discard)
        ut = u.T
        vt = v.T

        try:
            n_row_clusters, n_col_clusters = self.n_clusters
        except TypeError:
            n_row_clusters = n_col_clusters = self.n_clusters

        best_ut = self._fit_best_piecewise(ut, self.n_best, n_row_clusters)

        best_vt = self._fit_best_piecewise(vt, self.n_best, n_col_clusters)

        self.row_labels_ = self._project_and_cluster(X, best_vt.T, n_row_clusters)

        self.column_labels_ = self._project_and_cluster(X.T, best_ut.T, n_col_clusters)

        self.rows_ = np.vstack(
            [
                self.row_labels_ == label
                for label in range(n_row_clusters)
                for _ in range(n_col_clusters)
            ]
        )
        self.columns_ = np.vstack(
            [
                self.column_labels_ == label
                for _ in range(n_row_clusters)
                for label in range(n_col_clusters)
            ]
        )

    def _fit_best_piecewise(self, vectors, n_best, n_clusters):
        """Find the ``n_best`` vectors that are best approximated by piecewise
        constant vectors.

        The piecewise vectors are found by k-means; the best is chosen
        according to Euclidean distance.
        """

        def make_piecewise(v):
            centroid, labels = self._k_means(v.reshape(-1, 1), n_clusters)
            return centroid[labels].ravel()

        piecewise_vectors = np.apply_along_axis(make_piecewise, axis=1, arr=vectors)
        dists = np.apply_along_axis(norm, axis=1, arr=(vectors - piecewise_vectors))
        result = vectors[np.argsort(dists)[:n_best]]
        return result

    def _project_and_cluster(self, data, vectors, n_clusters):
        """Project ``data`` to ``vectors`` and cluster the result."""
        projected = safe_sparse_dot(data, vectors)
        _, labels = self._k_means(projected, n_clusters)
        return labels
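The estimators in this module are normally exercised through scikit-learn's public API. A minimal smoke-test sketch is shown below; it assumes an installed scikit-learn, and the matrix shape, noise level, and score threshold are illustrative choices, not values taken from this file.

```python
# Sketch: recover two planted biclusters with SpectralCoclustering and
# score the recovery against the ground truth. All numeric settings here
# (shape, noise, threshold) are arbitrary demo values.
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters
from sklearn.metrics import consensus_score

# Plant two biclusters in a 30 x 20 matrix, then shuffle rows and columns.
data, rows, columns = make_biclusters(
    shape=(30, 20), n_clusters=2, noise=0.5, shuffle=True, random_state=0
)

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(data)

# consensus_score compares the found biclusters with the planted ones;
# 1.0 means a perfect match.
score = consensus_score(model.biclusters_, (rows, columns))
```

With this little noise the planted structure is typically recovered almost exactly, so `score` should be close to 1.0.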