"""Affinity Propagation clustering algorithm."""

import warnings
from numbers import Integral, Real

import numpy as np

from .._config import config_context
from ..base import BaseEstimator, ClusterMixin, _fit_context
from ..exceptions import ConvergenceWarning
from ..metrics import euclidean_distances, pairwise_distances_argmin
from ..utils import check_random_state
from ..utils._param_validation import Interval, StrOptions, validate_params
from ..utils.validation import check_is_fitted, validate_data


def _equal_similarities_and_preferences(S, preference):
    def all_equal_preferences():
        return np.all(preference.flat == preference.flat[0])

    def all_equal_similarities():
        # Create mask to ignore the diagonal of S
        mask = np.ones(S.shape, dtype=bool)
        np.fill_diagonal(mask, 0)
        return np.all(S[mask].flat == S[mask].flat[0])

    return all_equal_preferences() and all_equal_similarities()
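The helper above short-circuits the algorithm when every preference is identical and every off-diagonal similarity is identical, since message passing is degenerate in that case. A minimal self-contained sketch of the same check (`equal_sims_and_prefs` is an illustrative name, not part of the module):

```python
import numpy as np

def equal_sims_and_prefs(S, preference):
    # Same idea as the private helper: compare preferences among themselves,
    # and off-diagonal similarities among themselves (the diagonal holds
    # preferences, so it is excluded via a boolean mask).
    preference = np.asarray(preference)
    mask = ~np.eye(S.shape[0], dtype=bool)
    return bool(
        np.all(preference.flat == preference.flat[0])
        and np.all(S[mask] == S[mask].flat[0])
    )

# Three mutually equidistant points: a degenerate input for the algorithm.
S = np.array([[0.0, -2.0, -2.0],
              [-2.0, 0.0, -2.0],
              [-2.0, -2.0, 0.0]])
print(equal_sims_and_prefs(S, -2.0))                  # degenerate: True
print(equal_sims_and_prefs(S, [-2.0, -1.0, -2.0]))    # preferences differ: False
```

On a degenerate input like this, the main routine skips message passing entirely and returns either one cluster or one cluster per sample, depending on the preference.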
def _affinity_propagation(
    S,
    *,
    preference,
    convergence_iter,
    max_iter,
    damping,
    verbose,
    return_n_iter,
    random_state,
):
    """Main affinity propagation algorithm."""
    n_samples = S.shape[0]
    if n_samples == 1 or _equal_similarities_and_preferences(S, preference):
        # It makes no sense to run the algorithm in this case, so return 1 or
        # n_samples clusters, depending on preferences
        warnings.warn(
            "All samples have mutually equal similarities. "
            "Returning arbitrary cluster center(s)."
        )
        if preference.flat[0] >= S.flat[n_samples - 1]:
            return (
                (np.arange(n_samples), np.arange(n_samples), 0)
                if return_n_iter
                else (np.arange(n_samples), np.arange(n_samples))
            )
        else:
            return (
                (np.array([0]), np.array([0] * n_samples), 0)
                if return_n_iter
                else (np.array([0]), np.array([0] * n_samples))
            )

    # Place preference on the diagonal of S
    S.flat[:: (n_samples + 1)] = preference

    A = np.zeros((n_samples, n_samples))
    R = np.zeros((n_samples, n_samples))  # Initialize messages
    # Intermediate results
    tmp = np.zeros((n_samples, n_samples))

    # Remove degeneracies
    S += (
        np.finfo(S.dtype).eps * S + np.finfo(S.dtype).tiny * 100
    ) * random_state.standard_normal(size=(n_samples, n_samples))

    # Execute parallel affinity propagation updates
    e = np.zeros((n_samples, convergence_iter))

    ind = np.arange(n_samples)

    for it in range(max_iter):
        # tmp = A + S; compute responsibilities
        np.add(A, S, tmp)
        I = np.argmax(tmp, axis=1)
        Y = tmp[ind, I]  # np.max(A + S, axis=1)
        tmp[ind, I] = -np.inf
        Y2 = np.max(tmp, axis=1)

        # tmp = Rnew
        np.subtract(S, Y[:, None], tmp)
        tmp[ind, I] = S[ind, I] - Y2

        # Damping
        tmp *= 1 - damping
        R *= damping
        R += tmp

        # tmp = Rp; compute availabilities
        np.maximum(R, 0, tmp)
        tmp.flat[:: n_samples + 1] = R.flat[:: n_samples + 1]

        # tmp = -Anew
        tmp -= np.sum(tmp, axis=0)
        dA = np.diag(tmp).copy()
        tmp.clip(0, np.inf, tmp)
        tmp.flat[:: n_samples + 1] = dA

        # Damping
        tmp *= 1 - damping
        A *= damping
        A -= tmp

        # Check for convergence
        E = (np.diag(A) + np.diag(R)) > 0
        e[:, it % convergence_iter] = E
        K = np.sum(E, axis=0)

        if it >= convergence_iter:
            se = np.sum(e, axis=1)
            unconverged = np.sum((se == convergence_iter) + (se == 0)) != n_samples
            if (not unconverged and (K > 0)) or (it == max_iter):
                never_converged = False
                if verbose:
                    print("Converged after %d iterations." % it)
                break
    else:
        never_converged = True
        if verbose:
            print("Did not converge")

    I = np.flatnonzero(E)
    K = I.size  # Identify exemplars

    if K > 0:
        if never_converged:
            warnings.warn(
                "Affinity propagation did not converge, this model may return "
                "degenerate cluster centers and labels.",
                ConvergenceWarning,
            )
        c = np.argmax(S[:, I], axis=1)
        c[I] = np.arange(K)  # Identify clusters
        # Refine the final set of exemplars and clusters and return results
        for k in range(K):
            ii = np.where(c == k)[0]
            j = np.argmax(np.sum(S[ii[:, np.newaxis], ii], axis=0))
            I[k] = ii[j]

        c = np.argmax(S[:, I], axis=1)
        c[I] = np.arange(K)
        labels = I[c]
        # Reduce labels to a sorted, gapless, list
        cluster_centers_indices = np.unique(labels)
        labels = np.searchsorted(cluster_centers_indices, labels)
    else:
        warnings.warn(
            "Affinity propagation did not converge and this model will not "
            "have any cluster centers.",
            ConvergenceWarning,
        )
        labels = np.array([-1] * n_samples)
        cluster_centers_indices = []

    if return_n_iter:
        return cluster_centers_indices, labels, it + 1
    else:
        return cluster_centers_indices, labels
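The vectorized loop above implements Frey and Dueck's two message-passing equations. As a cross-check, here is a deliberately naive, loop-based sketch of the same updates — no noise injection and a fixed iteration count, so it is illustrative only; `ap_reference` is a hypothetical name, not part of this module:

```python
import numpy as np

def ap_reference(S, damping=0.5, n_iter=200):
    """Naive per-element version of the affinity propagation message updates."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities  a(i, k)
    for _ in range(n_iter):
        AS = A + S
        Rnew = np.empty_like(R)
        for i in range(n):
            for k in range(n):
                # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
                Rnew[i, k] = S[i, k] - np.delete(AS[i], k).max()
        R = damping * R + (1 - damping) * Rnew
        Rp = np.maximum(R, 0)
        Anew = np.empty_like(A)
        for i in range(n):
            for k in range(n):
                if i == k:
                    # a(k,k) = sum_{i' != k} max(0, r(i',k))
                    Anew[k, k] = Rp[:, k].sum() - Rp[k, k]
                else:
                    # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
                    Anew[i, k] = min(
                        0.0, R[k, k] + Rp[:, k].sum() - Rp[i, k] - Rp[k, k]
                    )
        A = damping * A + (1 - damping) * Anew
    # Exemplars are points whose self-availability plus self-responsibility
    # is positive; every other point joins its most similar exemplar.
    exemplars = np.flatnonzero(np.diag(A) + np.diag(R) > 0)
    labels = np.argmax(S[:, exemplars], axis=1)  # assumes >= 1 exemplar emerged
    labels[exemplars] = np.arange(exemplars.size)
    return exemplars, labels

# Same toy data as the docstring examples below.
X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]], dtype=float)
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(S, np.median(S))  # default preference: median similarity
exemplars, labels = ap_reference(S)
```

The production code above is equivalent but fully vectorized, adds a small random perturbation to break ties, and stops early once the exemplar set has been stable for `convergence_iter` iterations.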
@validate_params(
    {
        "S": ["array-like"],
        "return_n_iter": ["boolean"],
    },
    prefer_skip_nested_validation=False,
)
def affinity_propagation(
    S,
    *,
    preference=None,
    convergence_iter=15,
    max_iter=200,
    damping=0.5,
    copy=True,
    verbose=False,
    return_n_iter=False,
    random_state=None,
):
    """Perform Affinity Propagation Clustering of data.

    Read more in the :ref:`User Guide <affinity_propagation>`.

    Parameters
    ----------
    S : array-like of shape (n_samples, n_samples)
        Matrix of similarities between points.

    preference : array-like of shape (n_samples,) or float, default=None
        Preferences for each point - points with larger values of
        preferences are more likely to be chosen as exemplars. The number of
        exemplars, i.e. of clusters, is influenced by the input preferences
        value. If the preferences are not passed as arguments, they will be
        set to the median of the input similarities (resulting in a moderate
        number of clusters). For a smaller number of clusters, this can be
        set to the minimum value of the similarities.

    convergence_iter : int, default=15
        Number of iterations with no change in the number
        of estimated clusters that stops the convergence.

    max_iter : int, default=200
        Maximum number of iterations.

    damping : float, default=0.5
        Damping factor between 0.5 and 1.

    copy : bool, default=True
        If copy is False, the affinity matrix is modified in place by the
        algorithm, for memory efficiency.

    verbose : bool, default=False
        The verbosity level.

    return_n_iter : bool, default=False
        Whether or not to return the number of iterations.

    random_state : int, RandomState instance or None, default=None
        Pseudo-random number generator to control the starting state.
        Use an int for reproducible results across function calls.
        See the :term:`Glossary <random_state>`.

        .. versionadded:: 0.23
            this parameter was previously hardcoded as 0.

    Returns
    -------
    cluster_centers_indices : ndarray of shape (n_clusters,)
        Index of clusters centers.

    labels : ndarray of shape (n_samples,)
        Cluster labels for each point.

    n_iter : int
        Number of iterations run. Returned only if `return_n_iter` is
        set to True.

    Notes
    -----
    For an example usage,
    see :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`.

    You may also check out
    :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`.

    When the algorithm does not converge, it will still return arrays of
    ``cluster_center_indices`` and labels if there are any exemplars/clusters;
    however, they may be degenerate and should be used with caution.

    When all training samples have equal similarities and equal preferences,
    the assignment of cluster centers and labels depends on the preference.
    If the preference is smaller than the similarities, a single cluster center
    and label ``0`` for every sample will be returned. Otherwise, every
    training sample becomes its own cluster center and is assigned a unique
    label.

    References
    ----------
    Brendan J. Frey and Delbert Dueck, "Clustering by Passing Messages
    Between Data Points", Science Feb. 2007

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.cluster import affinity_propagation
    >>> from sklearn.metrics.pairwise import euclidean_distances
    >>> X = np.array([[1, 2], [1, 4], [1, 0],
    ...               [4, 2], [4, 4], [4, 0]])
    >>> S = -euclidean_distances(X, squared=True)
    >>> cluster_centers_indices, labels = affinity_propagation(S, random_state=0)
    >>> cluster_centers_indices
    array([0, 3])
    >>> labels
    array([0, 0, 0, 1, 1, 1])
    """
    estimator = AffinityPropagation(
        damping=damping,
        max_iter=max_iter,
        convergence_iter=convergence_iter,
        copy=copy,
        preference=preference,
        affinity="precomputed",
        verbose=verbose,
        random_state=random_state,
    ).fit(S)

    if return_n_iter:
        return (
            estimator.cluster_centers_indices_,
            estimator.labels_,
            estimator.n_iter_,
        )
    else:
        return estimator.cluster_centers_indices_, estimator.labels_
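When `preference` is left at `None`, the estimator falls back to the median of the similarity matrix, as documented above. A small self-contained check of what that default works out to for the docstring's example data (plain NumPy, no scikit-learn import needed):

```python
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]], dtype=float)
# Equivalent of -euclidean_distances(X, squared=True): negative squared distances.
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

# The value used when preference=None (note the zero diagonal is included).
default_preference = np.median(S)
print(default_preference)  # -9.0 for this data
```

Setting a lower preference (e.g. `S.min()`) makes self-exemplars less attractive and so yields fewer clusters; a higher preference yields more.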
class AffinityPropagation(ClusterMixin, BaseEstimator):
    """Perform Affinity Propagation Clustering of data.

    Read more in the :ref:`User Guide <affinity_propagation>`.

    Parameters
    ----------
    damping : float, default=0.5
        Damping factor in the range `[0.5, 1.0)` is the extent to which the
        current value is maintained relative to incoming values (weighted
        1 - damping). This avoids numerical oscillations when updating these
        values (messages).

    max_iter : int, default=200
        Maximum number of iterations.

    convergence_iter : int, default=15
        Number of iterations with no change in the number
        of estimated clusters that stops the convergence.

    copy : bool, default=True
        Make a copy of input data.

    preference : array-like of shape (n_samples,) or float, default=None
        Preferences for each point - points with larger values of
        preferences are more likely to be chosen as exemplars. The number
        of exemplars, i.e. of clusters, is influenced by the input
        preferences value. If the preferences are not passed as arguments,
        they will be set to the median of the input similarities.

    affinity : {'euclidean', 'precomputed'}, default='euclidean'
        Which affinity to use. At the moment 'precomputed' and
        ``euclidean`` are supported. 'euclidean' uses the
        negative squared euclidean distance between points.

    verbose : bool, default=False
        Whether to be verbose.

    random_state : int, RandomState instance or None, default=None
        Pseudo-random number generator to control the starting state.
        Use an int for reproducible results across function calls.
        See the :term:`Glossary <random_state>`.

        .. versionadded:: 0.23
            this parameter was previously hardcoded as 0.

    Attributes
    ----------
    cluster_centers_indices_ : ndarray of shape (n_clusters,)
        Indices of cluster centers.

    cluster_centers_ : ndarray of shape (n_clusters, n_features)
        Cluster centers (if affinity != ``precomputed``).

    labels_ : ndarray of shape (n_samples,)
        Labels of each point.

    affinity_matrix_ : ndarray of shape (n_samples, n_samples)
        Stores the affinity matrix used in ``fit``.

    n_iter_ : int
        Number of iterations taken to converge.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    AgglomerativeClustering : Recursively merges the pair of
        clusters that minimally increases a given linkage distance.
    FeatureAgglomeration : Similar to AgglomerativeClustering,
        but recursively merges features instead of samples.
    KMeans : K-Means clustering.
    MiniBatchKMeans : Mini-Batch K-Means clustering.
    MeanShift : Mean shift clustering using a flat kernel.
    SpectralClustering : Apply clustering to a projection
        of the normalized Laplacian.

    Notes
    -----
    The algorithmic complexity of affinity propagation is quadratic
    in the number of points.

    When the algorithm does not converge, it will still return arrays of
    ``cluster_center_indices`` and labels if there are any exemplars/clusters;
    however, they may be degenerate and should be used with caution.

    When ``fit`` does not converge, ``cluster_centers_`` is still populated,
    however it may be degenerate. In such a case, proceed with caution.
    If ``fit`` does not converge and fails to produce any ``cluster_centers_``
    then ``predict`` will label every sample as ``-1``.

    When all training samples have equal similarities and equal preferences,
    the assignment of cluster centers and labels depends on the preference.
    If the preference is smaller than the similarities, ``fit`` will result in
    a single cluster center and label ``0`` for every sample. Otherwise, every
    training sample becomes its own cluster center and is assigned a unique
    label.

    References
    ----------
    Brendan J. Frey and Delbert Dueck, "Clustering by Passing Messages
    Between Data Points", Science Feb. 2007

    Examples
    --------
    >>> from sklearn.cluster import AffinityPropagation
    >>> import numpy as np
    >>> X = np.array([[1, 2], [1, 4], [1, 0],
    ...               [4, 2], [4, 4], [4, 0]])
    >>> clustering = AffinityPropagation(random_state=5).fit(X)
    >>> clustering
    AffinityPropagation(random_state=5)
    >>> clustering.labels_
    array([0, 0, 0, 1, 1, 1])
    >>> clustering.predict([[0, 0], [4, 4]])
    array([0, 1])
    >>> clustering.cluster_centers_
    array([[1, 2],
           [4, 2]])

    For an example usage,
    see :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`.

    For a comparison of Affinity Propagation with other clustering algorithms,
    see :ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`.
    """

    _parameter_constraints: dict = {
        "damping": [Interval(Real, 0.5, 1.0, closed="left")],
        "max_iter": [Interval(Integral, 1, None, closed="left")],
        "convergence_iter": [Interval(Integral, 1, None, closed="left")],
        "copy": ["boolean"],
        "preference": [
            "array-like",
            Interval(Real, None, None, closed="neither"),
            None,
        ],
        "affinity": [StrOptions({"euclidean", "precomputed"})],
        "verbose": ["verbose"],
        "random_state": ["random_state"],
    }

    def __init__(
        self,
        *,
        damping=0.5,
        max_iter=200,
        convergence_iter=15,
        copy=True,
        preference=None,
        affinity="euclidean",
        verbose=False,
        random_state=None,
    ):
        self.damping = damping
        self.max_iter = max_iter
        self.convergence_iter = convergence_iter
        self.copy = copy
        self.verbose = verbose
        self.preference = preference
        self.affinity = affinity
        self.random_state = random_state

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.input_tags.pairwise = self.affinity == "precomputed"
        tags.input_tags.sparse = self.affinity != "precomputed"
        return tags
    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X, y=None):
        """Fit the clustering from features, or affinity matrix.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features), or \
                array-like of shape (n_samples, n_samples)
            Training instances to cluster, or similarities / affinities between
            instances if ``affinity='precomputed'``. If a sparse feature matrix
            is provided, it will be converted into a sparse ``csr_matrix``.

        y : Ignored
            Not used, present here for API consistency by convention.

        Returns
        -------
        self
            Returns the instance itself.
        """
        if self.affinity == "precomputed":
            X = validate_data(self, X, copy=self.copy, force_writeable=True)
            self.affinity_matrix_ = X
        else:  # self.affinity == "euclidean"
            X = validate_data(self, X, accept_sparse="csr")
            self.affinity_matrix_ = -euclidean_distances(X, squared=True)

        if self.affinity_matrix_.shape[0] != self.affinity_matrix_.shape[1]:
            raise ValueError(
                "The matrix of similarities must be a square array. "
                f"Got {self.affinity_matrix_.shape} instead."
            )

        if self.preference is None:
            preference = np.median(self.affinity_matrix_)
        else:
            preference = self.preference
        preference = np.asarray(preference)

        random_state = check_random_state(self.random_state)

        (
            self.cluster_centers_indices_,
            self.labels_,
            self.n_iter_,
        ) = _affinity_propagation(
            self.affinity_matrix_,
            max_iter=self.max_iter,
            convergence_iter=self.convergence_iter,
            preference=preference,
            damping=self.damping,
            verbose=self.verbose,
            return_n_iter=True,
            random_state=random_state,
        )

        if self.affinity != "precomputed":
            self.cluster_centers_ = X[self.cluster_centers_indices_].copy()

        return self

    def predict(self, X):
        """Predict the closest cluster each sample in X belongs to.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            New data to predict. If a sparse matrix is provided, it will be
            converted into a sparse ``csr_matrix``.

        Returns
        -------
        labels : ndarray of shape (n_samples,)
            Cluster labels.
        """
        check_is_fitted(self)
        X = validate_data(self, X, reset=False, accept_sparse="csr")
        if not hasattr(self, "cluster_centers_"):
            raise ValueError(
                "Predict method is not supported when affinity='precomputed'."
            )

        if self.cluster_centers_.shape[0] > 0:
            with config_context(assume_finite=True):
                return pairwise_distances_argmin(X, self.cluster_centers_)
        else:
            warnings.warn(
                "This model does not have any cluster centers "
                "because affinity propagation did not converge. "
                "Labeling every sample as '-1'.",
                ConvergenceWarning,
            )
            return np.array([-1] * X.shape[0])

    def fit_predict(self, X, y=None):
        """Fit clustering from features or affinity matrix; return cluster labels.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features), or \
                array-like of shape (n_samples, n_samples)
            Training instances to cluster, or similarities / affinities between
            instances if ``affinity='precomputed'``. If a sparse feature matrix
            is provided, it will be converted into a sparse ``csr_matrix``.

        y : Ignored
            Not used, present here for API consistency by convention.

        Returns
        -------
        labels : ndarray of shape (n_samples,)
            Cluster labels.
        """
        return super().fit_predict(X, y)
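For illustration, what `predict` does once centers are learned is a plain nearest-center assignment. This sketch hard-codes the centers from the class docstring example rather than fitting a model, and computes `pairwise_distances_argmin` by hand with squared Euclidean distances:

```python
import numpy as np

# Centers as in the docstring example (clustering.cluster_centers_).
centers = np.array([[1.0, 2.0], [4.0, 2.0]])
X_new = np.array([[0.0, 0.0], [4.0, 4.0]])

# predict() assigns each sample the index of its nearest center; argmin over
# squared distances equals argmin over distances, so no sqrt is needed.
sq_dists = ((X_new[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
labels = sq_dists.argmin(axis=1)
print(labels)  # [0 1], matching clustering.predict([[0, 0], [4, 4]])
```

Note that `predict` is only available when `affinity='euclidean'`: with a precomputed affinity matrix the estimator never sees feature vectors, so there are no `cluster_centers_` to measure distances against.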