import warnings
from math import sqrt
from numbers import Integral, Real

import numpy as np
from scipy import sparse

from .._config import config_context
from ..base import (
    BaseEstimator,
    ClassNamePrefixFeaturesOutMixin,
    ClusterMixin,
    TransformerMixin,
    _fit_context,
)
from ..exceptions import ConvergenceWarning
from ..metrics import pairwise_distances_argmin
from ..metrics.pairwise import euclidean_distances
from ..utils._param_validation import Hidden, Interval, StrOptions
from ..utils.extmath import row_norms
from ..utils.validation import check_is_fitted, validate_data
from . import AgglomerativeClustering


def _iterate_sparse_X(X):
    """This little hack returns a densified row when iterating over a sparse
    matrix, instead of constructing a sparse matrix for every row that is
    expensive.
    """
    n_samples = X.shape[0]
    X_indices = X.indices
    X_data = X.data
    X_indptr = X.indptr

    for i in range(n_samples):
        row = np.zeros(X.shape[1])
        startptr, endptr = X_indptr[i], X_indptr[i + 1]
        nonzero_indices = X_indices[startptr:endptr]
        row[nonzero_indices] = X_data[startptr:endptr]
        yield row


def _split_node(node, threshold, branching_factor):
    """The node has to be split if there is no place for a new subcluster
    in the node.

    1. Two empty nodes and two empty subclusters are initialized.
    2. The pair of distant subclusters are found.
    3. The properties of the empty subclusters and nodes are updated
       according to the nearest distance between the subclusters to the
       pair of distant subclusters.
    4. The two nodes are set as children to the two subclusters.
    """
    new_subcluster1 = _CFSubcluster()
    new_subcluster2 = _CFSubcluster()
    new_node1 = _CFNode(
        threshold=threshold,
        branching_factor=branching_factor,
        is_leaf=node.is_leaf,
        n_features=node.n_features,
        dtype=node.init_centroids_.dtype,
    )
    new_node2 = _CFNode(
        threshold=threshold,
        branching_factor=branching_factor,
        is_leaf=node.is_leaf,
        n_features=node.n_features,
        dtype=node.init_centroids_.dtype,
    )
    new_subcluster1.child_ = new_node1
    new_subcluster2.child_ = new_node2

    if node.is_leaf:
        # Keep the doubly linked list of leaves consistent after the split.
        if node.prev_leaf_ is not None:
            node.prev_leaf_.next_leaf_ = new_node1
        new_node1.prev_leaf_ = node.prev_leaf_
        new_node1.next_leaf_ = new_node2
        new_node2.prev_leaf_ = new_node1
        new_node2.next_leaf_ = node.next_leaf_
        if node.next_leaf_ is not None:
            node.next_leaf_.prev_leaf_ = new_node2

    dist = euclidean_distances(
        node.centroids_, Y_norm_squared=node.squared_norm_, squared=True
    )
    n_clusters = dist.shape[0]

    farthest_idx = np.unravel_index(dist.argmax(), (n_clusters, n_clusters))
    node1_dist, node2_dist = dist[(farthest_idx,)]

    node1_closer = node1_dist < node2_dist
    # Make sure node1 is closest to itself even if all distances are equal.
    # This can only happen when all node.centroids_ are duplicates, leading
    # to all distances between centroids being zero.
    node1_closer[farthest_idx[0]] = True

    for idx, subcluster in enumerate(node.subclusters_):
        if node1_closer[idx]:
            new_node1.append_subcluster(subcluster)
            new_subcluster1.update(subcluster)
        else:
            new_node2.append_subcluster(subcluster)
            new_subcluster2.update(subcluster)
    return new_subcluster1, new_subcluster2


class _CFNode:
    """Each node in a CFTree is called a CFNode.

    A CFNode can hold at most ``branching_factor`` CFSubclusters.
    ``centroids_`` and ``squared_norm_`` are views into the pre-allocated
    ``init_centroids_`` and ``init_sq_norm_`` buffers, restricted to the
    subclusters currently stored in the node. Leaf nodes are chained through
    ``prev_leaf_`` and ``next_leaf_`` so that the final subclusters can be
    retrieved.
    """

    def __init__(self, *, threshold, branching_factor, is_leaf, n_features, dtype):
        self.threshold = threshold
        self.branching_factor = branching_factor
        self.is_leaf = is_leaf
        self.n_features = n_features

        # The list of subclusters, centroids and squared norms
        # to manipulate throughout.
        self.subclusters_ = []
        self.init_centroids_ = np.zeros(
            (branching_factor + 1, n_features), dtype=dtype
        )
        self.init_sq_norm_ = np.zeros((branching_factor + 1), dtype=dtype)
        self.squared_norm_ = []
        self.prev_leaf_ = None
        self.next_leaf_ = None

    def append_subcluster(self, subcluster):
        n_samples = len(self.subclusters_)
        self.subclusters_.append(subcluster)
        self.init_centroids_[n_samples] = subcluster.centroid_
        self.init_sq_norm_[n_samples] = subcluster.sq_norm_

        # Keep centroids_ and squared_norm_ as views, so that updating the
        # init_ buffers is sufficient.
        self.centroids_ = self.init_centroids_[: n_samples + 1, :]
        self.squared_norm_ = self.init_sq_norm_[: n_samples + 1]

    def update_split_subclusters(self, subcluster, new_subcluster1, new_subcluster2):
        """Remove a subcluster from a node and update it with the
        split subclusters.
        """
        ind = self.subclusters_.index(subcluster)
        self.subclusters_[ind] = new_subcluster1
        self.init_centroids_[ind] = new_subcluster1.centroid_
        self.init_sq_norm_[ind] = new_subcluster1.sq_norm_
        self.append_subcluster(new_subcluster2)

    def insert_cf_subcluster(self, subcluster):
        """Insert a new subcluster into the node."""
        if not self.subclusters_:
            self.append_subcluster(subcluster)
            return False

        threshold = self.threshold
        branching_factor = self.branching_factor
        # We need to find the closest subcluster among all the
        # subclusters so that we can insert our new subcluster.
        dist_matrix = np.dot(self.centroids_, subcluster.centroid_)
        dist_matrix *= -2.0
        dist_matrix += self.squared_norm_
        closest_index = np.argmin(dist_matrix)
        closest_subcluster = self.subclusters_[closest_index]

        # If the closest subcluster has a child, we need a recursive strategy.
        if closest_subcluster.child_ is not None:
            split_child = closest_subcluster.child_.insert_cf_subcluster(subcluster)

            if not split_child:
                # If the child need not be split, we can just update the
                # closest subcluster and its cached centroid and norm.
                closest_subcluster.update(subcluster)
                self.init_centroids_[closest_index] = closest_subcluster.centroid_
                self.init_sq_norm_[closest_index] = closest_subcluster.sq_norm_
                return False

            # The child was split: redistribute its subclusters and add a new
            # subcluster in this node to accommodate the new child.
            else:
                new_subcluster1, new_subcluster2 = _split_node(
                    closest_subcluster.child_, threshold, branching_factor
                )
                self.update_split_subclusters(
                    closest_subcluster, new_subcluster1, new_subcluster2
                )

                if len(self.subclusters_) > self.branching_factor:
                    return True
                return False

        # Leaf-level subcluster: try to absorb the new subcluster directly.
        else:
            merged = closest_subcluster.merge_subcluster(subcluster, self.threshold)
            if merged:
                self.init_centroids_[closest_index] = closest_subcluster.centroid_
                self.init_sq_norm_[closest_index] = closest_subcluster.sq_norm_
                return False

            # Not close enough to any existing subcluster, but there is still
            # room in this node, so simply append.
            elif len(self.subclusters_) < self.branching_factor:
                self.append_subcluster(subcluster)
                return False

            # No space left and no subcluster close enough: the node has to
            # be split.
            else:
                self.append_subcluster(subcluster)
                return True


class _CFSubcluster:
    """Each subcluster in a CFNode is called a CFSubcluster.

    A CFSubcluster can have a CFNode as its child.

    Parameters
    ----------
    linear_sum : ndarray of shape (n_features,), default=None
        Sample. This is kept optional to allow initialization of empty
        subclusters.

    Attributes
    ----------
    n_samples_ : int
        Number of samples that belong to each subcluster.

    linear_sum_ : ndarray
        Linear sum of all the samples in a subcluster. Prevents holding
        all sample data in memory.

    squared_sum_ : float
        Sum of the squared l2 norms of all samples belonging to a subcluster.

    centroid_ : ndarray of shape (n_features,)
        Centroid of the subcluster. Prevents recomputing of centroids when
        ``CFNode.centroids_`` is called.

    child_ : _CFNode
        Child node of the subcluster. Once a given _CFNode is set as the
        child of the CFSubcluster, it is set to ``self.child_``.

    sq_norm_ : float
        Squared norm of the subcluster centroid. Used to prevent recomputing
        when pairwise minimum distances are computed.
    """

    def __init__(self, *, linear_sum=None):
        if linear_sum is None:
            self.n_samples_ = 0
            self.squared_sum_ = 0.0
            self.centroid_ = self.linear_sum_ = 0
        else:
            self.n_samples_ = 1
            self.centroid_ = self.linear_sum_ = linear_sum
            self.squared_sum_ = self.sq_norm_ = np.dot(
                self.linear_sum_, self.linear_sum_
            )
        self.child_ = None

    def update(self, subcluster):
        self.n_samples_ += subcluster.n_samples_
        self.linear_sum_ += subcluster.linear_sum_
        self.squared_sum_ += subcluster.squared_sum_
        self.centroid_ = self.linear_sum_ / self.n_samples_
        self.sq_norm_ = np.dot(self.centroid_, self.centroid_)

    def merge_subcluster(self, nominee_cluster, threshold):
        """Check if a cluster is worthy enough to be merged. If
        yes then merge.
        """
        new_ss = self.squared_sum_ + nominee_cluster.squared_sum_
        new_ls = self.linear_sum_ + nominee_cluster.linear_sum_
        new_n = self.n_samples_ + nominee_cluster.n_samples_
        new_centroid = (1 / new_n) * new_ls
        new_sq_norm = np.dot(new_centroid, new_centroid)

        # The squared radius of the merged cluster is
        # sum_i ||x_i||^2 / n - ||centroid||^2.
        sq_radius = new_ss / new_n - new_sq_norm
        if sq_radius <= threshold**2:
            (
                self.n_samples_,
                self.linear_sum_,
                self.squared_sum_,
                self.centroid_,
                self.sq_norm_,
            ) = (new_n, new_ls, new_ss, new_centroid, new_sq_norm)
            return True
        return False

    @property
    def radius(self):
        """Return radius of the subcluster"""
        # Because of numerical issues, this could become negative.
        sq_radius = self.squared_sum_ / self.n_samples_ - self.sq_norm_
        return sqrt(max(0, sq_radius))


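# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the upstream scikit-learn module: a quick
# numerical check of the CF summary statistics kept by ``_CFSubcluster``.
# A subcluster only stores (n, linear sum, squared sum); its squared radius is
# recovered as squared_sum / n - ||centroid||^2, which should match the mean
# squared distance of the points to their centroid. Run this file with
# ``python -m sklearn.cluster._birch`` to execute the check.
if __name__ == "__main__":
    _rng = np.random.RandomState(0)
    _points = _rng.rand(10, 3)

    # Accumulate all points into one subcluster (copy the first row so the
    # in-place updates do not touch ``_points`` itself).
    _sub = _CFSubcluster(linear_sum=_points[0].copy())
    for _p in _points[1:]:
        _sub.update(_CFSubcluster(linear_sum=_p))

    # Radius from the CF summary versus the direct definition.
    _direct = np.sqrt(((_points - _points.mean(axis=0)) ** 2).sum(axis=1).mean())
    assert np.isclose(_sub.radius, _direct)

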
class Birch(
    ClassNamePrefixFeaturesOutMixin, ClusterMixin, TransformerMixin, BaseEstimator
):
    """Implements the BIRCH clustering algorithm.

    It is a memory-efficient, online-learning algorithm provided as an
    alternative to :class:`MiniBatchKMeans`. It constructs a tree data
    structure with the cluster centroids being read off the leaf. These can
    be either the final cluster centroids or can be provided as input to
    another clustering algorithm such as :class:`AgglomerativeClustering`.

    Read more in the :ref:`User Guide <birch>`.

    .. versionadded:: 0.16

    Parameters
    ----------
    threshold : float, default=0.5
        The radius of the subcluster obtained by merging a new sample and the
        closest subcluster should be lesser than the threshold. Otherwise a
        new subcluster is started. Setting this value to be very low promotes
        splitting and vice-versa.

    branching_factor : int, default=50
        Maximum number of CF subclusters in each node. If a new sample enters
        such that the number of subclusters exceeds the branching_factor then
        that node is split into two nodes with the subclusters redistributed
        in each. The parent subcluster of that node is removed and two new
        subclusters are added as parents of the 2 split nodes.

    n_clusters : int, instance of sklearn.cluster model or None, default=3
        Number of clusters after the final clustering step, which treats the
        subclusters from the leaves as new samples.

        - `None` : the final clustering step is not performed and the
          subclusters are returned as they are.

        - :mod:`sklearn.cluster` Estimator : If a model is provided, the model
          is fit treating the subclusters as new samples and the initial data
          is mapped to the label of the closest subcluster.

        - `int` : the model fit is :class:`AgglomerativeClustering` with
          `n_clusters` set to be equal to the int.

    compute_labels : bool, default=True
        Whether or not to compute labels for each fit.

    copy : bool, default=True
        Whether or not to make a copy of the given data. If set to False,
        the initial data will be overwritten.

        .. deprecated:: 1.6
            `copy` was deprecated in 1.6 and will be removed in 1.8. It has no
            effect as the estimator does not perform in-place operations on
            the input data.

    Attributes
    ----------
    root_ : _CFNode
        Root of the CFTree.

    dummy_leaf_ : _CFNode
        Start pointer to all the leaves.

    subcluster_centers_ : ndarray
        Centroids of all subclusters read directly from the leaves.

    subcluster_labels_ : ndarray
        Labels assigned to the centroids of the subclusters after
        they are clustered globally.

    labels_ : ndarray of shape (n_samples,)
        Array of labels assigned to the input data.
        If partial_fit is used instead of fit, they are assigned to the
        last batch of data.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    MiniBatchKMeans : Alternative implementation that does incremental updates
        of the centers' positions using mini-batches.

    Notes
    -----
    The tree data structure consists of nodes with each node consisting of
    a number of subclusters. The maximum number of subclusters in a node
    is determined by the branching factor. Each subcluster maintains a
    linear sum, squared sum and the number of samples in that subcluster.
    In addition, each subcluster can also have a node as its child, if the
    subcluster is not a member of a leaf node.

    For a new point entering the root, it is merged with the subcluster
    closest to it and the linear sum, squared sum and the number of samples
    of that subcluster are updated. This is done recursively till the
    properties of the leaf node are updated.

    See :ref:`sphx_glr_auto_examples_cluster_plot_birch_vs_minibatchkmeans.py`
    for a comparison with :class:`~sklearn.cluster.MiniBatchKMeans`.

    References
    ----------
    * Tian Zhang, Raghu Ramakrishnan, Miron Livny
      BIRCH: An efficient data clustering method for large databases.
      https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf

    * Roberto Perdisci
      JBirch - Java implementation of BIRCH clustering algorithm
      https://code.google.com/archive/p/jbirch

    Examples
    --------
    >>> from sklearn.cluster import Birch
    >>> X = [[0, 1], [0.3, 1], [-0.3, 1], [0, -1], [0.3, -1], [-0.3, -1]]
    >>> brc = Birch(n_clusters=None)
    >>> brc.fit(X)
    Birch(n_clusters=None)
    >>> brc.predict(X)
    array([0, 0, 0, 1, 1, 1])

    For a comparison of the BIRCH clustering algorithm with other clustering
    algorithms, see
    :ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`.
    """

    _parameter_constraints: dict = {
        "threshold": [Interval(Real, 0.0, None, closed="neither")],
        "branching_factor": [Interval(Integral, 1, None, closed="neither")],
        "n_clusters": [None, ClusterMixin, Interval(Integral, 1, None, closed="left")],
        "compute_labels": ["boolean"],
        "copy": ["boolean", Hidden(StrOptions({"deprecated"}))],
    }

    def __init__(
        self,
        *,
        threshold=0.5,
        branching_factor=50,
        n_clusters=3,
        compute_labels=True,
        copy="deprecated",
    ):
        self.threshold = threshold
        self.branching_factor = branching_factor
        self.n_clusters = n_clusters
        self.compute_labels = compute_labels
        self.copy = copy

    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X, y=None):
        """
        Build a CF Tree for the input data.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Input data.

        y : Ignored
            Not used, present here for API consistency by convention.

        Returns
        -------
        self
            Fitted estimator.
        """
        return self._fit(X, partial=False)

    def _fit(self, X, partial):
        has_root = getattr(self, "root_", None)
        first_call = not (partial and has_root)

        if self.copy != "deprecated" and first_call:
            warnings.warn(
                "`copy` was deprecated in 1.6 and will be removed in 1.8 since "
                "it has no effect internally. Simply leave this parameter to its "
                "default value to avoid this warning.",
                FutureWarning,
            )

        X = validate_data(
            self,
            X,
            accept_sparse="csr",
            reset=first_call,
            dtype=[np.float64, np.float32],
        )
        threshold = self.threshold
        branching_factor = self.branching_factor

        n_samples, n_features = X.shape

        # If partial_fit is called for the first time or fit is called, we
        # start a new tree.
        if first_call:
            # The first root is the leaf. Manipulate this object throughout.
            self.root_ = _CFNode(
                threshold=threshold,
                branching_factor=branching_factor,
                is_leaf=True,
                n_features=n_features,
                dtype=X.dtype,
            )

            # To enable getting back subclusters.
            self.dummy_leaf_ = _CFNode(
                threshold=threshold,
                branching_factor=branching_factor,
                is_leaf=True,
                n_features=n_features,
                dtype=X.dtype,
            )
            self.dummy_leaf_.next_leaf_ = self.root_
            self.root_.prev_leaf_ = self.dummy_leaf_

        # Cannot vectorize: iterate sample by sample, densifying the rows of
        # a sparse matrix on the fly.
        if not sparse.issparse(X):
            iter_func = iter
        else:
            iter_func = _iterate_sparse_X

        for sample in iter_func(X):
            subcluster = _CFSubcluster(linear_sum=sample)
            split = self.root_.insert_cf_subcluster(subcluster)

            if split:
                new_subcluster1, new_subcluster2 = _split_node(
                    self.root_, threshold, branching_factor
                )
                del self.root_
                self.root_ = _CFNode(
                    threshold=threshold,
                    branching_factor=branching_factor,
                    is_leaf=False,
                    n_features=n_features,
                    dtype=X.dtype,
                )
                self.root_.append_subcluster(new_subcluster1)
                self.root_.append_subcluster(new_subcluster2)

        centroids = np.concatenate([leaf.centroids_ for leaf in self._get_leaves()])
        self.subcluster_centers_ = centroids
        self._n_features_out = self.subcluster_centers_.shape[0]

        self._global_clustering(X)
        return self

    def _get_leaves(self):
        """
        Retrieve the leaves of the CF Node.

        Returns
        -------
        leaves : list of shape (n_leaves,)
            List of the leaf nodes.
        """
        leaf_ptr = self.dummy_leaf_.next_leaf_
        leaves = []
        while leaf_ptr is not None:
            leaves.append(leaf_ptr)
            leaf_ptr = leaf_ptr.next_leaf_
        return leaves

    @_fit_context(prefer_skip_nested_validation=True)
    def partial_fit(self, X=None, y=None):
        """
        Online learning. Prevents rebuilding of CFTree from scratch.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features), \
                default=None
            Input data. If X is not provided, only the global clustering
            step is done.

        y : Ignored
            Not used, present here for API consistency by convention.

        Returns
        -------
        self
            Fitted estimator.
        """
        if X is None:
            # Perform just the final global clustering step.
            self._global_clustering()
            return self
        else:
            return self._fit(X, partial=True)

    def predict(self, X):
        """
        Predict data using the ``centroids_`` of subclusters.

        Avoid computation of the row norms of X.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Input data.

        Returns
        -------
        labels : ndarray of shape (n_samples,)
            Labelled data.
        """
        check_is_fitted(self)
        X = validate_data(self, X, accept_sparse="csr", reset=False)
        return self._predict(X)

    def _predict(self, X):
        """Predict data using the ``centroids_`` of subclusters."""
        kwargs = {"Y_norm_squared": self._subcluster_norms}

        with config_context(assume_finite=True):
            argmin_idx = pairwise_distances_argmin(
                X, self.subcluster_centers_, metric_kwargs=kwargs
            )
        return self.subcluster_labels_[argmin_idx]

    def transform(self, X):
        """
        Transform X into subcluster centroids dimension.

        Each dimension represents the distance from the sample point to each
        cluster centroid.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Input data.

        Returns
        -------
        X_trans : {array-like, sparse matrix} of shape (n_samples, n_clusters)
            Transformed data.
        """
        check_is_fitted(self)
        X = validate_data(self, X, accept_sparse="csr", reset=False)
        with config_context(assume_finite=True):
            return euclidean_distances(X, self.subcluster_centers_)

    def _global_clustering(self, X=None):
        """
        Global clustering for the subclusters obtained after fitting.
        """
        clusterer = self.n_clusters
        centroids = self.subcluster_centers_
        compute_labels = (X is not None) and self.compute_labels

        # Preprocessing for the global clustering.
        not_enough_centroids = False
        if isinstance(clusterer, Integral):
            clusterer = AgglomerativeClustering(n_clusters=self.n_clusters)
            # There is no need to perform the global clustering step.
            if len(centroids) < self.n_clusters:
                not_enough_centroids = True

        # To use in predict to avoid recalculation.
        self._subcluster_norms = row_norms(self.subcluster_centers_, squared=True)

        if clusterer is None or not_enough_centroids:
            self.subcluster_labels_ = np.arange(len(centroids))
            if not_enough_centroids:
                warnings.warn(
                    "Number of subclusters found (%d) by BIRCH is less "
                    "than (%d). Decrease the threshold."
                    % (len(centroids), self.n_clusters),
                    ConvergenceWarning,
                )
        else:
            # The global clustering step that clusters the subclusters of
            # the leaves. It assumes the centroids of the subclusters as
            # samples and finds the final centroids.
            self.subcluster_labels_ = clusterer.fit_predict(self.subcluster_centers_)

        if compute_labels:
            self.labels_ = self._predict(X)
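

# ---------------------------------------------------------------------------
# Illustrative usage sketch, not part of the upstream scikit-learn module.
# It mirrors the doctest in the ``Birch`` docstring, but feeds synthetic data
# in two batches through ``partial_fit`` to show the online construction of
# the CF tree. Run with ``python -m sklearn.cluster._birch``.
if __name__ == "__main__":
    rng = np.random.RandomState(42)
    # Two well-separated 2-D blobs, delivered in two batches.
    batch1 = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
    batch2 = rng.normal(loc=5.0, scale=0.3, size=(50, 2))

    brc = Birch(threshold=0.5, branching_factor=50, n_clusters=2)
    brc.partial_fit(batch1)
    brc.partial_fit(batch2)

    print("number of subclusters:", brc.subcluster_centers_.shape[0])
    print("labels seen in batch2:", np.unique(brc.predict(batch2)))
    # transform() gives the distance of each sample to every subcluster centroid.
    print("transform shape:", brc.transform(batch1).shape)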