L iddZddlZddlZddlmZddlZddlmZm Z ddl m cm Z ddl mZmZmZmZmZmZddlmZddlmcmZdddd d d d d ZdZgdZGddeZdZdZ edddgZ!e!dZ"e!dZ#e!dZ$e!dZ%e!dZ&e!dZ'e!dZ(e!dad Z)Gd!d"Z*e*dZ+e,e*Z-d#Z.edd$%dbd&Z/edd'dcd(Z0e!ddd)Z1e!ded*Z2e!dfd+Z3e!d,Z4ed-Z5ed.Z6ed/d0g1dgd2Z7dhd3Z8ed/d0g1dgd4Z9 dhd5Z:dddd6d7Z;ed8Z<ed9Z=edddd:did;Z>edddd: djd<Z?e!d=Z@d>d?d@dAdBdCdDd ejd iZBd>ddEdFejdGiZCeDeBjZFeFjeDeCjZHeHjdHZIdIZJdJZK dkdLZLdMZMeDeMaNdNZOedddO dldQZPdRZQdSZRdTZSdUZTdVZUejddPdddddWdXddddddddddddddKfdYZVedd/d0gZd[ZWe!d\ZXe!d]ZYe!d^ZZeddddg_d`Z[y)ma] Hierarchical clustering (:mod:`scipy.cluster.hierarchy`) ======================================================== .. currentmodule:: scipy.cluster.hierarchy These functions cut hierarchical clusterings into flat clusterings or find the roots of the forest formed by a cut by providing the flat cluster ids of each observation. .. autosummary:: :toctree: generated/ fcluster fclusterdata leaders These are routines for agglomerative clustering. .. autosummary:: :toctree: generated/ linkage single complete average weighted centroid median ward These routines compute statistics on hierarchies. .. autosummary:: :toctree: generated/ cophenet from_mlab_linkage inconsistent maxinconsts maxdists maxRstat to_mlab_linkage Routines for visualizing flat clusters. .. autosummary:: :toctree: generated/ dendrogram These are data structures and routines for representing hierarchies as tree objects. .. autosummary:: :toctree: generated/ ClusterNode leaves_list to_tree cut_tree optimal_leaf_ordering These are predicates for checking the validity of linkage and inconsistency matrices as well as for checking isomorphism of two flat cluster assignments. .. autosummary:: :toctree: generated/ is_valid_im is_valid_linkage is_isomorphic is_monotonic correspond num_obs_linkage Utility routines for plotting: .. autosummary:: :toctree: generated/ set_link_color_palette Utility classes: .. autosummary:: :toctree: generated/ DisjointSet -- data structure for incremental connectivity queries N)deque) _hierarchy_optimal_leaf_ordering)_asarrayarray_namespaceis_dask is_lazy_arrayxp_capabilitiesxp_copy) DisjointSet)singlecompleteaveragecentroidmedianwardweighted)rrr) ClusterNoder rrrcophenet correspondcut_tree dendrogramfcluster fclusterdatafrom_mlab_linkage inconsistent is_isomorphic is_monotonic is_valid_imis_valid_linkageleaders leaves_listlinkagemaxRstatmaxdists maxinconstsrnum_obs_linkageoptimal_leaf_orderingset_link_color_paletterto_mlab_linkageto_treerrc eZdZy)ClusterWarningN)__name__ __module__ __qualname__]/mnt/ssd/data/python-lab/Trading/venv/lib/python3.12/site-packages/scipy/cluster/hierarchy.pyr3r3sr8r3cBtjd|tdy)Nzscipy.cluster: r stacklevel)warningswarnr3)ss r9_warningr@s MMOA3'AFr8cNt|j||jS)Ndtype)intasarrayint64)arrxps r9 int_floorrIs  rzz#RXXz. //r8Tz Cython code) dask.arrayz merges chunks)cpu_onlyreasonr=ct|ddS)aN Perform single/min/nearest linkage on the condensed distance matrix ``y``. Parameters ---------- y : ndarray The upper triangular of the distance matrix. The result of ``pdist`` is returned in this form. Returns ------- Z : ndarray The linkage matrix. See Also -------- linkage : for advanced creation of hierarchical clusterings. scipy.spatial.distance.pdist : pairwise distance metrics Examples -------- >>> from scipy.cluster.hierarchy import single, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = single(y) >>> Z array([[ 0., 1., 1., 2.], [ 2., 12., 1., 3.], [ 3., 4., 1., 2.], [ 5., 14., 1., 3.], [ 6., 7., 1., 2.], [ 8., 16., 1., 3.], [ 9., 10., 1., 2.], [11., 18., 1., 3.], [13., 15., 2., 6.], [17., 20., 2., 9.], [19., 21., 2., 12.]]) The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 7, 8, 9, 10, 11, 12, 4, 5, 6, 1, 2, 3], dtype=int32) >>> fcluster(Z, 1, criterion='distance') array([3, 3, 3, 4, 4, 4, 2, 2, 2, 1, 1, 1], dtype=int32) >>> fcluster(Z, 2, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. r euclideanmethodmetricr)ys r9rrsX 1Xk ::r8ct|ddS)a Perform complete/max/farthest point linkage on a condensed distance matrix. Parameters ---------- y : ndarray The upper triangular of the distance matrix. The result of ``pdist`` is returned in this form. Returns ------- Z : ndarray A linkage matrix containing the hierarchical clustering. See the `linkage` function documentation for more information on its structure. See Also -------- linkage : for advanced creation of hierarchical clusterings. scipy.spatial.distance.pdist : pairwise distance metrics Examples -------- >>> from scipy.cluster.hierarchy import complete, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = complete(y) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.41421356, 3. ], [ 5. , 13. , 1.41421356, 3. ], [ 8. , 14. , 1.41421356, 3. ], [11. , 15. , 1.41421356, 3. ], [16. , 17. , 4.12310563, 6. ], [18. , 19. , 4.12310563, 6. ], [20. , 21. , 5.65685425, 12. ]]) The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=int32) >>> fcluster(Z, 1.5, criterion='distance') array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=int32) >>> fcluster(Z, 4.5, criterion='distance') array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2], dtype=int32) >>> fcluster(Z, 6, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. rrNrOrRrSs r9rrs` 1Z <>> from scipy.cluster.hierarchy import average, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = average(y) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.20710678, 3. ], [ 5. , 13. , 1.20710678, 3. ], [ 8. , 14. , 1.20710678, 3. ], [11. , 15. , 1.20710678, 3. ], [16. , 17. , 3.39675184, 6. ], [18. , 19. , 3.39675184, 6. ], [20. , 21. , 4.09206523, 12. ]]) The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=int32) >>> fcluster(Z, 1.5, criterion='distance') array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=int32) >>> fcluster(Z, 4, criterion='distance') array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2], dtype=int32) >>> fcluster(Z, 6, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. rrNrOrRrSs r9rrRs` 1Y{ ;;r8ct|ddS)a Perform weighted/WPGMA linkage on the condensed distance matrix. See `linkage` for more information on the return structure and algorithm. Parameters ---------- y : ndarray The upper triangular of the distance matrix. The result of ``pdist`` is returned in this form. Returns ------- Z : ndarray A linkage matrix containing the hierarchical clustering. See `linkage` for more information on its structure. See Also -------- linkage : for advanced creation of hierarchical clusterings. scipy.spatial.distance.pdist : pairwise distance metrics Examples -------- >>> from scipy.cluster.hierarchy import weighted, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = weighted(y) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 9. , 11. , 1. , 2. ], [ 2. , 12. , 1.20710678, 3. ], [ 8. , 13. , 1.20710678, 3. ], [ 5. , 14. , 1.20710678, 3. ], [10. , 15. , 1.20710678, 3. ], [18. , 19. , 3.05595762, 6. ], [16. , 17. , 3.32379407, 6. ], [20. , 21. , 4.06357713, 12. ]]) The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 7, 8, 9, 1, 2, 3, 10, 11, 12, 4, 6, 5], dtype=int32) >>> fcluster(Z, 1.5, criterion='distance') array([3, 3, 3, 1, 1, 1, 4, 4, 4, 2, 2, 2], dtype=int32) >>> fcluster(Z, 4, criterion='distance') array([2, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 1], dtype=int32) >>> fcluster(Z, 6, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. rrNrOrRrSs r9rrsf 1Z <>> from scipy.cluster.hierarchy import centroid, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = centroid(y) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 2. , 12. , 1.11803399, 3. ], [ 5. , 13. , 1.11803399, 3. ], [ 8. , 15. , 1.11803399, 3. ], [11. , 14. , 1.11803399, 3. ], [18. , 19. , 3.33333333, 6. ], [16. , 17. , 3.33333333, 6. ], [20. , 21. , 3.33333333, 12. ]]) # may vary The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6], dtype=int32) # may vary >>> fcluster(Z, 1.1, criterion='distance') array([5, 5, 6, 7, 7, 8, 1, 1, 2, 3, 3, 4], dtype=int32) # may vary >>> fcluster(Z, 2, criterion='distance') array([3, 3, 3, 4, 4, 4, 1, 1, 1, 2, 2, 2], dtype=int32) # may vary >>> fcluster(Z, 4, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. rrNrOrRrSs r9rrsH 1Z <>> from scipy.cluster.hierarchy import median, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = median(y) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 2. , 12. , 1.11803399, 3. ], [ 5. , 13. , 1.11803399, 3. ], [ 8. , 15. , 1.11803399, 3. ], [11. , 14. , 1.11803399, 3. ], [18. , 19. , 3. , 6. ], [16. , 17. , 3.5 , 6. ], [20. , 21. , 3.25 , 12. ]]) The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6], dtype=int32) >>> fcluster(Z, 1.1, criterion='distance') array([5, 5, 6, 7, 7, 8, 1, 1, 2, 3, 3, 4], dtype=int32) >>> fcluster(Z, 2, criterion='distance') array([3, 3, 3, 4, 4, 4, 1, 1, 1, 2, 2, 2], dtype=int32) >>> fcluster(Z, 4, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. rrNrOrRrSs r9rrbsH 1Xk ::r8ct|ddS)a Perform Ward's linkage on a condensed distance matrix. See `linkage` for more information on the return structure and algorithm. The following are common calling conventions: 1. ``Z = ward(y)`` Performs Ward's linkage on the condensed distance matrix ``y``. 2. ``Z = ward(X)`` Performs Ward's linkage on the observation matrix ``X`` using Euclidean distance as the distance metric. Parameters ---------- y : ndarray A condensed distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that ``pdist`` returns. Alternatively, a collection of m observation vectors in n dimensions may be passed as an m by n array. Returns ------- Z : ndarray The hierarchical clustering encoded as a linkage matrix. See `linkage` for more information on the return structure and algorithm. See Also -------- linkage : for advanced creation of hierarchical clusterings. scipy.spatial.distance.pdist : pairwise distance metrics Examples -------- >>> from scipy.cluster.hierarchy import ward, fcluster >>> from scipy.spatial.distance import pdist First, we need a toy dataset to play with:: x x x x x x x x x x x x >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] Then, we get a condensed distance matrix from this dataset: >>> y = pdist(X) Finally, we can perform the clustering: >>> Z = ward(y) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) The linkage matrix ``Z`` represents a dendrogram - see `scipy.cluster.hierarchy.linkage` for a detailed explanation of its contents. We can use `scipy.cluster.hierarchy.fcluster` to see to which cluster each initial point would belong given a distance threshold: >>> fcluster(Z, 0.9, criterion='distance') array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=int32) >>> fcluster(Z, 1.1, criterion='distance') array([1, 1, 2, 3, 3, 4, 5, 5, 6, 7, 7, 8], dtype=int32) >>> fcluster(Z, 3, criterion='distance') array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=int32) >>> fcluster(Z, 9, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) Also, `scipy.cluster.hierarchy.dendrogram` can be used to generate a plot of the dendrogram. rrNrOrRrSs r9rrsB 1VK 88r8Fc 4 t|}t|d|j|}t|}tvrt dt vr%|dk7r |jdk(rdd}t ||jdk(rtj|d d n|jdk(r|s|jd |jdk(r|jtj|jj|d r_|j|d k\rK|jtj||j rt#j$d t&dtj(||}n t d|s+|j|j+|s t dtj,| t  fd}tj.||| dz df|jd |}|r t1||S|S)aH! Perform hierarchical/agglomerative clustering. The input y may be either a 1-D condensed distance matrix or a 2-D array of observation vectors. If y is a 1-D condensed distance matrix, then y must be a :math:`\binom{n}{2}` sized vector, where n is the number of original observations paired in the distance matrix. The behavior of this function is very similar to the MATLAB linkage function. A :math:`(n-1)` by 4 matrix ``Z`` is returned. At the :math:`i`-th iteration, clusters with indices ``Z[i, 0]`` and ``Z[i, 1]`` are combined to form cluster :math:`n + i`. A cluster with an index less than :math:`n` corresponds to one of the :math:`n` original observations. The distance between clusters ``Z[i, 0]`` and ``Z[i, 1]`` is given by ``Z[i, 2]``. The fourth value ``Z[i, 3]`` represents the number of original observations in the newly formed cluster. The following linkage methods are used to compute the distance :math:`d(s, t)` between two clusters :math:`s` and :math:`t`. The algorithm begins with a forest of clusters that have yet to be used in the hierarchy being formed. When two clusters :math:`s` and :math:`t` from this forest are combined into a single cluster :math:`u`, :math:`s` and :math:`t` are removed from the forest, and :math:`u` is added to the forest. When only one cluster remains in the forest, the algorithm stops, and this cluster becomes the root. A distance matrix is maintained at each iteration. The ``d[i,j]`` entry corresponds to the distance between cluster :math:`i` and :math:`j` in the original forest. At each iteration, the algorithm must update the distance matrix to reflect the distance of the newly formed cluster u with the remaining clusters in the forest. Suppose there are :math:`|u|` original observations :math:`u[0], \ldots, u[|u|-1]` in cluster :math:`u` and :math:`|v|` original objects :math:`v[0], \ldots, v[|v|-1]` in cluster :math:`v`. Recall, :math:`s` and :math:`t` are combined to form cluster :math:`u`. Let :math:`v` be any remaining cluster in the forest that is not :math:`u`. The following are methods for calculating the distance between the newly formed cluster :math:`u` and each :math:`v`. * method='single' assigns .. math:: d(u,v) = \min(dist(u[i],v[j])) for all points :math:`i` in cluster :math:`u` and :math:`j` in cluster :math:`v`. This is also known as the Nearest Point Algorithm. * method='complete' assigns .. math:: d(u, v) = \max(dist(u[i],v[j])) for all points :math:`i` in cluster u and :math:`j` in cluster :math:`v`. This is also known by the Farthest Point Algorithm or Voor Hees Algorithm. * method='average' assigns .. math:: d(u,v) = \sum_{ij} \frac{d(u[i], v[j])} {(|u|*|v|)} for all points :math:`i` and :math:`j` where :math:`|u|` and :math:`|v|` are the cardinalities of clusters :math:`u` and :math:`v`, respectively. This is also called the UPGMA algorithm. * method='weighted' assigns .. math:: d(u,v) = (dist(s,v) + dist(t,v))/2 where cluster u was formed with cluster s and t and v is a remaining cluster in the forest (also called WPGMA). * method='centroid' assigns .. math:: dist(s,t) = ||c_s-c_t||_2 where :math:`c_s` and :math:`c_t` are the centroids of clusters :math:`s` and :math:`t`, respectively. When two clusters :math:`s` and :math:`t` are combined into a new cluster :math:`u`, the new centroid is computed over all the original objects in clusters :math:`s` and :math:`t`. The distance then becomes the Euclidean distance between the centroid of :math:`u` and the centroid of a remaining cluster :math:`v` in the forest. This is also known as the UPGMC algorithm. * method='median' assigns :math:`d(s,t)` like the ``centroid`` method. When two clusters :math:`s` and :math:`t` are combined into a new cluster :math:`u`, the average of centroids s and t give the new centroid :math:`u`. This is also known as the WPGMC algorithm. * method='ward' uses the Ward variance minimization algorithm. The new entry :math:`d(u,v)` is computed as follows, .. math:: d(u,v) = \sqrt{\frac{|v|+|s|} {T}d(v,s)^2 + \frac{|v|+|t|} {T}d(v,t)^2 - \frac{|v|} {T}d(s,t)^2} where :math:`u` is the newly joined cluster consisting of clusters :math:`s` and :math:`t`, :math:`v` is an unused cluster in the forest, :math:`T=|v|+|s|+|t|`, and :math:`|*|` is the cardinality of its argument. This is also known as the incremental algorithm. Warning: When the minimum distance pair in the forest is chosen, there may be two or more pairs with the same minimum distance. This implementation may choose a different minimum than the MATLAB version. Parameters ---------- y : ndarray A condensed distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that ``pdist`` returns. Alternatively, a collection of :math:`m` observation vectors in :math:`n` dimensions may be passed as an :math:`m` by :math:`n` array. All elements of the condensed distance matrix must be finite, i.e., no NaNs or infs. method : str, optional The linkage algorithm to use. See the ``Linkage Methods`` section below for full descriptions. metric : str or function, optional The distance metric to use in the case that y is a collection of observation vectors; ignored otherwise. See the ``pdist`` function for a list of valid distance metrics. A custom distance function can also be used. optimal_ordering : bool, optional If True, the linkage matrix will be reordered so that the distance between successive leaves is minimal. This results in a more intuitive tree structure when the data are visualized. defaults to False, because this algorithm can be slow, particularly on large datasets [2]_. See also the `optimal_leaf_ordering` function. .. versionadded:: 1.0.0 Returns ------- Z : ndarray The hierarchical clustering encoded as a linkage matrix. Notes ----- 1. For method 'single', an optimized algorithm based on minimum spanning tree is implemented. It has time complexity :math:`O(n^2)`. For methods 'complete', 'average', 'weighted' and 'ward', an algorithm called nearest-neighbors chain is implemented. It also has time complexity :math:`O(n^2)`. For other methods, a naive algorithm is implemented with :math:`O(n^3)` time complexity. All algorithms use :math:`O(n^2)` memory. Refer to [1]_ for details about the algorithms. 2. Methods 'centroid', 'median', and 'ward' are correctly defined only if Euclidean pairwise metric is used. If `y` is passed as precomputed pairwise distances, then it is the user's responsibility to assure that these distances are in fact Euclidean, otherwise the produced result will be incorrect. See Also -------- scipy.spatial.distance.pdist : pairwise distance metrics References ---------- .. [1] Daniel Mullner, "Modern hierarchical, agglomerative clustering algorithms", :arXiv:`1109.2378v1`. .. [2] Ziv Bar-Joseph, David K. Gifford, Tommi S. Jaakkola, "Fast optimal leaf ordering for hierarchical clustering", 2001. Bioinformatics :doi:`10.1093/bioinformatics/17.suppl_1.S22` Examples -------- >>> from scipy.cluster.hierarchy import dendrogram, linkage >>> from matplotlib import pyplot as plt >>> X = [[i] for i in [2, 8, 0, 4, 1, 9, 9, 0]] >>> Z = linkage(X, 'ward') >>> fig = plt.figure(figsize=(25, 10)) >>> dn = dendrogram(Z) >>> Z = linkage(X, 'single') >>> fig = plt.figure(figsize=(25, 10)) >>> dn = dendrogram(Z) >>> plt.show() CorderrCrHzInvalid method: rNrz`method=z.` requires the distance metric to be EuclideanrTrTthrownamerkThe symmetric non-negative hollow observation matrix looks suspiciously like an uncondensed distance matrixr;`y` must be 1 or 2 dimensional.>The condensed distance matrix must contain only finite values.c|r3tjtj|s tddk(rt j |Sdvrt j |St j|S)Nrdr)rrrr)npallisfinite ValueErrorrmst_single_linkagenn_chain fast_linkage)rTvalidaterP method_codens r9 cy_linkagezlinkage..cy_linkages{ BFF2;;q>2-. . X 00A6 6 B B&&q![9 9**1a= =r8rrmshaperCas_numpyrH)rrfloat64r _LINKAGE_METHODSri_EUCLIDEAN_METHODSndimdistance is_valid_yrrrgxpxiscloselinalgdiagonalTr=r>r3pdistrh num_obs_y lazy_applyr.) rTrPrQoptimal_orderingrHlazymsgrpresultrnros ` @@r9r)r)-s^  B#RZZB7A  D %%+F8455 ##+(=!&&A+ NOovv{AT4 1qwwqz1s{{299#5#5a#8!<=qAv266#++a*=#> MM,)Q 8 NN1f %:;; r{{1~.*+ + 1A"6*K >^^JD#$q5!*BJJ%)b2F$VQ// r8cTeZdZdZddZdZdZdZdZdZ d Z d Z d Z d fd Z y)ra1 A tree node class for representing a cluster. Leaf nodes correspond to original observations, while non-leaf nodes correspond to non-singleton clusters. The `to_tree` function converts a matrix returned by the linkage function into an easy-to-use tree representation. All parameter names are also attributes. Parameters ---------- id : int The node id. left : ClusterNode instance, optional The left child tree node. right : ClusterNode instance, optional The right child tree node. dist : float, optional Distance for this cluster in the linkage matrix. count : int, optional The number of samples in this cluster. See Also -------- to_tree : for converting a linkage matrix ``Z`` into a tree object. Nc&|dkr td|dkr td||| | td|dkr td||_||_||_||_|j||_y|j |j z|_y)NrzThe id must be non-negative.z"The distance must be non-negative.zIOnly full or proper binary trees are permitted. This node has one child.rz9A cluster must contain at least one original observation.)riidleftrightdistcount)selfrrrrrs r9__init__zClusterNode.__init__Qs 6;< < !8AB B LU.  :; ; 19,- -   99 DJekk1DJr8ct|tstdt||j|jkSNz"Can't compare ClusterNode to type  isinstancerrityperrnodes r9__lt__zClusterNode.__lt__f?$ ,((,T |56 6yy499$$r8ct|tstdt||j|jkDSrrrs r9__gt__zClusterNode.__gt__lrr8ct|tstdt||j|jk(Srrrs r9__eq__zClusterNode.__eq__rs?$ ,((,T |56 6yyDII%%r8c|jS)aK The identifier of the target node. For ``0 <= i < n``, `i` corresponds to original observation i. For ``n <= i < 2n-1``, `i` corresponds to non-singleton cluster formed at iteration ``i-n``. Returns ------- id : int The identifier of the target node. rrs r9get_idzClusterNode.get_idxs wwr8c|jS)a The number of leaf nodes (original observations) belonging to the cluster node nd. If the target node is a leaf, 1 is returned. Returns ------- get_count : int The number of leaf nodes below the target node. )rrs r9 get_countzClusterNode.get_countszzr8c|jS)z Return a reference to the left child tree object. Returns ------- left : ClusterNode The left child of the target node. If the node is a leaf, None is returned. rrs r9get_leftzClusterNode.get_leftsyyr8c|jS)z Return a reference to the right child tree object. Returns ------- right : ClusterNode The left child of the target node. If the node is a leaf, None is returned. )rrs r9 get_rightzClusterNode.get_rightszzr8c|jduS)z Return True if the target node is a leaf. Returns ------- leafness : bool True if the target node is a leaf node. Nrrs r9is_leafzClusterNode.is_leafsyyD  r8c|jSNr)xs r9zClusterNode.s r8c|j}dgd|zz}t}t}||d<d}g}|dk\r||}|j} |jr|j |||dz }n_| |vr)|j ||dz<|j | |dz}n2| |vr)|j||dz<|j | |dz}n|dz }|dk\r|S)a Perform pre-order traversal without recursive function calls. When a leaf node is first encountered, ``func`` is called with the leaf node as its argument, and its result is appended to the list. For example, the statement:: ids = root.pre_order(lambda x: x.id) returns a list of the node ids corresponding to the leaf nodes of the tree as they appear from left to right. Parameters ---------- func : function Applied to each leaf ClusterNode object in the pre-order traversal. Given the ``i``-th leaf node in the pre-order traversal ``n[i]``, the result of ``func(n[i])`` is stored in ``L[i]``. If not provided, the index of the original observation to which the node corresponds is used. Returns ------- L : list The pre-order traversal. Nrrr)rsetrrappendraddr) rfuncrocurNodelvisitedrvisitedkpreorderndndids r9 pre_orderzClusterNode.pre_orders@ JJ&AE"55 1fB55Dzz|R)Ex'%'WWGAENLL&AA)%'XXGAENLL&AAAA%1f(r8)NNr)r4r5r6__doc__rrrrrrrrrrr7r8r9rr2s><2*% % &     !-<r8rcHt}t|}|j|g}|rw|j}|j sTt j |||j|j|j|j|rw|S)z Return clustering nodes in bottom-up order by distance. Parameters ---------- Z : scipy.cluster.linkage array The linkage matrix. Returns ------- nodes : list A list of ClusterNode objects. ) rr1rpopleftrbisect insort_leftrr)Zqtreenodesrs r9_order_cluster_treers| A 1:DHHTN E yy{||~   ud + HHT^^% & HHT]]_ % Lr8znon-standard indexing)np_onlyrLct|}t|}t|}| | td|||j |}n|M|j |}|j |Dcgc]}|j c}}|j||}n5|j |}||j|j ||z } t|} |j| |f|j} |j |} d|vr| | d<t|D]\} } | j}t| |}|j| |||<|||j!| |kDxxdzcc<| dz|vr!|| t#j$| dz|k(d<|} | j&Scc}w#t$rd} |j |g}YwxYw)aC Given a linkage matrix Z, return the cut tree. Parameters ---------- Z : scipy.cluster.linkage array The linkage matrix. n_clusters : array_like, optional Number of clusters in the tree at the cut point. height : array_like, optional The height at which to cut the tree. Only possible for ultrametric trees. Returns ------- cutree : array An array indicating group membership at each agglomeration step. I.e., for a full cut tree, in the first column each data point is in its own cluster. At the next step, two nodes are merged. Finally, all singleton and non-singleton clusters are in one group. If `n_clusters` or `height` are given, the columns correspond to the columns of `n_clusters` or `height`. Examples -------- >>> from scipy import cluster >>> import numpy as np >>> from numpy.random import default_rng >>> rng = default_rng() >>> X = rng.random((50, 4)) >>> Z = cluster.hierarchy.ward(X) >>> cutree = cluster.hierarchy.cut_tree(Z, n_clusters=[5, 10]) >>> cutree[:10] array([[0, 0], [1, 1], [2, 2], [3, 3], [3, 4], [2, 2], [0, 0], [1, 5], [3, 6], [4, 7]]) # random z8At least one of either height or n_clusters must be NonerrBrrH)rr-rriarangerEr searchsortedlen TypeErrorzerosrF enumeraterr minmaxrfnonzeror~)r n_clustersheightrHnobsrcols_idxrheightsn_colsgroups last_groupiridx this_groups r9rrs^  B 1 D  "E j4() ) J.99T?  F#**e4aff45??7F3ZZ + "//"))D/:FF*X XXvtnBHHX 5F4JH}q U# 4nnZB/ &&C1 3:z# 778A=8 q5H 7AF2::a!ex/03 4   88O75 *::xj)*s)F/ F44GG)jax_jitallow_dask_computecLt|}t|d|}t|ddd||jddz}dg|d zdz z}t d|D]}t |||<d}t |jdD]}||ddf}t |d|}t |d|} |||zkDrtd |d | ||zkDrtd | d t ||z|||| |d }|d |jk7rtd|d||||z<|r||fS|S)a Convert a linkage matrix into an easy-to-use tree object. The reference to the root `ClusterNode` object is returned (by default). Each `ClusterNode` object has a ``left``, ``right``, ``dist``, ``id``, and ``count`` attribute. The left and right attributes point to ClusterNode objects that were combined to generate the cluster. If both are None then the `ClusterNode` object is a leaf node, its count must be 1, and its distance is meaningless but set to 0. *Note: This function is provided for the convenience of the library user. ClusterNodes are not used as input to any of the functions in this library.* Parameters ---------- Z : ndarray The linkage matrix in proper form (see the `linkage` function documentation). rd : bool, optional When False (default), a reference to the root `ClusterNode` object is returned. Otherwise, a tuple ``(r, d)`` is returned. ``r`` is a reference to the root node while ``d`` is a list of `ClusterNode` objects - one per original entry in the linkage matrix plus entries for all clustering steps. If a cluster id is less than the number of samples ``n`` in the data that the linkage matrix describes, then it corresponds to a singleton cluster (leaf node). See `linkage` for more information on the assignment of cluster ids to clusters. Returns ------- tree : ClusterNode or tuple (ClusterNode, list of ClusterNode) If ``rd`` is False, a `ClusterNode`. If ``rd`` is True, a list of length ``2*n - 1``, with ``n`` the number of samples. See the description of `rd` above for more details. See Also -------- linkage, is_valid_linkage, ClusterNode Examples -------- >>> import numpy as np >>> from scipy.cluster import hierarchy >>> rng = np.random.default_rng() >>> x = rng.random((5, 2)) >>> Z = hierarchy.linkage(x) >>> hierarchy.to_tree(Z) >> rootnode, nodelist = hierarchy.to_tree(Z, rd=True) >>> rootnode >> len(nodelist) 9 r\r^rHTrr`ra materializerHrrNrzSCorrupt matrix Z. Index to derivative cluster is used before it is formed. See row z , column 0z , column 1rzCorrupt matrix Z. The count Z[z,3] is incorrect.) rr_is_valid_linkagerrrangerrIrir) rrdrHrodrrrowfifjs r9r1r1ssz  B#"%Aat#4BG  QA !a%!)A1a[1~! B 1771: 1g s1vr " s1vr " A:FFHTJ))* * A:FFHTJ))* *Q"quc!f 5 q6RXX =aSA++, ,!a%'* Aw r8c t||}t|d|}t|d|j|}t|}t |dd||j dk(rt j|dd n|j d k(r|s|jd |jdk(r|jtj|jj|d r_|j|d k\rK|jtj||jrtj d t"d t j$||}n t'd|s+|j|j)|s t'dd}tj*|||||j|j,d|S)a9 Given a linkage matrix Z and distance, reorder the cut tree. Parameters ---------- Z : ndarray The hierarchical clustering encoded as a linkage matrix. See `linkage` for more information on the return structure and algorithm. y : ndarray The condensed distance matrix from which Z was generated. Alternatively, a collection of m observation vectors in n dimensions may be passed as an m by n array. metric : str or function, optional The distance metric to use in the case that y is a collection of observation vectors; ignored otherwise. See the ``pdist`` function for a list of valid distance metrics. A custom distance function can also be used. Returns ------- Z_ordered : ndarray A copy of the linkage matrix Z, reordered to minimize the distance between adjacent leaves. Examples -------- >>> import numpy as np >>> from scipy.cluster import hierarchy >>> rng = np.random.default_rng() >>> X = rng.standard_normal((10, 10)) >>> Z = hierarchy.ward(X) >>> hierarchy.leaves_list(Z) array([0, 3, 1, 9, 2, 5, 7, 4, 6, 8], dtype=int32) >>> hierarchy.leaves_list(hierarchy.optimal_leaf_ordering(Z, X)) array([3, 0, 2, 5, 7, 4, 8, 6, 9, 1], dtype=int32) r\rr]Trr`rarHrrTr_rrrbr;rcrdc|rFt|ddttjtj|s t dt j ||S)NTrrrd)rrfrgrhrirr.)rrTrms r9cy_optimal_leaf_orderingz7optimal_leaf_ordering..cy_optimal_leaf_orderingsL  at#" =66"++a.) "233%;;AqAAr8rq)rrrtr rrwrxryrrrgrzr{r|r}r~r=r>r3rrirhrrC)rrTrQrHrrs r9r.r.soP A B#"%A#RZZB7A  Dat#"5vv{AT4 1qwwqz1s{{299#5#5a#8!<=qAv266#++a*=#> MM,)Q 8 NN1f %:;; r{{1~.*+ + B >>2Aq4 !qww" NNr8c `t||}t|d|j|}t|dd|d}|jddz}t j ||t|||dz zd zf|jd| }||St|d| }tj|dd |j|}|j|}||z }||z } || z} |d z} | d z} |j| |j|j| |j| zz } | |fS)a Calculate the cophenetic distances between each observation in the hierarchical clustering defined by the linkage ``Z``. Suppose ``p`` and ``q`` are original observations in disjoint clusters ``s`` and ``t``, respectively and ``s`` and ``t`` are joined by a direct parent cluster ``u``. The cophenetic distance between observations ``i`` and ``j`` is simply the distance between clusters ``s`` and ``t``. Parameters ---------- Z : ndarray The hierarchical clustering encoded as an array (see `linkage` function). Y : ndarray (optional) Calculates the cophenetic correlation coefficient ``c`` of a hierarchical clustering defined by the linkage matrix `Z` of a set of :math:`n` observations in :math:`m` dimensions. `Y` is the condensed distance matrix from which `Z` was generated. Returns ------- c : ndarray The cophentic correlation distance (if ``Y`` is passed). d : ndarray The cophenetic distance matrix in condensed form. The :math:`ij` th entry is the cophenetic distance between original observations :math:`i` and :math:`j`. See Also -------- linkage : for a description of what a linkage matrix is. scipy.spatial.distance.squareform : transforming condensed matrices into square ones. Examples -------- >>> from scipy.cluster.hierarchy import single, cophenet >>> from scipy.spatial.distance import pdist, squareform Given a dataset ``X`` and a linkage matrix ``Z``, the cophenetic distance between two points of ``X`` is the distance between the largest two distinct clusters that each of the points: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] ``X`` corresponds to this dataset :: x x x x x x x x x x x x >>> Z = single(pdist(X)) >>> Z array([[ 0., 1., 1., 2.], [ 2., 12., 1., 3.], [ 3., 4., 1., 2.], [ 5., 14., 1., 3.], [ 6., 7., 1., 2.], [ 8., 16., 1., 3.], [ 9., 10., 1., 2.], [11., 18., 1., 3.], [13., 15., 2., 6.], [17., 20., 2., 9.], [19., 21., 2., 12.]]) >>> cophenet(Z) array([1., 1., 2., 2., 2., 2., 2., 2., 2., 2., 2., 1., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 1., 1., 2., 2., 2., 2., 2., 2., 1., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 1., 1., 2., 2., 2., 1., 2., 2., 2., 2., 2., 2., 1., 1., 1.]) The output of the `scipy.cluster.hierarchy.cophenet` method is represented in condensed form. We can use `scipy.spatial.distance.squareform` to see the output as a regular matrix (where each element ``ij`` denotes the cophenetic distance between each ``i``, ``j`` pair of points in ``X``): >>> squareform(cophenet(Z)) array([[0., 1., 1., 2., 2., 2., 2., 2., 2., 2., 2., 2.], [1., 0., 1., 2., 2., 2., 2., 2., 2., 2., 2., 2.], [1., 1., 0., 2., 2., 2., 2., 2., 2., 2., 2., 2.], [2., 2., 2., 0., 1., 1., 2., 2., 2., 2., 2., 2.], [2., 2., 2., 1., 0., 1., 2., 2., 2., 2., 2., 2.], [2., 2., 2., 1., 1., 0., 2., 2., 2., 2., 2., 2.], [2., 2., 2., 2., 2., 2., 0., 1., 1., 2., 2., 2.], [2., 2., 2., 2., 2., 2., 1., 0., 1., 2., 2., 2.], [2., 2., 2., 2., 2., 2., 1., 1., 0., 2., 2., 2.], [2., 2., 2., 2., 2., 2., 2., 2., 2., 0., 1., 1.], [2., 2., 2., 2., 2., 2., 2., 2., 2., 1., 0., 1.], [2., 2., 2., 2., 2., 2., 2., 2., 2., 1., 1., 0.]]) In this example, the cophenetic distance between points on ``X`` that are very close (i.e., in the same corner) is 1. For other pairs of points is 2, because the points will be located in clusters at different corners - thus, the distance between these clusters will be larger. r\r]Trrc|rt|ddt|jddz}tj||dz zdztj}t j ||||S)NTrrrrrrB)rrfrrrrtrcophenetic_distances)rrmrozzs r9 cy_cophenetzcophenet..cy_cophenets\  at#" = GGAJN XXqAaCyQ&bjj 9''2q1 r8rrrrqrYr_) rrrtrrrrzrr rxrymeansumsqrt)rrrHrrorzrTYyZz numeratordenomAdenomBcs r9rr(s/X A B#RZZB7Aat#"5  QA  Qq1A !QqS a/2"**!%" .B y #"%A C0  A  A QB aBbI UF UF yBGGBFF6NRVVF^$CDDA r7Nr8c Nt|}t|d|j|}t|dd||t j |k7s|dkr t dd}tj||t|t||jdd f|jd| S) a Calculate inconsistency statistics on a linkage matrix. Parameters ---------- Z : ndarray The :math:`(n-1)` by 4 matrix encoding the linkage (hierarchical clustering). See `linkage` documentation for more information on its form. d : int, optional The number of links up to `d` levels below each non-singleton cluster. Returns ------- R : ndarray A :math:`(n-1)` by 4 matrix where the ``i``'th row contains the link statistics for the non-singleton cluster ``i``. The link statistics are computed over the link heights for links :math:`d` levels below the cluster ``i``. ``R[i,0]`` and ``R[i,1]`` are the mean and standard deviation of the link heights, respectively; ``R[i,2]`` is the number of links included in the calculation; and ``R[i,3]`` is the inconsistency coefficient, .. math:: \frac{\mathtt{Z[i,2]} - \mathtt{R[i,0]}} {R[i,1]} Notes ----- This function behaves similarly to the MATLAB(TM) ``inconsistent`` function. Examples -------- >>> from scipy.cluster.hierarchy import inconsistent, linkage >>> from matplotlib import pyplot as plt >>> X = [[i] for i in [2, 8, 0, 4, 1, 9, 9, 0]] >>> Z = linkage(X, 'ward') >>> print(Z) [[ 5. 6. 0. 2. ] [ 2. 7. 0. 2. ] [ 0. 4. 1. 2. ] [ 1. 8. 1.15470054 3. ] [ 9. 10. 2.12132034 4. ] [ 3. 12. 4.11096096 5. ] [11. 13. 14.07183949 8. ]] >>> inconsistent(Z) array([[ 0. , 0. , 1. , 0. ], [ 0. , 0. , 1. , 0. ], [ 1. , 0. , 1. , 0. ], [ 0.57735027, 0.81649658, 2. , 0.70710678], [ 1.04044011, 1.06123822, 3. , 1.01850858], [ 3.11614065, 1.40688837, 2. , 0.70710678], [ 6.44583366, 6.76770586, 3. , 1.12682288]]) r\r]Trrrz:The second argument d must be a nonnegative integer value.c|rt|ddttj|jddftj}|jddz}t j |||||S)NTrrrrrBr)rrfrrrrtrr")rrrmRros r9cy_inconsistentz%inconsistent..cy_inconsistents]  at#" = HHaggaj!_BJJ 7 GGAJN1a+r8r)rrmrrrCrsrH) rrrtrrffloorrirzrrDr rr)rrrHrs r9r"r"sp  B#RZZB7Aat#"5BHHQK1q5*+ + >>/1AqAQ!"Qrzz#'B 00r8c t|}t||jd|}|jdvr t ||S|j dk7r t d|jd}|dk(r t ||St|}|sH|j|ddddfd k7r+|j|ddddfd|zk7r t d |j|jd|jd d zf|j }tj|ddddfj|ddddfd z }tj|dddd fj|ddddf}d}tj||dddd f||jdf|jd|}tj|ddd fj|S)a Convert a linkage matrix generated by MATLAB(TM) to a new linkage matrix compatible with this module. The conversion does two things: * the indices are converted from ``1..N`` to ``0..(N-1)`` form, and * a fourth column ``Z[:,3]`` is added where ``Z[i,3]`` represents the number of original observations (leaves) in the non-singleton cluster ``i``. This function is useful when loading in linkages from legacy data files generated by MATLAB. Parameters ---------- Z : ndarray A linkage matrix generated by MATLAB(TM). Returns ------- ZS : ndarray A linkage matrix compatible with ``scipy.cluster.hierarchy``. See Also -------- linkage : for a description of what a linkage matrix is. to_mlab_linkage : transform from SciPy to MATLAB format. Examples -------- >>> import numpy as np >>> from scipy.cluster.hierarchy import ward, from_mlab_linkage Given a linkage matrix in MATLAB format ``mZ``, we can use `scipy.cluster.hierarchy.from_mlab_linkage` to import it into SciPy format: >>> mZ = np.array([[1, 2, 1], [4, 5, 1], [7, 8, 1], ... [10, 11, 1], [3, 13, 1.29099445], ... [6, 14, 1.29099445], ... [9, 15, 1.29099445], ... [12, 16, 1.29099445], ... [17, 18, 5.77350269], ... [19, 20, 5.77350269], ... [21, 22, 8.16496581]]) >>> Z = from_mlab_linkage(mZ) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [ 11. , 15. , 1.29099445, 3. ], [ 16. , 17. , 5.77350269, 6. ], [ 18. , 19. , 5.77350269, 6. ], [ 20. , 21. , 8.16496581, 12. ]]) As expected, the linkage matrix ``Z`` returned includes an additional column counting the number of original samples in each cluster. Also, all cluster indices are reduced by 1 (MATLAB format uses 1-indexing, whereas SciPy uses 0-indexing). r\)rCr^rHr7)rrrz&The linkage array must be rectangular.rN?%The format of the indices is not 1..NrrBct|jd}|rPtj|ddddfdk7r/tj|ddddfd|zk7r t d|j j s|j}tj|f}tj|||dz|S)Nrrrrr) rrrfrrriflags writeablecopyrrcalculate_cluster_sizes)ZpartrmroCSs r9cy_from_mlab_linkagez/from_mlab_linkage..cy_from_mlab_linkagecs KKN uQU|,3uQU|8LPQTUPU8UDE E{{$$JJLE XXqd^**5"a!e< r8Trq)rrrtrrr rwrir rremptyrCrzatrr)rrHrorresrr s r9r!r!sN  B"**CB7A ww+qR  vv{ABB  AAvqR   D BFF1QU8$+qBQBx0@AE0I@AA ((AGGAJ Q/qww( ?C &&+a!e  1bqb5C 0C &&+a2g  " "1QU8 ,C  ,c!SbS&kD"yy|oRZZ!%" .B 66#;q"u  ! !" %%r8ct|}t||j|}|jdvr t ||St |dd||j |ddddfd z|dddd ffd S) a= Convert a linkage matrix to a MATLAB(TM) compatible one. Converts a linkage matrix ``Z`` generated by the linkage function of this module to a MATLAB(TM) compatible one. The return linkage matrix has the last column removed and the cluster indices are converted to ``1..N`` indexing. Parameters ---------- Z : ndarray A linkage matrix generated by ``scipy.cluster.hierarchy``. Returns ------- to_mlab_linkage : ndarray A linkage matrix compatible with MATLAB(TM)'s hierarchical clustering functions. The return linkage matrix has the last column removed and the cluster indices are converted to ``1..N`` indexing. See Also -------- linkage : for a description of what a linkage matrix is. from_mlab_linkage : transform from Matlab to SciPy format. Examples -------- >>> from scipy.cluster.hierarchy import ward, to_mlab_linkage >>> from scipy.spatial.distance import pdist >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) After a linkage matrix ``Z`` has been created, we can use `scipy.cluster.hierarchy.to_mlab_linkage` to convert it into MATLAB format: >>> mZ = to_mlab_linkage(Z) >>> mZ array([[ 1. , 2. , 1. ], [ 4. , 5. , 1. ], [ 7. , 8. , 1. ], [ 10. , 11. , 1. ], [ 3. , 13. , 1.29099445], [ 6. , 14. , 1.29099445], [ 9. , 15. , 1.29099445], [ 12. , 16. , 1.29099445], [ 17. , 18. , 5.77350269], [ 19. , 20. , 5.77350269], [ 21. , 22. , 8.16496581]]) The new linkage matrix ``mZ`` uses 1-indexing for all the clusters (instead of 0-indexing). Also, the last column of the original linkage matrix has been dropped. )rCrHrrTrrNrrrraxis)rrrtrrr rconcatrrHs r9r0r0vs|X  B"**,Aww+qR  at#"5 99a2A2hna1Q3i0q9 99r8ct|}t||}t|dd||j|dddf|dddfk\S) a Return True if the linkage passed is monotonic. The linkage is monotonic if for every cluster :math:`s` and :math:`t` joined, the distance between them is no less than the distance between any previously joined clusters. Parameters ---------- Z : ndarray The linkage matrix to check for monotonicity. Returns ------- b : bool A boolean indicating whether the linkage is monotonic. See Also -------- linkage : for a description of what a linkage matrix is. Examples -------- >>> from scipy.cluster.hierarchy import median, ward, is_monotonic >>> from scipy.spatial.distance import pdist By definition, some hierarchical clustering algorithms - such as `scipy.cluster.hierarchy.ward` - produce monotonic assignments of samples to clusters; however, this is not always true for other hierarchical methods - e.g. `scipy.cluster.hierarchy.median`. Given a linkage matrix ``Z`` (as the result of a hierarchical clustering method) we can test programmatically whether it has the monotonicity property or not, using `scipy.cluster.hierarchy.is_monotonic`: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) >>> is_monotonic(Z) True >>> Z = median(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 2. , 12. , 1.11803399, 3. ], [ 5. , 13. , 1.11803399, 3. ], [ 8. , 15. , 1.11803399, 3. ], [11. , 14. , 1.11803399, 3. ], [18. , 19. , 3. , 6. ], [16. , 17. , 3.5 , 6. ], [20. , 21. , 3.25 , 12. ]]) >>> is_monotonic(Z) False Note that this method is equivalent to just verifying that the distances in the third column of the linkage matrix appear in a monotonically increasing order. rTrrrNrr)rrrrgrs r9r$r$sU^  BrAat#"5 66!ABE(aQi' ((r8)rJ see notes)z jax.numpyr)r=cTt|}t||}t||||d|S)a Return True if the inconsistency matrix passed is valid. It must be a :math:`n` by 4 array of doubles. The standard deviations ``R[:,1]`` must be nonnegative. The link counts ``R[:,2]`` must be positive and no greater than :math:`n-1`. Parameters ---------- R : ndarray The inconsistency matrix to check for validity. warning : bool, optional When True, issues a Python warning if the linkage matrix passed is invalid. throw : bool, optional When True, throws a Python exception if the linkage matrix passed is invalid. name : str, optional This string refers to the variable name of the invalid linkage matrix. Returns ------- b : bool True if the inconsistency matrix is valid; False otherwise. Notes ----- *Array API support (experimental):* If the input is a lazy Array (e.g. Dask or JAX), the return value may be a 0-dimensional bool Array. When warning=True or throw=True, calling this function materializes the array. See Also -------- linkage : for a description of what a linkage matrix is. inconsistent : for the creation of a inconsistency matrix. Examples -------- >>> from scipy.cluster.hierarchy import ward, inconsistent, is_valid_im >>> from scipy.spatial.distance import pdist Given a data set ``X``, we can apply a clustering method to obtain a linkage matrix ``Z``. `scipy.cluster.hierarchy.inconsistent` can be also used to obtain the inconsistency matrix ``R`` associated to this clustering process: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) >>> R = inconsistent(Z) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) >>> R array([[1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1.14549722, 0.20576415, 2. , 0.70710678], [1.14549722, 0.20576415, 2. , 0.70710678], [1.14549722, 0.20576415, 2. , 0.70710678], [1.14549722, 0.20576415, 2. , 0.70710678], [2.78516386, 2.58797734, 3. , 1.15470054], [2.78516386, 2.58797734, 3. , 1.15470054], [6.57065706, 1.38071187, 3. , 1.15470054]]) Now we can use `scipy.cluster.hierarchy.is_valid_im` to verify that ``R`` is correct: >>> is_valid_im(R) True However, if ``R`` is wrongly constructed (e.g., one of the standard deviations is set to a negative value), then the check will fail: >>> R[-1,1] = R[-1,1] * -1 >>> is_valid_im(R) False rTwarningr`rarrH)rr _is_valid_im)rrr`rarHs r9r%r%"s5|  BrA 7%d$(R 11r8c |r|dnd} |j|jk7rtd|dt|jdk7rt d|d|jddk7rt d|d |jd dkrt d|d  t|j|d d d fd kd|df|j|d d dfd kd|df|j|d d dfd kd|df||||S#tt f$r#}|r|rt t|Yd }~y d }~wwxYw)zVariant of `is_valid_im` to be called internally by other scipy functions, which by default does not materialize lazy input arrays (Dask, JAX, etc.) when warning=True or throw=True.  zInconsistency matrix zmust contain doubles (double).r,must have shape=2 (i.e. be two-dimensional).rrmust have 4 columns.rzmust have at least one row.NFz% contains negative link height means.z3 contains negative link height standard deviations.z contains negative link counts.r`rrrH) rCrtrrrrrir@str_lazy_valid_checksany)rrr`rarrHname_stres r9rrs $$|H 77bjj 3H:>(() ) qww<1 4XJ?445 5 771:?4XJ345 5 771:>4XJ:;< <   !Q$!   *O P R !Q$!   +   !Q$!   *I J LW+"   z "    SV  sBD D>D99D>cTt|}t||}t||||d|S)a Check the validity of a linkage matrix. A linkage matrix is valid if it is a 2-D array (type double) with :math:`n` rows and 4 columns. The first two columns must contain indices between 0 and :math:`2n-1`. For a given row ``i``, the following two expressions have to hold: .. math:: 0 \leq \mathtt{Z[i,0]} \leq i+n-1 0 \leq Z[i,1] \leq i+n-1 I.e., a cluster cannot join another cluster unless the cluster being joined has been generated. The fourth column of `Z` represents the number of original observations in a cluster, so a valid ``Z[i, 3]`` value may not exceed the number of original observations. Parameters ---------- Z : array_like Linkage matrix. warning : bool, optional When True, issues a Python warning if the linkage matrix passed is invalid. throw : bool, optional When True, throws a Python exception if the linkage matrix passed is invalid. name : str, optional This string refers to the variable name of the invalid linkage matrix. Returns ------- b : bool True if the inconsistency matrix is valid; False otherwise. Notes ----- *Array API support (experimental):* If the input is a lazy Array (e.g. Dask or JAX), the return value may be a 0-dimensional bool Array. When warning=True or throw=True, calling this function materializes the array. See Also -------- linkage: for a description of what a linkage matrix is. Examples -------- >>> from scipy.cluster.hierarchy import ward, is_valid_linkage >>> from scipy.spatial.distance import pdist All linkage matrices generated by the clustering methods in this module will be valid (i.e., they will have the appropriate dimensions and the two required expressions will hold for all the rows). We can check this using `scipy.cluster.hierarchy.is_valid_linkage`: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) >>> is_valid_linkage(Z) True However, if we create a linkage matrix in a wrong way - or if we modify a valid one in a way that any of the required expressions don't hold anymore, then the check will fail: >>> Z[3][1] = 20 # the cluster number 20 is not defined at this point >>> is_valid_linkage(Z) False rTr)rrr)rrr`rarHs r9r&r&s7x  BrA Qu"&DR AAr8c|r|dnd} |j|jk7rtd|dt|jdk7rt d|d|jddk7rt d|d |jd d k(r t d  |jd }|dkryt|j|d d d dfd kd|df|j|d d dfd kd|df|j|d d dfd kd|df|j|d d df|dzkDd|df|j|j|d d d dfd|j|dzd|zdz|jk\d|dftj|d d d df|dzkd|df|||| S#tt f$r#}|r|rt t|Yd }~y d }~wwxYw)zVariant of `is_valid_linkage` to be called internally by other scipy functions, which by default does not materialize lazy input arrays (Dask, JAX, etc.) when warning=True or throw=True. rrzLinkage matrix zmust contain doubles.rr rrr!rz6Linkage must be computed on at least two observations.NFTzLinkage zcontains negative indices.zcontains negative distances.rzcontains negative counts.z,contains excessive observations in a clusterrrBz/uses non-singleton cluster before it is formed.z%uses the same cluster more than once.r")rCrtrrrrrir@r#r$r%rrrznunique) rrr`rarrHr&r'ros r9rr sS $$|H 77bjj ohZ7LMN N qww<1 xj9223 3 771:?xj8LMN N 771:?-. .   A1u  !RaR%1  H:7 8 : !Q$!  H:9 : < !Q$!  H:6 7 9 !Q$!a% H:I J L qBQBxa(BIIa!eQUQYaggI,VV W H:L M O Qq"1"uX Q & H:B C EW+"  z "    SV  sB GG5G00G5)r`rrc |j|Dcgc]\}}|j|dc}}}t|}|s|r|r#|s!|j|} |r| St | St |r|j }|Dcgc] }t |}}t||D]9\}\}} |r |r t| |s|stj| td;t| Scc}}wcc}w)a#Validate a set of conditions on the contents of possibly lazy arrays. Parameters ---------- args : tuples of (Array, str) The first element of each tuple must be a 0-dimensional Array that evaluates to bool; the second element must be the message to convey if the first element evaluates to True. throw: bool Set to True to `raise ValueError(args[i][1])` if `args[i][0]` is True. warning: bool Set to True to issue a warning with message `args[i][1]` if `args[i][0]` is True. materialize: bool Set to True to force materialization of lazy arrays when throw=True or warning=True. If the inputs are lazy and materialize=False, ignore the `throw` and `warning` flags. xp: module Array API namespace Returns ------- If xp is an eager backend (e.g. numpy) and all conditions are False, return True. If throw is True, raise. Otherwise, return False. If xp is a lazy backend (e.g. Dask or JAX), return a 0-dimensional bool Array. )rrr;) rreshaper r%boolr computeziprir=r>r3) r`rrrHargscond_condsroutrs r9r$r$< s8 IIdC74rzz$.C DE  D T+vve}ns)S )r{  %* *DT$Z *E *eT*=hq# TS/ !  MM#~! < = 5z>-D +s C0 C6ctt|}t||}t|dd||jddzS)a Return the number of original observations of the linkage matrix passed. Parameters ---------- Z : ndarray The linkage matrix on which to perform the operation. Returns ------- n : int The number of original observations in the linkage. Examples -------- >>> from scipy.cluster.hierarchy import ward, num_obs_linkage >>> from scipy.spatial.distance import pdist >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) ``Z`` is a linkage matrix obtained after using the Ward clustering method with ``X``, a dataset with 12 data points. >>> num_obs_linkage(Z) 12 rTrrrr)rrrrrrs r9r-r-q s;D  BrAat#"5 771:>r8ct||}t||}t||}t|d|tj|dtj |t |k(S)a Check for correspondence between linkage and condensed distance matrices. They must have the same number of original observations for the check to succeed. This function is useful as a sanity check in algorithms that make extensive use of linkage and distance matrices that must correspond to the same set of original observations. Parameters ---------- Z : array_like The linkage matrix to check for correspondence. Y : array_like The condensed distance matrix to check for correspondence. Returns ------- b : bool A boolean indicating whether the linkage matrix and distance matrix could possibly correspond to one another. See Also -------- linkage : for a description of what a linkage matrix is. Examples -------- >>> from scipy.cluster.hierarchy import ward, correspond >>> from scipy.spatial.distance import pdist This method can be used to check if a given linkage matrix ``Z`` has been obtained from the application of a cluster method over a dataset ``X``: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> X_condensed = pdist(X) >>> Z = ward(X_condensed) Here, we can compare ``Z`` and ``X`` (in condensed form): >>> correspond(Z, X_condensed) True rT)r`rH)r`)rrrrxryrr-)rrrHs r9rr s`d A BrArAat+ &   a OA$6 66r8)rKrLrrc  t|}t|d|j|}t|ddd||jddz}t j |fd }|&t j|dt j }t j|}t j|}|d k(ry| t||}n>t|d|j|}t|dd d|t j|}tj|||t|t|n|d k(r+tj||t|t|n|dk(r"tj||t||ny|dk(r,tj |||t|t|nH|dk(r,tj"|||t|t|nt%dt'||j|S)a Form flat clusters from the hierarchical clustering defined by the given linkage matrix. Parameters ---------- Z : ndarray The hierarchical clustering encoded with the matrix returned by the `linkage` function. t : scalar For criteria 'inconsistent', 'distance' or 'monocrit', this is the threshold to apply when forming flat clusters. For 'maxclust' or 'maxclust_monocrit' criteria, this would be max number of clusters requested. criterion : str, optional The criterion to use in forming flat clusters. This can be any of the following values: ``inconsistent`` : If a cluster node and all its descendants have an inconsistent value less than or equal to `t`, then all its leaf descendants belong to the same flat cluster. When no non-singleton cluster meets this criterion, every node is assigned to its own cluster. (Default) ``distance`` : Forms flat clusters so that the original observations in each flat cluster have no greater a cophenetic distance than `t`. ``maxclust`` : Finds a minimum threshold ``r`` so that the cophenetic distance between any two original observations in the same flat cluster is no more than ``r`` and no more than `t` flat clusters are formed. ``monocrit`` : Forms a flat cluster from a cluster node c with index i when ``monocrit[j] <= t``. For example, to threshold on the maximum mean distance as computed in the inconsistency matrix R with a threshold of 0.8 do:: MR = maxRstat(Z, R, 3) fcluster(Z, t=0.8, criterion='monocrit', monocrit=MR) ``maxclust_monocrit`` : Forms a flat cluster from a non-singleton cluster node ``c`` when ``monocrit[i] <= r`` for all cluster indices ``i`` below and including ``c``. ``r`` is minimized such that no more than ``t`` flat clusters are formed. monocrit must be monotonic. For example, to minimize the threshold t on maximum inconsistency values so that no more than 3 flat clusters are formed, do:: MI = maxinconsts(Z, R) fcluster(Z, t=3, criterion='maxclust_monocrit', monocrit=MI) depth : int, optional The maximum depth to perform the inconsistency calculation. It has no meaning for the other criteria. Default is 2. R : ndarray, optional The inconsistency matrix to use for the ``'inconsistent'`` criterion. This matrix is computed if not provided. monocrit : ndarray, optional An array of length n-1. `monocrit[i]` is the statistics upon which non-singleton i is thresholded. The monocrit vector must be monotonic, i.e., given a node c with index i, for all node indices j corresponding to nodes below c, ``monocrit[i] >= monocrit[j]``. Returns ------- fcluster : ndarray An array of length ``n``. ``T[i]`` is the flat cluster number to which original observation ``i`` belongs. See Also -------- linkage : for information about hierarchical clustering methods work. Examples -------- >>> from scipy.cluster.hierarchy import ward, fcluster >>> from scipy.spatial.distance import pdist All cluster linkage methods - e.g., `scipy.cluster.hierarchy.ward` generate a linkage matrix ``Z`` as their output: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) This matrix represents a dendrogram, where the first and second elements are the two clusters merged at each step, the third element is the distance between these clusters, and the fourth element is the size of the new cluster - the number of original data points included. `scipy.cluster.hierarchy.fcluster` can be used to flatten the dendrogram, obtaining as a result an assignation of the original data points to single clusters. This assignation mostly depends on a distance threshold ``t`` - the maximum inter-cluster distance allowed: >>> fcluster(Z, t=0.9, criterion='distance') array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=int32) >>> fcluster(Z, t=1.1, criterion='distance') array([1, 1, 2, 3, 3, 4, 5, 5, 6, 7, 7, 8], dtype=int32) >>> fcluster(Z, t=3, criterion='distance') array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=int32) >>> fcluster(Z, t=9, criterion='distance') array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) In the first case, the threshold ``t`` is too small to allow any two samples in the data to form a cluster, so 12 different clusters are returned. In the second case, the threshold is large enough to allow the first 4 points to be merged with their nearest neighbors. So, here, only 8 clusters are returned. The third case, with a much higher threshold, allows for up to 8 data points to be connected - so 4 clusters are returned here. Lastly, the threshold of the fourth case is large enough to allow for all data points to be merged together - so a single cluster is returned. r\r]TrrrrrrB)r^rCr"rrxmaxclustmonocritmaxclust_monocritz%Invalid cluster formation criterion: )rrrtrrrrfrrEr"rr cluster_infloatrD cluster_distcluster_maxclust_distcluster_monocritcluster_maxclust_monocritrir#) rt criteriondepthrr9rHror~s r9rr sr  B#RZZB7Aat#4BG  QA !S!A::hcD 1 Azz(#HN" 9Q&A#RZZB?A $Sdr J 1 AaAuQxQ8 j 1eAhA7 j ((As1vq9 j ##AxE!Hc!fE ) ),,Q!SVSVL@Y@PQRR ::a=r8c&t|}t|d|j|}|jdk7r t dt j ||}t||} |t| |}nt|d|}t| |||| } | S) a Cluster observation data using a given metric. Clusters the original observations in the n-by-m data matrix X (n observations in m dimensions), using the euclidean distance metric to calculate distances between original observations, performs hierarchical clustering using the single linkage algorithm, and forms flat clusters using the inconsistency method with `t` as the cut-off threshold. A 1-D array ``T`` of length ``n`` is returned. ``T[i]`` is the index of the flat cluster to which the original observation ``i`` belongs. Parameters ---------- X : (N, M) ndarray N by M data matrix with N observations in M dimensions. t : scalar For criteria 'inconsistent', 'distance' or 'monocrit', this is the threshold to apply when forming flat clusters. For 'maxclust' or 'maxclust_monocrit' criteria, this would be max number of clusters requested. criterion : str, optional Specifies the criterion for forming flat clusters. Valid values are 'inconsistent' (default), 'distance', or 'maxclust' cluster formation algorithms. See `fcluster` for descriptions. metric : str or function, optional The distance metric for calculating pairwise distances. See ``distance.pdist`` for descriptions and linkage to verify compatibility with the linkage method. depth : int, optional The maximum depth for the inconsistency calculation. See `inconsistent` for more information. method : str, optional The linkage method to use (single, complete, average, weighted, median centroid, ward). See `linkage` for more information. Default is "single". R : ndarray, optional The inconsistency matrix. It will be computed if necessary if it is not passed. Returns ------- fclusterdata : ndarray A vector of length n. T[i] is the flat cluster number to which original observation i belongs. See Also -------- scipy.spatial.distance.pdist : pairwise distance metrics Notes ----- This function is similar to the MATLAB function ``clusterdata``. Examples -------- >>> from scipy.cluster.hierarchy import fclusterdata This is a convenience method that abstracts all the steps to perform in a typical SciPy's hierarchical clustering workflow. * Transform the input data into a condensed matrix with `scipy.spatial.distance.pdist`. * Apply a clustering method. * Obtain flat clusters at a user defined distance threshold ``t`` using `scipy.cluster.hierarchy.fcluster`. >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> fclusterdata(X, t=1) array([3, 3, 3, 4, 4, 4, 2, 2, 2, 1, 1, 1], dtype=int32) The output here (for the dataset ``X``, distance threshold ``t``, and the default settings) is four clusters with three data points each. r\r]rz1The observation matrix X must be an n by m array.)rQ)rP)rr)rBrCrrA) rrrtrwrrxrr)r"r) XrArBrQrCrPrrHrrr~s r9r r sn  B#RZZB7Avv{KLLq(A&!Ay e $ Qcb )iuQ?A Hr8c t|}t|d|}t|dd|d}|jddz}t j ||t ||f|jd| S) aA Return a list of leaf node ids. The return corresponds to the observation vector index as it appears in the tree from left to right. Z is a linkage matrix. Parameters ---------- Z : ndarray The hierarchical clustering encoded as a matrix. `Z` is a linkage matrix. See `linkage` for more information. Returns ------- leaves_list : ndarray The list of leaf node ids. See Also -------- dendrogram : for information about dendrogram structure. Examples -------- >>> from scipy.cluster.hierarchy import ward, dendrogram, leaves_list >>> from scipy.spatial.distance import pdist >>> from matplotlib import pyplot as plt >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) The linkage matrix ``Z`` represents a dendrogram, that is, a tree that encodes the structure of the clustering performed. `scipy.cluster.hierarchy.leaves_list` shows the mapping between indices in the ``X`` dataset and leaves in the dendrogram: >>> leaves_list(Z) array([ 2, 0, 1, 5, 3, 4, 8, 6, 7, 11, 9, 10], dtype=int32) >>> fig = plt.figure(figsize=(25, 10)) >>> dn = dendrogram(Z) >>> plt.show() r\rTrrc|rt|ddt|jddz}tj|ftj}t j ||||S)NTrrrrrB)rrfrrrint32rprelist)rrmroMLs r9cy_leaves_listz#leaves_list..cy_leaves_list+ sQ  at#" = GGAJN XXqd"(( +1b!$ r8rrrq)rrrrrrzrr rH)rrHrKros r9r(r( sob  B#"%Aat#"5  QA >>.!mA6F!"RXX#'B 00r8  2U(-Zczt}g}|D])}||vs|j||j|+|S)z} Remove duplicates AND preserve the original order of the elements. The set class is not guaranteed to do this. )rrr)L seen_beforeL2rs r9 _remove_dupsrZH sE %K B  K  OOA  IIaL Ir8c:tD]}||ks t|cSyr)_dtextsortedkeys _dtextsizesprs r9_get_tick_text_sizer`W s" " 6q> !"r8c:tD]}||ks t|cSyr)_drotationsortedkeys _drotationr^s r9_get_tick_rotationrd] s" !! 6a= !r8C0c  | ddl}ddl}ddl}| |jj } d}nd}t |dz}||dzz}tjdt |dzdzd}|dvrG|d k(r'| jd|g| jd|gn&| j|dg| jd|g|}|}|r$| jg| jgn | j||d k(r| jjd n| jjd | jD]}|j!d| t#t%t |n| }| t#t't |n| }| j||| n>|d vr9|d k(r'| j|dg| jd|gn&| jd|g| jd|g|}|}|r#| j)g| j+gn| j)||d k(r| j,jdn| j,jd | j/D]}|j!d| t#t't |n| }| | j+|| | n| j+||t1|}i}|D]}g||< t3|D]-\}}}||j5t7t3||/i}|D](}|j8j;|||f}|||<*|D]}|| k7s | j=||| |vr| j=|| | |j>j@} | D]x\}!}"|d vr| |"|!f|dz d}n| |!|"fd|dz }| jC||jE| jF|jId|jKdz|r|jjMyy#t$r}td|d}~wwxYw)Nrz~You must install the matplotlib library to plot the dendrogram. Use no_plot=True to calculate the dendrogram without plotting.TFrOg?r)topbottomrgrh)rotationsize)rrrr)rj)colorsdr)widthrg?r)'matplotlib.pylabmatplotlib.patchesmatplotlib.collections ImportErrorpylabgcarrfrset_ylimset_xlim set_xticksset_xticklabelsxaxisset_ticks_positionget_xticklines set_visibler<rdr` set_yticksset_yticklabelsyaxisget_yticklinesrZr/rlist collectionsLineCollectionadd_collectionpatchesEllipse add_artist set_clip_boxbbox set_alpha set_facecolordraw_if_interactive)#icoordsdcoordsivlr_romh orientation no_labels color_listleaf_font_size leaf_rotationcontraction_marksaxabove_threshold_color matplotlibr'trigger_redrawivwdvwiv_ticksxlinesylineslineleaf_rot leaf_font colors_usedcolor_to_linescolorxlineylinecolors_to_collectionscollrrrTs# r9_plot_dendrogramrc s A : #!%  z    ! ! # c(R-C rDy.CyyCHrMA-r2H'' %  KKC ! KKC ! KKa ! KKC !  MM"    r " MM( #e#++H5++E2))+ (  ' (*10S:;8E ,323s8<=:H    sXI  F ) ) & KKa ! KKC ! KKC ! KKC !  MM"    r " MM( #f$++G4++F3))+ (  ' (,323s8<=:H (""3Y"O""3Y"7z*KN# "u#!$VVZ!@>uu$$T#eU*;%<=>,%%44^E5J=BH5F'+e$, < ) )   3E: ;< 55 /0EFG$$$,,' !FQ//QF#)C@QF#cCi@ MM!  NN277 # KK  OOC  !,,.m A9:?@ AAsQ(( R1 Q==R) C1C2C3C4C5C6C7C8C9c|t}n"t|ttzs t d|Dcgc]}t|t }}d|vr t d|aycc}w)a Set list of matplotlib color codes for use by dendrogram. Note that this palette is global (i.e., setting it once changes the colors for all subsequent calls to `dendrogram`) and that it affects only the the colors below ``color_threshold``. Note that `dendrogram` also accepts a custom coloring function through its ``link_color_func`` keyword, which is more flexible and non-global. Parameters ---------- palette : list of str or None A list of matplotlib color codes. The order of the color codes is the order in which the colors are cycled through when color thresholding in the dendrogram. If ``None``, resets the palette to its default (which are matplotlib default colors C1 to C9). Returns ------- None See Also -------- dendrogram Notes ----- Ability to reset the palette with ``None`` added in SciPy 0.17.0. Thread safety: using this function in a multi-threaded fashion may result in `dendrogram` producing plots with unexpected colors. Examples -------- >>> import numpy as np >>> from scipy.cluster import hierarchy >>> ytdist = np.array([662., 877., 255., 412., 996., 295., 468., 268., ... 400., 754., 564., 138., 219., 869., 669.]) >>> Z = hierarchy.linkage(ytdist, 'single') >>> dn = hierarchy.dendrogram(Z, no_plot=True) >>> dn['color_list'] ['C1', 'C0', 'C0', 'C0', 'C0'] >>> hierarchy.set_link_color_palette(['c', 'm', 'y', 'k']) >>> dn = hierarchy.dendrogram(Z, no_plot=True, above_threshold_color='b') >>> dn['color_list'] ['c', 'b', 'b', 'b', 'b'] >>> dn = hierarchy.dendrogram(Z, no_plot=True, color_threshold=267, ... above_threshold_color='k') >>> dn['color_list'] ['c', 'm', 'm', 'k', 'k'] Now, reset the color palette to its default: >>> hierarchy.set_link_color_palette(None) Nzpalette must be a list or tupleFz/all palette list elements must be color strings)_link_line_colors_defaultrrtuplerr#_link_line_colors)paletter__ptypess r9r/r/ sdx+  .9::+23az!S!3G3 IJJ  4sA)rKrrrgct|}t|d|}|dvr td|, t|}|j ddz|k7r tdt|d d d | |j }|ddz}t|ttzr t|}n t d |d vr td|dk(r ||kDs|dk(r|}|dk(rd}|dk(r|dkrtj}|rg}nd}g}g}g}dg}dg}g}|t|tr |dk(r|j|dddfdz}|||||d}|rgnd}td1id |d|d|d|d|d|d|d|d|d| d d|zdz d!d"d#|d$|d%|d&|d'|d(|d)|d*|d+|d,|d-|d.|| s1|j|dddf} t!|||||| || || | |||/t#||d0<|S#tt f$r|j d}YwxYw)2a* Plot the hierarchical clustering as a dendrogram. The dendrogram illustrates how each cluster is composed by drawing a U-shaped link between a non-singleton cluster and its children. The top of the U-link indicates a cluster merge. The two legs of the U-link indicate which clusters were merged. The length of the two legs of the U-link represents the distance between the child clusters. It is also the cophenetic distance between original observations in the two children clusters. Parameters ---------- Z : ndarray The linkage matrix encoding the hierarchical clustering to render as a dendrogram. See the ``linkage`` function for more information on the format of ``Z``. p : int, optional The ``p`` parameter for ``truncate_mode``. truncate_mode : str, optional The dendrogram can be hard to read when the original observation matrix from which the linkage is derived is large. Truncation is used to condense the dendrogram. There are several modes: ``None`` No truncation is performed (default). Note: ``'none'`` is an alias for ``None`` that's kept for backward compatibility. ``'lastp'`` The last ``p`` non-singleton clusters formed in the linkage are the only non-leaf nodes in the linkage; they correspond to rows ``Z[n-p-2:end]`` in ``Z``. All other non-singleton clusters are contracted into leaf nodes. ``'level'`` No more than ``p`` levels of the dendrogram tree are displayed. A "level" includes all nodes with ``p`` merges from the final merge. Note: ``'mtica'`` is an alias for ``'level'`` that's kept for backward compatibility. color_threshold : double, optional For brevity, let :math:`t` be the ``color_threshold``. Colors all the descendent links below a cluster node :math:`k` the same color if :math:`k` is the first node below the cut threshold :math:`t`. All links connecting nodes with distances greater than or equal to the threshold are colored with de default matplotlib color ``'C0'``. If :math:`t` is less than or equal to zero, all nodes are colored ``'C0'``. If ``color_threshold`` is None or 'default', corresponding with MATLAB(TM) behavior, the threshold is set to ``0.7*max(Z[:,2])``. get_leaves : bool, optional Includes a list ``R['leaves']=H`` in the result dictionary. For each :math:`i`, ``H[i] == j``, cluster node ``j`` appears in position ``i`` in the left-to-right traversal of the leaves, where :math:`j < 2n-1` and :math:`i < n`. orientation : str, optional The direction to plot the dendrogram, which can be any of the following strings: ``'top'`` Plots the root at the top, and plot descendent links going downwards. (default). ``'bottom'`` Plots the root at the bottom, and plot descendent links going upwards. ``'left'`` Plots the root at the left, and plot descendent links going right. ``'right'`` Plots the root at the right, and plot descendent links going left. labels : ndarray, optional By default, ``labels`` is None so the index of the original observation is used to label the leaf nodes. Otherwise, this is an :math:`n`-sized sequence, with ``n == Z.shape[0] + 1``. The ``labels[i]`` value is the text to put under the :math:`i` th leaf node only if it corresponds to an original observation and not a non-singleton cluster. count_sort : str or bool, optional For each node n, the order (visually, from left-to-right) n's two descendent links are plotted is determined by this parameter, which can be any of the following values: ``False`` Nothing is done. ``'ascending'`` or ``True`` The child with the minimum number of original objects in its cluster is plotted first. ``'descending'`` The child with the maximum number of original objects in its cluster is plotted first. Note, ``distance_sort`` and ``count_sort`` cannot both be True. distance_sort : str or bool, optional For each node n, the order (visually, from left-to-right) n's two descendent links are plotted is determined by this parameter, which can be any of the following values: ``False`` Nothing is done. ``'ascending'`` or ``True`` The child with the minimum distance between its direct descendents is plotted first. ``'descending'`` The child with the maximum distance between its direct descendents is plotted first. Note ``distance_sort`` and ``count_sort`` cannot both be True. show_leaf_counts : bool, optional When True, leaf nodes representing :math:`k>1` original observation are labeled with the number of observations they contain in parentheses. no_plot : bool, optional When True, the final rendering is not performed. This is useful if only the data structures computed for the rendering are needed or if matplotlib is not available. no_labels : bool, optional When True, no labels appear next to the leaf nodes in the rendering of the dendrogram. leaf_rotation : double, optional Specifies the angle (in degrees) to rotate the leaf labels. When unspecified, the rotation is based on the number of nodes in the dendrogram (default is 0). leaf_font_size : int, optional Specifies the font size (in points) of the leaf labels. When unspecified, the size based on the number of nodes in the dendrogram. leaf_label_func : lambda or function, optional When ``leaf_label_func`` is a callable function, for each leaf with cluster index :math:`k < 2n-1`. The function is expected to return a string with the label for the leaf. Indices :math:`k < n` correspond to original observations while indices :math:`k \geq n` correspond to non-singleton clusters. For example, to label singletons with their node id and non-singletons with their id, count, and inconsistency coefficient, simply do:: # First define the leaf label function. def llf(id): if id < n: return str(id) else: return '[%d %d %1.2f]' % (id, count, R[n-id,3]) # The text for the leaf nodes is going to be big so force # a rotation of 90 degrees. dendrogram(Z, leaf_label_func=llf, leaf_rotation=90) # leaf_label_func can also be used together with ``truncate_mode``, # in which case you will get your leaves labeled after truncation: dendrogram(Z, leaf_label_func=llf, leaf_rotation=90, truncate_mode='level', p=2) show_contracted : bool, optional When True the heights of non-singleton nodes contracted into a leaf node are plotted as crosses along the link connecting that leaf node. This really is only useful when truncation is used (see ``truncate_mode`` parameter). link_color_func : callable, optional If given, `link_color_function` is called with each non-singleton id corresponding to each U-shaped link it will paint. The function is expected to return the color to paint the link, encoded as a matplotlib color string code. For example:: dendrogram(Z, link_color_func=lambda k: colors[k]) colors the direct links below each untruncated non-singleton node ``k`` using ``colors[k]``. ax : matplotlib Axes instance, optional If None and `no_plot` is not True, the dendrogram will be plotted on the current axes. Otherwise if `no_plot` is not True the dendrogram will be plotted on the given ``Axes`` instance. This can be useful if the dendrogram is part of a more complex figure. above_threshold_color : str, optional This matplotlib color string sets the color of the links above the color_threshold. The default is ``'C0'``. Returns ------- R : dict A dictionary of data structures computed to render the dendrogram. Its has the following keys: ``'color_list'`` A list of color names. The k'th element represents the color of the k'th link. ``'icoord'`` and ``'dcoord'`` Each of them is a list of lists. Let ``icoord = [I1, I2, ..., Ip]`` where ``Ik = [xk1, xk2, xk3, xk4]`` and ``dcoord = [D1, D2, ..., Dp]`` where ``Dk = [yk1, yk2, yk3, yk4]``, then the k'th link painted is ``(xk1, yk1)`` - ``(xk2, yk2)`` - ``(xk3, yk3)`` - ``(xk4, yk4)``. ``'ivl'`` A list of labels corresponding to the leaf nodes. ``'leaves'`` For each i, ``H[i] == j``, cluster node ``j`` appears in position ``i`` in the left-to-right traversal of the leaves, where :math:`j < 2n-1` and :math:`i < n`. If ``j`` is less than ``n``, the ``i``-th leaf node corresponds to an original observation. Otherwise, it corresponds to a non-singleton cluster. ``'leaves_color_list'`` A list of color names. The k'th element represents the color of the k'th leaf. See Also -------- linkage, set_link_color_palette Notes ----- It is expected that the distances in ``Z[:,2]`` be monotonic, otherwise crossings appear in the dendrogram. Examples -------- >>> import numpy as np >>> from scipy.cluster import hierarchy >>> import matplotlib.pyplot as plt A very basic example: >>> ytdist = np.array([662., 877., 255., 412., 996., 295., 468., 268., ... 400., 754., 564., 138., 219., 869., 669.]) >>> Z = hierarchy.linkage(ytdist, 'single') >>> plt.figure() >>> dn = hierarchy.dendrogram(Z) Now, plot in given axes, improve the color scheme and use both vertical and horizontal orientations: >>> hierarchy.set_link_color_palette(['m', 'c', 'y', 'k']) >>> fig, axes = plt.subplots(1, 2, figsize=(8, 3)) >>> dn1 = hierarchy.dendrogram(Z, ax=axes[0], above_threshold_color='y', ... orientation='top') >>> dn2 = hierarchy.dendrogram(Z, ax=axes[1], ... above_threshold_color='#bcbddc', ... orientation='right') >>> hierarchy.set_link_color_palette(None) # reset to default after use >>> plt.show() r\r)rgrrhrz>orientation must be one of 'top', 'left', 'bottom', or 'right'Nrrz.Dimensions of Z and labels must be consistent.Trrz$The second argument must be a number)lastpmticalevelnoneNzInvalid truncation mode.rrrFdefaultrgffffff?)icoorddcoordrleavesrr_ truncate_modecolor_threshold get_leavesrlabels count_sort distance_sortshow_leaf_countsrivrrro icoord_list dcoord_listlvs current_colorrcurrently_below_thresholdleaf_label_funcrlink_color_funcr)rrrrrleaves_color_listr7)rrrirrAttributeErrorrrrrrDr<rfinfr#r_dendrogram_calculate_infor_get_leaves_color_list)!rr_rrrrrrrrno_plotrrrrshow_contractedrrrrH len_labelsZsrorrrrrrrrrrs! r9rr8 sh  B#"%A<<01 1 )VJ 771:>Z 'MN Nat#4BG B 1 A!S5[! F>??EE344 q5AFA  6AKKJCM!& C:os#C#2i#?&&1a4/C/+cj 2A.45 55#5(5 5  5  55$5*5 A#'5 5 5 5 5 5 !5"$#5$%5&#<'5(()5*,+5,(-5.4/52  VVAadG_k31b+"J(6'4+</D  F4A6A Hq>* )aJ )s GG;:G;cdgt|dz}t|d|d|dD]H\}}}t||D]3\}}|dk(s |dzdk(s|dzd k(st|dz d z}|||<5J|S) NrrrrrrrrrrO)rr/rD)rrlink_xlink_y link_colorxiyi leaf_indexs r9rr sQx[!11&)!H+*+H+*+L/'; ;" FF+ ;HRSyb1fkb1fk ""gkb0 0:!*- ; ; r8c ||jt||j|r!|j|t|y|!|j|t||z y|jtt|yyr)rrDr#) rr_rorrrrrrs r9_append_singleton_leaf_noder sv   3q6   JJs1v. /! 6#a!e*-. 3s1v;'r8c F||jt|||r!|j|t|y| rL|jdttj|||z dftj zdzy|jdyy)N(rrB)r)rrDr#rfrErF) rr_rorrrrrrrs r9_append_nonsingleton_leaf_noder s   3q6   JJs1v. / 3RZZ!a%( 288%L!MMPSST 2r8c t||t|||z df||||t||t|||z df||||y)Nrr)_append_contraction_marks_subrIrrrrorrHs r9_append_contraction_marksr sT!!R1QUAX;)C"#%6<!!R1QUAX;)C"#%6|t| }/t|/ts td)|j|/n|j|)|$|*zdz |%|+z|(|.fS)+a Calculate the endpoints of the links as well as the labels for the the dendrogram rooted at the node with index i. iv is the independent variable value to plot the left-most leaf node below the root node i (if orientation='top', this would be the left-most x value where the plotting of this root node i and its descendents should begin). ivl is a list to store the labels of the leaf nodes. The leaf_label_func is called whenever ivl != None, labels == None, and leaf_label_func != None. When ivl != None and labels != None, the labels list is used only for labeling the leaf nodes. When ivl == None, no labels are generated for leaf nodes. When get_leaves==True, a list of leaves is built as they are visited in the dendrogram. Returns a tuple with l being the independent variable coordinate that corresponds to the midpoint of cluster to the left of cluster i if i is non-singleton, otherwise the independent coordinate of the leaf node if i is a leaf node. Returns ------- A tuple (left, w, h, md), where: * left is the independent variable coordinate of the center of the the U of the subtree * w is the amount of space used for the subtree (in independent variable units) * h is the height of the subtree in dependent variable units * md is the ``max(Z[*,2]``) for all nodes ``*`` below and including the target node. rz"Invalid singleton cluster count n.rzInvalid root cluster index i.rrg@g$@rrrr ascendingT descendingrr_rrrrrrrrrrrrorrrrrrrrrrFz6link_color_func must return a matplotlib color string!r7)rrirrrrIrrrrrrDrr#r)0rr_rrrrrrrrrrrrorrrmhrrrrrrrrrrHraaabnadanbdbuaubuivauwauahuamdhruivbuwbubhubmdmax_distvs0 r9rr sVb  BAv=>>Bw899 Q37Q ! !a%( A *1aE3+:Av+; =!,)!R#Xq!=NPRSHdC+ + U '1aS(7F DHdC- - ' ! q5UQY!a%( A *1aE3+:Av+; =!,)!R#Xq!=NPRSHdC+ + U '1aS(7F DHdC- -  1u#Aq!UC$3Q @S$S)) 1QUAX; #B 1QUAX; #B Qw rAvqy\ rAvqy\   Qw rAvqy\ rAvqy\  [ J$$6 7BBBB | # 7BBBB + %$)> 7BBBB , & 7BBBB   # 9 9 9' 9, 9"  9 $  9   9" 9( 9. 9 9 9! 9%& 9$ 9$ 9*- 9( 9" 9'@ 9 ,! 9"!)# 9"0A# 9$,% 9&#8' 9T3T, !a%( AO!3 ! $Q ' -a 01 4>> from scipy.cluster.hierarchy import fcluster, is_isomorphic >>> from scipy.cluster.hierarchy import single, complete >>> from scipy.spatial.distance import pdist Two flat cluster assignments can be isomorphic if they represent the same cluster assignment, with different labels. For example, we can use the `scipy.cluster.hierarchy.single` method and flatten the output to four clusters: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = single(pdist(X)) >>> T = fcluster(Z, 1, criterion='distance') >>> T array([3, 3, 3, 4, 4, 4, 2, 2, 2, 1, 1, 1], dtype=int32) We can then do the same using the `scipy.cluster.hierarchy.complete`: method: >>> Z = complete(pdist(X)) >>> T_ = fcluster(Z, 1.5, criterion='distance') >>> T_ array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=int32) As we can see, in both cases we obtain four clusters and all the data points are distributed in the same way - the only thing that changes are the flat cluster labels (3 => 1, 4 =>2, 2 =>3 and 4 =>1), so both cluster assignments are isomorphic: >>> is_isomorphic(T, T_) True rrzT1 must be one-dimensional.zT2 must be one-dimensional.z0T1 and T2 must have the same number of elements.ci}i}t||D]2\}}||vr||vry|||k7s |||k7s!y||vry|||<|||<4y)NFT)r/)T1T2d1d2t1t2s r9py_is_isomorphicz'is_isomorphic..py_is_isomorphic3so  "bk FBRxR< b6R<2b6R< r22 r8r7T)rrrCrsrH) rrrwrirrrzrr-r )rrrHr rs r9r#r#sH R B " B " B ww!|677 ww!|677 xx288KLL ..)2r!"&2 /C $33$s)3r8c t|}t|d|j|}t|dd|d}t j ||t ||jdf|jd|S) a Return the maximum distance between any non-singleton cluster. Parameters ---------- Z : ndarray The hierarchical clustering encoded as a matrix. See ``linkage`` for more information. Returns ------- maxdists : ndarray A ``(n-1)`` sized numpy array of doubles; ``MD[i]`` represents the maximum distance between any cluster (including singletons) below and including the node with index i. More specifically, ``MD[i] = Z[Q(i)-n, 2].max()`` where ``Q(i)`` is the set of all node indices below and including node i. See Also -------- linkage : for a description of what a linkage matrix is. is_monotonic : for testing for monotonicity of a linkage matrix. Examples -------- >>> from scipy.cluster.hierarchy import median, maxdists >>> from scipy.spatial.distance import pdist Given a linkage matrix ``Z``, `scipy.cluster.hierarchy.maxdists` computes for each new cluster generated (i.e., for each row of the linkage matrix) what is the maximum distance between any two child clusters. Due to the nature of hierarchical clustering, in many cases this is going to be just the distance between the two child clusters that were merged to form the current one - that is, Z[:,2]. However, for non-monotonic cluster assignments such as `scipy.cluster.hierarchy.median` clustering this is not always the case: There may be cluster formations were the distance between the two clusters merged is smaller than the distance between their children. We can see this in an example: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = median(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 2. , 12. , 1.11803399, 3. ], [ 5. , 13. , 1.11803399, 3. ], [ 8. , 15. , 1.11803399, 3. ], [11. , 14. , 1.11803399, 3. ], [18. , 19. , 3. , 6. ], [16. , 17. , 3.5 , 6. ], [20. , 21. , 3.25 , 12. ]]) >>> maxdists(Z) array([1. , 1. , 1. , 1. , 1.11803399, 1.11803399, 1.11803399, 1.11803399, 3. , 3.5 , 3.5 ]) Note that while the distance between the two clusters merged when creating the last cluster is 3.25, there are two children (clusters 16 and 17) whose distance is larger (3.5). Thus, `scipy.cluster.hierarchy.maxdists` returns 3.5 in this case. r\r]Trrc|rt|ddttj|jdf}t j |||jddz|S)NTrrrr)rrfrrrrget_max_dist_for_each_cluster)rrmMDs r9 cy_maxdistszmaxdists..cy_maxdistssN  at#" = XXqwwqzm $00B QG r8rrq)rrrtrrzrr rr)rrHrs r9r+r+IslT  B#RZZB7Aat#"5 >>+q=3C!"bjj#'B 00r8c t||}t|d|j|}t|d|j|}t|dd|t |dd||j d|j dk7r t dd }tj|||t||j df|jd| S) a Return the maximum inconsistency coefficient for each non-singleton cluster and its children. Parameters ---------- Z : ndarray The hierarchical clustering encoded as a matrix. See `linkage` for more information. R : ndarray The inconsistency matrix. Returns ------- MI : ndarray A monotonic ``(n-1)``-sized numpy array of doubles. See Also -------- linkage : for a description of what a linkage matrix is. inconsistent : for the creation of a inconsistency matrix. Examples -------- >>> from scipy.cluster.hierarchy import median, inconsistent, maxinconsts >>> from scipy.spatial.distance import pdist Given a data set ``X``, we can apply a clustering method to obtain a linkage matrix ``Z``. `scipy.cluster.hierarchy.inconsistent` can be also used to obtain the inconsistency matrix ``R`` associated to this clustering process: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = median(pdist(X)) >>> R = inconsistent(Z) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 2. , 12. , 1.11803399, 3. ], [ 5. , 13. , 1.11803399, 3. ], [ 8. , 15. , 1.11803399, 3. ], [11. , 14. , 1.11803399, 3. ], [18. , 19. , 3. , 6. ], [16. , 17. , 3.5 , 6. ], [20. , 21. , 3.25 , 12. ]]) >>> R array([[1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1.05901699, 0.08346263, 2. , 0.70710678], [1.05901699, 0.08346263, 2. , 0.70710678], [1.05901699, 0.08346263, 2. , 0.70710678], [1.05901699, 0.08346263, 2. , 0.70710678], [1.74535599, 1.08655358, 3. , 1.15470054], [1.91202266, 1.37522872, 3. , 1.15470054], [3.25 , 0.25 , 3. , 0. ]]) Here, `scipy.cluster.hierarchy.maxinconsts` can be used to compute the maximum value of the inconsistency statistic (the last column of ``R``) for each non-singleton cluster and its children: >>> maxinconsts(Z, R) array([0. , 0. , 0. , 0. , 0.70710678, 0.70710678, 0.70710678, 0.70710678, 1.15470054, 1.15470054, 1.15470054]) r\r]TrrrrQThe inconsistency matrix and linkage matrix each have a different number of rows.c|r&t|ddtt|ddt|jddz}tj|dz f}t j ||||d|S)NTrrrrrr)rrfrrrrrget_max_Rfield_for_each_cluster)rrrmroMIs r9cy_maxinconstsz#maxinconsts..cy_maxinconstssa  at#" = $SR 8 GGAJN XXq1uh 221aQB r8rq) rrrtrrrrrirzrr )rrrHrs r9r,r,sX A B#RZZB7A#RZZB7Aat#"5$SR0wwqzQWWQZ<= = >>.!Qq9I!"bjj#'B 00r8c t||}t|d|j|}t|d|j|}t|dd|t |dd|t |t s td|dks|d kDr td |jd|jdk7r td d }tj||||t||jdf|jd| S)a; Return the maximum statistic for each non-singleton cluster and its children. Parameters ---------- Z : array_like The hierarchical clustering encoded as a matrix. See `linkage` for more information. R : array_like The inconsistency matrix. i : int The column of `R` to use as the statistic. Returns ------- MR : ndarray Calculates the maximum statistic for the i'th column of the inconsistency matrix `R` for each non-singleton cluster node. ``MR[j]`` is the maximum over ``R[Q(j)-n, i]``, where ``Q(j)`` the set of all node ids corresponding to nodes below and including ``j``. See Also -------- linkage : for a description of what a linkage matrix is. inconsistent : for the creation of a inconsistency matrix. Examples -------- >>> from scipy.cluster.hierarchy import median, inconsistent, maxRstat >>> from scipy.spatial.distance import pdist Given a data set ``X``, we can apply a clustering method to obtain a linkage matrix ``Z``. `scipy.cluster.hierarchy.inconsistent` can be also used to obtain the inconsistency matrix ``R`` associated to this clustering process: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = median(pdist(X)) >>> R = inconsistent(Z) >>> R array([[1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1. , 0. , 1. , 0. ], [1.05901699, 0.08346263, 2. , 0.70710678], [1.05901699, 0.08346263, 2. , 0.70710678], [1.05901699, 0.08346263, 2. , 0.70710678], [1.05901699, 0.08346263, 2. , 0.70710678], [1.74535599, 1.08655358, 3. , 1.15470054], [1.91202266, 1.37522872, 3. , 1.15470054], [3.25 , 0.25 , 3. , 0. ]]) `scipy.cluster.hierarchy.maxRstat` can be used to compute the maximum value of each column of ``R``, for each non-singleton cluster and its children: >>> maxRstat(Z, R, 0) array([1. , 1. , 1. , 1. , 1.05901699, 1.05901699, 1.05901699, 1.05901699, 1.74535599, 1.91202266, 3.25 ]) >>> maxRstat(Z, R, 1) array([0. , 0. , 0. , 0. , 0.08346263, 0.08346263, 0.08346263, 0.08346263, 1.08655358, 1.37522872, 1.37522872]) >>> maxRstat(Z, R, 3) array([0. , 0. , 0. , 0. , 0.70710678, 0.70710678, 0.70710678, 0.70710678, 1.15470054, 1.15470054, 1.15470054]) r\r]Trrrz&The third argument must be an integer.rrz/i must be an integer between 0 and 3 inclusive.rc|r&t|ddtt|ddttj|jdf}|jddz}t j ||||||S)NTrrrrr)rrfrrrrrr)rrrrmMRros r9 cy_maxRstatzmaxRstat..cy_maxRstatese  at#" = $SR 8 XXqwwqzm $ GGAJN221aQB r8)rrmrrrCrsrH) rrrtrrrrDrrirrrzrr )rrrrHrs r9r*r*s\ A B#RZZB7A#RZZB7Aat#"5$SR0 a @AA1uAJKKwwqzQWWQZ<= = >>+q!q=;K!"bjj#'B 00r8)rKrLrr=c t||}t|d|j|}t|d|}t|dd||j|j k7r t d|jd|jdd zk7r td |jdd zfd }tj|||t|d |j |j fd| S)a5 Return the root nodes in a hierarchical clustering. Returns the root nodes in a hierarchical clustering corresponding to a cut defined by a flat cluster assignment vector ``T``. See the ``fcluster`` function for more information on the format of ``T``. For each flat cluster :math:`j` of the :math:`k` flat clusters represented in the n-sized flat cluster assignment vector ``T``, this function finds the lowest cluster node :math:`i` in the linkage tree Z, such that: * leaf descendants belong only to flat cluster j (i.e., ``T[p]==j`` for all :math:`p` in :math:`S(i)`, where :math:`S(i)` is the set of leaf ids of descendant leaf nodes with cluster node :math:`i`) * there does not exist a leaf that is not a descendant with :math:`i` that also belongs to cluster :math:`j` (i.e., ``T[q]!=j`` for all :math:`q` not in :math:`S(i)`). If this condition is violated, ``T`` is not a valid cluster assignment vector, and an exception will be thrown. Parameters ---------- Z : ndarray The hierarchical clustering encoded as a matrix. See `linkage` for more information. T : ndarray The flat cluster assignment vector. Returns ------- L : ndarray The leader linkage node id's stored as a k-element 1-D array, where ``k`` is the number of flat clusters found in ``T``. ``L[j]=i`` is the linkage cluster node id that is the leader of flat cluster with id M[j]. If ``i < n``, ``i`` corresponds to an original observation, otherwise it corresponds to a non-singleton cluster. M : ndarray The leader linkage node id's stored as a k-element 1-D array, where ``k`` is the number of flat clusters found in ``T``. This allows the set of flat cluster ids to be any arbitrary set of ``k`` integers. For example: if ``L[3]=2`` and ``M[3]=8``, the flat cluster with id 8's leader is linkage node 2. See Also -------- fcluster : for the creation of flat cluster assignments. Examples -------- >>> from scipy.cluster.hierarchy import ward, fcluster, leaders >>> from scipy.spatial.distance import pdist Given a linkage matrix ``Z`` - obtained after apply a clustering method to a dataset ``X`` - and a flat cluster assignment array ``T``: >>> X = [[0, 0], [0, 1], [1, 0], ... [0, 4], [0, 3], [1, 4], ... [4, 0], [3, 0], [4, 1], ... [4, 4], [3, 4], [4, 3]] >>> Z = ward(pdist(X)) >>> Z array([[ 0. , 1. , 1. , 2. ], [ 3. , 4. , 1. , 2. ], [ 6. , 7. , 1. , 2. ], [ 9. , 10. , 1. , 2. ], [ 2. , 12. , 1.29099445, 3. ], [ 5. , 13. , 1.29099445, 3. ], [ 8. , 14. , 1.29099445, 3. ], [11. , 15. , 1.29099445, 3. ], [16. , 17. , 5.77350269, 6. ], [18. , 19. , 5.77350269, 6. ], [20. , 21. , 8.16496581, 12. ]]) >>> T = fcluster(Z, 3, criterion='distance') >>> T array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=int32) `scipy.cluster.hierarchy.leaders` returns the indices of the nodes in the dendrogram that are the leaders of each flat cluster: >>> L, M = leaders(Z, T) >>> L array([16, 17, 18, 19], dtype=int32) (remember that indices 0-11 point to the 12 data points in ``X``, whereas indices 12-22 point to the 11 rows of ``Z``) `scipy.cluster.hierarchy.leaders` also returns the indices of the flat clusters in ``T``: >>> M array([1, 2, 3, 4], dtype=int32) Notes ----- *Array API support (experimental):* This function returns arrays with data-dependent shape. In JAX, at the moment of writing this makes it impossible to execute it inside `@jax.jit`. r\r]rTrrz%T must be a 1-D array of dtype int32.rrz!Mismatch: len(T)!=Z.shape[0] + 1.cb|rt|ddtttj|}tj |tj }tj |tj }tj|||||}|dk\rtd|d||fS)NTrrrBrzLT is not a valid assignment vector. Error found when examining linkage node z (< 2n-1).) rrfrDrzr*rrHrr'ri)rr~rmrrWMr?n_obss r9 cy_leaderszleaders..cy_leaderss  at#" =Q( HHZrxx 0 HHZrxx 0   q!Q:u = 6<<=3jJK K!t r8)rrrq) rrrtrrCrHrrrrirzrr )rr~rHrrs @r9r'r'tsZ A B#RZZB7A#"%Aat#"5ww"((?@@wwqzQWWQZ!^#<== GGAJNE  >>*a]15E 3BHHbhh;O#'B 00r8)rrNF)NN)F)rNr)r)FFN)FFNF)r"rNN)r"rNrrN)NNNNre)rNNNTrgNFFTFFNNNFNNre)\rr=rrrnumpyrfrrrscipy.spatial.distancespatialrxscipy._lib._array_apirrr r r r scipy._lib._disjoint_setr scipy._lib.array_api_extra_libarray_api_extrarzrurv__all__ UserWarningr3r@rI lazy_cythonrrrrrrrr)r _cnode_barer _cnode_typerrr1r.rr"r!r0r$r%rr&rr$r-rrr r(rr]rcrkeysr\sortrbrZr`rdrrrr/rrrrrrrr#r+r,r*r'r7r8r9r/s[B 0))LL0(( Q1!;3  [ G0  -- .0  K; K;\ O= O=d O< OT?Tn48d9dN IN INX K K\ I0 I0X n& n&bQ:Q:hS)S)l68RST`1U`1F#L68RST^AU^AB;?"'+\%*5e2j$$N6767r$}49w9wt$}49!/AEb 9b J ?0 ?0R2r2r1b!RVVQ7 !RRVVR (  ((*+JOO-. " !<@;?48C/NS23G T$$G<@:>GK>B37?C%) u Hu p "(4$<@02vv$+0*/u05#'1$D#'U-1d9=/3115/359c9L$68RSU`4U`4F V0 V0r `0 `0F h0 h0X$}e:;=F0=F0r8