`L iKHdZddlmZddlZddlmZmZddlm Z d dZ d dZ y) z Common code for all metrics. ) combinationsN) check_arraycheck_consistent_length)type_of_targetc d}||vrtdj|t|}|dvrtdj||dk(r ||||St|||t |}t |}d}|}d} |d k(rF|#t j ||jd}|j}|j}n|d k(r~|@t jt j|t j|d d } nt j|d } t j| jdr y |dk(r|} d}d }|jdk(r|jd }|jdk(r|jd }|j|} t j| f} t| D]T} |j!| g| j} |j!| g| j}|| ||| | <V|?| t j"| } d | | d k(<t%t j&| | S| S)aMAverage a binary metric for multilabel classification. Parameters ---------- y_true : array, shape = [n_samples] or [n_samples, n_classes] True binary labels in binary label indicators. y_score : array, shape = [n_samples] or [n_samples, n_classes] Target scores, can either be probability estimates of the positive class, confidence values, or binary decisions. average : {None, 'micro', 'macro', 'samples', 'weighted'}, default='macro' If ``None``, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: ``'micro'``: Calculate metrics globally by considering each element of the label indicator matrix as a label. ``'macro'``: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. ``'weighted'``: Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). ``'samples'``: Calculate metrics for each instance, and find their average. Will be ignored when ``y_true`` is binary. sample_weight : array-like of shape (n_samples,), default=None Sample weights. binary_metric : callable, returns shape [n_classes] The binary metric function to use. Returns ------- score : float or array of shape [n_classes] If not ``None``, average the score, else return the score for each classes. 
)Nmicromacroweightedsampleszaverage has to be one of {0})binaryzmultilabel-indicatorz{0} format is not supportedr ) sample_weightNr r )rr)axisgr weights) ValueErrorformatrrrnprepeatshaperavelsummultiplyreshapeisclosendimzerosrangetakeasarrayfloataverage) binary_metricy_truey_scorer$raverage_optionsy_typenot_average_axis score_weightaverage_weight n_classesscorecy_true_c y_score_cs [/mnt/ssd/data/python-lab/Trading/venv/lib/python3.12/site-packages/sklearn/metrics/_base.py_average_binary_scorer3sQVFOo%7>>OPP F #F 776==fEFF VWMJJFG];  F'"G LN'  #99\6<<?CL--/ J   #VV FBJJ|W$EFQN VVF3N ::n((*C 0 I %  {{a(||q//'* ./I HHi\ "E 9 R;;s)9;:@@BLL!+;L<BBD  9LQaR   % ZZ7N)*E.A% &RZZ~>?? c"t||tj|}|jd}||dz zdz}tj|}|dk(}|rtj|nd} t t |dD]s\} \} } || k(} || k(}tj| |}|rtj|| | <| |}||}||||| f}||||| f}||zdz || <utj|| S)aLAverage one-versus-one scores for multiclass classification. Uses the binary metric for one-vs-one multiclass classification, where the score is computed according to the Hand & Till (2001) algorithm. Parameters ---------- binary_metric : callable The binary metric function to use that accepts the following as input: y_true_target : array, shape = [n_samples_target] Some sub-array of y_true for a pair of classes designated positive and negative in the one-vs-one scheme. y_score_target : array, shape = [n_samples_target] Scores corresponding to the probability estimates of a sample belonging to the designated positive class label y_true : array-like of shape (n_samples,) True multiclass labels. y_score : array-like of shape (n_samples, n_classes) Target scores corresponding to probability estimates of a sample belonging to a particular class. average : {'macro', 'weighted'}, default='macro' Determines the type of averaging performed on the pairwise binary metric scores: ``'macro'``: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. Classes are assumed to be uniformly distributed. 
``'weighted'``: Calculate metrics for each label, taking into account the prevalence of the classes. Returns ------- score : float Average of the pairwise binary metric scores. rrrr Nr) rruniquerempty enumerater logical_orr$)r%r&r'r$ y_true_uniquer-n_pairs pair_scores is_weighted prevalenceixaba_maskb_maskab_maska_trueb_true a_true_score b_true_scores r2_average_multiclass_ovo_scorerI~s(PFG,IIf%M##A&I9q=)Q.G((7#KZ'K&1'"tJ ]A >? < FQ11--/ ZZ0JrN$VWWaZ-@A $VWWaZ-@A ',6!; B < ::k: 66r4)N)r ) __doc__ itertoolsrnumpyrutilsrrutils.multiclassrr3rIr4r2rPs%#8-jZC7r4
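The macro/weighted distinction documented above can be illustrated with a small self-contained sketch. This is not the private sklearn helper itself: `macro_weighted_demo` is a hypothetical function name, and a thresholded-accuracy metric stands in for the generic `binary_metric` callable; only the averaging logic (unweighted mean vs. support-weighted `np.average`) mirrors the documented behaviour.

```python
import numpy as np

def macro_weighted_demo(y_true, y_score, average="macro"):
    # Per-class binary metric: accuracy of scores thresholded at 0.5,
    # standing in for an arbitrary binary_metric callable.
    per_class = np.array([
        np.mean(y_true[:, c] == (y_score[:, c] >= 0.5))
        for c in range(y_true.shape[1])
    ])
    if average == "macro":
        # Unweighted mean: ignores label imbalance.
        return per_class.mean()
    # 'weighted': weight each class by its support
    # (number of true instances for that label).
    support = y_true.sum(axis=0)
    return np.average(per_class, weights=support)

# Multilabel indicator matrix: class 0 has support 3, class 1 has support 2.
y_true = np.array([[1, 0], [1, 0], [1, 1], [0, 1]])
y_score = np.array([[0.9, 0.2], [0.8, 0.6], [0.7, 0.7], [0.1, 0.9]])

macro = macro_weighted_demo(y_true, y_score, "macro")      # (1.0 + 0.75) / 2
weighted = macro_weighted_demo(y_true, y_score, "weighted")  # (3*1.0 + 2*0.75) / 5
```

With these toy inputs the per-class scores are 1.0 and 0.75, so the macro average is 0.875 while the support-weighted average is 0.9, showing how the better-supported class pulls the weighted score upward.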