import numpy as np
from functools import partial
from scipy import stats


def _bws_input_validation(x, y, alternative, method):
    ''' Input validation and standardization for bws test'''
    x, y = np.atleast_1d(x, y)
    if x.ndim > 1 or y.ndim > 1:
        raise ValueError('`x` and `y` must be exactly one-dimensional.')
    if np.isnan(x).any() or np.isnan(y).any():
        raise ValueError('`x` and `y` must not contain NaNs.')
    if np.size(x) == 0 or np.size(y) == 0:
        raise ValueError('`x` and `y` must be of nonzero size.')

    # Rank the pooled sample, then split the ranks back into the two samples.
    z = stats.rankdata(np.concatenate((x, y)))
    x, y = z[:len(x)], z[len(x):]

    alternatives = {'two-sided', 'less', 'greater'}
    alternative = alternative.lower()
    if alternative not in alternatives:
        raise ValueError(f'`alternative` must be one of {alternatives}.')

    method = stats.PermutationMethod() if method is None else method
    if not isinstance(method, stats.PermutationMethod):
        raise ValueError('`method` must be an instance of '
                         '`scipy.stats.PermutationMethod`')

    return x, y, alternative, method


def _bws_statistic(x, y, alternative, axis):
    '''Compute the BWS test statistic for two independent samples'''
    # `x` and `y` hold the pooled-sample ranks produced by
    # `_bws_input_validation`.  Accepting `axis` lets `permutation_test`
    # evaluate the statistic in a vectorized fashion.
    Ri, Hj = np.sort(x, axis=axis), np.sort(y, axis=axis)
    n, m = Ri.shape[axis], Hj.shape[axis]
    i, j = np.arange(1, n + 1), np.arange(1, m + 1)

    Bx_num = Ri - (m + n)/n * i
    By_num = Hj - (m + n)/m * j

    if alternative == 'two-sided':
        Bx_num *= Bx_num
        By_num *= By_num
    else:
        Bx_num = np.abs(Bx_num)
        By_num = np.abs(By_num)

    Bx_den = i/(n + 1) * (1 - i/(n + 1)) * m*(m + n)/n
    By_den = j/(m + 1) * (1 - j/(m + 1)) * n*(m + n)/m

    Bx = 1/n * np.sum(Bx_num/Bx_den, axis=axis)
    By = 1/m * np.sum(By_num/By_den, axis=axis)

    B = (Bx + By)/2 if alternative == 'two-sided' else (Bx - By)/2

    return B


def bws_test(x, y, *, alternative="two-sided", method=None):
    '''Perform the Baumgartner-Weiss-Schindler test on two independent samples.

    The Baumgartner-Weiss-Schindler (BWS) test is a nonparametric test of
    the null hypothesis that the distribution underlying sample `x` is the
    same as the distribution underlying sample `y`. Unlike the
    Kolmogorov-Smirnov, Wilcoxon, and Cramér-von Mises tests, the BWS test
    weights the integral by the variance of the difference in cumulative
    distribution functions (CDFs), emphasizing the tails of the
    distributions, which increases the power of the test in many
    applications.

    Parameters
    ----------
    x, y : array-like
        1-d arrays of samples.
    alternative : {'two-sided', 'less', 'greater'}, optional
        Defines the alternative hypothesis. Default is 'two-sided'.
        Let *F(u)* and *G(u)* be the cumulative distribution functions of
        the distributions underlying `x` and `y`, respectively. Then the
        following alternative hypotheses are available:

        * 'two-sided': the distributions are not equal, i.e. *F(u) ≠ G(u)*
          for at least one *u*.
        * 'less': the distribution underlying `x` is stochastically less
          than the distribution underlying `y`, i.e. *F(u) >= G(u)* for
          all *u*.
        * 'greater': the distribution underlying `x` is stochastically
          greater than the distribution underlying `y`, i.e.
          *F(u) <= G(u)* for all *u*.

        Under a more restrictive set of assumptions, the alternative
        hypotheses can be expressed in terms of the locations of the
        distributions; see [2]_ section 5.1.
    method : PermutationMethod, optional
        Configures the method used to compute the p-value. The default is
        a `PermutationMethod` object with default settings.

    Returns
    -------
    res : PermutationTestResult
        An object with attributes:

        statistic : float
            The observed test statistic of the data.
        pvalue : float
            The p-value for the given alternative.
        null_distribution : ndarray
            The values of the test statistic generated under the null
            hypothesis.

    See also
    --------
    scipy.stats.wilcoxon, scipy.stats.mannwhitneyu, scipy.stats.ttest_ind

    Notes
    -----
    When ``alternative=='two-sided'``, the statistic is defined by the
    equations given in [1]_ Section 2.

    This statistic is not appropriate for one-sided alternatives; in that
    case, the statistic is the *negative* of that given by the equations
    in [1]_ Section 2. Consequently, when the distribution of the first
    sample is stochastically greater than that of the second sample, the
    statistic will tend to be positive.

    References
    ----------
    .. [1] Neuhäuser, M. (2005). Exact Tests Based on the
           Baumgartner-Weiss-Schindler Statistic: A Survey. Statistical
           Papers, 46(1), 1-29.
    .. [2] Fay, M. P., & Proschan, M. A. (2010). Wilcoxon-Mann-Whitney or
           t-test? On assumptions for hypothesis tests and multiple
           interpretations of decision rules. Statistics Surveys, 4, 1.

    Examples
    --------
    We follow the example of table 3 in [1]_: Fourteen children were
    divided randomly into two groups. Their ranks at performing a specific
    test are as follows.

    >>> import numpy as np
    >>> x = [1, 2, 3, 4, 6, 7, 8]
    >>> y = [5, 9, 10, 11, 12, 13, 14]

    We use the BWS test to assess whether there is a statistically
    significant difference between the two groups. The null hypothesis is
    that there is no difference in the distributions of performance
    between the two groups. We decide that a significance level of 1% is
    required to reject the null hypothesis in favor of the alternative
    that the distributions are different. Since the number of samples is
    very small, we can compare the observed test statistic against the
    *exact* distribution of the test statistic under the null hypothesis.

    >>> from scipy.stats import bws_test
    >>> res = bws_test(x, y)
    >>> print(res.statistic)
    5.132167152575315

    This agrees with :math:`B = 5.132` reported in [1]_. The *p*-value
    produced by `bws_test` also agrees with :math:`p = 0.0029` reported
    in [1]_.

    >>> print(res.pvalue)
    0.002913752913752914

    Because the p-value is below our threshold of 1%, we take this as
    evidence against the null hypothesis in favor of the alternative that
    there is a difference in performance between the two groups.

    '''
    x, y, alternative, method = _bws_input_validation(x, y, alternative,
                                                      method)
    bws_statistic = partial(_bws_statistic, alternative=alternative)

    # For 'two-sided' and 'greater', evidence against the null corresponds
    # to large values of the statistic; for 'less', to small (negative)
    # values, so the permutation test is run with the matching tail.
    permutation_alternative = 'less' if alternative == 'less' else 'greater'
    res = stats.permutation_test((x, y), bws_statistic,
                                 alternative=permutation_alternative,
                                 **method._asdict())

    return res
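

# Illustrative usage sketch, not part of the library API: it reproduces the
# docstring example above and shows how a pre-configured
# `scipy.stats.PermutationMethod` can be passed via `method`.  The specific
# settings used here (e.g. `n_resamples=999`) are assumptions chosen only
# for demonstration, not recommendations.
if __name__ == '__main__':
    x = [1, 2, 3, 4, 6, 7, 8]
    y = [5, 9, 10, 11, 12, 13, 14]

    # Default method: with only C(14, 7) = 3432 distinct partitions, fewer
    # than the default 9999 resamples, the null distribution is enumerated
    # exactly.
    res = bws_test(x, y)
    print(res.statistic, res.pvalue)  # approximately 5.1322, 0.0029

    # Randomized permutation test with a smaller resample budget and a
    # one-sided alternative.
    method = stats.PermutationMethod(n_resamples=999)
    res_one_sided = bws_test(x, y, alternative='less', method=method)
    print(res_one_sided.statistic, res_one_sided.pvalue)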