Ball divergence is a non-parametric two-sample statistical test method for metric spaces. It measures the difference between two population probability distributions by integrating the squared difference of the two measures over all balls in the space.[1] Its value is therefore zero if and only if the two probability measures coincide. Like other common non-parametric test methods, ball divergence obtains its p-value through permutation tests.
Distinguishing between two unknown samples in multivariate data is an important and challenging task. Previously, a common non-parametric two-sample test method was the energy distance test.[2] However, the effectiveness of the energy distance test relies on moment conditions, making it less effective for extremely imbalanced data (where one sample size is disproportionately larger than the other). To address this issue, Chen, Dou, and Qiao proposed a non-parametric multivariate test method using ensemble subsampling nearest neighbors (ESS-NN) for imbalanced data.[3] This method handles imbalanced data effectively and increases the test's power by fixing the size of the smaller group while increasing the size of the larger group.
Additionally, Gretton et al. introduced the maximum mean discrepancy (MMD) for the two-sample problem.[4] Both methods require additional parameter settings, such as the number of groups \(k\) in ESS-NN and the kernel function in MMD. Ball divergence addresses the two-sample test problem for extremely imbalanced samples without introducing other parameters.
We first define the population ball divergence. Suppose that \((V, \|\cdot\|)\) is a Banach space, whose norm induces a metric \(\rho(u, v) = \|u - v\|\) for two points \(u, v \in V\). We write \(\bar{B}(u, r)\) for the closed ball with center \(u\) and radius \(r\). Then the population ball divergence of two Borel probability measures \(\mu\) and \(\nu\) is
\[ D(\mu, \nu) = \iint_{V \times V} \left[\mu - \nu\right]^2\!\left(\bar{B}(u, \rho(u, v))\right) \left[\mu(du)\,\mu(dv) + \nu(du)\,\nu(dv)\right]. \]
For convenience, we can decompose the ball divergence into two parts:
\[ A = \iint_{V \times V} \left[\mu - \nu\right]^2\!\left(\bar{B}(u, \rho(u, v))\right) \mu(du)\,\mu(dv) \]
and
\[ C = \iint_{V \times V} \left[\mu - \nu\right]^2\!\left(\bar{B}(u, \rho(u, v))\right) \nu(du)\,\nu(dv). \]
Thus
\[ D(\mu, \nu) = A + C. \]
Next, we introduce the sample ball divergence. Let \(\delta(x, y, z) = I\!\left(z \in \bar{B}(x, \rho(x, y))\right)\) denote whether the point \(z\) is located in the closed ball \(\bar{B}(x, \rho(x, y))\). Given two independent samples \(\{X_1, \ldots, X_n\}\) from \(\mu\) and \(\{Y_1, \ldots, Y_m\}\) from \(\nu\), define
\[ A^X_{ij} = \frac{1}{n} \sum_{u=1}^{n} \delta(X_i, X_j, X_u), \qquad A^Y_{ij} = \frac{1}{m} \sum_{v=1}^{m} \delta(X_i, X_j, Y_v), \]
where \(A^X_{ij}\) is the proportion of the sample from the probability measure \(\mu\) located in the ball \(\bar{B}(X_i, \rho(X_i, X_j))\), and \(A^Y_{ij}\) is the proportion of the sample from the probability measure \(\nu\) located in that ball. Meanwhile,
\[ C^X_{kl} = \frac{1}{n} \sum_{u=1}^{n} \delta(Y_k, Y_l, X_u), \qquad C^Y_{kl} = \frac{1}{m} \sum_{v=1}^{m} \delta(Y_k, Y_l, Y_v) \]
are the proportions of the samples from \(\mu\) and \(\nu\) located in the ball \(\bar{B}(Y_k, \rho(Y_k, Y_l))\). The sample versions of \(A\) and \(C\) are
\[ A_{n,m} = \frac{1}{n^2} \sum_{i,j=1}^{n} \left(A^X_{ij} - A^Y_{ij}\right)^2, \qquad C_{n,m} = \frac{1}{m^2} \sum_{k,l=1}^{m} \left(C^X_{kl} - C^Y_{kl}\right)^2, \]
and the sample ball divergence is \(D_{n,m} = A_{n,m} + C_{n,m}\).
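The proportions above can be computed directly from pairwise distances. The following is a minimal NumPy sketch of the sample statistic for Euclidean data; the function name and vectorized layout are illustrative rather than taken from the original paper (a reference implementation is available in the Ball package for R).

```python
import numpy as np

def ball_divergence(X, Y):
    """Sample ball divergence D_{n,m} between samples X (n x d) and Y (m x d).

    Illustrative sketch assuming the Euclidean metric rho(u, v) = ||u - v||;
    A^X, A^Y, C^X, C^Y follow the proportion definitions in the text.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)

    # Pairwise distance matrices.
    dXX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # n x n
    dXY = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # n x m
    dYY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)  # m x m

    # delta(X_i, X_j, z) = 1{ rho(X_i, z) <= rho(X_i, X_j) }, i.e. z lies in
    # the closed ball centered at X_i with radius rho(X_i, X_j).
    AX = (dXX[:, None, :] <= dXX[:, :, None]).mean(axis=2)   # proportion of X-points
    AY = (dXY[:, None, :] <= dXX[:, :, None]).mean(axis=2)   # proportion of Y-points
    # Balls centered at Y_k with radius rho(Y_k, Y_l).
    CX = (dXY.T[:, None, :] <= dYY[:, :, None]).mean(axis=2)  # proportion of X-points
    CY = (dYY[:, None, :] <= dYY[:, :, None]).mean(axis=2)    # proportion of Y-points

    # A_{n,m}, C_{n,m} are mean squared differences of the proportions.
    A_nm = ((AX - AY) ** 2).mean()
    C_nm = ((CX - CY) ** 2).mean()
    return A_nm + C_nm
```

When the two samples are identical, every ball contains the same proportions of both samples, so the statistic is exactly zero; well-separated samples yield a strictly positive value.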
1. Given two Borel probability measures \(\mu\) and \(\nu\) on a finite-dimensional Banach space \(V\), we have \(D(\mu, \nu) \geq 0\), where the equality holds if and only if \(\mu = \nu\).
2. Suppose \(\mu\) and \(\nu\) are two Borel probability measures on a separable Banach space \(V\). Denote their supports by \(S_\mu\) and \(S_\nu\). If \(S_\mu \subseteq \bar{S}_\nu\) or \(S_\nu \subseteq \bar{S}_\mu\), then \(D(\mu, \nu) \geq 0\), where the equality holds if and only if \(\mu = \nu\).
3. Consistency: we have \(D_{n,m} \to D(\mu, \nu)\) almost surely, where \(n/(n+m) \to \tau\) for some \(\tau \in [0, 1]\).
The limiting null distribution of the statistic is characterized by a symmetric kernel function \(h(x, y)\) associated with \(D_{n,m}\). The function \(h\) has the spectral decomposition
\[ h(x, y) = \sum_{k=1}^{\infty} \lambda_k \phi_k(x) \phi_k(y), \]
where \(\lambda_k\) and \(\phi_k\) are the eigenvalues and eigenfunctions of \(h\). For \(k = 1, 2, \ldots\), let \(Z_k\) be i.i.d. \(N(0, 1)\) random variables.
4. Asymptotic distribution under the null hypothesis: suppose that both \(n \to \infty\) and \(m \to \infty\) in such a way that \(n/(n+m) \to \tau \in (0, 1)\). Under the null hypothesis, we have
\[ \frac{nm}{n+m} D_{n,m} \xrightarrow{d} \sum_{k=1}^{\infty} \lambda_k Z_k^2, \]
a weighted sum of independent \(\chi^2_1\) random variables.
5. Distribution under the alternative hypothesis: suppose that both \(n \to \infty\) and \(m \to \infty\) in such a way that \(n/(n+m) \to \tau \in (0, 1)\). Under the alternative hypothesis, \(\sqrt{\tfrac{nm}{n+m}}\left(D_{n,m} - D(\mu, \nu)\right)\) converges in distribution to a centered normal random variable whose variance depends on \(\mu\), \(\nu\), and \(\tau\).
6. The test based on \(\tfrac{nm}{n+m} D_{n,m}\) is consistent against any general alternative \(H_1: \mu \neq \nu\). More specifically, under \(H_1\) the statistic \(\tfrac{nm}{n+m} D_{n,m}\) diverges to infinity in probability, so the power of the test converges to one as \(n, m \to \infty\).
More importantly, this limiting behavior can also be expressed in a form that is independent of \(\tau\), which underlies the test's suitability for extremely imbalanced samples.
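As noted in the introduction, ball divergence obtains its p-value through permutation tests: the pooled sample is repeatedly re-split into two groups of the original sizes, and the statistic is recomputed to build a null reference distribution. The sketch below assumes Euclidean data; the function names `bd_statistic` and `bd_permutation_test` are illustrative, not from the original paper.

```python
import numpy as np

def bd_statistic(X, Y):
    """Sample ball divergence D_{n,m} (Euclidean metric), per the definitions above."""
    n = len(X)
    Z = np.vstack([np.asarray(X, float), np.asarray(Y, float)])
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    dXX, dXY, dYY = D[:n, :n], D[:n, n:], D[n:, n:]
    # Proportions of each sample inside balls centered at X_i (radius rho(X_i, X_j)).
    AX = (dXX[:, None, :] <= dXX[:, :, None]).mean(axis=2)
    AY = (dXY[:, None, :] <= dXX[:, :, None]).mean(axis=2)
    # Proportions inside balls centered at Y_k (radius rho(Y_k, Y_l)).
    CX = (dXY.T[:, None, :] <= dYY[:, :, None]).mean(axis=2)
    CY = (dYY[:, None, :] <= dYY[:, :, None]).mean(axis=2)
    return ((AX - AY) ** 2).mean() + ((CX - CY) ** 2).mean()

def bd_permutation_test(X, Y, n_perm=199, seed=0):
    """Permutation p-value for the two-sample test based on ball divergence."""
    rng = np.random.default_rng(seed)
    n = len(X)
    Z = np.vstack([np.asarray(X, float), np.asarray(Y, float)])
    observed = bd_statistic(X, Y)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))          # shuffle group labels
        if bd_statistic(Z[idx[:n]], Z[idx[n:]]) >= observed:
            count += 1
    # Add-one correction so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)
```

For two clearly separated samples, the observed statistic typically exceeds all permuted statistics, giving the smallest attainable p-value \(1/(n_{\text{perm}}+1)\).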