Behrens–Fisher problem

Unsolved problem in statistics:
Is an approximation analogous to Fisher's argument necessary to solve the Behrens–Fisher problem?

In statistics, the Behrens–Fisher problem, named after Walter-Ulrich Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

Specification

One difficulty with discussing the Behrens–Fisher problem and proposed solutions to it is that there are many different interpretations of what is meant by "the Behrens–Fisher problem". These differences involve not only what counts as a relevant solution, but even the basic statement of the context being considered.

Context

Let X1, ..., Xn and Y1, ..., Ym be i.i.d. samples from two populations which both come from the same location–scale family of distributions. The scale parameters are assumed to be unknown and not necessarily equal, and the problem is to assess whether the location parameters can reasonably be treated as equal. Lehmann[1] states that "the Behrens–Fisher problem" is used both for this general form of model when the family of distributions is arbitrary, and for when the restriction to a normal distribution is made. While Lehmann discusses a number of approaches to the more general problem, mainly based on nonparametrics,[2] most other sources appear to use "the Behrens–Fisher problem" to refer only to the case where the distribution is assumed to be normal: most of this article makes this assumption.
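
For the normal case that the rest of this article assumes, the setup can be written out explicitly; the notation below is introduced here for concreteness and follows the sample sizes n and m above:

    X_1, \ldots, X_n \sim N(\mu_1, \sigma_1^2), \qquad Y_1, \ldots, Y_m \sim N(\mu_2, \sigma_2^2),

with μ₁, μ₂, σ₁² and σ₂² all unknown and σ₁², σ₂² not assumed equal. The problem is to test H₀: μ₁ = μ₂, or, for interval estimation, to make inferences about μ₁ − μ₂.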

Requirements of solutions

Solutions to the Behrens–Fisher problem have been presented that make use of either a classical or a Bayesian inference point of view, and either solution would be judged notionally invalid from the other point of view. If consideration is restricted to classical statistical inference only, it is possible to seek solutions to the inference problem that are simple to apply in a practical sense, giving preference to this simplicity over any inaccuracy in the corresponding probability statements. Where exactness of the significance levels of statistical tests is required, there may be an additional requirement that the procedure should make maximum use of the statistical information in the dataset. It is well known that an exact test can be obtained by randomly discarding data from the larger dataset until the sample sizes are equal, assembling the data in pairs and taking differences, and then using an ordinary t-test to test for the mean difference being zero: clearly this would not be "optimal" in any sense.
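
The pairing device just described is easy to write down. The sketch below is a minimal illustration only (it is not a recommended procedure, for the reason just given); it assumes NumPy and SciPy are available, and the data and variable names are invented for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two independent normal samples with unequal variances and unequal sizes
# (illustrative data only).
x = rng.normal(loc=10.0, scale=1.0, size=25)
y = rng.normal(loc=10.5, scale=3.0, size=40)

# Randomly discard observations until both samples have the same size.
n = min(len(x), len(y))
x_sub = rng.choice(x, size=n, replace=False)
y_sub = rng.choice(y, size=n, replace=False)

# Pair the observations, take differences, and apply an ordinary one-sample
# t-test of "mean difference = 0".  Each difference is normal with mean
# mu1 - mu2 whatever the two variances are, so the test is exact under H0,
# but it throws away data and so is not optimal.
diffs = x_sub - y_sub
t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")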

The task of specifying interval estimates for this problem is one where a frequentist approach fails to provide an exact solution, although some approximations are available. Standard Bayesian approaches also fail to provide an answer that can be expressed as straightforward simple formulae, but modern computational methods of Bayesian analysis do allow essentially exact solutions to be found.[citation needed] Thus study of the problem can be used to elucidate the differences between the frequentist and Bayesian approaches to interval estimation.

Outline of different approaches

Behrens and Fisher approach

Ronald Fisher in 1935 introduced fiducial inference[3][4] in order to apply it to this problem. He referred to an earlier paper by Walter-Ulrich Behrens from 1929. Behrens and Fisher proposed to find the probability distribution of

    T \equiv \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}

where x̄₁ and x̄₂ are the two sample means, s₁ and s₂ are their standard deviations, and n₁ and n₂ are the sample sizes. See Behrens–Fisher distribution. Fisher approximated the distribution of this by ignoring the random variation of the relative sizes of the standard deviations,

    \frac{s_1/\sqrt{n_1}}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}.
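
The Behrens–Fisher distribution has no simple closed form, but it is straightforward to approximate by simulation: the fiducial distribution of each mean μᵢ is that of x̄ᵢ + (sᵢ/√nᵢ)·Tᵢ with Tᵢ a Student t variable on nᵢ − 1 degrees of freedom, and under the standard noninformative priors the Bayesian posterior for the difference takes the same form. The sketch below is an illustration only, using invented summary statistics; it assumes NumPy and SciPy are available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical summary statistics: sample means, sample SDs, sample sizes.
xbar1, s1, n1 = 10.0, 1.2, 25
xbar2, s2, n2 = 10.9, 3.1, 40
se1, se2 = s1 / np.sqrt(n1), s2 / np.sqrt(n2)

# Fiducial (equivalently, noninformative-prior posterior) draws for each mean.
draws = 200_000
mu1 = xbar1 + se1 * stats.t.rvs(df=n1 - 1, size=draws, random_state=rng)
mu2 = xbar2 + se2 * stats.t.rvs(df=n2 - 1, size=draws, random_state=rng)

# Draws for mu1 - mu2; the central 95% of them give a Behrens–Fisher interval.
diff = mu1 - mu2
lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"95% Behrens–Fisher interval for mu1 - mu2: ({lo:.3f}, {hi:.3f})")
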
Fisher's solution provoked controversy because it did not have the property that the hypothesis of equal means would be rejected with probability α if the means were in fact equal. Many other methods of treating the problem have been proposed since, and their effects on the resulting confidence intervals have been investigated.[5]

Welch's approximate t solution

A widely used method is that of B. L. Welch,[6] who, like Fisher, was at University College London. The variance of the mean difference

    d = \bar{x}_1 - \bar{x}_2

results in

    s_d^2 = \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}.

Welch (1938) approximated the distribution of s_d² by the Type III Pearson distribution (a scaled chi-squared distribution) whose first two moments agree with those of s_d². This applies to the following number of degrees of freedom (d.f.), which is generally non-integer:

    \nu = \frac{\left( \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} \right)^2}{\frac{\sigma_1^4}{n_1^2 (n_1 - 1)} + \frac{\sigma_2^4}{n_2^2 (n_2 - 1)}}
Under the null hypothesis of equal expectations, μ₁ = μ₂, the distribution of the Behrens–Fisher statistic T, which also depends on the variance ratio σ₁²/σ₂², could now be approximated by Student's t distribution with these ν degrees of freedom. But this ν contains the population variances σᵢ², and these are unknown. The following estimate simply replaces the population variances by the sample variances:

    \hat{\nu} = \frac{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{\frac{s_1^4}{n_1^2 (n_1 - 1)} + \frac{s_2^4}{n_2^2 (n_2 - 1)}}

This ν̂ is a random variable, and a t distribution with a random number of degrees of freedom does not exist. Nevertheless, the Behrens–Fisher T can be compared with the corresponding quantile of Student's t distribution with this estimated number of degrees of freedom ν̂, which is generally non-integer. In this way, the boundary between the acceptance and rejection regions of the test statistic T is calculated based on the empirical variances sᵢ², in a way that is a smooth function of them.

This method also does not give exactly the nominal rate, but is generally not too far off.[citation needed] However, if the population variances are equal, or if the samples are rather small and the population variances can be assumed to be approximately equal, it is more accurate to use Student's t-test.[citation needed]
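
For concreteness, the estimated degrees of freedom ν̂ and the resulting test can be computed directly; SciPy's scipy.stats.ttest_ind with equal_var=False implements the same approximation. The sketch below uses invented data and assumes NumPy and SciPy are available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(10.0, 1.0, size=15)   # illustrative sample 1
y = rng.normal(10.0, 3.0, size=40)   # illustrative sample 2

n1, n2 = len(x), len(y)
v1, v2 = x.var(ddof=1), y.var(ddof=1)   # sample variances s1^2, s2^2

# Welch's estimate of the degrees of freedom (generally non-integer).
nu_hat = (v1 / n1 + v2 / n2) ** 2 / (
    v1 ** 2 / (n1 ** 2 * (n1 - 1)) + v2 ** 2 / (n2 ** 2 * (n2 - 1))
)

# Behrens–Fisher statistic and a two-sided p-value from Student's t with nu_hat d.f.
t_stat = (x.mean() - y.mean()) / np.sqrt(v1 / n1 + v2 / n2)
p_value = 2 * stats.t.sf(abs(t_stat), df=nu_hat)
print(f"nu_hat = {nu_hat:.2f}, t = {t_stat:.3f}, p = {p_value:.4f}")

# Cross-check against SciPy's built-in Welch test.
print(stats.ttest_ind(x, y, equal_var=False))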

Other approaches

A number of different approaches to the general problem have been proposed, some of which claim to "solve" some version of the problem. Among these are the following:[7]

  • that of Chapman in 1950,[8]
  • that of Prokof’yev and Shishkin in 1974,[9]
  • that of Dudewicz and Ahmed in 1998,[10]
  • that of Chang Wang in 2022.[11]

Dudewicz’s comparison of selected methods[7] recommended the Dudewicz–Ahmed procedure for practical use.

Exact solutions to the common and generalized Behrens–Fisher problems

For several decades, it was commonly believed that no exact solution to the common Behrens–Fisher problem existed.[citation needed] However, it was proved in 1966 that the problem has an exact solution.[12] In 2018, the probability density function of a generalized Behrens–Fisher distribution of m means and m distinct standard errors, from m samples of distinct sizes drawn from independent normal distributions with distinct means and variances, was derived, and its asymptotic approximations were also examined.[13] A follow-up paper showed that the classic paired t-test is a central Behrens–Fisher problem with a non-zero population correlation coefficient, and derived its corresponding probability density function by solving the associated non-central Behrens–Fisher problem with a non-zero population correlation coefficient.[14] The same paper also solved, in its appendix, a more general non-central Behrens–Fisher problem with a non-zero population correlation coefficient.[14]

Variants

A minor variant of the Behrens–Fisher problem has been studied.[15] In this instance the problem is, assuming that the two population means are in fact the same, to make inferences about that common mean: for example, one could require a confidence interval for the common mean.

Generalisations

One generalisation of the problem involves multivariate normal distributions with unknown covariance matrices, and is known as the multivariate Behrens–Fisher problem.[16]

The nonparametric Behrens–Fisher problem does not assume that the distributions are normal.[17][18] Tests include the Cucconi test of 1968 and the Lepage test of 1971.
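
A rank-based test aimed at this nonparametric version of the problem, the Brunner–Munzel test, is available in SciPy as scipy.stats.brunnermunzel; it does not assume normality or equal variances. The sketch below is illustrative only, with invented skewed data, and assumes NumPy and SciPy are available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.5, size=30)   # skewed, non-normal sample 1
y = rng.lognormal(mean=0.2, sigma=1.0, size=45)   # skewed, non-normal sample 2

# Brunner–Munzel test of the null hypothesis that a value drawn from one
# sample is equally likely to be smaller or larger than one from the other.
print(stats.brunnermunzel(x, y))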

Notes

  1. ^ Lehmann (1975) p.95
  2. ^ Lehmann (1975) Section 7
  3. ^ Fisher, R. A. (1935). "The fiducial argument in statistical inference". Annals of Eugenics. 8 (4): 391–398. doi:10.1111/j.1469-1809.1935.tb02120.x. hdl:2440/15222.
  4. ^ "R. A. Fisher's Fiducial Argument and Bayes' Theorem by Teddy Seidenfeld" (PDF).
  5. ^ "Sezer, A. et al. Comparison of confidence intervals for the Behrens–Fisher Problem Comm. Stats. 2015".
  6. ^ Welch (1938, 1947)
  7. ^ a b Dudewicz, Ma, Mai, and Su (2007)
  8. ^ Chapman, D. G. (1950). "Some two sample tests". Annals of Mathematical Statistics. 21 (4): 601–606. doi:10.1214/aoms/1177729755.
  9. ^ Prokof'yev, V. N.; Shishkin, A. D. (1974). "Successive classification of normal sets with unknown variances". Radio Engng. Electron. Phys. 19 (2): 141–143.
  10. ^ Dudewicz & Ahmed (1998, 1999)
  11. ^ Wang, Chang (2022). "A New Non-asymptotic t-test for Behrens-Fisher Problems". arXiv:2210.16473 [math.ST].
  12. ^ Kabe, D. G. (December 1966). "On the exact distribution of the Fisher-Behren'-Welch statistic". Metrika. 10 (1): 13–15. doi:10.1007/BF02613414. S2CID 120965543.
  13. ^ Xiao, Yongshun (22 March 2018). "On the Solution of a Generalized Behrens-Fisher Problem". Far East Journal of Theoretical Statistics. 54 (1): 21–140. doi:10.17654/TS054010021. Retrieved 21 May 2020.
  14. ^ a b Xiao, Yongshun (12 December 2018). "On the Solution of a Non-Central Behrens-Fisher Problem with a Non-Zero Population Correlation Coefficient". Far East Journal of Theoretical Statistics. 54 (6): 527–600. doi:10.17654/TS054060527. S2CID 125245802. Retrieved 21 May 2020.
  15. ^ Young, G. A., Smith, R. L. (2005) Essentials of Statistical Inference, CUP. ISBN 0-521-83971-8 (page 204)
  16. ^ Belloni & Didier (2008)
  17. ^ Brunner, E. (2000). "Nonparametric Behrens–Fisher Problem: Asymptotic Theory and a Small Sample Approximation". Biometrical Journal. 42: 17–25. doi:10.1002/(SICI)1521-4036(200001)42:1<17::AID-BIMJ17>3.0.CO;2-U.
  18. ^ Konietschke, Frank (2015). "nparcomp: An R Software Package for Nonparametric Multiple Comparisons and Simultaneous Confidence Intervals". Journal of Statistical Software. 64 (9). doi:10.18637/jss.v064.i09. Retrieved 26 September 2016.

References
