
Talk:Donsker's theorem


This article said:

By the classical central limit theorem, for fixed x, the empirical process Gn(x) converges in distribution to a Gaussian (normal) random variable G(x) with mean 0 and variance F(x)(1 − F(x)) as the sample size n grows.

Where does the mean of 0 come from?? The number of observations that are less than x is the number of successes in n trials with probability F(x) on each trial. Its expected value is nF(x) and its variance is nF(x)(1 − F(x)). The empirical distribution function Gn evaluated at x is just that random variable divided by n. Its expected value is F(x) and its variance is F(x)(1 − F(x))/n.
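
For reference, that computation in symbols (a sketch; F_n here denotes the plain empirical distribution function, my notation rather than the article's):

\[
n F_n(x) \sim \operatorname{Binomial}\bigl(n, F(x)\bigr),
\qquad
\mathbb{E}\,F_n(x) = F(x),
\qquad
\operatorname{Var} F_n(x) = \frac{F(x)\bigl(1 - F(x)\bigr)}{n}.
\]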

Then the article says:

Donsker (1952) showed that the sample paths of Gn(x), as functions of x ∈ R, converge weakly to a stochastic process G in the space of all bounded functions. The limit process G is a Gaussian process with zero mean and covariance given by cov[G(s), G(t)] = min{F(s), F(t)} − F(s)F(t).

So there's the "0 mean" claim again. The covariance stated above is consistent with what follows:

The process G(x) can be written as B(F(x)) where B is a standard Brownian bridge on the unit interval.

But that is not consistent with my derivation above, which gives a mean of F(x) and a variance of F(x)(1 − F(x))/n.

Something is missing from the definition of the process. It must be something simple, but just what it is is escaping me at the moment. Michael Hardy (talk) 11:57, 21 March 2009 (UTC)
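
For reference, a sketch of the usual centering and scaling in empirical process theory, which would account for the zero mean and the stated covariance (assuming this is what the article intends by Gn):

\[
G_n(x) := \sqrt{n}\,\bigl(F_n(x) - F(x)\bigr),
\qquad
\mathbb{E}\,G_n(x) = 0,
\qquad
\operatorname{Var} G_n(x) = F(x)\bigl(1 - F(x)\bigr),
\]
\[
\operatorname{Cov}\bigl(G(s), G(t)\bigr) = \min\{F(s), F(t)\} - F(s)F(t)
= \operatorname{Cov}\bigl(B(F(s)), B(F(t))\bigr),
\]

since a standard Brownian bridge B has Cov(B(u), B(v)) = min{u, v} − uv on [0, 1].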


OK, I think I'm seeing something: the random function is supposed to be a function, not of x, but of F(x). The article could hardly be more vague about that!! Michael Hardy (talk) 12:20, 21 March 2009 (UTC)

You are right in your first remark; please look now. (But your second remark is unclear to me.) Boris Tsirelson (talk) 17:19, 21 March 2009 (UTC)
But why is the field analysis rather than probability? Boris Tsirelson (talk) 17:22, 21 March 2009 (UTC)

Donsker class


This is the classical result by Donsker, but it has been generalized to the study of the so-called Donsker classes of subsets/functions. If the empirical process indexed by a particular class of subsets/functions converges to a Gaussian process, then the class has the Donsker property and is called a Donsker class. The empirical process indexed by subsets is just a particular case, and it has been shown that this class is Donsker. This is very similar to the Glivenko–Cantelli theorem and the GC classes. This is just my understanding of the subject, and it requires a better expert in this matter to write it up here. (Igny (talk) 02:40, 22 March 2009 (UTC))
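
A sketch of the general formulation alluded to above (standard notation from empirical process theory, e.g. van der Vaart & Wellner; the exact phrasing here is mine, not taken from the article):

\[
\mathbb{G}_n f := \sqrt{n}\,\bigl(\mathbb{P}_n f - P f\bigr)
= \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \bigl( f(X_i) - \mathbb{E} f(X_1) \bigr),
\qquad f \in \mathcal{F},
\]

where X_1, …, X_n are i.i.d. with law P and \mathcal{F} is a class of measurable functions. The class \mathcal{F} is called Donsker if \mathbb{G}_n converges weakly in \ell^\infty(\mathcal{F}) to a tight Gaussian process (a P-Brownian bridge); a class of sets is handled through its indicator functions. The classical theorem is the case \mathcal{F} = \{ 1_{(-\infty, x]} : x \in \mathbb{R} \}.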

Specific name


Anyone have a citation for the result here being called "Donsker's theorem" specifically? I have a source that uses this term for something that is related but not the same (partial sum processes): Shorack & Wellner (1986), "Empirical Processes with Applications in Statistics", Wiley (page 53, as "Donsker theorem" in the index). Melcombe (talk) 10:48, 23 March 2009 (UTC)

A good question! Usually, "Donsker's theorem" indeed means convergence of a random walk (in the scaling limit) to Brownian motion (= the Wiener process). For example, in "Probability: Theory and Examples" by Richard Durrett, Section 7.6 is "Donsker's theorem". Further, in Section 7.8, "Empirical distributions, Brownian bridge", Durrett considers the maximal deviation of the empirical process from its mean (the c.d.f., assumed to be uniform) and proves (Theorem 8.4) that it (multiplied by √n) converges in distribution to the maximum of the Brownian bridge. He derives it from "Donsker's theorem", and adds: "Remark: Doob (1949) suggested this approach to deriving results of Kolmogorov and Smirnov, which was later justified by Donsker (1952). Our proof follows Breiman (1968)." Boris Tsirelson (talk) 12:02, 23 March 2009 (UTC)
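
For comparison, a sketch of the partial-sum form that Durrett's Section 7.6 (and Shorack & Wellner) have in mind, stated from memory, so check against the sources: for i.i.d. X_1, X_2, … with mean 0 and variance 1,

\[
W_n(t) := \frac{1}{\sqrt{n}} \sum_{i \le \lfloor nt \rfloor} X_i, \qquad t \in [0, 1],
\]

converges in distribution (in the Skorokhod space D[0,1], or after linear interpolation in C[0,1]) to a standard Brownian motion, whereas the empirical-process statement discussed in this article concerns √n(F_n − F) converging to a Brownian bridge composed with F.
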
On the other hand, the term "Donsker class" is indeed used actively; just try Google:"Donsker class". Boris Tsirelson (talk) 12:19, 23 March 2009 (UTC)