
Wikipedia:Reference desk/Archives/Mathematics/2010 September 3

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 3


Standard deviation


Hi all! In physics we're doing a bit of stats and I noticed that in the standard deviation formula they divide by N-1 rather than just N. I asked my teacher and he said he didn't get it either, and told me to look it up on Wikipedia or something like that, so here I am. I tried looking at your articles Standard_deviation and Bessel's correction, but that didn't really help because I don't have a university-level stats background :/ Can someone who does explain why you divide by N-1, in simpler terms? I'm OK with (and even expect) you dumbing the concept down a little --cc —Preceding unsigned comment added by 76.229.208.208 (talk) 01:58, 3 September 2010 (UTC)

As I understand it, the N-1 comes in because you are trying to estimate the actual standard deviation based on sample data. If you put N in the denominator, it turns out that the estimate will, on average, be too low. So a correction factor is built into the formula so that the estimate averages out to the actual value if the experiment is repeated many times. Applying the correction factor works out the same as using N-1 in the denominator instead of N. It has been noted here before, though, that if your sample is small enough that the correction actually makes a difference, then your sample size is too small.--RDBury (talk) 03:46, 3 September 2010 (UTC)
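If it helps to see the effect numerically, here is a rough simulation sketch along those lines (NumPy; the normal population, sample size and trial count are arbitrary choices made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 2.0        # population mean and s.d., so the true variance is 4
n, trials = 5, 100_000      # small samples make the bias easy to see

divide_by_n, divide_by_n_minus_1 = [], []
for _ in range(trials):
    sample = rng.normal(mu, sigma, size=n)
    ss = np.sum((sample - sample.mean()) ** 2)   # squared deviations from the sample mean
    divide_by_n.append(ss / n)                   # plain average
    divide_by_n_minus_1.append(ss / (n - 1))     # Bessel's correction

print(np.mean(divide_by_n))          # about 3.2, i.e. (N-1)/N of the true variance 4
print(np.mean(divide_by_n_minus_1))  # about 4.0, the true variance on average
```

(The comparison is of variances rather than standard deviations, for the reason about the square root mentioned below.)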
See the Wikipedia article on unbiased estimator, which has the explanation you're looking for. --173.49.14.153 (talk) 04:20, 3 September 2010 (UTC)
If you knew the population (actual) mean rather than estimating it, and used that to get the squared differences, then N would be correct. However, using the sample (estimated) mean makes the sum of the squared differences slightly smaller. In fact, the sum of the squared differences from the population mean is equal to the sum of the squared differences from the sample mean plus N times the square of the difference between the population mean and the sample mean. This itself gives you an estimate of the probable difference between the population and sample means, so the working in the article just uses this to get an estimate of the sum of squared differences from the population mean. A finickety point is that it is only the expression without the square root that is unbiased; the estimated standard deviation obtained by taking the square root is biased, but I would worry even less about that than about using N instead of N-1 in the denominator. Dmcq (talk) 07:57, 3 September 2010 (UTC)
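In symbols, writing μ for the population mean and x̄ for the sample mean (notation added here just to state the identity above compactly), that identity is
$$\sum_{i=1}^{N} (x_i - \mu)^2 \;=\; \sum_{i=1}^{N} (x_i - \bar{x})^2 \;+\; N\,(\bar{x} - \mu)^2,$$
and since the last term is never negative, the sum of squared differences from the sample mean is indeed the smaller of the two.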

Maybe it won't hurt to mention also that unbiasedness may be slightly over-rated, at least by non-statisticians. See my paper on this: "An Illuminating Counterexample", American Mathematical Monthly, Vol. 110, No. 3 (March, 2003), pp. 234–238. Michael Hardy (talk) 18:47, 4 September 2010 (UTC)

Random variables


Hello mathematicians! Can you please help me solve this? It's not homework, it's actually work work. Say $S$ is the amount of money I make per "event" and $E$ is the number of events per year. Let's also say that $S$ has a lognormal distribution and $E$ has a Poisson distribution (the parameters for $S$ can be estimated from some data, and let's assume that the parameter for $E$ is known).

A) Then the total money I make from these events in one year is $P = S \times E$. Is there an analytic distribution function for $P$?

B) Will the following Monte Carlo methods work to determine a distribution for $P$:

1) sample a random value from $E$, say $e$, then sample $e$ values of $S$ and add them up - repeat this many times; or
2) sample a random value from $E$, say $e$, and sample a random value of $S$, say $s$, and then use $P = s \times e$ - and repeat this many times.

What is the difference between these two methods? What other possible numerical methods can I use to determine $P$? Thanks very much. --Mudupie (talk) 17:32, 3 September 2010 (UTC)

I'll assume that the events don't all make the same amount of money, but rather that each makes an independent contribution drawn from some distribution. Then $P \neq S \times E$. In fact there isn't even a single $S$; there are iid random variables $S_1, S_2, \ldots$, and $P = \sum_{i=1}^{E} S_i$. So it's clear that you can't sample the distribution of P with method 2 - you'll get a different distribution which has a much higher variance. You can use method 1, though.
You may know that if X and Y are iid then $\operatorname{Var}(X+Y) = 2\operatorname{Var}(X)$ while $\operatorname{Var}(2X) = 4\operatorname{Var}(X)$. If it seems that E being random makes a difference, think about what happens when $\lambda$ is large - then E is roughly constant.
If finding the expectation and variance of the distribution suffices, you have $\mathbb{E}[P] = \mathbb{E}[E]\,\mathbb{E}[S]$, and if I'm not mistaken $\operatorname{Var}(P) = \mathbb{E}[E]\operatorname{Var}(S) + \operatorname{Var}(E)\,\mathbb{E}[S]^2$. This holds no matter what the distributions of E and S are, as long as everything is independent. -- Meni Rosenfeld (talk) 18:56, 4 September 2010 (UTC)
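For what it's worth, here is a rough Monte Carlo sketch of the two methods (NumPy; the lognormal and Poisson parameters are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.0, 0.5      # hypothetical lognormal parameters for S
lam = 20.0                # hypothetical Poisson rate for E
trials = 50_000

# Method 1: draw e ~ Poisson, then add up e independent draws of S (correct)
counts = rng.poisson(lam, size=trials)
p1 = np.array([rng.lognormal(mu, sigma, size=int(e)).sum() for e in counts])

# Method 2: draw one e and one s and take p = s * e (wrong distribution)
p2 = rng.poisson(lam, size=trials) * rng.lognormal(mu, sigma, size=trials)

ES = np.exp(mu + sigma**2 / 2)               # E[S] for a lognormal
VarS = (np.exp(sigma**2) - 1) * ES**2        # Var(S) for a lognormal
print(p1.mean(), lam * ES)                   # both close to E[P] = E[E] E[S]
print(p1.var(), lam * VarS + lam * ES**2)    # Var(P); for Poisson E, E[E] = Var(E) = lam
print(p2.var())                              # much larger: method 2 overstates the spread
```

Method 2 gets the mean right but not the variance, which is exactly the X+Y versus 2X point above.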

Thanks very much Meni! That was very useful information. I have one follow-up question for now. I'm trying to understand how to derive the expectation of P. I guess the following equation holds but I don't understand why: $\mathbb{E}[P] = \mathbb{E}\!\left[\sum_{i=1}^{\lambda} S_i\right] = \lambda\,\mathbb{E}[S]$, where λ is just the expectation of E. I "get" that it makes sense but I don't know the actual theoretic reason. Can you please explain? --Mudupie (talk) 23:09, 4 September 2010 (UTC)

$\sum_{i=1}^{\lambda} S_i$ only makes sense when λ is an integer, so it's not useful to talk about it. What I did is to write $\mathbb{E}[P] = \sum_{n=0}^{\infty} \Pr(E = n)\,\mathbb{E}[P \mid E = n]$ and $\mathbb{E}[P \mid E = n] = n\,\mathbb{E}[S]$. Then finding $\mathbb{E}[P]$ is just some algebraic manipulations. -- Meni Rosenfeld (talk) 11:20, 5 September 2010 (UTC)
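Spelling out the algebra with that conditioning (this is just the law of total expectation, and the last step uses the fact that a Poisson variable with parameter λ has mean λ):
$$\mathbb{E}[P] \;=\; \sum_{n=0}^{\infty} \Pr(E = n)\,\mathbb{E}[P \mid E = n] \;=\; \sum_{n=0}^{\infty} \Pr(E = n)\, n\,\mathbb{E}[S] \;=\; \mathbb{E}[S] \sum_{n=0}^{\infty} n\,\Pr(E = n) \;=\; \mathbb{E}[E]\,\mathbb{E}[S] \;=\; \lambda\,\mathbb{E}[S].$$
So the λ enters through $\mathbb{E}[E] = \lambda$, not by putting λ into the summation bound.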
Thanks again mate! I managed to arrive at the expression for E[P] using your approach. I'll try to do the variance one as well and come back here if I get stuck. --Mudupie (talk) 09:41, 6 September 2010 (UTC)

Formula images


In every maths page on Wikipedia I notice the formulae are images, not text. How do you create these? On a Mac? Thanks for any replies. 86.147.12.111 (talk) 18:05, 3 September 2010 (UTC)

See Help:Displaying a formula. —Bkell (talk) 18:27, 3 September 2010 (UTC)
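In case a concrete example helps: putting something like <math>\sqrt{1-x^2}</math> in the wiki source of a page produces the rendered square-root formula. The LaTeX-style markup inside the <math> tags is turned into an image by Wikipedia's servers, so it works the same from a Mac as from any other machine.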
Thank you. 86.147.12.111 (talk) 19:42, 3 September 2010 (UTC)

Also, when you see a page with such formulas, if you click on "edit", you'll see how they are created. Michael Hardy (talk) 18:51, 4 September 2010 (UTC)

Homogeneous polynomials


The symmetric degree 4 homogeneous polynomial in two variables: $x^4 + x^3y + x^2y^2 + xy^3 + y^4$ can be written $(x^5 - y^5)(x - y)^{-1}$ for $x \neq y$. What is the analogous expression for the symmetric degree 4 homogeneous polynomial in 3 variables: $x^4 + x^3y + x^3z + x^2y^2 + x^2yz + x^2z^2 + xy^3 + xy^2z + xyz^2 + xz^3 + y^4 + y^3z + y^2z^2 + yz^3 + z^4$ ? Bo Jacoby (talk) 22:28, 3 September 2010 (UTC).

First, just to be consistent with the terminology, these are called the complete homogeneous symmetric polynomials. The expression you're looking for follows from the properties of Schur polynomials:
$$\frac{1}{\Delta}\begin{vmatrix} x^6 & x & 1 \\ y^6 & y & 1 \\ z^6 & z & 1 \end{vmatrix},$$
which turns out to be the complete symmetric polynomial. Here Δ is the product of the differences $(x-y)(x-z)(y-z)$.--RDBury (talk) 04:33, 4 September 2010 (UTC)
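As a quick sanity check, the determinant ratio can be expanded symbolically, e.g. with SymPy (the variable names simply mirror the question):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
numerator = sp.Matrix([[x**6, x, 1],
                       [y**6, y, 1],
                       [z**6, z, 1]]).det()
delta = (x - y) * (x - z) * (y - z)
print(sp.expand(sp.cancel(numerator / delta)))
# prints the 15-term degree-4 complete symmetric polynomial from the question
```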
Thank you very much! Bo Jacoby (talk) 06:10, 4 September 2010 (UTC).
No problem but please be civil. —Preceding unsigned comment added by 114.72.252.111 (talkcontribs)
It is plainly obvious from the edit history that User:Bo Jacoby did not make the uncivil comment you are referring to, per [1]. I have removed the IP's offending comment. --Kinu t/c 05:19, 5 September 2010 (UTC)