
Wikipedia:Reference desk/Archives/Mathematics/2008 May 15

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 15

Notation

What does $(a,b]$ mean in mathematics? How does it differ from $[a,b]$ or $(a,b)$? Thanks, User:Zrs_12 —Preceding unsigned comment added by 12.213.80.54 (talk) 02:53, 15 May 2008 (UTC)

It's interval notation. Digger3000 (talk) 03:18, 15 May 2008 (UTC)
Basically, ( or ) mean exclusive, and [ or ] mean inclusive. So (k,n] runs from k to n, including n but not including k. Paragon12321 (talk) 03:40, 15 May 2008 (UTC)
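In set-builder terms, the convention just described reads as follows (a quick summary, with a and b as generic endpoints):

\[
(a,b) = \{x : a < x < b\}, \quad [a,b] = \{x : a \le x \le b\}, \quad (a,b] = \{x : a < x \le b\}, \quad [a,b) = \{x : a \le x < b\}.
\]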
Having read the above, I investigated whether it would be possible to create redirect pages called (a,b] etc. We aren't allowed to include [ or ] in page names for wiki syntax reasons. If you type (a,b] into the search box, it returns pages with a and b on them. If you type "(a,b]" it returns "not found - maybe the string isn't indexed". Is there any way we can make it so that folk asking Zrs_12's question can find an answer? -- SGBailey (talk) 07:48, 15 May 2008 (UTC)
Probably not. However, if a user knows this has something to do with intervals he will have no problem finding Interval (mathematics), and if he knows the names of the symbols, the relevant information can be found at Bracket.
That said, square brackets and parentheses being quite ubiquitous symbols, such notations can mean different things in different contexts. I am most familiar with [a,b] being the lcm, and (a,b) being the gcd or even an inner product. I don't know any alternative for the hybrid [a,b), though. -- Meni Rosenfeld (talk) 08:04, 15 May 2008 (UTC)
[a,b] is also used for the Lie Bracket and related concepts. (a,b) is used for any ordered pair, which includes numerous things (groups and topological spaces spring to mind). They're both very convenient notations and are very widely used for all kinds of things (generally unambiguously, though, which is impressive!). I also can't think of any other meaning for (a,b], though. --Tango (talk) 12:45, 15 May 2008 (UTC)

Variance from cdf

How do I calculate an approximate variance given an approximation to the cumulative distribution function at various points? The cdf is numerically calculated and often non-smooth, so I don't want to use finite differences or such to get the pdf. The cdf is defined at ~100 points, and is close to zero at the lowest point and close to 1 at the highest (within about 100 times the machine epsilon). Thanks. moink (talk) 17:56, 15 May 2008 (UTC)

I'm assuming you mean finitely many points. Say a < b and you know F(a) and F(b) and no value of F at points between a and b. I'm not sure what you mean by using finite differences to get the pdf. The probability that the distribution assigns to the interval (a, b] is F(b) − F(a). The very crudest thing you could do is concentrate that probability at b. That smells a lot like something that could be called "finite differences". If you distributed that probability uniformly over the interval (a, b), that would be the same as using linear interpolation to get an approximation of the cdf at points between a and b, and then the approximate density in that interval would be (F(b) − F(a))/(b − a). I think the expression for the variance would then bear some algebraic simplification BEFORE you plug in the actual numbers. Maybe that's too much like "finite differences" for you. A subtler approach would make the density non-uniform on the interval in some way that depends on the values of F at more than just those two points. Maybe I'll return to this question later when I have some time. Michael Hardy (talk) 18:38, 15 May 2008 (UTC)
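In symbols, one simple version of this, with the cdf known at points $t_1 < t_2 < \dots < t_n$ and each interval's probability placed at the interval midpoint (which is what the code below ends up doing; the $t_k$ notation is just shorthand introduced here), is

\[
\sigma^2 \approx \sum_{k=1}^{n-1} \left(\frac{t_k + t_{k+1}}{2} - \mu\right)^2 \bigl(F(t_{k+1}) - F(t_k)\bigr).
\]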
What you said was enough for me to convert my mean-calculating code to variance-calculating code. I suppose in a way a substep could be likened to finite differences, but numerically integrating a non-smooth function is much less problematic than numerically differentiating it. Here's my Matlab code:
function sig=calcvar(cdf,mu,thresh)
% cdf:    matrix of cdf values, one cdf per row, evaluated at the thresholds
% mu:     vector of means, one entry per cdf (row of cdf)
% thresh: thresholds at which the cdfs are known
sig=zeros(size(cdf,1),1);              % preallocate, one variance per cdf
for i=1:size(cdf,1)
  for k=1:length(thresh)-1
    % squared deviation of the interval midpoint from the mean,
    % weighted by the probability mass the cdf assigns to the interval
    sig(i)=sig(i)+(0.5*(thresh(k+1)+thresh(k))-mu(i))^2*(cdf(i,k+1)-cdf(i,k));
  end
end
Note that it actually does it over a bunch of cdf's, and that the variable thresh is a series of "thresholds" at which I know the cdfs. Thanks. moink (talk) 18:50, 15 May 2008 (UTC)
You'll get better accuracy (without additional complication) by replacing the term $\left(\frac{t_k+t_{k+1}}{2}-\mu\right)^2\bigl(F(t_{k+1})-F(t_k)\bigr)$ with $\left[\left(\frac{t_k+t_{k+1}}{2}-\mu\right)^2+\frac{(t_{k+1}-t_k)^2}{12}\right]\bigl(F(t_{k+1})-F(t_k)\bigr)$. Since the integral $\int_{t_k}^{t_{k+1}}(x-\mu)^2\,dx$ is easy, there's no point in approximating it with a one-point evaluation at the midpoint (times the interval length). This is particularly important here because the correct formula is always greater than the approximation (by $\frac{(t_{k+1}-t_k)^2}{12}\bigl(F(t_{k+1})-F(t_k)\bigr)$), so error will accumulate (though it will be reduced by a finer mesh of points).
Of course, selecting the uniform density $\bigl(F(t_{k+1})-F(t_k)\bigr)/(t_{k+1}-t_k)$ on each interval as Michael suggested is exactly finite differences for the pdf. Here it doesn't really matter, though, because your final quantity is the variance $\int(x-\mu)^2 f(x)\,dx$ and your initial quantity is the cdf $F$; they're both (first-order) integrals of the pdf $f$, so noise doesn't get inherently amplified passing from one to the other. If you expect your data is actually in error, however, it might be wise to pass some sort of smooth curve through it (splines or Bézier curves come to mind, or some sort of least squares regression if you have a model for the CDF) whose variance could then be calculated exactly. --Tardis (talk) 15:52, 16 May 2008 (UTC)
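For concreteness, here is a minimal sketch of how that correction could be folded into the function above (the name calcvar2 and the temporaries dt and mid are made up for illustration; the inputs are the same as for calcvar):

function sig=calcvar2(cdf,mu,thresh)
% Sketch only: same piecewise sum as calcvar, but each term also carries the
% (thresh(k+1)-thresh(k))^2/12 contribution from integrating (x-mu)^2 exactly
% over the interval, as suggested above.
sig=zeros(size(cdf,1),1);
for i=1:size(cdf,1)
  for k=1:length(thresh)-1
    dt=thresh(k+1)-thresh(k);
    mid=0.5*(thresh(k+1)+thresh(k));
    sig(i)=sig(i)+((mid-mu(i))^2+dt^2/12)*(cdf(i,k+1)-cdf(i,k));
  end
end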
Thank you! This is exactly the kind of suggestion I was looking for. I will make the change to my code. I did notice it seemed to be underestimating; now I understand why. moink (talk) 16:14, 16 May 2008 (UTC)
The difference between the two methods is about 3% on average, on the particular data set I'm looking at. moink (talk) 16:43, 16 May 2008 (UTC)

The biggest family-tree you've ever seen...

I was kind-of a fan of the Fox series New Amsterdam (apparently I was the only one, it got canceled after 8 episodes), which was centered around a character named John Amsterdam. In 1642, this fella was a Dutch soldier who saved the life of a young native American girl (in a battle in the place that would become New York) who blessed/cursed him with immortality (he will become mortal again when he meets his true love). Unlike many other immortal people in fiction, he was able to father children. In one episode he states that he has fathered a total of 63 children. When he said that it struck me that he probably was the ancestor of a large chunk of the population in New York.

His last kid (well, latest) is called Omar and was born in 1943. If we assume that he had kids with some regularity between 1642 and 1943, that would mean he fathered a kid every four years ((1943-1642)/63=4). My question is this: approximately how many living descendants would he have? Making some reasonable assumptions, we could say that every one of his kids had two kids at the age of 25, who themselves had two kids, and so on. I tried calculating this, but I couldn't quite nail down the sum properly (haven't done real maths in a looooong time), could you guys help me? I realize this is a fairly trivial question, but I thought it would be fun to find out. I also realize that the answer will be highly approximate, but I'm really just looking for an order of magnitude here (1,000? 10,000? 100,000? 1,000,000?). Cheers! --Oskar 19:46, 15 May 2008 (UTC)

I get about 30000 or so, using your formula and ignoring possible intermarriages between his descendants. —Ilmari Karonen (talk) 20:03, 15 May 2008 (UTC)
Which is, of course, just the size of the last generation, and doesn't include surviving members of the previous generations. Assuming the three latest generations are still alive on average means multiplying the figure by 1 + 1/2 + 1/4 = 1.75, giving a total of about 48912. Four surviving generations would make that 52406. That rounds to about 50000 either way, given the crudeness of the approximation we're dealing with, and I wouldn't be too confident about that 5. Or about the number of zeros either. But for a quick back-of-the-envelope calculation, which is what you were asking for, it should be within a few orders of magnitude. —Ilmari Karonen (talk) 20:08, 15 May 2008 (UTC)
One of the biggest problems with this kind of calculation is that you're assuming his descendants all marry people that aren't descendants. If two descendants get married, their two kids end up being counted twice, and if a large chunk of the population are descended from him then the chances of that happening become quite large, so you get an overestimate. Although 2 children per family is probably an underestimate, since that would involve 0 population growth. Perhaps we're lucky and it cancels out! These problems become more apparent if you go back further - consider the people that claim to be descended from Jesus. If Jesus had two children and they each had two children when they're about 25, 2000 years later you end up with about 10^24 descendants alive today - 15 orders of magnitude larger than the total world population! Over a 300 year period, the effects are significantly less, but it still shows how enormously wrong such a simple model can end up being. --Tango (talk) 20:59, 15 May 2008 (UTC)
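For the record, the arithmetic behind that figure: 2000 years at 25 years per generation is 80 generations, and $2^{80} \approx 1.2 \times 10^{24}$.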
As a computational problem: One child every 4 years from 1642 to 1943 and each descendant having exactly 2 children with an unrelated spouse at their 25th birthday, I get 336,261 descendants through 2008. All the above caveats of course apply. Dragons flight (talk) 04:51, 16 May 2008 (UTC)
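A rough MATLAB sketch of one way to tally such a count, under one possible reading of the assumptions (a child every 4 years starting in 1642, every descendant having exactly 2 children at age 25, everyone born by 2008 counted); exact totals depend on how the birth years and spacing are read, which is part of why the figures in this thread differ:

% One reading of the assumptions above; not an attempt to reproduce any
% particular figure quoted in this thread.
total=0;
for birthYear=1642:4:1943            % John's own children
  gens=floor((2008-birthYear)/25);   % further generations possible by 2008
  total=total+2^(gens+1)-1;          % the child plus all of its descendants
end
disp(total)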
On the other hand, even though Tango is right in that calculating the number of descendants that way is an overestimate, it is true that many people who lived ages ago have lots of descendants because everyone has two parents. You don't have to be immortal for this. I think, if you take a human from a few thousand years ago, then most probably he or she either has no descendants or a significant proportion of the contemporary population is his or her descendant. On the other hand, if you count only male-line descendants, this is far from true, because everyone has only about two male-line ancestors in any generation. – b_jonas 07:46, 16 May 2008 (UTC)
Surely everyone has precisely one male-line ancestor in each generation? --Tango (talk) 13:18, 16 May 2008 (UTC)
I believe b_jonas means something like 'people of whom you are a male-line descendant', in which case you get two per generation, barring incest. Algebraist 13:45, 16 May 2008 (UTC)
You'd only be a male-line descendant of two people per generation if you're male: otherwise zero (well, I suppose it depends on your definition). Anyhow, b_jonas is touching on the identical ancestors point, a point in history where every person then alive is either the ancestor of everyone now alive, or of no one now alive; our article says this likely happened 5000 to 15000 years ago. (Related are most recent common ancestor, mitochondrial Eve, and Y-chromosomal Adam.) Eric. 86.153.201.165 (talk) 00:10, 17 May 2008 (UTC)
The difference between my results and Dragons flight's appears to stem from the original poster's assertion that (1943-1642)/63 = 4, which of course isn't true (it's actually 43/9 = 4.777…). Besides, the correct divisor isn't 63 (the number of children) but 62 (the number of gaps between children), giving an average interval of about 4.8548 years. Combining this figure with the other assumptions given, I get a total number of 202,689 descendants, of which the latest generation of course makes up exactly half, or 101,344. With three surviving generations, the number of surviving descendants would then be about 177,353. —Ilmari Karonen (talk) 01:11, 19 May 2008 (UTC)

Generated and induced σ-algebras

Let X be a nonempty set and $\mathcal{C}$ a collection of subsets of X. We let $\sigma(\mathcal{C})$ denote the σ-algebra generated by $\mathcal{C}$, and take $\mathcal{C} \cap Y$ to mean $\{C \cap Y : C \in \mathcal{C}\}$ for any subset Y of X and collection of subsets $\mathcal{C}$. Is it true that

$\sigma(\mathcal{C} \cap Y) = \sigma(\mathcal{C}) \cap Y$?

The right side is of course the induced σ-algebra on Y, and that it includes the left is clear. But does the other direction follow? If not, what sort of additional hypothesis would suffice?  — merge 20:51, 15 May 2008 (UTC)

Yes, the reverse inclusion holds. Proof is pretty easy: consider the collection of subsets A of X such that $A \cap Y$ is in the LHS; show that this collection contains every element of $\mathcal{C}$ and is a σ-algebra on X. Algebraist 22:54, 15 May 2008 (UTC)
Oh. Let $\mathcal{A} = \{A \subseteq X : A \cap Y \in \sigma(\mathcal{C} \cap Y)\}$ be that collection. Then $X \in \mathcal{A}$, since $X \cap Y = Y \in \sigma(\mathcal{C} \cap Y)$, and if $A \in \mathcal{A}$ then $(X \setminus A) \cap Y = Y \setminus (A \cap Y) \in \sigma(\mathcal{C} \cap Y)$, so $X \setminus A \in \mathcal{A}$. Moreover, if $(A_n)$ is a sequence of sets in $\mathcal{A}$ then $\left(\bigcup_n A_n\right) \cap Y = \bigcup_n (A_n \cap Y) \in \sigma(\mathcal{C} \cap Y)$, so that $\bigcup_n A_n \in \mathcal{A}$. Thus $\mathcal{A}$ is a σ-algebra in X, and it clearly contains $\mathcal{C}$ since $C \cap Y \in \mathcal{C} \cap Y \subseteq \sigma(\mathcal{C} \cap Y)$ for every $C \in \mathcal{C}$. It follows that $\sigma(\mathcal{C}) \subseteq \mathcal{A}$, or in other words that $\sigma(\mathcal{C}) \cap Y \subseteq \sigma(\mathcal{C} \cap Y)$.
Did I get that right?  — merge 01:00, 16 May 2008 (UTC)
Yeah, that's what I'd have written if I'd felt up to hacking out all that tex. Algebraist 09:45, 16 May 2008 (UTC)
Hah. Thank you! The TeX is the easy part; it's coming up with the tricks that's hard. I'm not very clever, but can usually struggle through when pointed in the right direction.  — merge 10:02, 16 May 2008 (UTC)
If I might be allowed some musing-space, I think the problem with doing this sort of question is that there's no such thing as a 'typical element' of a generated sigma-algebra. If you want to prove a statement about open sets, or arbitrary elements of the group/vector space/whatever generated by some generating set, you can write 'let U be open' or 'let v be an arbitrary linear combination of spanning vectors' or whatever, but it's never useful to write 'let B be a Borel set', because there's nothing very sensible you can say about what an arbitrary Borel set looks like (the best you can do involves recursion to the first uncountable ordinal). Thus the only way to show that all elements of $\sigma(\mathcal{C})$ have some property is to show that the set of things with that property contains all of $\mathcal{C}$ and is a σ-algebra. And maybe use Dynkin's lemma, though thankfully we were spared that here. Algebraist 11:14, 16 May 2008 (UTC)

You've hit on the real problem, so your comments are very much appreciated. The same thoughts have been going through my mind in working with σ-algebras: they are structurally similar to topological spaces, but seemingly much less tractable for the reason you stated. The situation is aggravated by the lack of exposition on this matter, which can easily leave the newcomer adrift. So I have a couple of further questions:

  • Is there a source you'd recommend that provides good training in this area (not just for σ-algebras, but for expanding one's bag of tricks with this kind of reasoning)--perhaps a book in the set theory area?
  • More deeply, do you think there is a way to remedy this situation somehow in the theory itself? Perhaps this is "impossible", in the same sense that it's impossible to exhibit a well-ordering of the reals. But on the other hand, perhaps this is a weakness of the theory, and one could modify or supplement it with constructions, methods or tools that would make reasoning easier.

 — merge 12:15, 16 May 2008 (UTC)

I can't recommend any books; sorry. On your second point, you might be interested in the Borel hierarchy I alluded to earlier, which provides a sort of classification of the Borel sets (or the sets of any generated σ-algebra). It's handy for some proofs, though the only one I can currently recall is that the Borel σ-algebra on R has the cardinality of the continuum (and is thus much smaller than the Lebesgue σ-algebra). Algebraist 13:05, 16 May 2008 (UTC)