Wikipedia:Reference desk/Archives/Mathematics/2010 July 19



July 19

Finding text in all of the universe

Hello, smart people, can any of you please figure this out for me? Say I took every atom in the universe (let's use 10^80 from the article Observable universe) and assigned each of them a random letter or punctuation mark (let's assume 30 possible characters). For what length of a string of characters would I have a 50% confidence of it being somewhere in the 10^80 characters? For example, could I be 50% sure of my name being in there (22 characters)? What about the lyrics of A Hard Day's Night (song) (1,160 characters) or Hamlet (200,000 characters)? For what length could I be 95% confident of it being in there? Thank you. —Preceding unsigned comment added by 142.104.215.130 (talk) 02:17, 19 July 2010 (UTC)[reply]

Infinite monkey theorem has information on this kind of thing.--RDBury (talk) 03:44, 19 July 2010 (UTC)[reply]
BTW, 10^80 is about 30^54. So I think the most you'd expect to find is about 54 characters. In other words you wouldn't get much farther than "To be or not to be, that is the question."--RDBury (talk) 04:04, 19 July 2010 (UTC)[reply]
Thanks, RDBury. What confidence is there that you would find a 54-character string? I'm guessing that it's more than 50%, because the chance of rolling at least one six in six rolls of a die is more than 50%, so if 6 chances at probability 1/6 give more than 50%, then 30^54 chances at probability 1/30^54 should give more than 50%. I think. 142.104.215.130 (talk) 04:23, 19 July 2010 (UTC)[reply]
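As a rough worked version of this intuition (treating the trials as independent, which is only approximately true because the positions overlap): the chance of at least one success in N independent trials of probability 1/N each is

\[ 1 - \left(1 - \tfrac{1}{N}\right)^{N} \;\ge\; 1 - e^{-1} \approx 63\%, \]

so six rolls of a die give 1 − (5/6)^6 ≈ 66.5%, and the same lower bound applies with N = 30^54; the Poisson estimate below makes this precise.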

Each n-character string is expected to occur (10^80 − n + 1)/30^n ≈ 30^(54.16 − n) times. Each 54-character string is expected to occur 30^0.16 ≈ 1.7 times, and each 53-character string is expected to occur 30^1.16 ≈ 52 times. The confidence that you would find a given 54-character string is, by the Poisson distribution, 1 − e^(−1.7) ≈ 82%, and the confidence that you would find a given 53-character string is 1 − e^(−52) ≈ 100%. So you will most certainly find "to be or not to be, that is the question" somewhere in the long string, and "the slings and arrows of outrageous fortune" somewhere else. Bo Jacoby (talk) 08:56, 19 July 2010 (UTC).[reply]
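These figures are easy to check numerically. A rough Python sketch, assuming 10^80 characters, a 30-letter alphabet, and a Poisson model for the number of occurrences (the function names are just for illustration):

import math

N = 10**80   # one random character per atom in the observable universe
A = 30       # size of the alphabet

def expected_occurrences(n):
    """Expected number of occurrences of one fixed n-character string."""
    return (N - n + 1) / A**n

def p_at_least_once(n):
    """Poisson approximation to the probability that the string occurs at least once."""
    return 1 - math.exp(-expected_occurrences(n))

print(math.log(N, A))        # 54.16..., i.e. 10^80 ≈ 30^54.16
print(p_at_least_once(54))   # ~0.82
print(p_at_least_once(53))   # ~1.0
print(p_at_least_once(55))   # ~0.06, so 54 is the longest length with better-than-even odds
print(p_at_least_once(22))   # 1.0: a 22-character name is effectively certain to appear

On this model, 50% (indeed 82%) confidence is reached at length 54 and 95% confidence at length 53, while the song lyrics and Hamlet are hopeless.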

Awesome! Thanks, Bo Jacoby! 142.104.215.130 (talk) 22:41, 19 July 2010 (UTC)[reply]

If we convert the number pi to a character string (by writing it in base 30), will we find all possible strings with certainty? Count Iblis (talk) 02:51, 20 July 2010 (UTC)[reply]

If π is normal (this is suspected but unknown), then yes, it will contain all finite strings infinitely often. —Bkell (talk) 06:46, 20 July 2010 (UTC)[reply]
So if someone gets a computer to calculate π in binary, it will eventually find a computer virus that escapes from the π-calculating program and destroys the Internet? 142.104.215.130 (talk) 17:37, 20 July 2010 (UTC)[reply]
No. It would have to try to execute (i.e. run) pi rather than just print it to the screen or save it to a hard drive or whatever, which it wouldn't do. I can type "format c:" in this edit box and your computer will display it without reformatting your hard drive. --Tango (talk) 17:40, 20 July 2010 (UTC)[reply]

"Fundamental theorem of arithmetic"?

Merely a question about conventional nomenclature:

Does this term refer to

  • both the existence and the uniqueness of prime factorizations; or
  • the uniqueness of prime factorizations?

Existence seems trivial. Uniqueness is the "hard" part. Somehow I was raised under the impression that what is conventionally called the FTA is about uniqueness. But I sometimes see people writing "by the fundamental theorem of arithmetic, we have...." etc., when existence is what they're talking about. Michael Hardy (talk) 18:33, 19 July 2010 (UTC)[reply]

What makes you say that "Existence seems trivial. Uniqueness is the 'hard' part"? Uniqueness of factorization is provable already in VTC^0 (i.e., it is trivial, as VTC^0 is what one needs even to prove that multiplication is total), whereas the weakest theory known to prove existence is V^1, and that probably cannot be significantly improved (under some cryptographic assumptions on the hardness of factoring). Not that I would expect anyone to be familiar with these theories of bounded arithmetic, but the important point is that the latter is stronger than the former; it's something like a theory with the computational power of TC^0 vs. a theory with the computational power of NP.—Emil J. 18:49, 19 July 2010 (UTC)[reply]
On second thought, I'm not sure about the TC^0 part, but it's weaker anyway.—Emil J. 18:57, 19 July 2010 (UTC)[reply]
EmilJ, you're giving us a nice demonstration of the fact that the formalisms of mathematical logic are less than the whole truth. If you had never read a proof of uniqueness, how long would it take you to discover one by yourself? Michael Hardy (talk) 23:48, 19 July 2010 (UTC)[reply]
I suspect the OP means "trivial" as in "extremely easy to prove within ordinary mathematics" rather than any reference to formal axiomatic systems. Either a number is prime, or it can be decomposed, so clearly you can continue decomposing until you have all primes. The only thing to prove is that the process eventually terminates, which it clearly does since the product of infinitely many positive integers would be infinite (or, more precisely, would diverge to infinity). As David points out below, technically I've only proven factorisation into irreducibles, but whether or not that is enough depends on your definition of "prime". When we're just talking about integers, a prime is usually defined as a number that has no factors other than 1 and itself, rather than using the more general definition of a prime (p|ab => p|a or p|b). If you want the more general definition, then you need to prove that all irreducible integers are prime, which isn't hard. --Tango (talk) 19:07, 19 July 2010 (UTC)[reply]
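To make the decomposition argument concrete, here is a minimal Python sketch (the helper names are just for illustration): it repeatedly splits off the smallest divisor greater than 1, which is automatically irreducible, until the factorization is complete.

def smallest_divisor(n):
    """Smallest divisor of n greater than 1 (n itself when n is irreducible)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def factorize(n):
    """Existence half of the theorem: decompose n >= 2 into irreducible factors."""
    factors = []
    while n > 1:
        p = smallest_divisor(n)
        factors.append(p)
        n //= p
    return factors

print(factorize(360))  # [2, 2, 2, 3, 3, 5]
print(factorize(97))   # [97], already irreducible

Termination here is exactly the point made above: each step replaces n by n/p with p ≥ 2, so the process stops after finitely many steps.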

Existence of a factorization into irreducible elements is trivial enough, and prime numbers are often defined as being irreducible elements over the integers, but it seems to me that it's not fair to call it a prime factorization until one knows that the irreducible elements really are prime elements. —David Eppstein (talk) 18:54, 19 July 2010 (UTC)[reply]

One can get away with saying that FTA is the uniqueness alone, and I think Gauss would have agreed. To prove uniqueness you have, really, to prove irreducibles are primes on the way, and our definition of prime is "not composite or 1", i.e. irreducible. Charles Matthews (talk) 22:08, 19 July 2010 (UTC)[reply]

The question of whether the uniqueness or the existence is harder is, of course, irrelevant to the name. Many fundamental facts are trivial, once they have been properly isolated and encapsulated in the right concepts. Both existence and uniqueness of prime factorization lie at the heart of many questions in arithmetic, at all levels of abstraction. The name is deserved, no matter how easy or difficult the proof. Phils 10:36, 20 July 2010 (UTC)[reply]

...."the name is deserved"—as the name of what? The existence theorem? The uniqueness theorem? The conjunction of the two? It's not clear what you're saying here. Michael Hardy (talk) 15:11, 20 July 2010 (UTC)[reply]
I agree I did not express myself clearly. The main point was that which part is harder is not a valid criterion for determining what the phrase fundamental theorem of arithmetic ought to refer to. I take the view that the theorem refers to both parts, if only because I cannot remember seeing the existence and uniqueness parts stated separately. This personal opinion is reflected in the last sentence, which you quoted. Phils 16:50, 20 July 2010 (UTC)[reply]
It seems quite surprising that you haven't seen them stated separately, since the proofs are so different that it seems as if they'd have to have separate treatments. Should I infer that this sort of number theory is fairly remote from what you normally do? Or am I confused? Michael Hardy (talk) 18:25, 20 July 2010 (UTC)[reply]
It is true that I rarely give the FTA any thought. I just don't think either part of the proof is very complicated so invoking the proofs as a criterion to distinguish which statement "lies deeper" doesn't seem right. Of course, in the end, it is all a matter of taste. Phils 21:58, 20 July 2010 (UTC)[reply]
Tim Gowers has some discussion about why the theorem is not trivial.[1] 67.122.211.208 (talk) 05:06, 21 July 2010 (UTC)[reply]

Vector Spaces

Let P ⊆ Q ⊆ R be finite dimensional vector spaces with dimensions p, q and r respectively. It is easy to show that R/Q ⊕ Q/P ≅ R/P. Assume that {e_1, …, e_p} is a basis for P, extended to a basis {e_1, …, e_q} for Q and then to a basis {e_1, …, e_r} for R.

Given x ∈ R, by definition x + Q = 0 + Q if and only if x ∈ Q. Writing x in terms of the basis:

x = x_1 e_1 + x_2 e_2 + ⋯ + x_r e_r,

and so x + Q = 0 + Q if and only if x_k = 0 for all k > q. It follows that

dim(R/Q) = r − q,

and by the same argument we see that

dim(Q/P) = q − p and dim(R/P) = r − p.

Thus dim(R/Q ⊕ Q/P) = (r − q) + (q − p) = r − p = dim(R/P), and since finite dimensional vector spaces of the same dimension are isomorphic, it follows that R/Q ⊕ Q/P ≅ R/P.

  • What happens in the infinite dimensional case? Is it still true that R/Q ⊕ Q/P ≅ R/P, even when P, Q and R are infinite dimensional vector spaces? If so, then how might one prove such a thing? •• Fly by Night (talk) 19:28, 19 July 2010 (UTC)[reply]
The relevant case is when P=0. To show that R is isomorphic to the direct sum of Q and S=R/Q, one needs to construct a splitting S -> R. This always exists but it does depend on the axiom of choice. The general framework for this is the notion of a projective module over a ring. When the ring is a field the property is automatic (again, modulo AC). Tkuvho (talk) 20:51, 19 July 2010 (UTC)[reply]
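To spell out the terminology (a sketch of the standard definition, using the same symbols as above): write π : R → S = R/Q for the quotient map. A splitting is a linear map s : S → R with π ∘ s = id_S. Given such an s, every x ∈ R decomposes as

\[ x \;=\; \underbrace{\bigl(x - s(\pi(x))\bigr)}_{\in\, Q} \;+\; \underbrace{s(\pi(x))}_{\in\, s(S)}, \qquad Q \cap s(S) = 0, \]

so R ≅ Q ⊕ S. Constructing s in general amounts to choosing a preimage for each element of a basis of S, which is where the axiom of choice enters.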
Could you please give some more detail? Why is the relevant case the one where P = 0? I would like P to be of arbitrary, possibly infinite, dimension. What does it mean for a map S → R to be a splitting? I appreciate your help. I shall try to read projective module now. •• Fly by Night (talk) 18:22, 20 July 2010 (UTC)[reply]
Use the same argument. Fix a basis B_P of P, extend it to a basis B_Q of Q, extend that to a basis B_R of R. R/P is then naturally isomorphic to the span of B_R ∖ B_P, while Q/P and R/Q are naturally isomorphic to the spans of B_Q ∖ B_P and B_R ∖ B_Q, respectively.
Of course, an easier argument is to say that the isomorphism type of a vector space is characterized by its dimension, and then engage in some simple cardinal arithmetic. —Preceding unsigned comment added by 203.97.79.114 (talk) 12:12, 21 July 2010 (UTC)[reply]
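As a toy finite-dimensional illustration of this bookkeeping (a rough sketch; the coordinate model and names are chosen just for the example): take R = ℝ^r with the standard basis, let P be the span of the first p basis vectors and Q the span of the first q, so that forgetting the first k coordinates models the quotient by the corresponding subspace.

import numpy as np

p, q, r = 1, 3, 5   # dimensions of P ⊆ Q ⊆ R, each spanned by leading standard basis vectors

def quotient_map(x, k):
    """Model V/W for W = span(e_1, ..., e_k): forget the first k coordinates of x."""
    return x[k:]

x = np.arange(1.0, r + 1)           # an arbitrary vector in R
w = np.zeros(r); w[:q] = [7, 8, 9]  # a vector lying in Q

# Vectors differing by an element of Q have the same image in R/Q:
assert np.allclose(quotient_map(x, q), quotient_map(x + w, q))

# The dimension count behind R/Q ⊕ Q/P ≅ R/P:
assert len(quotient_map(x, p)) == (q - p) + (r - q)

In the infinite dimensional case the same argument runs with a basis extended through P ⊆ Q ⊆ R and cardinal arithmetic in place of len().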
Forgive me for being a little slow here. Can we always find a basis for an infinite dimensional vector space? Take, for example, the space of smooth functions from the line to the line, i.e. C^∞(ℝ,ℝ). This is a Fréchet space. What would a basis for C^∞(ℝ,ℝ) be? I would be tempted to say {1, x, x^2, x^3, …}, but then the flat functions cause a problem here. To repeat: Can we always find a basis for an infinite dimensional vector space? •• Fly by Night (talk) 17:36, 21 July 2010 (UTC)[reply]
Yes, every vector space has a basis, but the proof is nonconstructive; the result is in fact equivalent to the axiom of choice. The simplest way of proving it is probably by using Zorn's lemma: it ensures that the space contains a maximal (w.r.t. inclusion) linearly independent subset, and then it is not difficult to show that this subset is a basis (if it didn't generate the whole space, we could add another independent element to it, violating maximality).—Emil J. 17:52, 21 July 2010 (UTC)[reply]
Oh, and {1, x, x^2, x^3, …} is certainly not a basis of C^∞(ℝ,ℝ), because its linear span is just the polynomials. I'd guess that C^∞(ℝ,ℝ) has dimension 2^ω, and probably has no easily describable basis.—Emil J. 17:57, 21 July 2010 (UTC)[reply]
Thanks EmilJ. If the bases are not easily describable, then how might one make sense of expressions like R/Q? For example, let B_R be a basis for R and B_Q ⊆ B_R be a basis for Q; then what on earth does R/Q look like? I understand what R/Q is; but B_R ∖ B_Q should give us its basis. Since B_R and B_Q are not easily describable, B_R ∖ B_Q is just as intangible. Am I right to say that all we can talk about is existence? What's 2^ω? •• Fly by Night (talk) 19:12, 21 July 2010 (UTC)[reply]
We haven't a clue what they would look like, but that doesn't mean we can't reason about them. The elements of R are the finite linear combinations of the elements of B_R. The elements of Q are the finite linear combinations of B_Q. If two elements of R differ by an element of Q, then their coefficients on B_Q may be different, but their coefficients on B_R ∖ B_Q are the same. So the map which simply forgets the coefficients on B_Q is an isomorphism from R/Q to the span of B_R ∖ B_Q.
2^ω is the cardinality of the continuum. It's the size of the set of real numbers (often called the continuum). Saying that a vector space has dimension 2^ω means it has a basis with as many elements as there are reals. And EmilJ is correct, C^∞(ℝ,ℝ) has dimension 2^ω: it has dimension at most 2^ω since there are only continuum many continuous functions (a continuous function is determined by its restriction to the rationals); it has dimension at least 2^ω since the horizontal translates of the standard cutoff function form a linearly independent set of size continuum (but certainly not a basis). —Preceding unsigned comment added by 203.97.79.114 (talk) 11:01, 22 July 2010 (UTC)[reply]
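A brief sketch of the cardinal arithmetic behind the upper bound (with 2^ω identified with the cardinality of ℝ):

\[ \dim C^\infty(\mathbb{R},\mathbb{R}) \;\le\; \bigl|C^\infty(\mathbb{R},\mathbb{R})\bigr| \;\le\; \bigl|\mathbb{R}^{\mathbb{Q}}\bigr| \;=\; \bigl(2^{\omega}\bigr)^{\omega} \;=\; 2^{\omega}, \]

since a continuous function is determined by its restriction to ℚ; the continuum-sized independent family of translated bump functions then gives equality.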
Perhaps it should be mentioned that in order to define the quotient of a pair of vector spaces, one need not choose any bases in either space. An element of the quotient is a coset, namely the set of all translates of a given vector by elements of the subspace. This bypasses the need to choose bases. Note also that if the infinite-dimensional space happens to be a Hilbert space, then one can construct quotients without having to resort to non-constructive foundational material. Tkuvho (talk) 15:10, 22 July 2010 (UTC)[reply]

Brilliant. Thanks a lot everyone. There were some really helpful answers. Fly by Night (talk) 22:01, 22 July 2010 (UTC)[reply]

3D equivalent of an arc/circular sector

What is the 3D equivalent of an arc and circular sector? Our articles on the two don't seem to have a "see also" that leads to it, and I didn't see it in the articles themselves. Paraboloid seems possible, but there's probably only one paraboloid shape that's specifically a 3D section of a sphere, and I don't even think there's a specific term like "spherical paraboloid". Anyone? --Wirbelwindヴィルヴェルヴィント (talk) 22:17, 19 July 2010 (UTC)[reply]

Spherical cap is part of the answer. -- SGBailey (talk) 22:32, 19 July 2010 (UTC)[reply]
No paraboloid is a portion of a sphere, just as no (nonzero-length) portion of a parabola is an arc of a circle. —Bkell (talk) 06:49, 20 July 2010 (UTC)[reply]
See steradian. Bo Jacoby (talk) 07:38, 20 July 2010 (UTC).[reply]
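To make the analogy explicit (a sketch using the standard formulas): in the plane, an arc of central angle θ on a circle of radius r has length s = rθ and the circular sector has area ½ r²θ; on a sphere of radius r, a cap subtending half-angle θ at the centre satisfies

\[ A_{\text{cap}} = 2\pi r^{2}\,(1-\cos\theta), \qquad \Omega = \frac{A_{\text{cap}}}{r^{2}} = 2\pi\,(1-\cos\theta)\ \text{steradians}, \qquad V_{\text{sector}} = \tfrac{2}{3}\pi r^{3}\,(1-\cos\theta), \]

so the spherical cap plays the role of the arc, the solid angle Ω that of the central angle, and the spherical sector (the cap together with the cone to the centre) that of the circular sector.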
Thanks. Spherical cap and steradian seem to be what I'm looking for, but using those terms to describe something would probably get people a lot more confused, unfortunately. --Wirbelwindヴィルヴェルヴィント (talk) 20:14, 20 July 2010 (UTC)[reply]