Wikipedia:Reference desk/Archives/Mathematics/2017 March 15
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 15
Name for curve defined as constant sum of distances to three points?
Resolved
A circle is the set of points whose distance to a specific point is constant; an ellipse is the set where the sum of the distances to two points is constant. Is there a name either specifically for the curve where the sum of the distances to three points is constant, or for the family of curves where the sum of the distances to N points is constant? (Note: I realize these curves for N > 2 may have no mathematical use.) Naraht (talk) 13:02, 15 March 2017 (UTC)
- These are the n-ellipses. Double sharp (talk) 13:46, 15 March 2017 (UTC)
- Thanks! Naraht (talk) 20:19, 15 March 2017 (UTC)
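For reference, the condition in the question can be written out as follows (a standard formulation; the symbols P_i and d are introduced here only for illustration and are not from the thread): given focal points P_1, …, P_n in the plane and a constant d, the n-ellipse is the level set

```latex
E_d(P_1,\dots,P_n) \;=\; \Bigl\{\, x \in \mathbb{R}^2 \;:\; \sum_{i=1}^{n} \lVert x - P_i \rVert = d \Bigr\},
```

with n = 1 giving a circle and n = 2 the ordinary ellipse.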
Does the Banach–Tarski trick work on bread and fishes?
Christ! I need to know the answer to this question. 148.182.26.69 (talk) 22:27, 15 March 2017 (UTC)
- I'm not sure this is a serious question, but physically, the answer is no. Bread and fishes are composed of a finite number of quarks, gluons, and leptons, and as far as anyone knows, there is no way to cut any of those in half. Banach–Tarski doesn't work unless you can make infinitely fine divisions.
- From another angle, the answer is yes. That is, you can apply the Banach–Tarski theorem to ideal objects having the shape of bread and fishes. --Trovatore (talk) 22:32, 15 March 2017 (UTC)
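For context, a standard statement of the theorem being discussed (not part of the original replies): a solid ball B in R^3 can be split into finitely many pieces A_1, …, A_k that isometries g_1, …, g_k reassemble into two disjoint balls B_1 and B_2, each congruent to B:

```latex
B \;=\; \bigsqcup_{i=1}^{k} A_i, \qquad
\bigcup_{i=1}^{j} g_i A_i \;=\; B_1, \qquad
\bigcup_{i=j+1}^{k} g_i A_i \;=\; B_2 .
```

The pieces A_i cannot all be Lebesgue measurable (otherwise their volumes would have to add up to both one and two copies of B), which is the precise sense in which "infinitely fine divisions" are needed.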
- Jesus can't do it either, because "God made the integers, all else is the work of man". Count Iblis (talk) 22:59, 15 March 2017 (UTC)
- Kronecker, of course, was wrong about that. --Trovatore (talk) 23:01, 15 March 2017 (UTC)
- I don't think this is about right or wrong, but more about different possible approaches to doing analysis. What has happened is that one method has prevailed: it involves a notion of the continuum as a fundamental object in some sense. From the point of view of classical mechanics, or of classical electrodynamics, where you work with continuous functions and fields, this looks like a natural way to set things up. But from the point of view of modern physics, it is far more natural to treat the continuum as a scaling limit, where we consider the effective physics we see at large length scales. If you pretend that the continuum really exists, you are going to be hit by problems such as divergent integrals in Feynman diagrams, and then you need to regularize the theory after all. So you could just as well have acknowledged from the start that some regularized theory is the real theory. In practice this doesn't matter when doing such computations, but it does tell you that you can't take the continuum seriously.
- So analysis can be done just as well by replacing the continuum with a continuum limit, and that limit then involves the same sort of renormalization-group methods as are used in physics. For example, instead of continuous functions we have functions defined on a lattice. We can renormalize the lattice and the functions defined on it by averaging over neighboring blocks of lattice points: each block defines a lattice point on the renormalized lattice, and the average defines the renormalized function. The renormalized function gets smoother and smoother as we repeat this procedure over and over again. If we start with such an approach, then we should be able to recover the results of analysis as far as it refers to practical things, while not being bothered by the huge baggage of artifacts like non-measurable sets. Count Iblis (talk) 23:37, 15 March 2017 (UTC)
- I can't rule out that you could reformulate physics using finitary methods. No one has really done it, in a practical sense, but I can't rule out that it's only because not enough cleverness has thus far been applied.
- From my mathematical realist point of view, that is not really the point. --Trovatore (talk) 01:57, 16 March 2017 (UTC)
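A toy numerical sketch of the block-averaging ("coarse-graining") procedure described in the thread above, under simplifying assumptions not made there: a one-dimensional lattice, non-overlapping blocks of two sites, and a noisy sine wave standing in for the function being renormalized. The names `coarse_grain` and `block` and the sample data are illustrative only.

```python
# Illustration only: repeatedly block-average a noisy function on a 1-D lattice.
# Each pass halves the number of lattice sites and averages out lattice-scale
# noise, so the coarse-grained function becomes smoother step by step.
import numpy as np

def coarse_grain(values, block=2):
    """Replace each non-overlapping block of `block` neighboring sites by its average."""
    n = (len(values) // block) * block          # drop a ragged tail, if any
    return values[:n].reshape(-1, block).mean(axis=1)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1024)                 # the original (fine) lattice
f = np.sin(2 * np.pi * x) + 0.5 * rng.normal(size=x.size)   # smooth signal plus noise

g = f
for step in range(4):                           # four renormalization steps
    g = coarse_grain(g)
    # the typical jump between neighboring sites shrinks as the noise is averaged out
    print(step + 1, g.size, float(np.diff(g).std()))
```

Each printed line shows the step number, the size of the coarser lattice, and a rough measure of how jagged the function still is; the last column decreases, which is the "gets smoother and smoother" behavior described above.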