Wikipedia:Reference desk/Archives/Mathematics/June 2006
June 1
Problem with complex numbers
Hello,
You are given that the complex number alpha = 1 + j satisfies the equation z^3 + 3z^2 + pz + q = 0, where p and q are real constants. (i) Find alpha^2 and alpha^3 in the form a + bj. Hence show that p = -8 and q = 10. [6] (ii) Find the other two roots of the equation. [3] (iii) Represent the three roots on an Argand diagram. [2]
Thanks guys as always. DR Jp.
- This sounds like a homework problem. Is there a particular thing you don't understand? Maybe you should take a look at complex number and complex plane. —Bkell (talk) 00:47, 1 June 2006 (UTC)
- Thanks, but I'm currently writing a book and I would like someone to check these problems are doable. If they could put their working so I could see how you are thinking that'd be even better.
How much time is needed to travel around the world for the following modes of transportation -- the space shuttle, a jet airliner, a cruise ship
- Please make sure that your proxy server is either configured correctly, or you refrain from using them, since each time you edit you introduce backslashes before "'"s, breaking the markup. Dysprosia 01:02, 1 June 2006 (UTC)
- Do you take us all for idiots? Do your own homework. (People who genuinely write books are typically careful with language, and have students, colleagues, and paid reviewers — not to mention an editorial staff — to check their work. Also, it is not helpful to see how a trained mathematician approaches a problem if you want to know how a student would think.) --KSmrqT 03:51, 1 June 2006 (UTC)
- Heh, writing a book, that's a good one. —Keenan Pepper 05:36, 1 June 2006 (UTC)
- Yes, the problem is doable. I would start with part (ii) - find the other two roots of the equation. You know the value of α. Once you know that p and q are real, a second root is immediately obvious. And you can find the third root because you know the coefficient of z^2. Once you have all three roots, you can find the values of p and q. Gandalf61 08:50, 1 June 2006 (UTC)
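A quick numerical sketch of that approach, assuming numpy is available (the p = -8, q = 10 values are the ones already stated in the question):

```python
import numpy as np

# With p = -8 and q = 10 the cubic is z^3 + 3z^2 - 8z + 10; its roots should
# include alpha = 1 + j, its conjugate, and one real root.
print(np.roots([1, 3, -8, 10]))   # approximately -5, 1+1j and 1-1j
```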
- Starting with (i) is also easy enough, though. --LambiamTalk 14:15, 1 June 2006 (UTC)
- Heh, it looks like an IB Math problem. He put the possible marks next to each part of the question "(ii) Find the other two roots of the equation. [3]". --Codeblue87 19:58, 1 June 2006 (UTC)
How do I tell if I've chosen an appropriate statistical distribution
I have a group of 12 observations. I'd like to predict what my observations will be in the future. I also need the distribution to apply Bayes Theorem.
Right now, I'm using the normal distribution but I don't know if that's the right choice. I've calculated the skewness and kurtosis of the data, but I don't have any idea what they're supposed to be! I mean, I know if my observations were truly normally distributed, the skewness would be zero, but I don't know if my skewness of 1.65 is "close enough" or what. Are there rules of thumb for this? moink 05:50, 1 June 2006 (UTC)
- Under the assumption of normal distribution, the probability that a sample of 12 observations has a skewness whose absolute value is at least 1.65 is about 0.002. That is fairly low, and normally grounds to reject the null hypothesis of normalcy. What is the source of the observations and how critical is the accuracy of the estimated distribution? Often the physical or other origin of the data suggests a plausible crude model for the distribution that is good enough in practice. --LambiamTalk 06:36, 1 June 2006 (UTC)
- It is not particularly critical. I was actually kinda hoping not to have to share the type of data, but since it's apparently a very poor fit to the normal distribution, I guess I will. It's the length of my menstrual cycle. Now all the boys on the math RD can get all grossed out. :) I like to know if I should carry tampons on me, and the Bayes' theorem thing... well, if you're very bright you may be able to figure it out but I will not provide an explanation. Here's the data: 32, 29, 28, 28, 26, 27, 27, 29, 36, 25, 26, 28. moink 07:41, 1 June 2006 (UTC)
- You know, you could use a neural network for precisely this task. Neural networks can be used to predict the length of menstrual cycles as well as stock market values or other things. Choose an encoding for the lengths, train some sort of recurrent network on the data you have, and then get it to generate predictions. If I get time, and I am sufficiently bored, I might even try this for you. Dysprosia 07:49, 1 June 2006 (UTC)
- Sounds cool but beyond my abilities. Right now, though, I'm less interested in predicting exactly the length of the next cycle, and more in knowing the approximate probability that it is at least some length so I can apply Bayes' theorem. moink 07:56, 1 June 2006 (UTC)
- Using your data and the formula at skewness, I find a skewness of 1.27, which is still significantly different from the null hypothesis but less so. Looking at the data, the problem appears to be the outliers at the high side. If you censor the data by discarding values > 30, you get a good agreement with a normal distribution. Given the application, censoring at the high side is acceptable, since you want confidence at the low side. The sample is still a bit small, though, to really confidently assume the low end behaves normally, without outliers. --LambiamTalk 14:38, 1 June 2006 (UTC)
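The 1.65 versus 1.27 gap is probably a difference of convention rather than a spreadsheet bug; a small check with numpy and scipy (assuming they are available) gives the common variants side by side:

```python
import numpy as np
from scipy import stats

data = np.array([32, 29, 28, 28, 26, 27, 27, 29, 36, 25, 26, 28])
m3 = np.mean((data - data.mean())**3)

print(stats.skew(data, bias=True))    # population-moment skewness, about 1.44
print(stats.skew(data, bias=False))   # adjusted Fisher-Pearson G1, about 1.65
print(m3 / data.std(ddof=1)**3)       # third central moment over s^3, about 1.26
```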
- So much for trusting my spreadsheet software. I thought about dropping the large ones, but it's in the higher range that I'm most interested in the probability, and since it seems that it does occasionally get that large, and not that rarely, I wanted to take that into account. moink 15:28, 1 June 2006 (UTC)
- Well, I'm by no means suggesting that this is what you are trying to calculate, but just for the sake of argument: if A=pregnant, and B=menstruation has not yet occurred, and one were interested in P(A|B), then P(B|A) would of course be very close to unity, but what value should be used for P(A)? Would the age specific fertility rate be correct? --vibo56 15:01, 1 June 2006 (UTC)
- Addendum: P(A) would obviously have to be either zero, or a lot higher... --vibo56 15:11, 1 June 2006 (UTC)
- Why would you say that? I mean, it could be zero, but it could be the small numbers you'd get using the failure rates of certain contraceptives. Even with several instances of unprotected sex in a month, it will generally not go above 25-30%. moink 15:24, 1 June 2006 (UTC)
- Agreed. You are right. --vibo56 16:35, 1 June 2006 (UTC)
- Chi squared might be your answer to whether the data is normal or not. Basically this works by dividing up the domain into a number of boxes; you then count how many of your data items fall into each box and compare with the number predicted from the normal distribution. Add up the square of differences and compare with the appropriate Chi-squared statistic. This should give a confidence interval as to whether the difference is significant or not. I suspect with only twelve points you don't really have enough data to meaningfully talk about skew. --Salix alba (talk) 15:10, 1 June 2006 (UTC)
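A rough sketch of that chi-squared recipe in Python, assuming scipy is available (with only twelve points the boxes have to be very coarse, so the result should be treated with caution):

```python
import numpy as np
from scipy import stats

data = np.array([32, 29, 28, 28, 26, 27, 27, 29, 36, 25, 26, 28])
mu, s = data.mean(), data.std(ddof=1)

# Four equal-probability boxes under the fitted normal distribution
edges = stats.norm.ppf([0.25, 0.5, 0.75], loc=mu, scale=s)
observed = np.bincount(np.digitize(data, edges), minlength=4)
expected = np.full(4, len(data) / 4)

# ddof=2 because two parameters (mean and SD) were estimated from the data
print(stats.chisquare(observed, expected, ddof=2))
```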
- Sigh. Ok, so I'm transparent. My prior distribution in this context is from a record of instances of penetrative sexual intercourse along with the underlying numbers used by this site combined with a pdf of the date of ovulation using the pdf above and the possibly quite poor assumption of a constant luteal phase of 14 days. moink 15:14, 1 June 2006 (UTC)
- If the goal is to get pregnant, I'd start off by measuring my body temperature, to get a more precise estimate of the time of ovulation. After one year with no success, I would definitely go see a gynecologist. If, on the other hand, the goal is not to get pregnant, and you want a statistical tool to tell you when to start worrying, I'm afraid your approach won't work. Biological distributions tend to have very heavy tails, and you simply do not have enough data to make a sensible estimate of the distribution. With a limited dataset, however, you could make control charts. Here's a link to a how-to (powerpoint), courtesy of the British NHS Modernisation Agency. --vibo56 17:18, 1 June 2006 (UTC)
- When reading my previous comment: forgetting to mention this was maybe a male freudian slip, but anyway: if the goal is getting pregnant, it would be a good idea to have your partner checked as well. --vibo56 21:33, 1 June 2006 (UTC)
- Well, the goal is complicated. It is one, the other, or both of the above, in addition to saving on costs of Human chorionic gonadotropin tests (which have high false negative rates, especially when used too early) by using them at the right time. For example, applying an additional Bayesian update rule, a negative test with a sensitivity of 25 mIU of hCG would reduce my probability by a factor of nearly three if I used it today, while it would reduce the probability by a factor of eight if used tomorrow. And buying a basal thermometer would negate those cost savings. :) The other main goal is the fun of overanalyzing these things. :) moink 04:12, 2 June 2006 (UTC)
- If you check out the presentation that I linked to, and are able not to get too irritated about the "for dummies" manner in which it is presented, you will see that this might be exactly the tool that you are looking for. It is a tool for decision-making, primarily in the manufacturing industry, but it is now mandatory also in blood banks throughout the EU. As you can see from this article, it has been around for a long time, and has stood the test of time. It is a curious mix of parametric and non-parametric statistics.
- Your statement on the goal leaves me with the impression that timing is a rather critical issue. I would definitely invest in that thermometer! I wish you all the best, and hope that you achieve your goal and that it brings you happiness. Best regards, --vibo56 23:39, 2 June 2006 (UTC)
Newton was right after all
Ok, here is this little idea I had today. It has the potential of solving all problems of theoretical physics in one stroke of genius. Basically, the idea is that the interactions in the world are subject to very simple rules, namely Newtonian mechanics. But hey, I hear you say, wasn't there a guy called Einstein who proved Newton wrong? Well, he did, but it is Newton who will have the last laugh.
The world of Newtonian mechanics is very simple: a Euclidean space and a few differential equations, solutions of which are nicely differentiable curves. There is one problem with this world: it is continuous, which makes its "implementation" extremely hard. There are no continuous things in our world: the space is discrete, the time is discrete, and this brings us to the next point: the world is a finite state machine. The world is, basically, a computer. The bit-twiddling aspect of our world is studied by quantum mechanics.
The N-body problem is a classical problem of mechanics. There are a few particles floating around in space under the Newtonian law of gravity. It translates into a system of differential equations, which in general is not solvable by mathematical means. We'll try to simulate this problem on a computer. When n=2, theory states that these two bodies would have elliptic orbits (well, not exactly, the orbits may also be hyperbolic or parabolic, but we assume that they are elliptic). What happens when we simulate this on a computer? At first everything seems okay, the smaller planet rotates around the bigger one. Let's modify the program so the path of the smaller planet is visible. Then, after a few rotations we'll see a strange effect: the elliptical orbit is slowly turning! More interesting is that the same thing happens in real life: a result predicted by General Relativity theory. But isn't it strange that Newton's law of gravity, when emulated on a computer, produces the same effect?
There are several numerical methods for solving differential equations. They all have the same flaw: when you run the method for a long time, round-off errors and discretization errors build up and the result strays off from the right solution. The same thing happened in our simulation - during the rotation the error builds up as we do little discrete time steps, and it rotates the orbit. The same thing, I suspect, happens in our real world, which faithfully tries to solve Newton's differential equation by discrete means.
Comments are welcome. Grue 12:36, 1 June 2006 (UTC)
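A minimal sketch of the kind of simulation Grue describes, with one body held fixed at the origin and plain forward Euler stepping; every number here (GM, the step size, the initial velocity) is an arbitrary choice made only to show the qualitative effect:

```python
import numpy as np

GM, dt, steps = 1.0, 0.002, 100_000
r, v = np.array([1.0, 0.0]), np.array([0.0, 0.8])   # elliptical initial conditions

perihelion_angles = []
prev, falling = np.linalg.norm(r), False
for _ in range(steps):
    a = -GM * r / np.linalg.norm(r)**3
    r = r + v * dt                                   # naive Euler step
    v = v + a * dt
    d = np.linalg.norm(r)
    if falling and d > prev:                         # just passed closest approach
        perihelion_angles.append(np.degrees(np.arctan2(r[1], r[0])))
    falling, prev = d < prev, d

print(perihelion_angles[:5])   # the perihelion direction drifts from orbit to orbit
```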
- It appears unlikely that this theory can be tweaked to give a quantitative agreement with the observations. Using methods such as Runge-Kutta integration, it is easy enough to get a precision that is able to differentiate between Newtonian and Einsteinian gravitation. And since Wolfram we all know that the universe is a cellular automaton. --LambiamTalk 14:08, 1 June 2006 (UTC)
- I think there's a simple explanation for your observation. Newtonian mechanics predicts precession of planetary orbits due to small perturbations caused by other planets. In your computer models, I suspect that the rounding errors introduce similar perturbations into a 2-body situation, and so give rise to qualitatively similar precession. Both Newtonian mechanics and general relativity predicted a precession of Mercury's orbit - the difference was that the Newtonian prediction of the rate of precession did not agree with the observed value, whereas the GR prediction did agree (within the limits of observational error) - see Tests of general relativity. Gandalf61 15:33, 1 June 2006 (UTC)
- Methods exist for integrating differential equations while respecting conservation laws, and special perturbation methods have been devised for orbital simulations. When JPL computes long-term ephemerides and certain critical spacecraft trajectories, they find it necessary not only to be extraordinarily careful with their numerical methods, but also to include the effects of general relativity. Some tests of general relativity effects require meticulous care to distinguish the sought effect from numerous other sources of perturbation; a notable example is Gravity Probe B. Other effects, like gravitational lenses and black holes, are not subtle at all, and are not only thoroughly observed but also dramatically different from Newtonian predictions. To speak bluntly, it is absurdly arrogant to imagine that your uninformed "little idea" is more insightful and clever than the work of large numbers of trained professional physicists. We are convinced Einstein's theory will eventually be replaced by a more integrated theory, something Einstein himself attempted (unsuccessfully) in his later years; but we can never go back to Newton. --KSmrqT 23:50, 1 June 2006 (UTC)
- You all seem to miss the point. It is true that there are high-precision numerical methods for solving differential equations. It is not true that the world is using these methods to calculate trajectories. It is using the simplest one, that is, Euler integration. With Planck time being a very short interval, the error would be noticeably high. "Continuous" Newton mechanics doesn't take this error into account. Relativity does. Grue 07:10, 2 June 2006 (UTC)
- I said and still maintain that it appears unlikely that this theory can be tweaked to give a quantitative agreement with the observations. How so is that beside the point? The onus of showing that you can get a good agreement is on you. Einstein predicted a difference of 43 arc-seconds per century with Newtonian theory; what does your theory predict? See further Scientific method. For the rest, Euler integration? – now if you could explain away action at a distance, that might make it interesting even if zany. --LambiamTalk 14:35, 2 June 2006 (UTC)
One way hash/encryption function that is guaranteed unique
Any crypto fans here? I need to generate object IDs from other object IDs to mask the true object with one that is a plausible replacement.
Say I have a unique object ID. Is there an encryption algorithm I can use against it to produce a different (cyphertext) object ID that is one-way (you can't easily get the original plaintext object ID from the cyphertext object ID, either not at all, or at least short of a computationally expensive attack), and unique, meaning that no two original plaintext object IDs produce the same cyphertext object ID and that I never get different cyphertext object IDs from the same plaintext object ID? The text lengths need not be the same, I guess, although obviously the cyphertext one can't be shorter. I have looked at Message_digest and it speaks of cypher/hash functions that are unlikely to have different plaintexts go to the same cyphertext. I need guaranteed... the cyphertext need not be a hash; it can be as long as or longer than the original (exactly the same size would be convenient, though). thanks! ++Lar: t/c 18:34, 1 June 2006 (UTC)
- PS... another way of saying this is that I need Collision_resistance that is not just hard but impossible, or at least practically impossible. ++Lar: t/c 18:36, 1 June 2006 (UTC)
- Well, the whole notion of hash functions is that they're practically good enough, but if you want to reduce the possibility of collisions further, just encrypt the original object. There aren't (AFAIK) formal methods for one-way-only full-length encryption, but you can get the effect by generating a public/private key pair (say, via PGP / GPG). Encrypt normally via the public key, and then hide or destroy the decrypting private key as required. Public-key encryption is preferred here since knowledge of the only in-use key can't expose the data, but there's no reason any other encryption won't work so long as you protect the key. — Lomn Talk 19:13, 1 June 2006 (UTC)
- a further note on "practically good enough": the SHA-1 hash function will have its first random collision, on average, after about 10^24 entries. Do you really anticipate needing that sort of collision resistance? — Lomn Talk 19:20, 1 June 2006 (UTC)
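(That 10^24 figure is just the birthday bound for SHA-1's 160-bit output: sqrt(2^160) = 2^80 ≈ 1.2 × 10^24 inputs before a random collision is expected.)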
- It's a customer requirement, and their answer is yes. Mine would be no though! Thanks for your help on this. I'm thinking public key/private key (with a destroyed private key) is the way to go, the object ID is short enough that encrypting it is fine, should not be too compute intensive ++Lar: t/c 20:02, 1 June 2006 (UTC)
- One other option to consider would be sticking to a standard hash but appending some additional nonce such as date-of-entry alongside the hash output. Of course, if hash function collision rates aren't "practically impossible" enough for the customer, I don't know that this is really any better (unless it simply looks stronger to a corporate perspective). Another consideration, if you have short object IDs, is the possibility that object contents will collide more frequently than the cryptographic process -- for instance, I'm quite certain you'd find the MD5 for "password" far more frequently in a password hash file than mere chance would suggest because the source entries aren't random. This may necessitate a nonce in its own right (and, of course, it could be appended prior to hash/encryption as well). — Lomn Talk 20:27, 1 June 2006 (UTC)
- That's why Unix passwords use salt. -- EdC 00:52, 2 June 2006 (UTC)
A method that gives a good scramble of an N-bit string is to XOR it with a randomly chosen bit string, apply a randomly chosen permutation to the bits of the string, multiply modulo 2^N with another random but odd bit string, and XOR+permute again (using different choices). Since each step is invertible, you are guaranteed to have no collisions. For additional security the procedure can be repeated. I have no idea how safe this is against various cryptanalytic attacks; for all I know there is a way of breaking the scheme that is obvious to more devious minds than mine. Use at your own risk. --LambiamTalk 14:53, 2 June 2006 (UTC)
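A rough Python rendering of that scheme; the width, seed and constants below are placeholders standing in for secret key material, and the same "use at your own risk" caveat applies:

```python
import random

N = 64                                    # ID width in bits (placeholder)
rng = random.Random(0xC0FFEE)             # fixed seed stands in for secret key material

MASK1, MASK2 = rng.getrandbits(N), rng.getrandbits(N)
PERM1 = rng.sample(range(N), N)           # random bit permutations
PERM2 = rng.sample(range(N), N)
MULT = rng.getrandbits(N) | 1             # random odd multiplier, invertible mod 2^N

def permute_bits(x, perm):
    return sum(((x >> i) & 1) << perm[i] for i in range(N))

def scramble(oid):
    x = permute_bits(oid ^ MASK1, PERM1)
    x = (x * MULT) % (1 << N)
    return permute_bits(x ^ MASK2, PERM2)

# Every step is a bijection on N-bit values, so distinct inputs can never collide.
assert len({scramble(i) for i in range(10_000)}) == 10_000
```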
- In general, any block cipher should satisfy the conditions you specify, as long as you never reveal the key. If there's a limit on the block size, you can always construct your own ad hoc cipher, which is pretty much what the previous suggestion does. To be safe, however, you probably want to construct your custom cipher around an established cryptographic hash function, using a well-studied construction such as a Feistel cipher. For my own earlier take on a similar problem, see [1]. (Note that the code actually rather overdoes it, using 2*5 = 10 rounds where, per Luby and Rackoff, 4 should suffice.) —Ilmari Karonen (talk) 17:27, 2 June 2006 (UTC)
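For what it's worth, a bare-bones sketch of such a hash-based Feistel permutation on 16-byte IDs (the key and block sizes are made-up placeholders; a real design would need more care):

```python
import hashlib

HALF = 8                                  # bytes per half-block, i.e. 128-bit IDs
KEY = b"keep this secret"                 # placeholder key material

def round_fn(half, i):
    return hashlib.sha256(KEY + bytes([i]) + half).digest()[:HALF]

def encrypt_id(block, rounds=4):          # 4 rounds, per Luby and Rackoff
    L, R = block[:HALF], block[HALF:]
    for i in range(rounds):
        L, R = R, bytes(a ^ b for a, b in zip(L, round_fn(R, i)))
    return L + R

# A Feistel network is a permutation of the 16-byte block space, so two distinct
# object IDs can never map to the same output; without KEY it is hard to invert.
print(encrypt_id(b"object-id-000001").hex())
```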
- Thinking about this a bit more, it occurs to me that, in principle, any (conjectured) one-way function should also satisfy your requirements without requiring a secret key. However, unless the OIDs are randomly chosen from a very large set, any solution without a secret element is vulnerable to an exhaustive search — in which case you're back to the "block cipher with secret key" solution. —Ilmari Karonen (talk) 22:14, 5 June 2006 (UTC)
notation
What do you call this type of notation?
Σ_{i=1}^{n} a_i   or   ⋃_{i∈I} A_i
As far as I can remember, ∗ has always been one of Σ, Π, ∩, ∪, or ∐ (coproduct), but theoretically couldn't it be extended to apply to any binary operation? Or maybe even to any function?
I don't understand why this ubiquitous notation does not seem to have a name.
- Don't forget \bigwedge (wedge product or join), \bigvee (meet) \bigoplus (direct sum), \bigotimes (tensor product), and I'm sure there are more lesser used examples. It's not binary operators that you do this with, but n-ary operators (especially infinitary operations). Of course any associative binary operation can be extended by induction to an n-ary operation. I think it'd be more risky to try to do this with a nonassociative operation, and of course the notation really earns its lunch for infinitary operations, which do not arise from binary operations by induction. As for what the notation is called, well I don't view it as a problem that the notation doesn't have a name. Lots of notations don't have names. What's the dx symbol that sits next to the integrand called in an integration? But if you have to have a name, I think the AMS-LaTeX guide calls them cumulative operators, which might serve you. -lethe talk + 19:19, 1 June 2006 (UTC)
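In programming terms a cumulative operator over a finite sequence is just a fold; a tiny Python illustration of extending an associative binary operation to the n-ary version:

```python
from functools import reduce
from operator import add, mul

values = [2, 3, 5, 7]
print(reduce(add, values))                               # big-Sigma: 17
print(reduce(mul, values))                               # big-Pi: 210
print(reduce(set.union, [{1, 2}, {2, 3}, {4}], set()))   # big-cup: {1, 2, 3, 4}
```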
- So could a non-binary operation be extended to an n-ary operation with this notation? I'm particularly thinking of the hyper operator.
- No, because the hyper operators (past multiplication) are nonassociative. The notation could conceivably be extended to nonassociative operators, but order of evaluation would need to be provided. -- EdC 00:50, 2 June 2006 (UTC)
- But could this notation be defined for a function that would, without it, take 3 or more arguments? For example, if you have:
- can you define the following?
-
- Ugh. No. Absolutely not. No way. Look, you have to consider that these cumulative operators are essentially functions defined on multisets (or, at a pinch, sequences); the point is that an associative binary operator (with range a subset of its domain) extends to a function on finite sequences (if it's commutative, on finite multisets). You can't do that with operators of greater arity; for one, the length of the sequence (size of the multiset) is constrained (for ternary operators) to be an odd number ≥ 3. (And where do you grow the evaluation tree?) You'd do better to package the sequence as pairs ((n-1)-tuples) and define an F on sequences of pairs:
- Less convenient, but at least you'll be understood unambiguously. -- EdC 02:59, 2 June 2006 (UTC)
- Then and (function composition) are fine and valid uses of the notation, but (where ) isn't.
- Isn't dx a differential? —Keenan Pepper 21:16, 1 June 2006 (UTC)
- Does differential mean infinitesimally small number? If so, then yes, it was true in Newton's day that the dx symbol in an integration was an infinitesimal number, but it's not true today. Nonstandard analysis puts the infinitesimal on rigorous footing, but even that still does not allow you to interpret the symbol under the integral sign as an infinitesimal. You may interpret the symbol as a differential form in some cases. So I ask you, what does the word "differential" mean to you? The WP page is just disambiguation. -lethe talk + 21:44, 1 June 2006 (UTC)
Statistical process control
In statistical process control using control charts, I have noticed that presenters often recommend calculating the standard deviation in a, so to speak, nonstandard way. The recommended procedure is to calculate a mean moving range, i.e. the average of the absolute differences |x_i − x_{i−1}| between successive observations, using a relatively small dataset, and then divide the mean moving range by the magic number 1.128. If you google for "(1.128 and calculate)" and are feeling lucky today, you will find such a presentation. The number 1.128 is often represented by the symbol d2. Does anybody know the maths behind this non-standard estimator of the standard deviation? --vibo56 19:00, 1 June 2006 (UTC)
- If the SD (innate variability about the current mean) was estimated from all the values, there would be an over-estimate in the presence of a trend (shift of mean), whether upward, downward or cyclic. Using the difference between successive observations removes this effect. If the sum of the squares of these differences is used, it has to be divided by 2(n-1) and square-rooted to estimate SD. Your formula uses absolute differences, without a "2" in the denominator. But for a Normal distribution, Mean Absolute Deviation is SD*root(2/pi).
- Combining these corrections, 1.128 is root (4/pi).
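A quick Monte Carlo check of that constant, assuming numpy is available:

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(1_000_000)   # true SD is 1
print(np.mean(np.abs(np.diff(x))))                         # mean moving range, ~1.128
print(np.sqrt(4 / np.pi))                                  # d2 = 1.1283791...
```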
Indices
Can anyone help me with the problem below. I am aware that it is a homework question, but that is why I don't want you to answer it! The question is to simplify it and I assume that you have to multiply out the parentheses:
I obviously tried WP, but couldn't find the right article. Thank you very much. Kilo-Lima|(talk) 20:05, 1 June 2006 (UTC)
- It looks like your title may be your problem, as those appear to be exponents rather than indices. Given that, you're right that you need to multiply out; now you just need to check the rules for a^x * a^y = a^?. — Lomn Talk 20:18, 1 June 2006 (UTC)
- Ok, I won't answer it. You are correct in assuming that you have to multiply out the parentheses. I am fairly confident that your maths textbook will provide all the information you need to solve it. Just do it step-by-step, slowly and carefully. Write down each line, and be sure to know what rules you are applying in getting from one line to the next. Good luck with your homework. --vibo56 20:24, 1 June 2006 (UTC)
- If you're still having trouble, recall that a^x * a^y = a^(x+y) --Codeblue87 21:48, 1 June 2006 (UTC)
- Also remember that multiplication is distributive. The first equation of the distributivity article should help you; you can ignore the more complicated stuff after that. moink 22:40, 1 June 2006 (UTC)
- This question is a study in arithmetic properties of exponents. We first meet exponents in the limited form of positive integer powers, such as a^3 = a·a·a; but here we are challenged to extend our understanding to fractional powers.
- It is a property of all real numbers, however obtained, that multiplication distributes over addition: r(s+t) = rs+rt. (This is also a property of complex numbers.) Therefore, if we follow your hunch and "multiply out", we obtain
- Still being thoughtful and cautious, we observe that multiplication is both associative and commutative, so we may slightly rearrange to get
- Lacking experience with the fancier forms of exponentiation, it may not be obvious that anything we have done so far is particularly helpful. However, it does bring factors a^x, for various x, next to each other. In fact, this is helpful, for it brings us to the crux of the problem. But to simplify any further we must begin to understand the meaning of fractional exponents, and learn the rules that apply to them. In other words, just as we have used rules of multiplication and addition — such as distributivity, associativity, and commutativity — to get this far, we must use analogous rules involving exponents to proceed.
- The big picture will take time to see, but eventually we are able to construct a formal correspondence between expressions like
- on the one hand, and
- on the other, so long as a is positive. At that point (which most of us have reached) the last steps of simplification will be obvious. The purpose of this question is to help you get to that point. So, read the article on exponentiation, and enjoy. --KSmrqT 00:59, 2 June 2006 (UTC)
- A common blunder among people first learning this stuff is to assume that exponentiation distributes over addition. It does not. In other words, (a + b)^n ≠ a^n + b^n in general. Don't make that mistake. To figure out the right way to deal with exponentiation of sums, consult binomial formula. -lethe talk + 01:14, 2 June 2006 (UTC)
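For instance, already for squares: (a + b)^2 = a^2 + 2ab + b^2, which is not a^2 + b^2 unless ab = 0.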
how to run a Monte Carlo simulation on a DEM using ArcGIS
PLEASE I WOULD LIKE TO KNOW HOW THE Monte Carlo simulation techniques IS USED to evaluate the impact of DEM error on viewshed analyses.
- I would like to know what the caps lock key does. -lethe talk + 01:02, 2 June 2006 (UTC)
June 2
Challenging Integral
What kind of substitution should I use? Patchouli 06:18, 2 June 2006 (UTC)
- integrand=
- integral=
Patchouli 07:17, 2 June 2006 (UTC)
Particle leaving a path
I have a question involving circular motion. Visualise a roller coaster's path. At the start there is a straight part of length x at an angle of 10° above the horizontal. A particle rolls down this, and meets an arc of radius 8m sloping downwards, which declines to a straight path 40° below the horizontal. The path is smooth and the particle is affected only by gravity. I have to find the maximum value of x so that the particle stays on the path. I've only ever encountered particles with a mass, and this one has none specified, so I assume it cancels somewhere.
Anyway,
- If the particle leaves the path, would it leave as soon as it encounters the curved component? (i.e. it takes off and misses the first part of the arc.) Or is it possible that it leaves the path during the arc?
- Does weightlessness have anything to do with it when a particle is on the verge of leaving a path?
I'll figure stuff out from here.
Thanks. x42bn6 Talk 08:57, 2 June 2006 (UTC)
- Once the particle leaves the path, there will be no upward force acting on the particle any more, so the only force acting on the particle will be gravity. What shape of path will this particle take? If you need help, take a look at the article about trajectory. I think how you want to solve this problem is to find out when this "natural" trajectory lies above the path, so that the particle will tend to "take off" instead of following the path. —Bkell (talk) 11:07, 2 June 2006 (UTC)
- A particle follows a path only as long as the resultant force can supply the acceleration needed to keep it on that path. In this case, the resultant force comes from gravity; to stay on the path the centripetal component of the gravitational vector must be at least as great as the centripetal acceleration of a particle taking that path. Now, the centripetal acceleration depends solely on velocity and arc radius; velocity depends solely on height lost (assuming a frictionless path i.e. total conversion of gravitational potential energy into kinetic energy).
- This implies that the particle becomes more prone to leave the path as it proceeds round the arc: the velocity increases, and the centripetal component of the gravitational vector decreases. So the maximum x is that which has centripetal acceleration equal to centripetal component of gravitational vector at the end of the arc. Quite beautifully, both mass and gravity cancel out. See here to check your solution.
- Finally, what's special about 70.529 degrees? -- EdC 11:46, 2 June 2006 (UTC)
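In symbols, the condition EdC describes, at a point of the arc where the path makes an angle θ with the horizontal, is v^2/r ≤ g cos θ, with v^2 = v_0^2 + 2g Δh from energy conservation; here r = 8 m and Δh is the height lost since the start, and the particle leaves the path at the first point where the inequality fails.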
- (1) Yes, if the particle is going to depart at all, it'll depart at the very start of the circularly curved section. Both factors point in the same direction:
- If energy is conserved, the particle is moving fastest at the start of the curve. So it requires the most centripetal acceleration to hold to the path there.
- At the start of the curve, the slope is the steepest. So the normal component of gravity is the weakest there.
- Melchoir 20:42, 2 June 2006 (UTC)
- Are we looking at the same path? I thought it was like this:
[ASCII sketch: the straight 10° approach of length x running onto a circle of radius 8 m (drawn with its centre and radius marked), with the 40° straight leaving it on the far side]
- Oh, if it's that then, conversely, the particle might leave at any point, depending on its energy. Melchoir 21:37, 2 June 2006 (UTC)
- Actually, never mind, my exam is over. Luckily it didn't come out. But I did use the argument that if the particle reaches the next straight path, and we want it just to, R must be minimized, so we look for the upper value of or something. Thanks, anyway. x42bn6 Talk 04:51, 5 June 2006 (UTC)
Age in Chocolate
Why does this work?
1. First of all, pick the number of times a week that you would like to have chocolate (more than once but less than 10)
2. Multiply this number by 2 (just to be bold)
3. Add 5
4. Multiply it by 50 -- I'll wait while you get the calculator
5. If you have already had your birthday this year add 1756 .... If you haven't, add 1755.
6. Now subtract the four digit year that you were born. You should have a three digit number
The first digit of this was your original number (i.e., how many times you want to have chocolate each week). The next two numbers are: YOUR AGE! (Oh YES, it is!!!!!)
I got it from ebaumsworld and it works for me. I find it really weird. Thanks. schyler 13:48, 2 June 2006 (UTC)
- For an explanation, look here: http://mathforum.org/library/drmath/view/61702.html (the numbers are a bit different, but obviously they must be adjusted each year). --LambiamTalk 14:14, 2 June 2006 (UTC)
- See elementary algebra. If you want chocolate n times a week and you were born in year y, then you get
- 50 (2n + 5) + 1755 − y if you haven't had your birthday this year.
- 50 (2n + 5) + 1756 − y if you have had your birthday this year.
- This rearranges to:
- 2005 − y + 100n (if you haven't had your birthday)
- 2006 − y + 100n (if you have had your birthday)
- Hopefully it's now obvious that the (2005 − y) or (2006 − y) bit is your age, and the 100n bit gives n in the hundreds column (given the range you specified for n).
- Arbitrary username 14:19, 2 June 2006 (UTC)
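A one-line check of that algebra for an arbitrary made-up example (n = 7 times a week, born in 1980, birthday already passed in 2006):

```python
n, y = 7, 1980
print(50 * (2 * n + 5) + 1756 - y)   # 726: first digit 7, last two digits 26 = the age
```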
- You know, this wouldn't work if your age was at or beyond a century. Black Carrot 18:01, 2 June 2006 (UTC)
- The puzzling part is why anyone would be surprised. If I call a psychic hotline and give them my name, phone number, and credit card number, should I be surprised that they "mystically know" a lot about me? Just so, this song-and-dance routine is telling you no more than you tell it. You provide the chocolate number, you provide your year of birth and an adjustment for whether you've had a birthday this year; the calculations are merely an obscure and entertaining way of producing the stated result: (chocolate)×100 + (age), where (age) = (this year, adjusted)−(birth year). --KSmrqT 20:00, 2 June 2006 (UTC)
no, I put in 3 for how many per week and I'm 14 and I got 313,
- Check your work, especially step 5. If your chocolate number is 3 and if you are about 14 (born 1992) you should get either
1 → 3   2 → 6   3 → 11   4 → 550   5 → 2306   6 → 314
1 → 3   2 → 6   3 → 11   4 → 550   5 → 2305   6 → 313
- depending on whether you have had your birthday this year (first line) or not (second line). --KSmrqT 08:02, 3 June 2006 (UTC)
June 3
Laurent Series
I have a complex analysis exam coming soon, and I'm not very confident with Laurent series. Suppose I have a complex function f(z) which has a finite number of poles. Is there a general method for finding its Laurent expansion? Admittedly, my knowledge of Laurent series is somewhat limited. I know what they are, but not really how they work. Can anyone help? Maelin 03:12, 3 June 2006 (UTC)
- Have you read Laurent series? Conscious 06:56, 3 June 2006 (UTC)
- Yes, but it's an infinite series from -infinity to infinity. If I'm asked to find the Laurent series of a function, where do I start? The examples in the article don't show where the expressions come from, it's just "consider this function. Now, abracadabra! Here are some Laurent expansions for it depending on where you're interested!"
- The second formula of the article gives a general formula for calculating the coefficients in a Laurent series, so it's not magic, just calculation. Admittedly, that's not usually the easiest way to calculate. -lethe talk + 07:40, 3 June 2006 (UTC)
- You can always use the general formula and well-known series for some functions (see the example for in the article). Conscious 07:49, 3 June 2006 (UTC)
- Okay, I've done some more work with them. In our lecture note examples, we generally have a function of the form f(z) = 1 / g(z) where g(z) is some polynomial in z (usually conveniently factorised into linear terms). Then we reduce it to some form of geometric series and find the Laurent series. The problem is that I have no idea how we end up with a Laurent series that is valid for the particular region we're interested in. We just seem to head off reducing it in one particular way and then voila, we have the right one. In the article, we have an example just like this, and then three different Laurent expansions miraculously appear, correct for each region. Clearly, whoever made the example didn't do an infinite number of integrals around γ and then build a series out of them, so what is going on here? Apologies if I sound terse, this is very frustrating. Maelin 05:19, 4 June 2006 (UTC)
- Well, doing infinitely many integrations for the coefficients is not as bad as it sounds, since you can often tell at a glance what a contour integral will be. Anyway, like I said, that's not usually the easiest way. The example in the article goes like this. Split that thing up by partial fractions. When |z| < 1 and 2, you can use the geometric series with the z – 1 and the z – 2i terms. When z is between the two roots, you can use the geometric series with the z – 2i term, but for the other term, you have to use like z/(1 – 1/z). And for the last region, you have to invert z for both terms. Basically, you can only invoke the geometric series when the ratio has modulus less than 1, and this fixes how you find the Laurent series in each region. -lethe talk + 20:58, 4 June 2006 (UTC)
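Spelled out for a single term, the trick is just the geometric series used two different ways, for instance

1/(z − 1) = −1/(1 − z) = −(1 + z + z^2 + ...) for |z| < 1, and
1/(z − 1) = (1/z) · 1/(1 − 1/z) = 1/z + 1/z^2 + 1/z^3 + ... for |z| > 1.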
voronoi diagram
I would like the definition and applications of a Voronoi diagram (scatter application). Please put a definition on your web site.
quadratic formula
[edit]what is the quadratic formula —Preceding unsigned comment added by 69.231.27.28 (talk • contribs) 04:03, 2006 June 3
- Please read and follow the first bullet point at the top of this page. (And sign your posts with four tildes, ~~~~.) Thank you. --KSmrqT 05:33, 3 June 2006 (UTC)
Firefox: how to search edit areas?
How do I search edit areas in Firefox? For example, when I edit a long article and want to find where it links to a specific category, the text entered into the search field seems to be searched for only outside the wikicode. Is there any way to get it straight? Conscious 06:07, 3 June 2006 (UTC)
- I don't know of a way to do that. It would be very useful. However, if you use the "Show preview" button, you'll be able to search the edited version (though not the edit box itself), and that makes it a lot easier to find the required text, by counting paragraphs.... TheMadBaron 19:28, 3 June 2006 (UTC)
Mathematica to Wikipedia
I have Mathematica 5.0. What is the best way to transform the *.nb to a format that Wikipedia understands?
- You don't. You write text for an article explaining what you're doing. Dysprosia 04:14, 4 June 2006 (UTC)
- The HTMLSave function comes to mind. Realistically, the text will mostly need to be copied as plain text, perhaps with some UTF-8 characters; and each formula will need to be rewritten in either wiki syntax or TeX format. The TeXForm facility may help with the latter, though compatibility with MediaWiki's limited version is not guaranteed. Come BlahTeX, the MathMLForm version may be of interest, though it would make editing obnoxious. Note that it would be inappropriate to import Mathematica style sheets to override Wikipedia's own. --KSmrqT 04:27, 4 June 2006 (UTC)
Upper bound on n!
Given an integer n, what are the tightest bounds on factorial n? More specifically, I want to calculate the number of binary digits required to represent n! for a given n. -- Sundar \talk \contribs 10:37, 3 June 2006 (UTC)
- The article on factorial gives the following approximation based on Stirling's approximation: ln n! ≈ n ln n − n + ln(2πn)/2 + 1/(12n)
- Can this be taken as an upper bound on ln n!? If not, what would be an upper bound? -- Sundar \talk \contribs 10:45, 3 June 2006 (UTC)
- The article on Stirling's approximation says that the error is the same sign and size as the first omitted term. The next term in the series after the one you've written is negative, so your series is an upper bound. -lethe talk + 11:30, 3 June 2006 (UTC)
- Oh whoops. My comment above will be true if you also include the 1/12n term. So do that. -lethe talk + 11:32, 3 June 2006 (UTC)
- Oh thanks, Lethe. -- Sundar \talk \contribs 11:44, 3 June 2006 (UTC)
- Sposta be a 12. -lethe talk + 11:48, 3 June 2006 (UTC)
- Thanks, fixed it. By the way, what would be the equivalent one for log to the base 2 (my original question)? (Excuse my laziness.) -- Sundar \talk \contribs 12:06, 3 June 2006 (UTC)
- Well, log2 x = log x/log 2. So divide both sides of the equation by ln 2, and you're good to go. -lethe talk + 12:10, 3 June 2006 (UTC)
- I now realise. It wasn't laziness, but naivety. -- Sundar \talk \contribs 12:16, 3 June 2006 (UTC)
Well, considering that summation is cheap for a computer, for relatively small n you can get a very good estimate by using log2 n! = log2 2 + log2 3 + ... + log2 n
- Thanks. But, summation was not the main concern, logarithm was. -- Sundar \talk \contribs 06:29, 5 June 2006 (UTC)
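For the original question (the number of binary digits of n!), a small Python sketch combining the two points above:

```python
import math

def factorial_bits(n):
    """Binary digits of n!, using log2(n!) = lgamma(n+1)/ln 2."""
    if n < 2:
        return 1
    # Floating-point rounding could be off by one right at a power-of-two boundary.
    return math.floor(math.lgamma(n + 1) / math.log(2)) + 1

print(factorial_bits(20), math.factorial(20).bit_length())   # both 62
```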
i cut a little piece out of a torus (donut), what is the remaining space
Consider a torus, and cut a tiny CLOSED piece out of the side. So the piece I cut out is homeomorphic to D^2.
What is the remaining space? My professor tells me I can only see it when i start stretching that hole open 'until my fingers touch on the other side'. However I was born with no 3D mind, I simply do not see it.
It should be an 'easy space involving cylinder(s)' I would like to understand this completely for the understanding of homotopy and homology groups.
Evilbu 11:35, 3 June 2006 (UTC)
- Well, a torus is a sphere with a handle, and a sphere with a disc removed is a disc, so a torus with a disk removed is a disc with a handle. I don't know if that's the answer you're looking for though. -lethe talk + 11:53, 3 June 2006 (UTC)
Well uhm, what is a handle? My syllabus says every D^p × D^q is a handle? The article http://wiki.riteme.site/wiki/Handle_%28mathematics%29 doesn't really help me out right now. Evilbu 12:16, 3 June 2006 (UTC)
- A handle is a cylinder attached at its two end circles. It is not the product of two balls, which is trivial. -lethe talk + 12:29, 3 June 2006 (UTC)
- T^2 − D^2 is S^1 ⊔ S^1. To see this, represent T^2 as a square with side identifications; remove a chunk around the corner; shrink the remainder. Alternatively consider S^1 ⊔ S^1 as the skeleton of a torus and imagine growing a patch of skin around the torus from that skeleton; the remaining hole is D^2-shaped. EdC 12:39, 3 June 2006 (UTC)
- Is that supposed to be a disjoint union? The torus with a disc removed is certainly a connected space. Perhaps you mean the wedge sum of two circles instead? But that's not right either, as the latter is 1 dimensional, while the former is two dimensional. They are certainly homotopy equivalent though. -lethe talk + 12:55, 3 June 2006 (UTC)
- Yeah, I meant wedge sum, sorry. As for dimension - if you cross each with a , so that they join at a , then you get something like two cylinders tangent at right angles. EdC 17:09, 3 June 2006 (UTC)
- isn't the boundary of a 1-ball just a disjoint pair of points? If you go crossing anything with that, you'll get a disconnected space again. Surely not what you want. -lethe talk + 20:48, 3 June 2006 (UTC)
- S^1 ∨ S^1 is not the same as T^2 − D^2. S^1 ∨ S^1, the deformation retract of the space in question, is a 1-manifold except at one point, whereas T^2 − D^2 is a 2-manifold with boundary. T^2 − D^2 looks like a thickened copy of S^1 ∨ S^1 --- though it's not just a figure-8 drawn with a really fat-tipped marker. (The boundary of this figure-8 has three components, where the boundary of T^2 − D^2 has only one by the construction.) To visualize T^2 − D^2, take a big fat blocky plus sign (like the Red Cross logo), then glue the top and bottom ends together in the back, and the left and right ends together in the front. Tesseran 01:43, 6 June 2006 (UTC)
I am very confused now. What do you mean, a cylinder attached at its two end circles? You mean take a cylinder, then attach the upper and lower circle? Wouldn't that be a torus? Why would the product of two balls be trivial? What is the union of twice S^1? Evilbu 12:44, 3 June 2006 (UTC)
- A coffee cup has a handle. The handle attaches to the mug in two different places, once at each end. -lethe talk + 12:58, 3 June 2006 (UTC)
Seems like I know much less than I thought. Is there a precise definition of handle? (I am familiar with the language of quotient spaces.) In order to make sure we understand each other, I will say a couple of things, and please tell me when you disagree: D^2 is the closed disk in two dimensions; S^1 is the unit circle (it is one dimensional); T^2, the (empty, thus 2d) torus, is homeomorphic with S^1 × S^1; the full, thus 3d, torus is homeomorphic with S^1 × D^2.
- I disagree that a 3d torus is equal to a circle times a ball. Rather, such a thing is called a genus 1 handlebody. A 3d torus is the cartesian product of 3 circles. If you'd like a technical definition of a handle, you can take it to be a torus with a disk removed, though that won't help you visualize what it is. Hence the coffee cup description. -lethe talk + 20:46, 3 June 2006 (UTC)
Maybe it would be relevant to say why I want this. I want to find the torus' homology groups, especially the group H_2(T^2). Now my professor told us to do a Mayer Vietoris trick on the torus, by cutting out a little piece (a disk). The two spaces I get then have a 'relatively easy' intersection in my Mayer Vietoris sequence: it is a cylinder, homotopic with a circle, and thus completely known. But what is the other space? Evilbu 15:31, 3 June 2006 (UTC)
- It's two cylinders kissing, homotopic to a bouquet of two circles. EdC 19:26, 3 June 2006 (UTC)
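For reference, writing A for the punctured torus (which deformation retracts to the bouquet S^1 ∨ S^1), B for the removed disk, and noting that A ∩ B is an annulus homotopy equivalent to S^1, the relevant stretch of the Mayer-Vietoris sequence is

... → H_2(A) ⊕ H_2(B) → H_2(T^2) → H_1(A ∩ B) → H_1(A) ⊕ H_1(B) → H_1(T^2) → ...

which is only the setup; the computation itself is left alone here.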
- I suppose if you've previously worked out the homology of the more complicated piece this makes sense. (It also hints at a general construction involving a 2n-gon and edge gluing.) But otherwise, wouldn't it be more natural to use two cylinders (retracting to circles), with overlap deformation retracting to two disjoint circles? Also, this space is simple enough that you could take a rectangle split diagonally into two triangles, and glue the outer edges of the triangles together in such a way as to give a simplicial decomposition for an explicit computation. And if you've gotten as far as Mayer-Vietoris sequences you're probably not far from the Künneth theorem, which makes this T2 = S1×S1 product a trivial computation. You can check your work by these other means, and also by comparing the known Euler characteristic of the torus, namely zero, to that given by its Betti numbers summed with alternating signs.--KSmrqT 20:58, 3 June 2006 (UTC)
- Also, isn't H_n(X,Z) always isomorphic to Z when X is a connected manifold? Generated by the fundamental class of X. -lethe talk + 03:22, 4 June 2006 (UTC)
- How best to respond to a learning exercise? I'm assuming that the purpose is to get comfortable with computations, here using the Mayer-Vietoris sequence, rather than to get the answer. It's not as if computing the homology of a torus is much of a burden. Also, Evilbu doesn't tell us if the coefficients are Z, Z2, R, or something else. (Has the class covered the universal coefficient theorem yet?) Nor do we know if this is singular homology, though that seems a good assumption, and relatively unimportant. In short, I'm trying hard not to say "The answer is …!" :-D --KSmrqT 05:03, 4 June 2006 (UTC)
- To (sort of) answer lethe's question, consider the homology groups of R^n; these are the same no matter what n may be. For, R^n is homotopy equivalent to a punctured n-sphere, and deformation retracts to a point. (It is a contractible space.) So, what is H_2(R^2,Z)? Remember, homology measures cycles modulo boundaries under homotopy equivalence; to get a Z there must be a "hole" to catch a cycle, preventing repetitions from collapsing. --KSmrqT 19:20, 5 June 2006 (UTC)
- Well, I'm a little concerned by your phrase "homology measures cycles modulo boundaries under homotopy equivalence". It's possible to have homologous paths that are not homotopic, I think. But your point is well-taken: there's something wrong with my assertion that H_n(X,Z) is always Z for an n-dimensional manifold, since, as you rightly point out, H_n(R^n,Z) = 0 for R^n, which is trivial. Suitably chastened by your correction (and Blotwell's above), I will put forth that perhaps it's true for closed manifolds? -lethe talk + 08:54, 6 June 2006 (UTC)
- Yes, there's a "problem" with non-compact manifolds which you can fix by taking compactly supported homology. This gives you Hn (Rn, Z) = Z, but at the price that (by the above argument) it can't be a homotopy invariant. But there's one other problem with your claim, which is that if your manifold isn't orientable then the integer top-dimensional homology is 0. You can fix that by taking coëfficients in Z2. —Blotwell 18:05, 7 June 2006 (UTC)
Uhm, no, I do not know anything about the universal coefficient theorem. If it is relevant, we always consider these groups as modules, thus abelian groups. I know I could do a Mayer Vietoris trick by using two cylinders, whose intersection is two disjoint cylinders; then I find everything except H_2(T^2). Basically I was hoping that by doing this cutting out of a little sphere, I would be able to find it in another way. Evilbu 09:47, 4 June 2006 (UTC)
June 4
cost management accounting - indirect labor unit cost.
Please help a new small manufacturing company by locating "indirect labor unit cost." We need this for our financial portion of the business plan as required by the Small Business Administration. Our leaders, Masters of Science in Healthcare, for some reason did not include this information. If anyone out there can help, it will be greatly appreciated. Our company sews dresses and suits for premature infants and low birth weight infants. The SBA booklet indicates that this "cost" needs to be included with total production costs.
This question may not be a typical for Wikipedia, but we have really been trying to find this and to date have not been successful.
- "Labor unit cost" is the cost of labor per unit produced. So if in a month your labor cost is $100,000.00, and you produce in that time 5000 garments, the labor unit cost is $100,000.00 divided by 5000 equals $20. "Indirect labor cost" is that portion of labor cost that can be ascribed to activities not directly contributing to production (e.g. administration, salespeople, advertising). And the unit cost is as before. So if in that month your indirect labor cost is $40,000.00, then the indirect labor unit cost is $8. See http://www.uwm.edu/Course/IE360-Saxena/two.pdf section 2.5, elements of cost. --LambiamTalk 13:51, 4 June 2006 (UTC)
Formula used in civil engineering
Can someone give me a simple mathematical forumla used in civil engineering? It's for a math homework project. Thanks in advance! --Wizrdwarts 01:38, 4 June 2006 (UTC)
- Somewhere not far from where you live is a firm that does civil engineering. Contact them. Chances are excellent that they would be delighted to spend a few minutes talking to you about what they do, and offer a sample calculation or two. It should be much more fun and educational than asking a bunch of mathematicians on the web. (Though we can be fun, too, in our own way.) --KSmrqT 05:19, 4 June 2006 (UTC)
- Does Hooke's law count? Dysprosia 09:20, 4 June 2006 (UTC)
Catenary has a cool formula for suspension bridges... (and that is TYPICAL engineering right?) Oh by the way, you should write formula. Evilbu 10:48, 4 June 2006 (UTC)
- Minor pedantic correction - the curve of a free-hanging uniform chain is a catenary, but the curve of the cables of a suspension bridge, in the ideal case where we assume that the weight of the cables is negligible compared to the weight of the horizontal road deck, is a parabola. Gandalf61 08:59, 5 June 2006 (UTC)
random drawing of numbers 1–100
Suppose you had the values of 1 to 100. Then, you randomly organized the 100 numbers into 10 groups of 10. One group might contain the numbers 7, 13, 17, 38, 41, 52, 59, 71, 90 and 95 for example. Then, by group, the highest number would be given a corresponding value of 10, the second highest a corresponding value of 9, and so on. Then, the process is repeated, and any value earned is added to the last value earned. For example, the number 100 will always be given a value of 10 (because it is always the highest number out of 100), so after 3 random "draws," its corresponding value would be 30. Likewise, the number 1 would have a value of 3 after 3 "draws." My question is: how many random "draws" would it take so that all the numbers were in numerical order based on their values. Likewise, how many random "draws" would it take so that less than 10 numbers were not in correct placement when organized by values. I understand that this question is confusing (and it is quite hard to word), so if you have any questions as to what I mean, then please ask and I will update and clarify accordingly. Thank you in advance for all of your help. - Zepheus 02:55, 4 June 2006 (UTC)
- Fun challenge, though not a practical way to sort. But you cannot use random draws and ask those precise questions. For example, although it is statistically unlikely, fifty random draws could be exactly the same. Or did you know that? Anyway, perhaps you should think about why you expect that the cumulative "10-ranking" scores will converge to a correct 100-ranking order. That will be essential to answering the first question. For simplicity, consider four numbers in groups of two. Write out all possible draws and consider how they combine. Notice that after one draw the values will be ⟨1,1,2,2⟩, with only two distinct quantities; and after two draws the cumulative values will include 1+1 and 2+2, and typically 1+2 as well, but nothing else. So it is impossible to sort properly with only one or two draws. More generally, after n draws the largest cumulative value will be n times the group size and the smallest cumulative value will be n. Randomness aside, the difference of these must be large enough to permit a distinct value for each number in the full set. Since 11×(10−1) = 99 = 100−1, clearly at least 11 draws are necessary for a full sort. Of course, this necessary condition may not be sufficient; nor does it address the likelihood of a correct sort. --KSmrqT 06:02, 4 June 2006 (UTC)
- 11 draws are sufficient:
(1-10) (11-20) (21-30) ... (91-100) (1,11,21,31,...) (2,12,...) (3,13,...) ... (9,19,...) ... (10 times) ... (1,11,21,31,...) (2,12,...) (3,13,...) ... (9,19,...)
- It's fairly obvious that this schema assigns each number n the value n+10. EdC 14:23, 4 June 2006 (UTC)
- Additionally, this (with permutations) is the only way to get a full sort in only 11 draws.
- You're going to have to clarify how many random "draws" would it take so that all the numbers were in numerical order based on their values. As KSmrq pointed out, the numbers can stay unsorted through an indefinite number of draws. One possible question is how many draws are needed such that P(sorted | n draws) exceeds some value (say ½). I wouldn't think that P(sorted | n draws) has a nice form, though. —Preceding unsigned comment added by EdC (talk • contribs)
- Thank you for all your help so far, and it's been a long time since I've had a math class so it's tricky to write in mathematical ways. I knew originally that every draw could be the same, but it is statistically improbable. Also, I figured that when the number of draws approached infinity, the number of errors (or numbers not in correct placement when sorted by value) would reach zero. I was just wondering how many draws would probably be sufficient. I think my question has pretty much been answered, unless EdC has more to say on the matter. - Zepheus 16:56, 4 June 2006 (UTC)
- I'm currently running a simulation of the problem, and it seems that the average amount of draws required is around 2045 - If that is of any use. Keep in mind, though, that I have not double-checked my program's correctness, and it is very inefficient, so it could take a while until an accurate result is obtained. -- Meni Rosenfeld (talk) 17:28, 4 June 2006 (UTC)
- I've tried such a simulation too, and I've got a similar result: average 2069 draws needed from a sample of 160 tries. The same warnings apply as above. Btw, see birthday paradox as for why you need so many draws. – b_jonas 18:35, 4 June 2006 (UTC)
- Update^2: run on a faster SMP machine (from 13919 iterations) gives average 2058 draws, quartiles of number of draws are 1521, 1911, 2443. – b_jonas 19:19, 4 June 2006 (UTC)
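For anyone who wants to reproduce these figures, here is a minimal sketch of such a simulation (in Python; the function and parameter names are my own, not taken from the programs mentioned above):

import random

def draws_until_sorted(n=100, num_groups=10, rng=random):
    size = n // num_groups
    scores = [0] * n          # cumulative "rank" score of each number 0..n-1
    draws = 0
    while True:
        draws += 1
        perm = list(range(n))
        rng.shuffle(perm)
        for g in range(num_groups):
            group = perm[g * size:(g + 1) * size]
            # within a group the smallest number gets 1 point, the largest gets `size`
            for points, number in enumerate(sorted(group), start=1):
                scores[number] += points
        # stop when the cumulative scores are strictly increasing with the number
        if all(scores[i] < scores[i + 1] for i in range(n - 1)):
            return draws

# crude estimate of the expectation (each trial needs roughly 2000 draws, so this is slow)
print(sum(draws_until_sorted() for _ in range(10)) / 10)

Averaging over more trials should reproduce the figure of roughly 2050 draws quoted above.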
Some thoughts on the convergence of the order. Consider a random variable , the rank of number k in a random draw. Denote . Denote by the event that the numbers k and k+1 are in different groups, its probability is . Given , has the same distribution as , and is symmetric with respect to 0 ( for x>0), thus . With probability 1/10 the numbers k and k+1 are in the same group (, complementary to ) and and . That is,
Besides, we have
- .
After realization of n draws, the numbers 1,...100 have the correct order if
- for all k=1,...99
It is known that in distribution so the convergence to the correct order should be expected. The questions arise whether the r.v. are independent of each other, and for which smallest n the event occurs the first time, and what is the distribution of such n. (Igny 22:46, 4 June 2006 (UTC))
These mathematical functions are getting crazy. I wish I could decipher them. I'll definitely archive this page. One more question: the first answer I received was that 11 draws would be sufficient. The next answer was that roughly 2,050 draws would be needed. How are these related? Also, what is the rough estimate for the number of draws needed for less than, say, 10 mistakes? - Zepheus 19:09, 5 June 2006 (UTC)
- No, the first answer was that 11 draws are necessary. That is, with less than 11 draws you have zero chance of getting it right. No finite amount of draws is sufficient (in the sense of having a probability of 1 of winning). Roughly 2047 (obtained after over 55,000 experiments) is the average number (expectation) of draws until a success is obtained. Assuming the distribution is roughly symmetric, this also means that 2047 draws will give you a 50% chance of success. -- Meni Rosenfeld (talk) 19:27, 5 June 2006 (UTC)
- Well, according to the numbers I gave above, you have about 50% success after less draws than that: about 1911 draws. – b_jonas 20:47, 5 June 2006 (UTC)
Okay. I understand now. Thanks for the update, and all of your hard work. - Zepheus 21:16, 5 June 2006 (UTC)
- Above is a graph of the cdf of the distribution of the number of draws needed I've made from the output of my simulation. – b_jonas 21:26, 5 June 2006 (UTC)
- This graph is awesome thanks. - Zepheus 17:29, 6 June 2006 (UTC)
Loudspeakers
[edit]Can anyone recommend me loudspeakers for an integrated sound card (asus a7v8x-x) with a good quality/price ratio? thanks.
- The web site for NewEgg lists numerous speakers along with specs and customer ratings. That should help you narrow down your interests to price range, number of channels, optional subwoofer, wattage, and sensitivity. A quick web search suggests that the integrated ADI 1980 sound chip is not exceptionally good, so if you do not intend to some day add a separate card, it may not be worth investing in top-quality speakers. It does provide 6 channels of output, so you may be interested in a 5.1 speaker setup, consisting of front stereo, rear stereo, center, and subwoofer. Finally, listening habits and tastes vary considerably, so it matters whether you are interested in gaming, hip-hop, classical, and so on. Again, a little reading will be quite helpful in narrowing your options. --KSmrqT 19:22, 4 June 2006 (UTC)
free group (abelian?) <->free product<->coproduct confusion
[edit]Hi,
I have yet again a topology inspired question. First of all though I would like to express my gratitude for the many people who have helped me here.
Right now, being a student in exams, I mostly receive from Wikipedia, but I have given back to the community myself and will again. :)
I am confused about http://wiki.riteme.site/wiki/Free_product_with_amalgamation
Suppose I take the free product of the groups Z and Z; the article states it should give me the free group on two generators. Now the article on free groups it links to says that free groups and free abelian groups are not the same. There goes my hope that it would be Z ⊕ Z.
But wait! Later, that article says it is a coproduct of two groups in the categorical sense. But I was taught in my algebra class that for R-modules, and thus also for abelian groups (as they are the same as Z-modules), simply taking the outer direct sum of two modules should do just fine to give you a categorical coproduct.
So what is going on, can anyone point out the difference between coproduct and free product. What am I doing wrong?
This confusion has led me to believe that an 'eight' or an 'infinity symbol' has fundamental (homotopy) group Z ⊕ Z.
Thanks, Evilbu 14:53, 4 June 2006 (UTC)
- The coproduct of Z and Z in the category of Z-modules (i.e. abelian groups) is indeed Z ⊕ Z. The fundamental group of the figure eight is the coproduct of Z and Z in the category of groups, that is, the free product Z ∗ Z (which, for example, is non-abelian, so is clearly not Z ⊕ Z). —Blotwell 15:04, 4 June 2006 (UTC)
But that is bad for me! So you are saying: the free product of two groups is NOT the same as the coproduct? Evilbu 17:05, 4 June 2006 (UTC)
- Coproducts look different in every category. In the category of groups, the coproduct is the free product. In the category of abelian groups and the category of modules, it is the direct sum. In the category of topological spaces, it is the disjoint union (also the category of sets). In the category of pointed spaces, it is the wedge sum. Z is a set, a space, a group, an abelian group, and a module. So there are many different coproducts you can make out of Z. The fundamental group is a functor from the category of pointed spaces into the category of groups, and the coproduct that you have to use is the free product (the coproduct in Grp), not the direct sum (the coproduct in Ab). So π1 takes coproducts of pointed spaces to coproducts of groups. It is a continuous functor. -lethe talk + 20:39, 4 June 2006 (UTC)
- π1 isn't a continuous functor (one which preserves limits) because this would contradict the long exact sequence of a fibration. But more importantly, it isn't cocontinuous (preserving colimits) which is what I imagine you meant. For example it doesn't preserve the colimit of the diagram where both arrows take the line segment to a circle by identifying the two endpoints. (Hint: the colimit is again S1.) —Blotwell 01:37, 6 June 2006 (UTC)
- Firstly, you're right, of course I meant cocontinuous. I don't understand the bit about the long exact sequence of a fibration. I was about to complain that your claim contradicts the Seifert–van Kampen theorem, but then I saw the edit you made to (my addition to) the article fundamental group. π1 preserves pushouts along injections, but not every pushout. Your counterexample of course also helps. But doesn't π1 have an adjoint? -lethe talk + 08:44, 6 June 2006 (UTC)
- The classifying space functor? It's a "homotopy adjoint": the homotopy equivalence classes of maps biject, , but you can't make this into an actual categorical adjunction. Correspondingly, the neatest statement to make about π1 is that it takes homotopy colimits to colimits. Coproducts, and more generally limits of diagrams of cofibrations, are examples of homotopy colimits: my counterexample above is not. —Blotwell 16:38, 6 June 2006 (UTC)
- So classifying spaces are only defined up to homotopy equivalence? I was under the impression that they were defined up to homeomorphism, but I'm not so comfortable with the whole business, so I could be out for a six. Anyway, it sounds like we will be able to see that π1 is cocontinuous as a functor from the homotopy category of topological spaces then, no? Only, I don't know what the colimits look like in that category. -lethe talk + 02:20, 7 June 2006 (UTC)
- You'd think, wouldn't you? But I'm not convinced that the homotopy category is cocomplete and I can prove that colimits in the homotopy category are not the same as homotopy colimits: has homotopy colimit Sn+1. Homotopy colimits are generally the Right Thing and colimits in homotopy categories don't exist in general, but I can't actually think of a counterexample to your statement. I would say classifying spaces are only defined up to homotopy equivalence, though of course the bar construction picks a canonical representative for you. —Blotwell 16:54, 7 June 2006 (UTC)
Oh yes, I see, I think I have made a serious mistake in assuming something. I was seeing the abelian groups as a subcategory of the group category. I cannot take some abelian groups, take the free product (defined in the categorical sense) and assume it will still be that in the bigger category right? Evilbu 20:58, 4 June 2006 (UTC)
- There is nothing wrong with thinking of Ab as a subcategory of Grp. But you're right, the coproduct of groups does not restrict to the coproduct of Abelian groups on this subcategory. Stated more explicitly, the coproduct of two groups which are abelian in the category of groups (this coproduct is a free product; always nonabelian) is not the same as the coproduct of two abelian groups in the category of abelian groups (this coproduct is a direct sum; always abelian). -lethe talk + 21:02, 4 June 2006 (UTC)
- More generally, it is a mistake to think that any operation must restrict to a suboperation on a subset. Just because Ab and Grp both have coproducts does not mean the coproduct in Grp restricts to the coproduct in Ab on that subcategory. Similarly, the Killing form of a Lie algebra need not restrict to the Killing form of a subalgebra (this will happen for the Cartan subalgebra in the semisimple case, but need not in general). The covariant derivative of a vector in a submanifold of a Riemannian manifold need not equal the induced covariant derivative of that vector. Subcategories are, by definition, closed under composition of morphisms. This does not imply that they must be closed under every operation, like coproducts -lethe talk + 21:28, 4 June 2006 (UTC)
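To make the contrast concrete (my own notation, not quoted from anyone above): in terms of presentations,

\[
\mathbb{Z} \oplus \mathbb{Z} \;=\; \langle a, b \mid ab = ba \rangle
\quad\text{(the coproduct in } \mathbf{Ab}\text{)},
\qquad
\mathbb{Z} * \mathbb{Z} \;=\; \langle a, b \mid \;\rangle \;=\; F_2
\quad\text{(the coproduct in } \mathbf{Grp}\text{)},
\]

and in the free group F_2 the commutator aba⁻¹b⁻¹ is a nontrivial element, so the two coproducts of the same pair of objects are genuinely different groups.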
Bike problem
[edit]I know how easy it is to fall off a bike while not moving - I also know that it's harder to turn the wheel when I am in motion. I've had a look at angular momentum and related topics and I can't figure out why a rotating bike wheel is harder to turn from its line of motion than a stationary one. Any help please? Anand 18:54, 4 June 2006 (UTC)
- Although your second question should be answered at Gyroscope, it is not generally considered the correct reason for bicycle stability. See Bicycle#Balance, including the link to Bicycle physics. Walt 19:48, 4 June 2006 (UTC)
June 5
[edit]I just started the article circular number. If anyone believes they can add anything more to it, even one more sentence, go right ahead. — BRIAN0918 • 2006-06-05 05:52
- I added my comments on the talk page. --vibo56 09:05, 5 June 2006 (UTC)
- What is the difference with automorphic numbers? Can you give an example of an automorphic number that is not circular? --LambiamTalk 14:51, 5 June 2006 (UTC)
- If the definition in circular number is interpreted as "there exists a power such that...", then it's the other way around - 4 is circular (4^3 = 64) but not automorphic (4^2 = 16). -- Meni Rosenfeld (talk) 15:09, 5 June 2006 (UTC)
- Discussion on the talk page has reached the conclusion that circular number is equivalent to automorphic number. Gandalf61 16:00, 5 June 2006 (UTC)
Complex Functions & Poles
[edit]A question about poles. Consider a function of this form:
f(z) = (z − a)(z − b) / ((z − a)(z − c))
Will it have a pole at z = a? A removable singularity? Or some other form of oddness? Maelin 06:26, 5 June 2006 (UTC)
- Removable discontinuity. No pole. -lethe talk + 06:41, 5 June 2006 (UTC)
- f is in fact differentiable at z = a -- there's no oddness. Cancel the z-a factor. Dysprosia 06:55, 5 June 2006 (UTC)
- It's a removable singularity, and as one professor said, "At this point we assume all removable singularities are removed." In other words, while f(z) is technically undefined at z = a, since it is identical to (z − b)/(z − c) elsewhere and can be analytically continued through it, you can essentially work with the analytic continuation instead of the function as you defined it. More interesting is a function like sin(z)/z, which has no obvious cancellation, but because of its removable singularity is basically treated as though f(0) was automatically defined as 1. Confusing Manifestation 09:19, 5 June 2006 (UTC)
- Am I missing something (Analysis has never been my strong suit)? Why is f(z) undefined at z = a -- surely f(a) = (a-b)/(a-c)? Dysprosia 09:31, 5 June 2006 (UTC)
- Simply put, you can only cancel out nonzero terms. When z = a, you cannot cancel. The function is undefined there, though as others point out, it can easily be extended. -lethe talk + 10:07, 5 June 2006 (UTC)
- Of course. Excuse the diversion. Dysprosia 10:59, 5 June 2006 (UTC)
- Having studied them some more, I can now answer. The reason is that at z = a, you get z - a = 0, and to cancel that you must divide by zero and that's not defined. Everywhere else, z - a is some finite nonzero quantity, and the terms will cancel normally, but zero terms do not cancel in that way.
- As an example, (3x) / (3y) = x / y everywhere because 3 is never equal to zero, but (0x) / (0y) is indeterminate. -Maelin 10:00, 5 June 2006 (UTC)
- There could still be a pole at z = a, in the special case that c = a and b ≠ a. --LambiamTalk 14:57, 5 June 2006 (UTC)
Measurement unit on measuring tapes.
[edit]Several of my measuring tapes have a measure mark that is a small black diamond shape. There are five of them for every eight feet or 19 1/5 inches for each mark. What is this measure? Sincerely
- Albert J. Hoch
Its purpose is to allow carpenters to divide 8 feet exactly in five. See this page and this page (scroll down to diamond) for documentation. There has been a dispute on the Wikipedia where a user claimed it to be an "English cubit", but this is apparently not correct. --vibo56 09:45, 5 June 2006 (UTC)
- BTW, is there a method to divide a line into five equal parts? (ruler and compass only). Thanks. --DLL 21:29, 5 June 2006 (UTC)
- Yes. Draw two parallel rays from the endpoints of the line, pointing in opposite directions, and mark off five equal intervals on each. The line joining the k-th mark on one ray to the (5−k)-th mark on the other crosses the original line at k/5 of its length, so these lines divide it into five equal parts. --Serie 00:12, 6 June 2006 (UTC)
So the black diamond is a carpenter's mark! Very interesting, I'd been supposing it was some foreign unit of measurement. I was guessing Chinese!
Thanks very much. Albert J. hoch Jr.
Integral of dirac delta over step function
[edit]What does the following integral evaluate to:
where δ(x) is the Dirac delta function and H(x) is the Heaviside step function.
What about the following:
If these do not exist, what are reasonable approximations to them I can make? deeptrivia (talk) 18:27, 5 June 2006 (UTC)
- As a distribution, the delta function is really only supposed to be integrated against certain smooth functions, which the Heaviside function is not. Strictly speaking, the integral isn't defined. As for whether a meaningful approximation can be made, I do not know. -lethe talk + 21:32, 5 June 2006 (UTC)
Thanks lethe. Is it terribly unsafe to assume, for engineering purposes, that H(0) = 0.5, while integrating these? deeptrivia (talk) 21:42, 5 June 2006 (UTC)
- That's probably perfectly safe. For the purposes of integration, we only care about the equivalence class of functions which agree almost everywhere. You can do anything you want on a set of measure zero without affecting the integration. So you can take H(0) to be anything you want, and 1/2 is a sensible value. -lethe talk + 23:12, 5 June 2006 (UTC)
- NO! Integration with respect to Lebesgue is independent of sets of measure zero, but integration with respect to the Dirac measure, which is what you are doing, cares about pointwise values. You can only integrate functions that are continuous at zero with respect to Dirac. You cannot do this integral, you need to use an approximation for either Dirac or Heaviside. The problem is, the answer depends on which approximation, which is bad, and the reason the integral isn't defined. But, maybe your problem really involved some function that is being approximated by a Dirac or Heaviside. You should use that function instead. (Cj67 15:05, 6 June 2006 (UTC))
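A quick numerical illustration of that point: the value you get depends entirely on which approximating family you choose. (A Python sketch; the triangular bumps and the particular widths are my own choices, not anything from the posters above.)

import numpy as np

H = lambda x: np.where(x > 0, 1.0, 0.0)      # exact Heaviside step

def integral_against_bump(center, eps, n=200001):
    # integrate H against a triangular bump of unit area and half-width eps centred at `center`
    x = np.linspace(center - eps, center + eps, n)
    dx = x[1] - x[0]
    bump = np.maximum(0.0, eps - np.abs(x - center)) / eps**2   # height 1/eps, area 1
    return float(np.sum(bump * H(x)) * dx)

eps = 1e-3
print(integral_against_bump(+eps, eps))   # bump entirely to the right of 0  ->  about 1
print(integral_against_bump(-eps, eps))   # bump entirely to the left of 0   ->  about 0
print(integral_against_bump(0.0, eps))    # bump symmetric about 0           ->  about 0.5

All three families converge to the Dirac delta as eps goes to 0, yet the three limits of the integral differ, which is exactly why the integral of the delta against the step has no approximation-independent value.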
- So basically, I guess you want to do these integrals by substituting 0 in as an argument for H. I expect that this bit is unsafe, though I'm not sure exactly how unsafe it is. -lethe talk + 23:14, 5 June 2006 (UTC)
- There is a problem here: the symbol ∫_A f dμ
- means "take the integral of the function f over the set A w.r.t. the measure μ". While it is perfectly true that you can change the values of a function on a set of μ-measure zero without changing its integral, you must not miss that it is not the Lebesgue measure we are integrating against, but the Dirac delta, which assigns a weight of 1 to the point 0; hence you cannot change the value of H(0). The integral you are asking about actually equals H(0) (which is undefined, according to Heaviside step function). Cthulhu.mythos 09:00, 6 June 2006 (UTC)
- Okay, if I ask maple to evaluate:
it gives:
The limit of this expression as epsilon --> 0 is undefined. However, as an engineering approximation, we can assume epsilon to be something small, like 1e-24, and then Heaviside(epsilon) = 1 and Heaviside(−epsilon) = 0, and so the integral evaluates to 1. Is there any flaw in this reasoning? I'm asking because there's a significant thing happening between -1e-24 and 1e-24, which will be ignored by this assumption. Another thing is, if I were doing this integration from, say, -1 to 1, then, as you pointed out, I wouldn't have cared about values at isolated points. But here, the integration has to be done in a range that encloses 0 and is as small as can be imagined, so the value at 0 might have a significant effect. Regards, deeptrivia (talk) 01:14, 6 June 2006 (UTC)
- You say that the limit of the expression is undefined, but it appears well-defined to me. The right-hand limit of Heaviside is 1, the left-hand limit 0, and so the difference between the two is 1. Your subsequent comments bear out this limit, and by the way, it does not matter how small an interval you integrate over, the result is always the same. What I fail to understand is how Maple arrived at the expression you quote. I don't know how to arrive at the integral that Maple has given you, so I'm not sure how bulletproof it is, but anyway, your reasoning about the value is correct: it is 1, no matter the interval. -lethe talk + 01:57, 6 June 2006 (UTC)
- But how can this be? Shouldn't the c from the integrand return in the result? Does Maple assume that Heaviside(0) = 0? After all, you expect the answer 1/(1+c(Heaviside(0))2). --LambiamTalk 02:18, 6 June 2006 (UTC)
- The rule is
- ∫ f(x) δ(x) dx = f(0)
- if f is nice enough. My guess is that "nice enough" in this case is "continuous". In any case, since H(0) is ill-defined, I guess that none of the integrals are defined.
- If I am pressed to give a value to the integral
- then I'd use that the delta function is the derivative of the Heaviside function, and hence ∫ δ(x) H(x) dx = ∫ H′(x) H(x) dx = [½ H(x)²] evaluated over the real line = ½.
- I have grave doubts about this reasoning, but perhaps it can be made rigorous.
- If your integral arises from an engineering application, I would take a closer look to the limiting process that you are using. -- Jitse Niesen (talk) 03:11, 6 June 2006 (UTC)
The limit of
as epsilon --> 0 is undefined according to Maple itself (using the 'limit' function). The first approach I followed was the one proposed by Jitse Niesen, but I can't remember any more why I gave up on it. Anyway, these equations arise from a nonlinear treatment of point loads and moments at various points on a flexible beam with discontinuities in cross-sectional area (like a stepped beam). I am using Heaviside functions to model steps in area, and Dirac functions to model point loads and moments. Any suggestions appropriate to this situation? Thanks. deeptrivia (talk) 04:18, 6 June 2006 (UTC)
If you use the definition of the Riemann-Stieltjes integral, then the first integral is equivalent to
- (this is because )
On the other hand, . Conscious 05:57, 6 June 2006 (UTC)
- The integral no. 4 seems to be equal to , and nos. 3 and 5 are infinite (because of δ²). No. 2 was evaluated by Jitse. I'd say all these integrals are undefined as Riemann integrals, but well-defined as Riemann–Stieltjes integrals. (And since you start getting infinite results, you might need to tweak your physical model, as results seem to be dependent on how abrupt the edges are and how pointy the loads are). Conscious 06:45, 6 June 2006 (UTC)
- I tweaked my model a bit. It's now working at least for small values of the parameters. Hopefully I won't have problems with large values. Thanks for your help. deeptrivia (talk) 18:04, 8 June 2006 (UTC)
two loops on a cylinder, are they in the same homology class??
[edit]Hi,
consider a cylinder thus
I am still working on that torus, and I thought this would be handy:
suppose I have a loop, so let's say a path
and another, a path
Are these two loops in the same homology class? I mean, is their difference an element of the image of
--Evilbu 20:01, 5 June 2006 (UTC)
- If each loop goes around the cylinder the same number of times, then there is a homotopy taking one into the other: they are homotopy equivalent. By definition, they then are in the same cycle class. Consequently, yes, they are also in the same homology class (cycles modulo boundaries). The Z of H1 comes from that fact that a once-around cycle can be added or subtracted with itself any number of times to give homotopically different cycles (twice-around, once-around-reversed, and so on). Viewed at a slightly higher level, a deformation retraction of the cylinder produces a circle, S1; therefore these spaces have the same homology. Furthermore, this is true more generally: X×Rn has the same homology as X. --KSmrqT 21:10, 5 June 2006 (UTC)
- "If each loop goes around the cylinder the same number of times, then there is a homotopy taking one into the other: they are homotopy equivalent." You mean homotopic. Tesseran 22:46, 9 June 2006 (UTC)
- A nit about your notation. What you've written are not paths. Your paths should have domain [0,1] and codomain X. You should better write something like
- to indicate that the number t in the unit interval is mapped to a point in the path on the cylinder. -lethe talk + 21:36, 5 June 2006 (UTC)
Thanks, yet I'm sorry but I don't completely get it.
Please be very clear in what you mean : homotopy between paths, or homotopy between continuous maps in general.
Here was my idea: a 'push up' u of (b−a) is a continuous map from the cylinder to the cylinder, homotopic to the identity
this means
and thus
A little weird, I think. Why would a homotopy between those two paths suffice? And what kind of homotopy do you mean? Usually 'homotopy between two paths' means a homotopy that fixes the begin and end points the whole time, which cannot be the case here, as the two loops are even disjoint.
Evilbu 21:42, 5 June 2006 (UTC)
- It's quite easy to see that your two paths are homologous: the boundary of the finite cylinder segment bounded by p and q is p – q. Thus they differ by the boundary of a 2 chain, so they are homologous. As for homotopy, you often consider homotopies with fixed endpoints, but you don't have to. The point is that given a homotopy between two curves with fixed endpoints, the two curves are the boundary of the image of the unit square under the homotopy. This also works on the cylinder for a homotopy without fixed endpoint, because the sides of the square are not in the boundary of the image. -lethe talk + 23:03, 5 June 2006 (UTC)
- To amplify on what lethe has said, in this example we have two options to consider. The definition of homotopy applied to paths says that the ends can move. A path is a map, f, from the unit interval [0,1] to the space X. Given two such maps, f and g, a homotopy continuously deforms one path into the other. Before we start to compute homology groups, we want to take our huge number of cycles and reduce them to classes by homotopy equivalence. A closed loop on the cylinder is a 1-cycle and also a path for which the start and end points coincide. If we have two such loops a homotopy will necessarily deform loop to loop, but it need not leave any point fixed.
- This is not quite the same as computing the fundamental group, π1(X,x0), which requires a relative homotopy leaving point x0 fixed. (Of course, the homotopy group π1 is independent of the choice of x0 if the space X is a path-connected space.)
- A formally different option is the definition of homology groups as cycles modulo boundaries. Even without the reduction by homotopy equivalence this can cause two cycles to be identified in a homology group.
- The definitions and implications in algebraic topology take time and exercise to grok. It will come; and besides, (modern) algebraic geometry is worse. I had the strange experience that algebraic topology seemed to have more geometric appeal than algebraic geometry! --KSmrqT 23:58, 5 June 2006 (UTC)
Thanks everyone, it's a bit hard to understand that all completely. But I surely would like to know this : lethe, you wrote that p-q is the boundary of a segment. But I was taught that you need linear combinations (over the integers) of 2-simplices (those are maps from a triangle in the plane to your space). How would you proceed? Evilbu 08:15, 6 June 2006 (UTC)
- I'm not sure what you're asking. Two 1-cycles are homologous if they form the boundary of a 2-chain, which can be thought of as a particular type of linear combination of 2-simplices. So a cylinder segment is a 2-chain, and its boundary is the two circles. Thus the two circles are homologous. -lethe talk + 08:39, 6 June 2006 (UTC)
- The earlier reference to Mayer-Vietoris suggested Evilbu already had a solid grounding in some of the basics, but maybe we'd do better to take more explicit steps.
- So, consider a cylindrical strip, a circle swept perpendicular to its plane. Topologically, the circle is S1, the sweeping is some interval such as I = [0,1], and the cylinder is the product space S1×I. (We could just as well use an infinite cylinder, S1×R.)
- Now suppose we consider a strip in the midsection, in the interval [a,b], with 0 ≤ a,b ≤ 1. If we slice it open parallel to the sweep, flattening gives a rectangle. Split the rectangle into two triangles. Each triangle is a 2-simplex. In fact, we can consider these triangles "on the surface". That is, we have a map from plane triangles to cylinder triangles.
- To compute explicitly, we'll need some names. Call the edges of the rectangle T (top), B (bottom), L (left), R (right), and call the diagonal edge from top-left to bottom-right D. Call the vertices tl, tr, bl, br. This gives us a 2-simplex, σ1, with edges D, R, T (in that order); and another, σ2, with edges L, B, D (in that order).
tl   T   tr
 ● ⟵ ●
L ↓  ↘  ↑ R
 ● ⟶ ●
bl   B   br
- But we need to be a little more careful, because each edge is a 1-simplex with its own vertex order. Thus
T = (tr, tl)   B = (bl, br)   L = (tl, bl)   R = (br, tr)   D = (tl, br)
∂T = tl − tr   ∂B = br − bl   ∂L = bl − tl   ∂R = tr − br   ∂D = br − tl
∂σ1 = D + R + T
∂σ2 = L + B − D
∂∂σ1 = (br−tl) + (tr−br) + (tl−tr) = 0
∂∂σ2 = (bl−tl) + (br−bl) − (br−tl) = 0
- So far, so good. Each 2-simplex has a boundary that is a chain of 1-simplexes, and the boundary of each boundary is 0. Thus, automatically, a boundary is a cycle. Now we want to add our two 2-simplexes to form a chain, σ1+σ2. But before we do, we should glue the left and right edges of our rectangle together, which is what happens on the cylinder. Being careful with orientation, we declare that R = −L. This implies
∂σ1 = D − L + T
∂σ2 = L + B − D
∂(σ1+σ2) = T + B
- We ordered the vertices of T right-to-left, and those of B left-to-right. (If we do many of these calculations we need to adopt a consistent ordering convention, and there is one that works well automatically.) Taking that into account, notice that we have verified what lethe asserted, that a loop around the cylinder at height a is homologous to a loop (in the same direction) at height b, because their difference is the boundary of a 2-chain, σ1+σ2.
- I apologize for not including a good picture. (Anyone?) It would make most of this easier to see. --KSmrqT 23:17, 6 June 2006 (UTC)
- Addendum: Since we've come this far, we might as well relate the rectangle to the torus. Simply identify the top and bottom edges with proper orientation, T = −B, and we're done. To remove a disc for Mayer-Vietoris, cut out σ2. Or, identify generator cycles and compute homology directly.
- Notice that when we identify top and bottom edges, the chain σ1+σ2 becomes a cycle: its boundary is zero, since T+B = −B+B. Notice that we have also necessarily identified all four vertices. Claim: For the torus, L, B, and D are 1-cycles that are not boundaries. (Verify!) Are any of them equivalent? Claim: For the torus, D is homotopic (and homologous) to L+B. (Verify!) --KSmrqT 07:03, 7 June 2006 (UTC)
Sine-rule like formula for radians
[edit]There is a highly useful formula which looks like the sine rule, ie something/something = something/something = something/something but I've completely forgot it. I believe terms like arc length, area, theta etc were included in it but I can't remember the other ones, nor can I remember the order. Thanks, Matt. —Preceding unsigned comment added by 80.229.237.12 (talk • contribs) 19:39, 5 June 2006
- Have you read the radian article? Radian measure is the ratio between the arc length and the radius, i.e. θ = s/r, where s is arc length and r is radius. Notice that since the circumference of a complete circle is 2πr, it follows (from the radian article) that a full circle is 2π radians,
- or: 360° = 2π rad.
- What the article didn't mention (can anyone expand the radian article?) is the formula for sector area in radian terms. Since the ratio between the area of a sector and that of the full circle is the same as the ratio between their radian measures, we have A/(πr²) = θ/(2π),
- or: A = ½θr²,
- where A is the area. In any case, I don't think a sine-like formula will be very useful; really, the only thing you need to know is that a radian is simply the ratio between the arc length and its radius, and everything else follows logically. --Lemontea 02:48, 6 June 2006 (UTC)
- Hi, thanks for the help. I've worked out with the help of those two that the one I was looking for is:
- I find this very useful for solving radian problems. Is this a well-known formula? —Preceding unsigned comment added by 80.229.237.12 (talk • contribs)
- Of course. In fact, this is an accurate restatement of the ideas presented by Lemontea. -- Meni Rosenfeld (talk) 15:57, 6 June 2006 (UTC)
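For what it's worth, one natural way to write the relations above in the "something/something = something/something = something/something" shape asked for (whether this is exactly the formula Matt had in mind is a guess on my part):

\[
\frac{\theta}{2\pi} \;=\; \frac{s}{2\pi r} \;=\; \frac{A}{\pi r^{2}},
\qquad\text{equivalently}\qquad
\theta \;=\; \frac{s}{r} \;=\; \frac{2A}{r^{2}} .
\]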
Multiplying 2 16-bit numbers with 32-bit registers?
[edit]I'm trying to implement IDEA in assembly language on a 386, and I'm having trouble because it was optimized for 16-bit processors. I want to be able to concatenate two 16-bit numbers to multiply mod 2^16 + 1 and then separate them later, but I'm having trouble with it. I've tried using and , but neither lets me extract just the multiplications I want and it's all mixed up. --Zemylat 21:05, 5 June 2006 (UTC)
- Using , we have:
- So perform the r.h.s. multiplication in 32 bits, giving --LambiamTalk 02:43, 6 June 2006 (UTC)
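The exact identity Lambiam wrote out did not survive in this archive, but the standard trick for IDEA's multiplication modulo 2^16 + 1 relies on 2^16 ≡ −1 (mod 2^16 + 1): split the 32-bit product into its high and low halves and subtract. A Python sketch of the arithmetic (a 386 version would do the same with one 32-bit MUL; this sketch ignores IDEA's convention that an all-zero operand stands for 2^16):

def mul_mod_65537(x, y):
    # x, y in 1..65535; returns x*y mod 65537 using only the 32-bit product
    p = x * y
    lo = p & 0xFFFF          # low 16 bits
    hi = p >> 16             # high 16 bits
    r = lo - hi              # x*y = hi*2**16 + lo, and 2**16 is congruent to -1
    if r < 0:
        r += 0x10001         # add the modulus back after a borrow
    return r

# sanity check against a direct computation
import random
for _ in range(100000):
    x, y = random.randint(1, 0xFFFF), random.randint(1, 0xFFFF)
    assert mul_mod_65537(x, y) == (x * y) % 0x10001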
Geometry question -- truncated icosahedron
[edit]I assure you this is not homework, just a question from someone who hasn't taken geometry since high school and did almost nothing with three dimensional shapes, at that.
Let's say I have a truncated icosahedron that should fit into a sphere with an inner diameter of 150 cm. How long should each of the vertices be? I'm sure this is probably easy for someone to calculate given all of those wonderful symbols on the icosahedron page, but I'm totally baffled by them.
Many thanks. --Fastfission 22:48, 5 June 2006 (UTC)
- A vertex is a single point; it has no length. So, do you mean how long should the edges be? Or do you want coordinates for each of the 60 vertices? If the latter, the section Canonical coordinates has all the data, assuming a sphere of radius r, where r2 = 9φ + 10. The associated edge length is not given, but can be computed if so desired. --KSmrqT 23:08, 5 June 2006 (UTC)
- Length of edges, sorry. I probably picked up the habit of calling the lines between vertices vertices themselves from computer graphics or something like that. I don't need the coordinates, just the edge lengths. I can't compute them myself, because I don't understand the formulation and am not really interested in learning it from scratch just for this one question. :-) --Fastfission 02:33, 6 June 2006 (UTC)
- No matter where you picked up the habit, get rid of it; it's wrong. As for edge lengths, the article links to MathWorld, where the ratio of circumradius to edge length is given as
- (1/4)√(58 + 18√5), or approximately 2.478. For a sphere with inner diameter 150 cm (radius 75 cm) this implies an edge length of approximately 30.3 cm. --KSmrqT 03:05, 6 June 2006 (UTC)
- Thanks. --Fastfission 03:11, 6 June 2006 (UTC)
splicing WAV files
[edit]I have to do some cutting and splicing of audio files in WAV format, but I don't seem to have suitable software handy. Does anyone know of any freeware which might do the job and run under Windows XP? —Preceding unsigned comment added by Physchim62 (talk • contribs) 19:00, 6 June 2006
- There is a list of free audio software at Free audio software. One I'd recommend is Audacity. Harryboyles 10:23, 6 June 2006 (UTC)
The number 9
[edit]After I played around with the number 9, I noticed that 9 will always end up as 9.
For example:
9x9 = 81 (8 and 1) 8+1=9
9x15 = 135 (1 and 3 and 5) 1+3+5=9
9x265 = 2385 (2 and 3 and 8 and 5) 2+3+8+5=18 (1 and 8) 1+8=9
9x996633 = 8969697 (8 and 9 and 6 and 9 and 6 and 9 and 7) 8+9+6+9+6+9+7=54 (5 and 4) 5+4=9
You can do this with any random number: the digits of 9×n always reduce to 9.
So my question is: When was this noticed the first time and who noticed it?
-Randi Hermansen, Denmark
- Indeed, this is the simplest test for working out if a number is divisible by 9 (add digits together, if that sum is divisible by 9, then the number is divisible by 9). A similar test exists to check if 3 is a factor (add all the digits together, and if that sum is divisible by 3, then the number is divisible by 3). Sjakkalle (Check!) 13:42, 6 June 2006 (UTC)
- I can't answer the bolded question, but you'd probably be interested in Divisibility rule and the general way to determine these properties. Walt 14:14, 6 June 2006 (UTC)
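The reason behind the trick: 10 ≡ 1 (mod 9), so every number is congruent to the sum of its decimal digits modulo 9, and repeatedly summing digits therefore sends any nonzero multiple of 9 to 9 itself. A quick check (a Python sketch, function name my own):

def digital_root(n):
    # repeatedly add decimal digits until a single digit remains
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

assert all(digital_root(9 * k) == 9 for k in range(1, 100000))
print(digital_root(9 * 996633))   # prints 9, matching the example above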
- There was a somewhat related question on the science desk awhile back. Y'all should really keep a better eye on that page–some of us are apt to write foolish answers when confronted by anything more complicated than addition. EricR 18:53, 6 June 2006 (UTC)
Manual Calculation
[edit]Does anyone have any information on how the vast tables of logarithms were calculated? Clearly, it is not a simple task, or tables would not have been necessary. 68.6.85.167 21:52, 29 May 2006 (UTC)
- It's simple, it just takes a long time and it's prone to error, so it's useful to make a handy book. I could do it using power series, but I'm sure there are more efficient methods. —Keenan Pepper 14:38, 6 June 2006 (UTC)
- I'm not sure what method the people at those times used, but the power series are (natural log):
- ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + …, provided the absolute value of x is smaller than 1,
- or ln(y) = 2(z + z³/3 + z⁵/5 + …) with z = (y − 1)/(y + 1), which converges for all positive real numbers.
- However, last time I tried, the second one seems to converge dead slowly for some numbers, so I think it's still easier to use the first power series, and use the identity ln(ab) = ln(a) + ln(b) to break the number down until it's small enough to be within the radius of convergence. (reference though on that page, it should be rather than ) --Lemontea 04:24, 7 June 2006 (UTC)
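A small sketch along those lines in Python, using the second (artanh-style) series together with the identity ln(ab) = ln(a) + ln(b) to pull the argument close to 1 first, so the series converges quickly (the helper names are mine):

import math

def ln_series(y, terms=30):
    # ln(y) = 2*(z + z^3/3 + z^5/5 + ...) with z = (y-1)/(y+1); valid for y > 0
    z = (y - 1.0) / (y + 1.0)
    total, power = 0.0, z
    for k in range(terms):
        total += power / (2 * k + 1)
        power *= z * z
    return 2.0 * total

def ln(y):
    # reduce the argument by factors of 2 before summing the series
    assert y > 0
    halvings = 0
    while y > 1.5:
        y /= 2.0
        halvings += 1
    while y < 0.75:
        y *= 2.0
        halvings -= 1
    return halvings * ln_series(2.0) + ln_series(y)

for y in (0.5, 2.0, 10.0, 123456.0):
    print(y, ln(y), math.log(y))   # the two columns agree closely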
- I think this was done by hand (and yes, that was a laborious task). You might be interested in reading the biographies of Henry Briggs and John Napier. You can also take a look here. Sjakkalle (Check!) 14:40, 6 June 2006 (UTC)
- In the years 1614 (Napier) and 1624 (Briggs), necessarily all computations were manual. Both men were clever inventors and calculators. Logarithms themselves were a brilliant labor-saving device. Curiously, there seems to be a parallel between the methods Briggs described and modern methods for hardware, described here. --KSmrqT 23:56, 7 June 2006 (UTC)
- There must be more fancy power series expansions for the logarithm which converge faster. Anyone know of a neat one to share with the RD? --HappyCamper 19:16, 8 June 2006 (UTC)
- I think if you want to calculate a whole logarithm table, not just a single logarithm, the best is to calculate a complete antilogarithm table (by repeated multiplications) and then reverse the two columns. There are still issues with maintaining precision but I think it can be done. – b_jonas 09:24, 10 June 2006 (UTC)
- See also analytical engine for an ambitious but abortive project to automate this. Arbitrary username 20:03, 11 June 2006 (UTC)
Thanks
[edit]I just wanted to say thanks to all of the contributors here. The people here are really knowledgeable and have helped satiate my intellectual curiosity on different occasions. Mayor Westfall 20:21, 6 June 2006 (UTC)
- Thank you also M. Westfall. --DLL 20:32, 6 June 2006 (UTC)
Recurrence Relations and Logarithm
[edit]I'm learning recurrence relations in school and I frequently come across problems involving finding the number of generations needed for the recurrence relation to have an answer twice its original value. So basically we are asked to find n when:
The method that is taught in school is to go through each generation by iteration until you find the n at which un has grown to twice the size of u0. However, the obviously easier method is to use the equation:
Is there a similar shortcut when the recurrence relation is instead defined as this?:
I have tried working it out by firstly expanding each iteration:
From this the whole series can be expressed as:
We can see the bracketed part of the equation is actually a geometric series, with the starting value b, ratio a and number of terms n (not n−1 as the starting value counts as one term). Also, the number of terms appearing in the geometric series seems to equal the number of iterations in the recurrence relation. Therefore we can use the equation for the sum of the geometric series:
Where s is the starting term, r is the ratio and n is the number of terms. Thus, substituting into the recurrence relation:
Unfortunately at this point I hit a dead end because I'm unable to change the subject to n. Is this along the right lines or is there a completely different method? ----★Ukdragon37★talk 20:27, 6 June 2006 (UTC)
- You're nearly there. Solve for :
- Take logarithms:
- Divide by (assuming ):
- EdC 21:36, 6 June 2006 (UTC)
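The formulas themselves did not survive in this archive, so for reference here is a reconstruction of the algebra, assuming (as the description of the geometric series suggests) that the recurrence is u_{n+1} = a·u_n + b with a ≠ 1, the target value is u_n = 2u_0, and the quantities inside the logarithms are positive:

\[
u_n \;=\; a^{n}u_0 + b\,\frac{a^{n}-1}{a-1} \;=\; 2u_0
\quad\Longrightarrow\quad
a^{n}\Bigl(u_0 + \tfrac{b}{a-1}\Bigr) \;=\; 2u_0 + \tfrac{b}{a-1},
\]
\[
n \;=\; \frac{\log\bigl(2u_0 + \tfrac{b}{a-1}\bigr) - \log\bigl(u_0 + \tfrac{b}{a-1}\bigr)}{\log a}.
\]

Setting b = 0 recovers the familiar shortcut n = log 2 / log a for the purely geometric case.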
- Thank you so much! That question has been bugging me for weeks! ----★Ukdragon37★talk 22:39, 6 June 2006 (UTC)
June 7
[edit]a fat delta function?
[edit]Hello, I'm a student of physics taking an engineering course. Can anyone give some intuition on the following:
The Fourier transform of a sine wave of frequency w extracts the frequency w, but the Fourier transform of a delta function (which has no width in the time domain and so no duration) contains all frequencies.
My question is: how can a delta function accommodate all frequencies?
Thanks. -crj
- Probably because the Fourier transform of a delta function is constant for all frequencies and the Fourier transform of sin(wt) is zero for all the frequencies but w.(Igny 03:11, 7 June 2006 (UTC))
- Consider the family of functions A sech(A t√π⁄2), whose Fourier transforms are sech(1⁄A ω√π⁄2). Each of these has the shape of a hump centered at the origin, and as A gets larger the hump gets higher and narrower. The delta function is the limit of just such a concentration, with the area remaining constant. Observe that as the function gets narrower, its transform gets broader. Intuition says that the more abrupt a transition, the higher the frequencies required to produce it. In fact, the limit of the transforms of functions in this family as A goes to infinity is 1, meaning that all frequencies are present equally. The precise shape of the family is not important in arriving at the delta limit. For example, the Gaussian "bell", A exp(−(A t)2/2), is a nice family to try. --KSmrqT 04:17, 7 June 2006 (UTC)
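A numerical version of that intuition, using the Gaussian family mentioned at the end (a Python sketch; the integration grid and the sample frequencies are arbitrary choices of mine):

import numpy as np

t = np.linspace(-50.0, 50.0, 400001)
dt = t[1] - t[0]
omegas = [0.0, 1.0, 5.0, 20.0]

for A in (1.0, 5.0, 25.0):
    f = A * np.exp(-(A * t) ** 2 / 2.0)   # tall narrow bump; its area is sqrt(2*pi) for every A
    # f is even, so its Fourier transform is real: F(w) = integral of f(t)*cos(w*t) dt
    spectrum = [float(np.sum(f * np.cos(w * t)) * dt) for w in omegas]
    print(A, [round(s, 4) for s in spectrum])

As A grows the time-domain bump narrows toward a delta function, and the printed spectrum flattens toward √(2π) ≈ 2.5066 at every frequency: all frequencies become equally present.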
Ha! It makes sense that higher frequencies are needed to produce more abrupt transitions. Thank you all for taking the time to answer. (Incidentally, I'm not sure how the answer sech(1⁄A ω√π⁄2) was obtained. I searched through my signal analysis books, an engineering mathematical handbook, and a book on mathematical methods of physics but could not find the result. I even tried to get this result using Maple, but all I get in return is my original input!)
- I first ran into sech so long ago I no longer remember the original source. One reason I like to use it as an example is precisely because it is not well-known. Most people learn about Gaussians and impulse trains transforming to versions of themselves, but the typical transform pair involves two different kinds of functions. That's not important for the delta function limit, but it's nice to have something different for a change. Recent versions of Mathematica have a FourierTransform function you might like to try. Better still, maybe someone reading this would like to give a simple argument for correctness. --KSmrqT 10:23, 8 June 2006 (UTC)
- Oooh...that sech example is a nice example... --HappyCamper 19:12, 8 June 2006 (UTC)
Macs and PCs
[edit]Apple has released Boot Camp public Beta, which will be included in Mac OS X v10.5.
Question: Will Windows Vista also be able to run on a Mac using Boot Camp? --Alexignatiou 08:49, 7 June 2006 (UTC)
- If not in Boot Camp, then surely using Parallels Workstation. Two indications that it will be possible in Boot Camp are this report that some hackers have already done it, and this Cringely column stating "One reason why Microsoft isn't surprised by Boot Camp is because Microsoft has been working with Apple to make sure that Windows Vista runs well on IntelMacs." --KSmrqT 09:14, 7 June 2006 (UTC)
Is every function in Cp also locally integrable?
[edit]Hello,
I am quite unsure about something my professor told me.
Let Ω be open, and let f be a function in C^p(Ω); unless I am really mistaken, that just means that f is defined on Ω, and that it can be continuously differentiated p times.
Now he told me that if I just define an extension by making it zero outside Ω, I get a locally integrable function, thus: integrable over every compactum.
Either I got my definitions incorrect, or this is incorrect: what about in ?
Evilbu 10:14, 7 June 2006 (UTC)
- If I am not mistaken, one also requires the function to be zero near , but I am not sure. Observe also that in order for the extension of to be locally integrable you just need to be (which is nothing surprising). Hope this helps. Cthulhu.mythos 15:39, 7 June 2006 (UTC)
Hm, I don't understand, is that true? What about and ? That function can be extended to a function, but it certainly will never be integrable?? So what, in your opinion, is the definition? I am guessing you would go for this then:
and and f can be continuously differentiated p times
Evilbu 17:10, 7 June 2006 (UTC)
- It is true that the function is in L¹_loc(Ω), and you don't need the function to be zero at the boundary of Ω or to extend it. Remember, we are talking about L¹_loc(Ω), so we are talking about the integral being finite over sets that are compactly contained in Ω. It is not true that the extension by zero is in L¹_loc(ℝⁿ), as you illustrate. (Cj67 19:30, 7 June 2006 (UTC))
Thanks, but then what is your definition exactly of ? Evilbu 19:52, 7 June 2006 (UTC)
- . Functions from can be extended by 0 to (Igny 22:01, 7 June 2006 (UTC))
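A concrete example of the distinction being drawn here (my choice of example, since the original formulas were lost from this archive): take Ω = (0, 1) and f(x) = 1/x. Then

\[
f \in C^{\infty}\bigl((0,1)\bigr) \subset L^{1}_{\mathrm{loc}}\bigl((0,1)\bigr),
\qquad\text{but}\qquad
\int_{-1}^{1} \tilde f(x)\,dx \;=\; \int_{0}^{1} \frac{dx}{x} \;=\; \infty,
\]

so f is locally integrable on Ω (every compact subset of (0, 1) stays away from 0), while its extension by zero fails to be locally integrable on ℝ: the compact set [−1, 1] meets the boundary of Ω, where f blows up.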
AMD64 With 64-bit OS
[edit]If I install Ubuntu (64-bit linux) on my computer with AMD64 3200+ should I expect an increase in performance? --Username132 (talk) 13:22, 7 June 2006 (UTC)
- As compared to what? Compared to a 32-bit Ubuntu system (compiled for ix86), yes. – b_jonas 20:39, 7 June 2006 (UTC)
- It should run faster, but depending on what you're doing the difference may not be all that noticeable. You also give up a certain amount of flexibility since you can't as easily run 32-bit applications. For example, your web browser will not be able to use some plugins, such as Flash. (There are ways around this, involving setting up a 32-bit chroot jail, which are probably quite easy to do but I haven't bothered).-gadfium 01:40, 8 June 2006 (UTC)
- Sorry to butt in - can I ask how soon such problems (eg with Flash) are likely to be resolved? I'm weighing up the same proposition. Thanks --The Gold Miner 06:34, 8 June 2006 (UTC)
- You may find this forum discussion to be of interest. Apparently the latest versions of K/Ubuntu don't require chroot to run 32-bit applications. My information was out of date.-gadfium 08:23, 8 June 2006 (UTC)
- Thanks, a lot of food for thought there. I think I'll wimp out and run 32-bit... Thanks again. --The Gold Miner 06:29, 10 June 2006 (UTC)
Do I Really Need Expensive HD?
[edit]I don't want to pay for hardware capable of more than I need. I want to buy a new system disk and the way I see it, I've a few options; a) buy two WD raptors and put into RAID-0 configuration b) buy two budget HDs and put into RAID-0 c) buy one raptor d) buy one budget drive
When loading the OS, for example, is the speed of the HD a bottleneck for an AMD64 3200+ system with Corsair Value RAM? And what games really benefit from 300 MB/s data transfer to and from your HD? I would have thought the graphics card would be the bottleneck in any system with an old ATA-100 - I mean, the most important, immediately required information will be in the RAM, won't it? —The preceding unsigned comment was added by Username132 (talk • contribs).
- Assuming you have enough RAM, your assumption is correct and a faster drive will only decrease startup times. It won't have an important effect on performance once everything is loaded into RAM. —Keenan Pepper 23:39, 7 June 2006 (UTC)
- You may need faster HD to do audio/video/picture editing, as well as data mining. Copying, archiving, compiling, and obviously defragging take less time with faster HD. (Igny 00:01, 8 June 2006 (UTC))
June 8
[edit]If I roll three six-sided dice (3d6) and only take the median roll (not the mean or average), what kind of bell curve or distribution odds would it give me for the result 1 to 6?
If I roll two 20-sided dice (2d20) and only look at the higher result of the two, what is the average result I will get? What about 3d20 and only look at the highest die? 4d20 etc. up to 16d20...?--Sonjaaa 11:43, 8 June 2006 (UTC)
- 1. The probabilities are :
1 : 2/27
2 : 5/27
3 : 13/54
4 : 13/54
5 : 5/27
6 : 2/27
- 2. I'm too lazy to find analytical solutions right now, so I'll give you approximate numerical solutions instead: the averages are, respectively: 13.83, 15.49, 16.48, 17.15, 17.62, 17.97, 18.24, 18.46, 18.64, 18.79, 18.91, 19.02, 19.11, 19.19, 19.26. Write again if you need something more accurate.
- -- Meni Rosenfeld (talk) 15:34, 8 June 2006 (UTC)
- For the second question, the probability of getting x as the highest result when rolling n 20-sided dice is (x^n − (x−1)^n)/20^n. The average value is the sum over x from 1 to 20 of x·(x^n − (x−1)^n)/20^n. Chuck 14:18, 9 June 2006 (UTC)
- So if you're interested in numerical solutions, they're : 13.8250, 15.4875, 16.4833, 17.1458, 17.6179, 17.9709, 18.2445, 18.4626, 18.6403, 18.7877, 18.9118, 19.0176, 19.1087, 19.1880, 19.2574. -- Meni Rosenfeld (talk) 16:58, 10 June 2006 (UTC)
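Both parts are easy to check by brute force / directly from the formula above (a Python sketch):

from fractions import Fraction
from itertools import product

# part 1: distribution of the median of 3d6
counts = {}
for roll in product(range(1, 7), repeat=3):
    m = sorted(roll)[1]
    counts[m] = counts.get(m, 0) + 1
print({m: Fraction(c, 6 ** 3) for m, c in sorted(counts.items())})
# -> {1: 2/27, 2: 5/27, 3: 13/54, 4: 13/54, 5: 5/27, 6: 2/27}

# part 2: expected highest of n twenty-sided dice
def expected_max(n, sides=20):
    return sum(Fraction(x * (x ** n - (x - 1) ** n), sides ** n)
               for x in range(1, sides + 1))

for n in range(2, 17):
    print(n, float(expected_max(n)))   # 13.825, 15.4875, 16.4833, ...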
JavaScript date object
[edit]It is quite straightforward to use JavaScript to return today's date, but how would you go about getting it to return tomorrow's date, or the date in ten days' time? — Gareth Hughes 14:18, 8 June 2006 (UTC)
- Get today's date, then add a day (86400 seconds, or whatever the conversion is). — Lomn Talk 15:03, 8 June 2006 (UTC)
- But how do you add 86,400 s to a date object (should I not use milliseconds?)? Could someone give an example? — Gareth Hughes 15:29, 8 June 2006 (UTC)
- Try this site. It appears you use setDate(getDate() + n) — Lomn Talk 17:58, 8 June 2006 (UTC)
- Ah, that's how you do it! Thank you. — Gareth Hughes 23:20, 8 June 2006 (UTC)
Microsoft C++
[edit]Not long ago I purchased an "Introductory" copy of Microsoft's Visual C++ on eBay. Since I have not programmed in C since 1984 I was really surprised at how far it looked like C had come. However, the "Introductory" version will not compile programs that worked great back in 1984, and even when a console program is written that compiles with no errors it stops and says that because it is an "Introductory" version I can't make an executable file. I need another compiler but would like to avoid giving away any more of my hard-earned money to Microsoft. What C++ compiler do you recommend? ...IMHO (Talk) 17:12, 8 June 2006 (UTC)
- Borland has a great C++ command-line compiler which can be combined with Spetniks C++ Compiler Shell if you want a visual environment. —Mets501talk 17:46, 8 June 2006 (UTC)
- The GNU compilers are an excellent free option for nearly any language/platform combination. — Lomn Talk 18:00, 8 June 2006 (UTC)
- If programs that worked in 1984 won't compile, it might be because they were written in K&R C, while the compiler expects ANSI C, or because they are using libraries or header files which are missing from the Microsoft compiler. You can find a list of free C compilers here. --vibo56 talk 19:41, 8 June 2006 (UTC)
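As a made-up illustration of the K&R point: a 1984-style definition such as

int add(a, b)
int a, b;
{
    return a + b;
}

will be rejected when compiled as C++ (and draws warnings from modern C compilers), whereas the ANSI form the Microsoft compiler expects is

int add(int a, int b)
{
    return a + b;
}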
- BTW, I thought Microsoft was giving the introductory version of Visual C++ away for free. I find it difficult to believe that the compiler refuses to create executables. Is the problem only related to command-line programs? --vibo56 talk 19:48, 8 June 2006 (UTC)
- Visual Studio 2005 has Microsoft's latest Visual C++, but it compiles to CLI (you need the .NET Framework 2.0 to run the executables). Visual Studio Express Edition is free. If you're just poking around for personal programming pleasure, I recommend it; it's very good. --Froth 03:11, 9 June 2006 (UTC)
- Reinstallation of .NET Framework 2.0 did not solve the problem. Actually I'm thinking of getting back into assembler since it was the first compiler that brought to an end the need to program in machine code (binary). ...IMHO (Talk) 14:54, 9 June 2006 (UTC)
- I meant that you'd need to install .net 2.0 to run executables made by visual studio. --Froth 02:33, 12 June 2006 (UTC)
- I use codeblocks http://www.codeblocks.org/ Kingpomba 11:01, 15 June 2006 (UTC)
Screenshots
[edit]- Intro was sold out of Seattle on eBay. I imagine that someone got a bunch of copies for free and then put them on eBay.
...IMHO (Talk) 20:42, 8 June 2006 (UTC)
- I can believe it. Only running in interpreter mode would allow them to demonstrate most of their capabilities but still make you want to pay them money to get the ability to create compiled executables. Bill Gates didn't create his Evil Empire by being stupid, after all. StuRat 20:24, 8 June 2006 (UTC)
- Doesn't that message just say that the redistribution of executables is not allowed? It doesn't say anything about the introductory version not being able to create executables. In fact it explicitly acknowledges that executables can be created. —Bkell (talk) 21:49, 8 June 2006 (UTC)
- Yes, you are right. I misinterpreted it when I saw the next screen with the error flag. How is the no-redistribution rule enforced and, more importantly, what is the cause of the error? Thanks ...IMHO (Talk) 23:27, 8 June 2006 (UTC)
- It's probably not enforced through technological means; if you make an .exe file with that compiler, then I can't think of any way they could prevent you from giving it to someone else. But by using the software you agreed to some licensing agreement, which is a legal contract, and part of that contract probably said that you will not redistribute the executables you make. (The reason is probably that they don't want software companies buying the cheaper introductory version in place of the full version.) As for the cause of the dialog, I would guess that it will come up every time you compile something, just to remind you that you are not allowed to redistribute the executables. But that's only a guess; I've never used that particular software myself. —Bkell (talk) 02:15, 9 June 2006 (UTC)
Matlab's bvp4c
[edit]I have a very strange problem with Matlab's bvp4c. Hopefully, someone here will have some idea what's going wrong. I am solving a 30-variable boundary value problem, and I know that for certain inputs, a particular variable u3 must be 0 everywhere. bvp4c returns the values of variables y(x), as well as their derivatives yp(x). The derivative of u3 in the result is always zero (something like 1e-17, to be precise, and x ranges from 0 to 1), so that looks good. But the value of u3 is varying a lot (instead of staying 0, it goes smoothly, but not linearly, to 0.9). Doesn't this clearly mean there's a bug in Matlab's bvp4c? What else could be the problem? deeptrivia (talk) 18:10, 8 June 2006 (UTC)
- From your description it certainly appears to be a bug, but indeed a very strange one. I've no idea what would cause this. Did you check that the method converged? I'd expect Matlab to print an error message or a warning if it didn't, but check the residual in any case. -- Jitse Niesen (talk) 10:02, 11 June 2006 (UTC)
Matrix question
[edit]Suppose we have a large symmetric matrix which can be partitioned into smaller block matrices. These smaller blocks happen to be symmetric too. Suppose further that we are given all the eigenvalues of each of these blocks. Is there a way to infer the eigenvalues of the entire original matrix easily? --HappyCamper 19:09, 8 June 2006 (UTC)
- Depending on what you mean by "partition", it's likely that the eigenvalues are the same. Meaning that the eigenvalues don't depend on the basis. If you find a basis in which the matrix is block diagonal, bully for you. Find the eigenvalues of the blocks, you have also found the eigenvalues of the original matrix. Edit: after reading KSmrq's followup below, I realize that my reply is only useful for block diagonal matrices, something you didn't specify in the question, so this answer may be entirely useless, in which case, my apologies. -lethe talk + 01:28, 9 June 2006 (UTC)
- Was the intent that the blocks be on the diagonal? --KSmrqT 03:03, 9 June 2006 (UTC)
- In case you are not restricting yourself to block diagonal matrices, then knowing the eigenvalues of the blocks does not give enough information to get the eigenvalues of the big matrix. Example: consider
- The 2-by-2 blocks have the same eigenvalues, but the matrices themselves do not. -- Jitse Niesen (talk) 13:36, 9 June 2006 (UTC)
- Nice simple example. – b_jonas 14:53, 9 June 2006 (UTC)
- I guess that was a bit of wishful thinking, huh? Thanks guys. --HappyCamper 16:49, 9 June 2006 (UTC)
this is somewhat elementary, and perhaps you know this already. but in the special case that all your blocks are scalar multiples of each other, then it is a Kronecker product and you can get the eigenvalues of the big matrix. Mct mht 00:39, 15 June 2006 (UTC)
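(The standard fact behind that last remark, stated for completeness: if M = A ⊗ B is a Kronecker product, then the eigenvalues of M are exactly the products λ·μ of an eigenvalue λ of A with an eigenvalue μ of B, so in that special case the spectra of the blocks really do determine the spectrum of the whole matrix.)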
June 9
[edit]3D taxicab world
[edit]Has there been any research done into extending taxicab geometry into 3D space? --Tuvwxyz 21:11, 9 June 2006 (UTC)
- In that article, it's defined for any dimension, only restricting to the plane after "For example, in the plane..." Is there some question you wanted answered? Melchoir 22:03, 9 June 2006 (UTC)
- Not really, just curious. --Tuvwxyz 01:50, 10 June 2006 (UTC)
June 10
[edit]Sector of an oval
[edit]My stepdad (a landscaper) recently asked me how to find the area of... I guess the best way to name it is a sector of an oval. There are two radii (?) that meet at a right angle and one is 10 feet and the other 8 feet. They are connected by an arc that is ≈17 feet. How would I go about finding the area of this shape? Thanks. schyler 01:43, 10 June 2006 (UTC)
- Try finding the area of the ellipse ("oval") and dividing by four because it is one quarter of the ellipse (if the two beams meet at a right angle). —Mets501talk 01:57, 10 June 2006 (UTC)
- The area of this one, by the way, would be 80π/4 = 20π, about 63 square feet. —Mets501talk 02:00, 10 June 2006 (UTC)
- This oval is probably a stretched circle, more commonly known as an ellipse. When a circle has radius 1 ft, its area is exactly π = 3.14159… ft², or approximately 355⁄113 ft². Stretching in a single direction multiplies the area by the same scale factor; so stretching to 10 ft one way and 8 ft the other multiplies the area by 80. Thus the full ellipse would have an area of approximately 251.3 ft². The two stretch directions at right angles to each other give the major and minor axes of the ellipse; these cut the ellipse into four quadrants of equal area. So far the calculations are elementary. However, if a sector is cut out by lines in two arbitrary directions, the area of the sector is somewhat more complicated to find. A conceptually simple approach is to "unstretch" the ellipse and sector lines back to a unit circle. The area of the circle sector is half the radian measure of the angle between the "unstretched" lines; scale that up to get the area of the ellipse sector. Unfortunately, stretching changes the angles between lines other than the axes, so we cannot simply measure the sector angle of the ellipse. --KSmrqT 03:02, 10 June 2006 (UTC)
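To put numbers on that last step (assuming, as in the question, that both cutting lines pass through the centre of the ellipse): map each point (x, y) to (x/10, y/8), measure the angle θ in radians between the two mapped directions, and the sector area is 80·θ/2 = 40θ square feet. As a sanity check, the quadrant case has θ = π/2, giving 40·π/2 = 20π ≈ 63 ft², the same figure quoted above.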
- These calculations assume, of course, that the oval really is a true ellipse. But I think if it were a true ellipse, the arc that connects those two radii should be about 14 feet 2 inches. So it doesn't seem to be an ellipse, and the "63 square feet" measure will be off by a bit. —Bkell (talk) 03:07, 10 June 2006 (UTC)
Mathematica to Wikipedia
[edit]I tried to translate the Mathematica .nb files to TeX and math-markup files. Wikipedia did not understand these. The only way was to translate to HTML. It consisted mainly of .gif images, which I had to translate to .png using GIMP. If I used small fonts they were unreadable. I think this is my only article, Collocation polynomial, so I am not interested in becoming a specialist in TeX or math markup. Now I am going on holiday. If someone has time and possibility to clean up the article, please do. --penman 05:08, 10 June 2006 (UTC)
- Are these copyrighted images and text that really shouldn't be copied into Wikipedia ? StuRat 15:41, 10 June 2006 (UTC)
- Ugh. You don't write Mathematica code to demonstrate results in articles. I said this before, you write text or TeX explaining what you're doing. Dysprosia 23:51, 12 June 2006 (UTC)
- Mathematica has a TeX output command; it turns a "notebook" (.nb) into a TeX file that you can cut and paste the relevant parts out of. Also, individual lines can be converted to TeX with the TeXForm command, (or is it "TexForm"?) and then be cut and pasted. --GangofOne 02:15, 16 June 2006 (UTC)
Conic Sections
[edit]Yes, this is assignment work, but I have done most of the work. We are given the equation of the basic hyperbola x^2/a^2 + y^2/b^2 = 1, and are asked to prove that PF' - PF = 2a, where P(x,y) is a variable point on the hyperbola, and F' and F are the foci at (-c,0) and (c,0) respectively. I can prove this by taking the basic equation above, and manipulating it to show sqrt((x+c)^2+y^2) - sqrt((x-c)^2+y^2) = 2a. However, I find I need to substitute c^2-a^2 for b^2 in order to do this. In other words, I need to prove c^2 = a^2 + b^2. Looking around on the internet, because most people start with the difference of the distances (sqrt((x+c)^2+y^2) - sqrt((x-c)^2+y^2) = 2a) and use that to find x^2/a^2 + y^2/b^2 = 1, they simply define b as being sqrt(c^2-a^2). Obviously, since I am working from the base equation and using it to find the difference of the distances, it would not be right to just replace b^2 with c^2-a^2 without providing justification. Can it be done? Or is my method too complicated?
- (Your hyperbola equation should be x²/a² − y²/b² = 1, with a minus sign instead of a plus.) Actually, you have completed the assignment. Why? Because you have shown that if you define the number c so that c² = a² + b², then the two points (±c,0) act as foci. Without such a definition, how are the foci derived from the equation, eh? --KSmrqT 11:17, 10 June 2006 (UTC)
Bypassing cyclic redundancy check?
[edit]Hi there,
I'm working on a publication using a lot of information burned onto a DVD by a friend of mine, but I keep getting cyclic redundancy check errors. The article here is rather useful (but rather hefty)... what I need to know, though, is if there's a way to say "just skip that bit and keep copying, please" to the computer. If it's just a little bit of data that the computer can't read, can't I just hop over that bit and see if the file's still basically okay later? If one pixel of one photo is FUBAR, that doesn't change that much for me. --MattShepherd 12:20, 10 June 2006 (UTC)
- The problem, however, is compression. If one bit is corrupted, then it could corrupt later bits when it is uncompressed. That's why they have CRCs. —Preceding unsigned comment added by Zemyla (talk • contribs)
- The filesystem itself won't be compressed, although the files stored on it could be. If you're on a Unixish system, you should be able to obtain a disk image (which you can burn to another DVD or mount directly via the loopback device) using dd conv=sync,noerror. However, you may still end up losing entire disk blocks (a couple of kilobytes) for even minor scratches. It's the best solution I know of, though. —Ilmari Karonen (talk) 00:05, 11 June 2006 (UTC)
- Actually, it should be possible to use dd on the individual files as well. No need to make a disk image (unless you want to). —Ilmari Karonen (talk) 00:08, 11 June 2006 (UTC)
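For a single file that might look something like this (the path is only an example; bs=2048 matches the DVD sector size, and conv=sync,noerror makes dd pad unreadable sectors with zeros rather than giving up):

dd if=/media/dvd/photos/img_0042.jpg of=img_0042.jpg bs=2048 conv=sync,noerror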
- I remember once having come across a program called dd_something, which skipped unreadable sectors, but googling now didn't retrieve it. I found this one, though: safecopy, which may be useful. --vibo56 talk 12:43, 11 June 2006 (UTC)
- You're probably referring to dd_rescue. Not having tried it, I can't comment on what, if any, practical differences it has from dd conv=sync,noerror, though the page suggests that it may be somewhat faster in certain situations. —Ilmari Karonen (talk) 13:41, 11 June 2006 (UTC)
- There's also sg_dd, a variant of dd using raw SCSI devices, which might be able to extract more data using its coe=3 setting — except that the feature is apparently not supported by CD/DVD drives. —Ilmari Karonen (talk) 13:50, 11 June 2006 (UTC)
- I shall plunge into some of these things (although I am a (shudder) Windows user, I lack the brainpower to make even SUSE roll over and bark on command). Thanks all for the advice and the clarification about how redundancy checks work. --66.129.135.114 14:12, 12 June 2006 (UTC)
- SuSE is eager to roll over and bark, but not so easy to teach to play dead. It's not all that hard, even for Windows folks. As for the dd_rescue program, it works. But your problem doesn't sound like the usual sector drop-out trouble on magnetic media. The DVD file system has layered CRC and other error correction, and if it fails, I don't think dd_rescue can help. It knows nothing about the structure of the file system, relying on the file system code to retry until it goes right or the operator gives up. There's a script companion for it which automates the bookkeeping for sector trials and saves the user a great deal of effort. ww 04:23, 16 June 2006 (UTC)
Is the axiom of choice necessary for constructing sequences recursively?
[edit]Suppose F is a set, R is a binary relation on F, and for each a ∈ F there is b ∈ F such that (a, b) ∈ R. I am interested in recursively constructing a sequence (ai)i ≥ 0 such that for every non-negative integer i, (ai, ai+1) ∈ R. It is easy to show that finite sequences of this type with arbitrary length exist; however, I am having difficulties showing that an infinite sequence of this type exists. That is, of course, unless I am using the axiom of choice, in which case the proof seems straightforward. My question is, is the possibility of this construction provable with ZF, or is the axiom of choice (or a weaker form) necessary? Is it still necessary if it is also known that F is countable? I strongly believe that the answers are, respectively, yes and no, but I just want to make sure. -- Meni Rosenfeld (talk) 17:37, 10 June 2006 (UTC)
- What you need, in general, is the axiom of dependent choice. But you don't need any choice if there's a wellordering on F—just choose the least element of F that works, at each step. If F is countable, then there's an injection from F into the naturals, and from that it's easy to recover a wellordering. --Trovatore 18:39, 10 June 2006 (UTC)
Great, thanks! -- Meni Rosenfeld (talk) 19:18, 10 June 2006 (UTC)
regular distribution disappearing when applied to every test function: must it come from zero?
[edit]Hi,
let Ω be open and nonvoid in R^n,
let f be locally integrable on Ω,
by that I mean f : Ω → C is measurable and on every compactum K ⊂ Ω it is integrable,
suppose now that ∫ f(x)·w(x) dx = 0 for every w ∈ C_c^∞(Ω)
(this means w is infinitely differentiable on all of Ω but it has a compact support in Ω)
show that f is almost everywhere zero
Now I have worked on this, and came up with the idea of convolving with an approximation of unity.
But then I got confused: what exactly do I do with this open set Ω? I have to respect the confines of my domain, right? Thanks,
Evilbu 19:56, 10 June 2006 (UTC)
- The basic idea is that if, say, f>0 on a set with positive measure, then you can construct a w such that ∫ f·w > 0. (Cj67 01:31, 11 June 2006 (UTC))
Yes, I see what you mean, but if f were continuous and nonzero at some point p, continuity would make it strictly positive or negative in an open ball around p, and then a proper w can quite easily be found; but what to do here, with only a set of positive measure, so many cases? Evilbu 08:52, 11 June 2006 (UTC)
- Measurable sets can be approximated with unions of intervals. I am a bit concerned that this is a homework problem, so I don't want to say too much detail. (Cj67 16:34, 11 June 2006 (UTC))
Well, I'll be honest: it's a proof from a syllabus that my fellow students and I dispute. The proof works with convolving, but seems to show little regard for necessary analytic subtleties (like discontinuity even). Evilbu 17:00, 11 June 2006 (UTC)
- If you post it on my talk page, I'll take a look. (Cj67 17:21, 11 June 2006 (UTC))
June 11
[edit]Help with MASM32
[edit]I need to create an array that can hold 10 million integer numbers and fill it with random numbers ranging from 1 million to 10 million (minus one). When it is filled I need to write the index and contents to a file. I know how to generate random numbers in MASM and how to write from memory to a file using debug, but I need to put them together in a MASM program. Anyone have a demo or example? ...IMHO (Talk) 00:52, 11 June 2006 (UTC)
- It would be helpful if you rephrased the question to pinpoint the problem more exactly. Do you need help with the memory management/indexing, or with making your "random" numbers fall in that particular range, with writing from memory to a disk file from outside of debug, or with writing a self-contained MASM program? I see from your user page that you program in C. You might try to first write a C-program that does the job, with as few outside dependencies as possible, and then compile the C-program to assembly and study the output. --vibo56 talk 10:13, 11 June 2006 (UTC)
- Yes, that is quite easy to do with C (or C++) with a few "for" loops and the rand() function (see here for help using that), and then using fstream to write to files (see here). Hope this helps. —Mets501 (talk) 13:57, 11 June 2006 (UTC)
- With the range of pseudorandom numbers that IMHO needs, rand() will not be sufficient, since RAND_MAX typically is quite small (32767). You might of course combine the results of several calls to rand() by bit-shifting. If you do so, I would recommend checking the output with a tool such as ent, to make sure that the result still fits basic requirements to pseudorandom numbers. If you want to write your own pseudorandom number generator, you can find a thorough treatment of the subject in D. E. Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. --vibo56 talk 15:00, 11 June 2006 (UTC)
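A rough C sketch of that idea, mapping the combined bits into the 1,000,000 to 9,999,999 range (the helper name is made up, and the plain modulo step has a slight bias that only matters if the statistics need to be pristine):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* combine two rand() calls into roughly 30 random bits */
static unsigned long rand30(void)
{
    return ((unsigned long)rand() << 15) ^ (unsigned long)rand();
}

int main(void)
{
    int i;
    srand((unsigned)time(NULL));
    for (i = 0; i < 10; i++) {
        /* 1,000,000 .. 9,999,999 inclusive */
        unsigned long n = 1000000UL + rand30() % 9000000UL;
        printf("%lu\n", n);
    }
    return 0;
}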
Yes, this information helps. Thanks. However, my goal in part here is to learn (or relearn) MASM. Back in the late '60s and early '70s assembly language was quite straightforward (and can still be that straightforward using the command line DEBUG command). Where I am having trouble currently is with INCLUDEs, Irvine32.inc in particular, so I am trying to avoid even the use of INCLUDEs and do this (if possible) using only a DEBUG script. Don't get me wrong: I have spent ALL of my programming career writing in high level languages simply so that I could get far more work done, but now my goal is to go back through some of the programs I have written in a high level language like Visual Basic and convert whatever I can to concise assembler or machine code, which might help bridge the gap between Windows and Linux, whereas a program written in C++ for Linux (source code) may otherwise find difficulty (after it is compiled under any version of Windows C++) to run. What I need specifically is to 1.) know how to create and expand a single dimension integer array with the above size. Therefore I need help with both the memory management and indexing, 2.) Although I can make random numbers fall into any range in Visual Basic I'm not sure about doing this in assembler, 3.) I also need help in writing the array contents and index to a file since even though I know how to write something at a particular location in memory to a file using DEBUG and how to write an array to a file using Visual Basic it has been a long, long time since I used assembler way back in the early '70s. Your suggestion to try writing in C and then doing a compile to study the output is a good and logical one, but my thinking is that by the time I get back into C so that I can write such a snippet of a program, I could have already learned how to do it using MASM. Even still, it is not an unreasonable or bad idea. Any code examples would lend to my effort and be appreciated. Thanks. ...IMHO (Talk) 14:58, 11 June 2006 (UTC)
I followed your suggestion to look at the disassembled output of the following C++ code and was shocked to find that while the .exe file was only 155,000 bytes the disassembled listing is over 3 million bytes long.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("RAND_MAX = %d\n", RAND_MAX);
    return 0;
}
I think I need to stick with the original plan. ...IMHO (Talk) 15:43, 11 June 2006 (UTC)
- Wow! You must have disassembled the entire standard library! What I meant was to generate an assembly listing of your program, such as in this example. You will see that in the example, I have scaled down the size of the array by a factor of 100 compared to your original description of the problem. This is because the compiler was unable to generate sensible code for stack-allocated arrays this size (the code compiled, but gave runtime stack overflow errors).
- To bridge the gap between Windows and Linux, I think that this is definitely not the way to go. If you are writing C or C++ and avoid platform-specific calls, your code should easily compile on both platforms. For platform specific stuff, write an abstraction layer, and use makefiles to select the correct .C file for the platform. If you want gui stuff, you can achieve portability by using a widget toolkit that supports both platforms, such as WxWidgets. I have no experience in porting Visual basic to Linux, but I suppose you could do it using Wine. --vibo56 talk 17:53, 11 June 2006 (UTC)
- Looks like I need to learn more about the VC++ disassembler. I was using it to create the executable file and then using another program to do a disassembly (or reassembly) of the executable file. I'll study the VC++ disassembler help references for at least long enough to recover some working knowledge of MASM and then perhaps do the VB rewrites in VC++ if it looks like I can't improve the code. Thanks ...IMHO (Talk) 23:05, 11 June 2006 (UTC)
- You don't need to use a disassembler. In Visual C++ 6.0, you'll find this under project settings, select the C/C++ tab, in the "category" combo select "Listing files", and chose the appropriate one. The .asm file will be generated in the same directory as the .exe. Presumably it works similarly in more recent versions of VC++. --vibo56 talk 04:58, 12 June 2006 (UTC)
- All of the menu items appear to be there but no .asm file can be found in either the main folder or in the debug folder. With the C++ version of the program now up and running as it is supposed to with all of the little details given attention (like appending type designators to literals) the next step is to take a look at that .asm file ...if only it will rear its ugly head. ...IMHO (Talk) 01:21, 13 June 2006 (UTC)
- Strange. You could try calling the compiler (cl.exe) from the command line, when the current directory is the directory where your source file lives. The /Fa option forces generation of a listing, the /c option skips the linker, and you might need to use the /I option to specify the directory for your include files, if the INCLUDE environment variable is not set properly. On my system that would be:
E:\src\wikipedia\masm_test>cl /Fa /c /I "c:\Programfiler\Microsoft Visual Studio\VC98\Include" main.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8804 for 80x86
Copyright (C) Microsoft Corp 1984-1998. All rights reserved.
main.c

E:\src\wikipedia\masm_test>dir *.asm
 Volume in drive E is ARBEID
 Volume Serial Number is 4293-94FF

 Directory of E:\src\wikipedia\masm_test

13.06.2006  19:23             2 292  main.asm
- which, as you can see, works fine. The problem may be related to the fact that you have the free version, maybe assembly generation is disabled? Would that be the case if it only compiles to .net bytecode? If so, just about any other C compiler will have an option to generate an assembly listing, try using another compiler instead. --vibo56 talk 17:38, 13 June 2006 (UTC)
- There must be something seriously wrong with my installation. Even after multiple reinstallations of VC C++ v6 Introductory I keep getting command line errors like it can't find the include files, etc. I'll keep working on it. Thanks. ...IMHO (Talk) 00:01, 14 June 2006 (UTC)
Okay, finally got it! The thing that was messing up the command line compile under VC++ v6 Intro seems to have been a "using namespace std;" line (although oddly enough it has to be removed when the contents of an array variable are incremented but required when the same variable is only assigned a value). It looks like VC++ Express 2005 has the same settings function in the GUI but I have not yet been able to figure out and follow the procedure to get it to work. Its command line .asm instruction might also work now but I do not have time right now to test it. Thanks for all of the detailed suggestions and for helping to make the Wikipedia more than I ever dreamed it would be. Thanks. ...IMHO (Talk) 21:35, 15 June 2006 (UTC)
- I'd add a caution here re the random business. It is remarkably hard to generate random sequences deterministically. See hardware random number generator for some observations. If you have to do it in software, you might consider Blum Blum Shub, whose output is provably random in a strong sense if a certain problem is in fact computationally intractable. It's just slow in comparison to most other approaches. ISAAC and the Mersenne twister are other possibilities, and rather faster. On a practical basis, you might consult the design of Schneier and Ferguson's Fortuna (see Practical Cryptography). The problem is one of entropy in the information theory sense, and it may be that this doesn't apply to your use, in which case the techniques described by Knuth will likely be helpful. Anything which passes his various tests will likely be satisfactory for any non-security-related purpose. However, for security related issues (eg, cryptography, etc) they won't, as the entropy will be too low. Consider Schneier and Ferguson's comments on the issue in Practical Cryptography.
- And with respect to using libraries, I suggest that you either roll your own routines or install a crypto library from such projects as OpenBSD or the equivalent in the Linux world. Peter Gutmann's cryptlib is in C and has such routines. There are several other crypto libraries, most in C. Check them very carefully against the algorithm claimed before you use them for any security related purpose. Good luck. ww 04:56, 16 June 2006 (UTC)
computer
[edit]what is html???
- HTML stands for HyperText Markup Language and is the language used to code web sites. —Mets501 (talk) 13:58, 11 June 2006 (UTC)
- See HTML. You can use our search box in the left to find out other things. Conscious 15:47, 11 June 2006 (UTC)
- This ain't math. -- Миборовский 05:18, 12 June 2006 (UTC)
- The mathematics reference desk is also the place for questions about computers and computer science. -- Meni Rosenfeld (talk) 11:15, 12 June 2006 (UTC)
- Html has more to do with 1) language and 2) information science than computing. You can do everything with a computer, writing, drawing, publishing, searching the net, playing ; computer science uses languages the same way we use them, with grammar, lexicon, good and bad words, orthographic correctors, art of the discourse. Our computer scientists are hegemonists the way some well-bred nations are. So Reference Desk/language or /sciences are good candidates for this question. --DLL 17:01, 12 June 2006 (UTC)
- This logic certainly looks bizarre. Will the question "what is Visual Basic?" also belong to refdesk/language because VB is a language? At this rate, what question can belong on the computers\CS category? -- Meni Rosenfeld (talk) 19:17, 12 June 2006 (UTC)
- Is it time for a separate computers reference desk yet? In the time since computer questions were directed to the mathematics desk, I don't recall seeing a single "hard" computer science (as in, theory of computation etc.) question that would remotely fit in with the math stuff. Now I personally don't mind much, since I do find many of the "How to install Linux?" questions interesting too, but it does get confusing. —Ilmari Karonen (talk) 23:35, 12 June 2006 (UTC)
- On the other hand, there haven't been that many questions about computers, so I don't know if this is a large enough topic (in terms of number of questions) to deserve its own section. I don't know how are things at the other refdesks, but perhaps a general repartition can be useful - for example, separating humanities from social sciences, and adding a "technology" section for questions about computers, electronics etc. -- Meni Rosenfeld (talk) 16:05, 13 June 2006 (UTC)
- The Math(s) desk seems manageable, about 6 topics/day recently. The other reference desks, except language, handle over 15 topics most days. If desks were split I think Science, Misc., and Humanities would be first. (How do you split Misc? :-). Walt 17:05, 13 June 2006 (UTC)
- Not that long ago the Microsoft public newsgroup WindowsXp subject only got maybe 50 to 100 questions per week. That is about how many hits it gets every hour nowadays, so you are lucky if you ever get anyone to reply to a question. Maybe one of the reasons there are not that many computer questions here is because there is no computer desk. ...IMHO (Talk) 23:44, 13 June 2006 (UTC)
- To Meni :
- A markup language is not a programming language like VB. You need yet another program - a browser, using parsing and rendering (?) subprograms - to use HTML, and you can't do much with it alone. --DLL 21:11, 17 June 2006 (UTC)
June 12
[edit]no idea how to do this thing i dont know wat to call
[edit]How would I do these problems? I have an exam tomorrow and I would love an answer soon:
b. x/(2x+7)=(x-5)/(x+1) and c. [(x-1)/(x+1)]-[2x/(x-1)]=-1
I have no idea how to approach these problems; the directions say: Solve each equation. --Boyofsteel999 01:09, 12 June 2006 (UTC)
- OK, to solve these equations using rational functions, you generally go through these two steps:
1. Multiply by the terms on the denominator (i.e. the bottom). For example, for equation b, x/(2x+7) = (x-5)/(x+1), you multiply by the (2x+7) and (x+1) terms, giving you x(x+1) = (x-5)(2x+7).
2. Solve the problem as you would any kind of quadratic equation - gather it into a normal quadratic form, and either factorise or use the quadratic formula. In this case, we first get x^2 + x = 2x^2 - 3x - 35, which reduces to x^2 - 4x - 35 = 0, and the solutions can then be read off from the quadratic formula.
Technically there's a third step - make sure that the solutions you get are not going to make the denominators zero - but a. this shouldn't happen anyway, and b. once you get to complex analysis you treat these solutions that aren't really solutions as solutions that just aren't explained clearly. Confusing Manifestation 02:15, 12 June 2006 (UTC)
- I'm sorry, but I think your solution to the quadratic equation is wrong. x^2 - 4x - 35 = 0 has solutions x = 2 ± √39. – b_jonas 08:39, 12 June 2006 (UTC)
- Whoops, sorry. I was doing the calculation in my head and was so worried about getting the factor in the square root right I screwed up the other term. 144.139.141.137 13:52, 12 June 2006 (UTC)
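Spelling out the arithmetic for equation b as a check:
x/(2x+7) = (x-5)/(x+1)
⇒ x(x+1) = (x-5)(2x+7)
⇒ x^2 + x = 2x^2 - 3x - 35
⇒ x^2 - 4x - 35 = 0
⇒ x = (4 ± √156)/2 = 2 ± √39, i.e. roughly 8.24 or -4.24.
Neither value makes a denominator zero (x = -7/2 and x = -1 are the excluded values), so both are genuine solutions.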
projective limit of the finite cyclic groups
[edit]The inverse limit of the cyclic groups Z/p^nZ for p a prime gives you the group of p-adic integers (a group about which I know little). It seems to me that the collection of all finite cyclic groups also forms a direct system of groups over the directed set of natural numbers ordered (dually) by divisibility. Thus shouldn't there be an inverse limit of this system as well? What is it? Probably it's just Z, right? -lethe talk + 06:10, 12 June 2006 (UTC)
- I don't think so, because I think I can construct an element of the direct product that satisfies the conditions for being an element of the inverse limit, but which obviously doesn't correspond to an integer. It corresponds to 0 modulo every odd number, n modulo 2n for all odd n, 1 modulo 2^n for all n, and the following for multiples of 4:
n:      4   8  12  16  20  24  28  32  36  40  44  48  ...
mod n:  1   1   9   1   5   9  21   1   9  25  33  33  ...
- Oddly enough, this sequence doesn't appear in Sloane's, but it seems easy enough to keep generating new terms with the Chinese remainder theorem. —Keenan Pepper 01:58, 13 June 2006 (UTC)
- So then I guess the resulting group will be kind of elaborate, if it contains weird sequences like this. Another way to see that the result will not be Z: the p-adic integers are uncountable, and my limit group has a surjection to every group of p-adic integers, and so must be uncountable as well. Would you expand a little on how you came up with that sequence? -lethe talk + 17:53, 13 June 2006 (UTC)
- Well, if you know the sequences for all powers of primes (that is, if you know all the p-adic numbers it corresponds to), then you know the whole sequence by the CRT. I tried to think of a simple example that obviously didn't correspond to an integer, so I made it 1 mod 2 but 0 mod all the other primes. Then you can choose whether it's 1 or 3 mod 4, and so on, but it works if you just make it 1 for all powers of 2.
- Do you think this group is isomorphic to the direct product of all the groups of p-adic integers? —Keenan Pepper 22:33, 13 June 2006 (UTC)
- My intuition is no. A direct product of the p-adic groups won't have numbers from the various composite cyclic groups like Z/6Z. Well, we could probably take the direct product over all naturals, instead of just primes. But then this would be too big; it would have independent factors for Z_2 and Z_4 (here I denote cyclic groups with fractions and p-adic groups with subscripts). Maybe we could take the direct product over all numbers which are not perfect powers. -lethe talk + 03:34, 14 June 2006 (UTC)
the algebraic structure of sentential logic
[edit]In sentential logic, it seems to me that the set of well-formed formulas (wffs) may profitably be thought of as a set with an algebraic structure. One has a set of sentence variables, and one may perform various operations on the sentence variables; usually disjunction, conjunction, negation, and implication. The set of wffs is then some sort of free algebraic structure in these operations on the set of sentence symbols. Another algebraic structure with these operations is the set {0,1} (with the obvious definitions of the operations), and a truth assignment may then be defined as a homomorphism of this kind of structure from the free structure of wffs to {0,1}, which is used to determine an equivalence relation called tautology. The Lindenbaum algebra is the quotient of this free structure by this equivalence relation and is a Boolean algebra.
This description in terms of algebraic language differs in flavor a bit from the way I was taught mathematical logic (from Enderton), and I have some questions. It seems that this algebraic structure is completely free; it doesn't satisfy any axioms. So I guess it's not a very interesting structure. Is this a standard construction? Does it have a name? I've been using the name "free pre-Boolean algebra", so that a truth assignment is pre-Boolean algebra homomorphism.
I like the algebraic description here, one reason being that it gives a concise way of defining semantic entailment. On the other hand, I don't see any nice algebraic way of describing syntactical entailment. Is there one? Can I describe modus ponens as an algebraic operation in this structure? -lethe talk + 06:10, 12 June 2006 (UTC)
- Isn't this what is known as a term algebra? I think you also get an initial algebra by the construction. Valuations like truth assignment are then catamorphisms. An immediate advantage of the algebraic view over the prevalent view as strings over an alphabet (as in the article Formal language) is that you get a structural view in which you don't need to apply handwaving about parentheses and ambiguities, or put them in explicitly all the time (as in the article Formula (mathematical logic)). --LambiamTalk 09:35, 12 June 2006 (UTC)
- Yeah, term algebra sounds like exactly what I'm describing. It's a term algebra with a signature of 1 unary and 3 binary operations (depending on my choice of logical symbols). As for initial algebras and F-algebras, I couldn't make sense of those articles. -lethe talk + 14:12, 12 June 2006 (UTC)
Combining Cubes...
[edit]First I would like to state that this is not for homework purposes, merely some discrepancy with a textbook we discovered recently. We've experimented with combining cubes in a variety of completely different formations (that is, excluding replicas via reflection or any other direct transformation), and the particular number we decided to do on that occasion was 4. However, while we could find only seven different combinations, the textbook insisted that there were eight. Can anyone help me to either confirm the textbook answer, or our own answer, and if possible state a brief "proof" of why a certain answer is so.
Also, the reason that we were experimenting was because we were trying to develop a general algebraic process to find the number of combinations as a function of the number of identical cubes. If anyone can possibly give any directions towards this search at all it would also be helpful and greatly appreciated. LCS
- These types of shapes are called polycubes. Six of the 4-cube polycubes are present in the component pieces of the Soma cube; the other two are the 1x1x4 "line" and the 1x2x2 "square". Notice that two of the soma cube pieces - the "left screw tetracube" and the "right screw tetracube" - are mirror-images of one another. If you count these pieces as the same, you get a total of 7 polycubes; if you count them as different, you get 8. I suspect you and your book are using different conventions on the transformations that are used to identify "identical" polycubes with one another - you are allowing reflections; the book is not. I imagine that the general problem of counting the number of different polycubes of n cubes is very hard. See [2] for values from n=1 to 13. Gandalf61 11:56, 12 June 2006 (UTC)
These moths have been bugging me
[edit]I saw a question on the science page that bugged me and made me think of the following question:
Say you have two moths flying toward each other, each carrying a light source. They are attracted to light 10 meters away, and their parallel paths are separated by 5 m; would they crash into each other? ...Sounds like this would make a good textbook calculus question. Anyone have an answer? XM 16:50, 12 June 2006 (UTC)
- It depends on what "attracted" means, e.g., is it like a force pulling them, or do they try to aim themselves at the light? (Cj67 18:49, 12 June 2006 (UTC))
Once they detect the light, they are pulled towards the light at the same speed they are traveling--(XM) but too lazy to sign in.
- That's not what the article says. It says that, of the two prevailing theories, the one that relates to this suggests they maintain a "constant angle" to the light source. If they were pulled towards the light, at the same speed they were originally travelling, starting when they got within ten meters of each other, they would fly straight, make a sudden, sharp turn, and fly straight into each other, without ever changing speed. Black Carrot 22:08, 12 June 2006 (UTC)
Java JPanel
[edit]I am making an applet. At a certain point in this applet I clear a JPanel with removeAll(); after I have done this it seems as though it is impossible to add anything to this JPanel again.
Is it possible to add to a JPanel after having used removeAll()? How would I go about placing new content on the same JPanel while taking off the content that's already there? Thank you very much
--70.28.2.95 19:37, 12 June 2006 (UTC)
- What do you mean, impossible? Did your program throw an exception or did it just not update? If it didn't update, force it to: invoke JPanel.validate(). Oskar 00:02, 13 June 2006 (UTC)
I tried using validate() but it still won't update; the JPanel stays blank. I tried invoking it both before and after adding to my JPanel but the results were the same --70.28.2.95 00:47, 13 June 2006 (UTC)
- All right then, let's put on our thinking caps. A few things off the top of my head:
- If you have access to the parent container, use validate() on that.
- Also, use validate() on all components in the container.
- Make sure that the JPanel is actually showing on screen (since a JPanel with no components is invisible). Try giving it a border, or just paint the background red. You could also assign it a mouselistener that would print a message every time you clicked it so that you'd know it was there.
- Remove and readd the JPanel itself (but still making sure it's actually showing, as in previous point)
- Try calling invalidate() before you remove the components or after you remove the components but before you add the new ones and then validate(), or both. Then call validate().
- Try calling updateUI(). I'm not sure it'll work, but it's worth a shot.
- Try calling doLayout(). You shouldn't really do this manually, but what the hell, we're getting desperate
- The docs say that removeAll() does something with your layout manager, so assign a new layout manager to the panel before adding all the stuff.
- Not that it'd make a difference, but try using remove() on each component individually instead of removeAll(), maybe that'd help.
- If none of these work, let's get unorthodox: Try resizing your panel while it's running (i.e. if it's in a standard frame, resize the frame). Try writing some sort of code that removes the panel and then adds it back in. Try to think of anything that might make a panel update while you run it.
- Let's do some debugging: use getComponents() to get all the components and then System.out.println() all of them to make sure that you actually have added them.
- Let me know if any of these helps, otherwise we'll have to come up with something else. Cheers Oskar 01:23, 13 June 2006 (UTC)
Validating the parent container and all the components seems to have done the trick. Thanks a lot for your time. --70.28.2.95 19:00, 13 June 2006 (UTC)
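For reference, here is a minimal self-contained Swing sketch of the remove-and-revalidate pattern that was being discussed (the frame, panel and labels are made up for the example):

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;

public class PanelSwap {
    public static void main(String[] args) {
        JFrame frame = new JFrame("removeAll() demo");
        final JPanel panel = new JPanel();
        panel.add(new JLabel("old content"));
        JButton swap = new JButton("swap");
        swap.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                panel.removeAll();                    // clear out the old components
                panel.add(new JLabel("new content")); // add the replacement content
                panel.validate();                     // re-lay out the panel and its children
                panel.repaint();                      // repaint so the old pixels disappear
            }
        });
        frame.getContentPane().add(panel, BorderLayout.CENTER);
        frame.getContentPane().add(swap, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(300, 120);
        frame.setVisible(true);
    }
}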
What are these two solids called?
[edit]For the WikiGeometers out there...I wonder if you could help me identify these two solids by name? I'd like to include them in articles which might be lacking in illustrative images...Thanks! --HappyCamper 21:42, 12 June 2006 (UTC)
- Well, according to their image descriptions, they're both forms of geodesic domes. --jpgordon∇∆∇∆ 22:02, 12 June 2006 (UTC)
- One of those kinda looks like one of those freaky d100 dice Oskar 01:33, 13 June 2006 (UTC)
- Makes me think of a Buckyball. --LambiamTalk 07:32, 13 June 2006 (UTC)
The one on the left is impossible to make unless you are talking about spherical geometry. As for the one on the right, I have no idea. Yanwen 00:21, 14 June 2006 (UTC)
- The caption on the globes mention something about "V 3 1" - this is some sort of parameterization I think, but does it help? --HappyCamper 01:00, 14 June 2006 (UTC)
- The one on the left is reminiscent of a truncated icosahedron, but at first glance looks impossible, since three hexagons meet at some of the vertices. However, the edges are probably not uniform, nor the faces, meaning the object is certainly possible, it's just not a regular polyhedron. I'm not sure, but I believe the one on the left is derived as the dual of the one on the right. The right one looks like it might be a subdivision surface generated by an actual truncated icosahedron. In computer graphics, polygon models for spheres are sometimes generated by starting with a platonic or archimedean solid, and subdividing the faces into triangles in a symmetric way. The subdivision can be continued to any depth, allowing high resolution models without parametrization artifacts. If you look closely at the one on the right, you can see that most of the vertices connect six edges, but a few of them connect only five; I suspect that if you were able to count how many there are of each, you'd find twelve vertices that connect five edges - the same number of pentagonal faces can be found on a truncated icosahedron. Check out the dual of the truncated icosahedron to see what the first stage of the subdivision would look like. --Monguin61 18:57, 14 June 2006 (UTC)
- These are actually closely related to, of all things, a certain class of viruses, which have exteriors with icosahedral symmetry. See here, for example (scroll down to "The theoretical basis..."). You can make a structure like the second one out of 20(a² + ab + b²) triangles, where a and b are integers, and at least one of a or b is non-zero. The corresponding dual always has 12 pentagons, and 10(a² + ab + b² − 1) hexagons. Chuck 20:42, 14 June 2006 (UTC)
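If the "V 3 1" in the captions corresponds to (a, b) = (3, 1) in that parameterization (an assumption, since the caption is not explicit), the counts come out to a² + ab + b² = 9 + 3 + 1 = 13, i.e. 20 × 13 = 260 triangles for the solid on the right, and 12 pentagons plus 10 × (13 − 1) = 120 hexagons for its dual on the left.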
- Why is the one on the left impossible to make? It looks like a soccer ball to me. -ReuvenkT C E 22:19, 16 June 2006 (UTC)
- I am the (French) creator of the images of these two solids (sorry! I do not speak English). Some explanations on the geometric construction of these solids (named Geode V-3-1 dual [left solid] and Geode V-3-1 [right solid]) can be found at this web address; it consists of six steps (with explanatory sketches)! Papy77 (talk) 09:13, 17 June 2017 (UTC)
June 13
[edit]A Very generic thank you
[edit]I would like to say thanks to the editors of this project. I have learned more in a few weeks of perusing the portals in the math section than I did all through my engineering curriculum. Absolutely fascinating stuff, and very well organized. In my mind, mathematics is learned best by first organizing the general ideas of math, then discovering how they are sometimes connected. This thorough organization has been hugely interesting to me and, I suspect, to many others who just read but don't say anything. Anyway, sometimes simple thanks justifies long hours of effort, I've found. I hope it does for all of you. Denmen 02:33, 13 June 2006 (UTC)
- Thank you. Well, our math portal is the fifth in a goolge search. What shall we do to be no 1 ? --DLL 19:27, 13 June 2006 (UTC)
- Get Google spelt right? Oh, and thanks Denmen. Reward all the hardworking editors with bleeding fingers, a bad case of Wikiholicism, and more knowledge than either of us will ever have, with a mention wherever you go. Talk it up everywhere. Vote on featured article candidates, and find some articles to contribute to and supervise, ... ww 05:05, 16 June 2006 (UTC)
Measuring Heights
[edit]How can a person measure the height of a tall object such as a telephone pole, a tree, a tall building, etc? —Preceding unsigned comment added by Stockard (talk • contribs) 20:08, 12 June 2006 (UTC)
- At a certain time of the day, the position of the sun is such that the length of the shadow cast by the object is equal to the height of the object. – Zntrip 04:54, 13 June 2006 (UTC)
- Some options:
- Use optical interferometry with a laser and a corner reflector. To get the corner reflector on top of the object, use a micro air vehicle.
- Cut down the pole and the tree; measure them on the ground. Consult the architectural records of the building.
- Methods like trigonometry may have been fine in ancient Egypt, but surely we can do better today! --KSmrqT 05:16, 13 June 2006 (UTC)
- What could be better than trigonometry? You need nothing besides a length of measuring tape and the sun. Anything else requires a lot more work and makes it only slightly more accurate, probably more accurate than you need it to be. - Mgm|(talk) 09:16, 13 June 2006 (UTC)
- Perhaps you should acquaint yourself with modern methods of responding to homework questions before jumping to the defense of antiquated trigonometry approaches. --KSmrqT 10:54, 13 June 2006 (UTC)
- There's also the barometer trick: measuring the atmospheric pressure at the top and the bottom of the building, and using the barometric formula. – b_jonas 11:15, 13 June 2006 (UTC)
- There is also another 'barometer trick': drop the barometer from the top of the building, time how long it takes to hit the ground, and use Newton's laws of motion to calculate the height :) Madmath789 11:26, 13 June 2006 (UTC)
- We may be thinking along the same lines. --KSmrqT 11:28, 13 June 2006 (UTC)
- This question was also asked on the Miscellaneous Ref Desk. There are a couple additional methods presented there. --LarryMac 14:35, 13 June 2006 (UTC)
- I tried this for a pine in my garden. Just measure its shadow and yours. You know your height, then: pine height = your height × pine shadow / your shadow (ph = yh × ps / ys). For a building amongst others, it is hard to get the full shadow on the ground. As for the accuracy ... --DLL 19:24, 13 June 2006 (UTC)
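With made-up numbers: if you are 1.8 m tall and cast a 1.2 m shadow while the pine's shadow measures 8 m, then ph = 1.8 × 8 / 1.2 = 12 m.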
- I find using trigonometry a lot easier than cutting down the tree/pole and using the laser. Why not use the easiest method that will give you a fairly accurate answer? Besides, this is posted on a mathematics reference desk so I think we should provide an answer that has to do with math, not physics. Yanwen 00:19, 14 June 2006 (UTC)
Symbols
[edit]Can someone tell me what ΔQ% means?Groc 10:39, 13 June 2006 (UTC)
- "Δ" is the uppercase version of the Greek letter delta. In mathematics and physics, "Δ" is often used as short-hand for "change in"; more specifically, "Δ" represents a macroscopic change, whereas a lower case delta, "δ", represents an infintesimal change. So "ΔQ%" would mean "percentage change in Q" - which doesn't help you much unless you know what Q is. Q could represent heat energy, especially if you came across this term in a thermodynamics equation such as the first law of thermodynamics. Alternatively, Q is also used in physics to represent a quantity of electric charge, or the fusion energy gain factor in nuclear physics. Gandalf61 11:09, 13 June 2006 (UTC)
Least squares approximation question
[edit]I have a set of N points (xi,yi). I want to find out the radius and subtended angle of the circular arc that can best approximate those points and the least square error in this approximation. How can I do this? Thanks. deeptrivia (talk) 19:47, 13 June 2006 (UTC)
- Don't know. But just to define the question more precisely for the benefit of those who may be better able to help, how are you defining your error in this case? Perpendicular distance? Distance parallel to one of the coordinate axes? Arbitrary username 19:55, 13 June 2006 (UTC)
- Well, suppose the center of the arc is located at (x0,y0), and the radius of the arc is R. Then, the error I am looking at is the sum over i of (sqrt((xi − x0)^2 + (yi − y0)^2) − R)^2. Yes, I think this would be the same as the perpendicular distance of the points from the arc. deeptrivia (talk) 21:09, 13 June 2006 (UTC)
- Defining di = sqrt((xi − x0)^2 + (yi − y0)^2), you may be looking more for minimizing something like sqrt((1/N)·Σi (di − R)^2), which is the rms error in the perpendicular distances. Here is a possible approach, assuming N is at least 3. For general use this has to be made robust for handling degenerate cases, like collinearity of the points.
- First find three points that are more-or-less as far away from each other as possible, for example start with some point p0, find the point p1 the farthest away from p0, find the point p2 the farthest away from p1, and finally find p3 maximizing min(d(p1,p3), d(p2,p3)).
- Find the circle through p1, p2 and p3, giving an initial estimate of the centre of the circular arc.
- Given an estimate of the centre, compute an estimate of the radius as R = (1/N)·Σi di, where di is as before.
- Given an estimate of the centre and an estimate of the radius, obtain an improved estimate of the centre by shifting it by the average of the "discrepancy vectors", where the i-th discrepancy vector is the difference between the vector from the estimated centre to point i and the vector with the same direction and length R. So it is as if each point is pulling on the centre with a force proportional to its perp distance to the circle.
- Repeat steps 3 and 4 until convergence.
- LambiamTalk 11:53, 14 June 2006 (UTC)
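A rough code sketch of that iteration in Python (the function name, the centroid used as a starting centre in place of steps 1 and 2, and the stopping test are all simplifications; it also assumes no data point coincides exactly with the current centre estimate):

import math

def fit_circle(x, y, iterations=100, tol=1e-9):
    n = len(x)
    # crude initial centre: the centroid of the points
    cx, cy = sum(x) / n, sum(y) / n
    for _ in range(iterations):
        d = [math.hypot(xi - cx, yi - cy) for xi, yi in zip(x, y)]
        r = sum(d) / n                       # step 3: radius = mean distance to centre
        # step 4: shift the centre by the average discrepancy vector
        dx = sum((xi - cx) * (1 - r / di) for xi, di in zip(x, d)) / n
        dy = sum((yi - cy) * (1 - r / di) for yi, di in zip(y, d)) / n
        cx, cy = cx + dx, cy + dy
        if math.hypot(dx, dy) < tol:         # step 5: stop once the centre settles down
            break
    rms = math.sqrt(sum((di - r) ** 2 for di in d) / n)
    return cx, cy, r, rms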
- For a serious study of possibilities try this paper by Chernov and Lesort. They note that short arcs can cause many algorithms to fail. --KSmrqT 12:49, 14 June 2006 (UTC)
- Thanks for your responses. Just figured out that there's a readymade solution, which will do for a dumb engineering student. deeptrivia (talk) 18:42, 14 June 2006 (UTC)
Nifty Prime Finder Thingy
[edit]I've been fiddling around with primes for a bit, and I found a property that I've never heard of before. I'd like to know if it already exists, or if I'm the first to find it. Given a prime P, the product of all primes less than P is A. If a prime N<A can be found that is close to A (meaning A-N<P²), there is a corresponding prime number at A-N. Of course, this can't break any records, since it looks down for primes instead of up, and can only find new primes between P and P², and then only if there happens to be a prime known between A and A-P², but still. Anyone heard of it? Black Carrot 22:48, 13 June 2006 (UTC)
- Take P = 5. Then A = 2 × 3 = 6. N = 2 is prime and satisfies N < A and A - N < P². And yet A - N is 4, which is not a prime number. You also need to require that N > P, which follows from A - P² ≥ P, which in turn follows from P ≥ 11. Then it is not difficult to prove this property. I am sorry to say that it is not terribly exciting, which may be why we haven't heard of it before. --LambiamTalk 00:20, 14 June 2006 (UTC)
- Damn. And of course, I was thinking of larger numbers than that. How exactly would you prove it? Black Carrot 00:58, 14 June 2006 (UTC)
- Let's define B = A - N. Since B < P², to establish that B is prime it suffices to prove that no prime Q < P is a divisor of B. (For if B is not prime, we can write it as B = D × E in which D and E are proper divisors, and since they cannot be both ≥ P at least one of the two is smaller than P, and then so is its least prime factor Q.) To prove now that no prime Q < P is a divisor of B, we show that the assumption that some prime Q < P divides B leads to a contradiction. So assume prime Q < P divides B. Q also divides A, since A is the product of a set of primes that includes Q. Then Q also divides A - B = A - (A - N) = N. Furthermore, Q < N (since Q < P < N), so Q is a proper divisor of N. But this contradicts the given fact that N is a prime number. --LambiamTalk 01:58, 14 June 2006 (UTC)
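- A quick way to play with the property numerically (a throwaway Python script of my own, using SymPy; the function name is made up). It lists the pairs (N, A - N) so you can inspect whether A - N comes out prime:

from sympy import isprime, primerange

def pairs_from_property(P):
    # A = product of all primes below P; collect primes N with A - N < P**2 and N > P.
    A = 1
    for q in primerange(2, P):
        A *= q
    pairs = []
    for N in range(A - P * P + 1, A):
        if N > P and isprime(N):
            pairs.append((N, A - N))
    return A, pairs

# For P = 11: A = 2*3*5*7 = 210, and e.g. N = 199 gives 210 - 199 = 11, which is prime.
print(pairs_from_property(11))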
June 14
[edit]World cup betting
[edit]I don't think I want to bet, but I'm curious about the way the odds work. Currently at an online betting site, the odds for the first five teams are:
Brazil 4.1, England 8.4, Argentina 8.6, Germany 9.6, Italy 12
Lets say I were to place my bets so I placed $1 on Brazil and $.50 on each of the other four. Am I right in thinking I'm guaranteed to win something if any of the first five win? Likewise, would there be some way to pick my amounts so I'm guaranteed to win something if any of the first ten win? Finally, am I right in thinking that statistically, even doing this, my expected wins should be zero overall (if the odds are fair and accurate), as the pennies I win when the top teams win would be exactly balanced out by the dollars I'd lose when one of the underdogs won? (obviously there are also fees to pay, I assume, but I'm not counting that). — Asbestos | Talk (RFC) 14:01, 14 June 2006 (UTC)
- For the given data, yes, you could bet such that you'd win if any of those 5 win (though you require a bankroll that grows far faster than the expected rewards, see Martingale (roulette system) for a discussion of a similar problem). However, you can't even out betting on the whole thing, because then the bookies wouldn't get a cut. Betting odds never add back to 1. — Lomn 14:32, 14 June 2006 (UTC)
- Surely it's not the same as the roulette example. Here there are a limited number of teams, unlike in roulette, where you can lose an unlimited number of times in a row. You're not placing any more money when you lose, you bet it all at once. In the example above, I would have bet $3 and no more. I could halve all my bets, betting $1.50, and still expect to win if any of those five teams won, right? But I'm still wondering about my last question above, rephrased here a little more generally: If all the odds given are fair and accurate, and the bookie doesn't take a commission, am I right in thinking that no matter how I place my bets, how many bets I place, and how much I put on them, my expected earnings will always be exactly zero? — Asbestos | Talk (RFC) 15:21, 14 June 2006 (UTC)
- To those interested in world cup winning chances from a statistical point of view, this page from the Norwegian Computing Centre might be of interest. All remaining matches are simulated, taking every little detail of the rules into account. --vibo56 talk 15:57, 14 June 2006 (UTC)
- It's not the same, no, but it's similar in that the flaw lies in having a limited bankroll to accomplish a meaningful gain. However, please note the latter half of my point -- betting odds are not fair and will be tilted towards the house. You cannot find a real-world scenario where every possible outcome is unity or better for the player. To extend into the theoretical, a strictly fair system should allow you to find a unity point, but it won't be "no matter how you place your bets" -- it will be the particular pattern of betting that corresponds to the odds. — Lomn 19:30, 14 June 2006 (UTC)
- Actually, I take that back, at least as stated. If you go with multiple iterations over time, a fair system allows you to distribute your money however you want and, over time, you'll average out to zero. However, for a one-shot event, you must match the odds to guarantee a lack of loss. Consider fair betting on a fair coin. If you flip the coin a lot, you can bet all your money on tails every time and will, on average, net zero. However, to guarantee a lack of loss on one flip, you must put half your money on heads and half on tails. — Lomn 19:38, 14 June 2006 (UTC)
- I still don't see how the limited bankroll affects anything, because, as nothing is growing exponentially, and there is a hard limit on the number of teams, I can always halve or quarter my bets to fit my bankroll, no matter how small my bankroll is. If I had an unlimited bankroll, I still wouldn't be any better off. But thank you for answering my question in the end. I did try to be quite specific as to what I meant, by asking what the expected gain would be, and whether or not my expected gain would change depending on how I placed my bets. If I've understood right, in a fair system of this kind (were it fair), it should make absolutely no difference what bets are placed, how many or for how much, the expected gain will always be zero. — Asbestos | Talk (RFC) 20:30, 14 June 2006 (UTC)
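- For the curious, a small Python illustration (my own, not part of the thread) of how the $3 in the example could be spread so the payout is identical whichever of the five quoted teams wins ("dutching"). Note these five teams do not cover every outcome, so this is not risk-free:

# Decimal odds quoted above; stake each team in proportion to 1/odds.
odds = {"Brazil": 4.1, "England": 8.4, "Argentina": 8.6, "Germany": 9.6, "Italy": 12.0}
bankroll = 3.0  # the $3 from the example above

implied = {team: 1 / o for team, o in odds.items()}
# Well below 1 here because only five teams are included; summed over the whole
# field a real book would exceed 1, which is the bookmaker's margin.
total_implied = sum(implied.values())
stakes = {team: bankroll * p / total_implied for team, p in implied.items()}

for team in odds:
    payout = stakes[team] * odds[team]  # identical for every team by construction
    print(f"{team:9s} stake {stakes[team]:.2f}  payout if they win {payout:.2f}")
print(f"sum of implied probabilities for these five: {total_implied:.3f}")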
lognormal/normal
[edit]If x is lognormally distributed, how is s = s0*exp(x) distributed?
- I've seen the term "log-log-normal distribution" for this (or "loglognormal"), but wouldn't consider it standard. --LambiamTalk 16:23, 14 June 2006 (UTC)
Extending the number set
[edit]A first-year maths lecturer last year gave us a delightful little insight into where our progressively less intuitive number sets come from. We start with the positive integers, the normal, every day counting numbers. But then we have no solution x for equations like 1 + x = 1. So we need another number, and enter stage left, zero. But we still have no solution for equations like 3 + x = 2. So we need more numbers, and lo, the negative integers give us the complete set of integers. But now we have no solution for equations like 2 * x = 1, and again, we need more numbers, so we get the rationals. Then equations like x * x = 2 yield the irrationals (giving us the set of reals) and if we expand our number set one more time to solve x * x = -1, here we are finally with the complex numbers.
But is that as far as we need to go? Are there any equations like this that we still can't solve, that lead us to extending our number set yet further? Is this where quaternions, octonions et al become needed (I've not read much on them, I admit), or are they just useful extensions of the concept of complex numbers that have nifty results for physicists? I've played with complex numbers idly while thinking about this but I can't think of any problems left. Are complex numbers finally the end? -Maelin 15:26, 14 June 2006 (UTC)
- In a way, they are "the end", depending on what you want to achieve. By the Fundamental theorem of algebra, any polynomial equation over the complex numbers has a complex solution. However, there are many larger fields that can be considered (the space of meromorphic functions, for example). All of these are infinte-dimensional, though. If you want a finite dimensional extension of the real or complex numbers, you have to give up some of the properties of a field: commutativity for the quaternions, associativity for the octonions. Kusma (討論) 15:34, 14 June 2006 (UTC)
- But if you are willing to consider infinite numbers, there's a whole lot of different infinite cardinal numbers. --vibo56 talk 16:01, 14 June 2006 (UTC)
- Not to mention ordinal numbers, hyperreal numbers and surreal numbers... Cardinals and ordinals are not extensions of the reals, only of the non-negative integers, so they demonstrate a different path one may take in his quest for extensions. Regarding your original question, indeed, as long as one is only interested in solving polynomial equations with one unknown, the complex numbers suffice. But if you want to solve an equation like ab - ba = 1, the complexes aren't up to the task - This is where non-commutative rings of, say, matrices, come in handy. In short, there are enormously many ways of extending the elementary notions of "number" - it all depends on what features one wishes in the structure he investigates. -- Meni Rosenfeld (talk) 16:20, 14 June 2006 (UTC)
- And of course, let's not forget the equation x + 1 = x, which is solvable in the real projective line and the extended real number line. -- Meni Rosenfeld (talk) 16:23, 14 June 2006 (UTC)
- ... or you can consider questions like "what if there were a solution to x² = 1 that was not 1 or -1 ?" - and you get the split-complex numbers. Gandalf61 16:14, 14 June 2006 (UTC)
- Just one comment on this point: "Then equations like x * x = 2 yield the irrationals (giving us the set of reals)" Actually, real solutions to polynomials only give us some of the irrational numbers, namely the algebraic numbers. They don't give us transcendental numbers. Chuck 21:02, 14 June 2006 (UTC)
- The progression of simple polynomials is an excellent way to motivate and introduce number systems. Both logically and historically this route has been important, culminating in the system of complex numbers and the fundamental theorem of algebra, which suggests we need go no farther. However, another motivation is geometry. A basic example is the circumference of a circle with unit diameter. Archimedes was able to provide lower and upper bounds for this length based on sequences of regular polygons, inscribed and circumscribed. However, the value itself, which is π, is not the solution of any polynomial equation with rational coefficients. The real line consists almost entirely of such values, required to form a geometric continuum.
- Quaternions also emerge from geometry. Sir William Rowan Hamilton had worked with complex numbers both as algebraic objects and as ordered pairs suitable for plane geometry. Through his interest in mathematical physics he was naturally curious if there was a number system that could play the same role for space, meaning the 3-dimensional Euclidean space of physics at the time. For 15 years he tried unsuccessfully to create a system of triples instead of pairs. Habitually and unconsciously he assumed that multiplication was commutative, so that ab = ba. Then one evening as he and his wife were walking through Dublin to a meeting, the thought struck him — like a bolt of lightning — that if he let ij = k but ji = −k he would obtain a system of quadruples instead of triples, but otherwise the number system would work as he required. This was the famous invention/discovery of quaternions.
- It was also the beginning of the crucial realization that we could devise number systems and algebras with great latitude in their rules. For example, a few years later Arthur Cayley explained how to calculate with matrices, whose multiplication is also non-commutative. William Kingdon Clifford built on earlier work of Hermann Grassmann to produce a family of arithmetic, or more properly algebraic, systems called Clifford algebras, suitable for geometry in any dimension. The examples of such inventions are too numerous to list.
- Each system of numbers has its own motivations, its own uses. Sometimes these go far beyond the original impetus. For example, we now know that the structure of any Clifford algebra is based on matrices built from one of three fundamental systems: real numbers, complex numbers, or quaternions.
- So, no, complex numbers are not the end. They are just a particularly scenic and historic stop on a tour of a beautiful country. --KSmrqT 05:46, 15 June 2006 (UTC)
a question
[edit]hello, i hope i am looking in the right section. what is archetypal systems analysis? also i found it as archetypal social systems analysis. thank you very much for you time. --Marina s 19:12, 14 June 2006 (UTC)
- I have no idea, but I tried googling for it. Almost all hits point to the same source: "Mitroff, I. L. (1983). Archetypal social systems analysis: On the deeper structure of human systems. Academy of Management Review, 8, 387-397". If you contact your library, you could get a copy of the paper. --vibo56 talk 16:54, 15 June 2006 (UTC)
Game Archive Browsing/Windows Shell Integration
[edit]Recently I had the idea of somehow creating a shell extension similar to Microsoft's .ZIP CompressedFolder extension that would enable users to browse through a game archive file. It would handle the game archive files almost the same way as .ZIP files are handled (with shell menu items and being able to open the file and browse through as though it were a folder). Is this possible? If so, how would I go about doing it? What language would be best for this project? I know C#, some C++, and Visual Basic.
Any help, comments, or input on this would be greatly appreciated.
--Kasimov 19:14, 14 June 2006 (UTC)
- Well, if you don't know the format to the game archive, then you can't really do much, can you? Dysprosia 08:47, 15 June 2006 (UTC)
It's the Halo/Halo 2 .map format. --Kasimov 12:21, 15 June 2006 (UTC)
- Again, if .map is not an open standard, or is a synonym for a non-open standard, you can't do much. Do you know anything about the map format? Are there libraries available for manipulating map files? Dysprosia 12:49, 15 June 2006 (UTC)
Alright, I don't know if this is enough information but here's some that could be helpful:
The file itself is divided into 4 major sections:
Header |
BSP(s) |
Raw Data |
Tag Index and Meta |
The header is uncompressed and is always 2048 bytes. However, the rest of the file is zip compressed with zLib.
Now, I figured since it's compressed with zlib that would make it easier to make a shell extension, right?
I hope that's enough information, because really it would be pointless for me to type here the entire structure of the file. If you're looking for a complete breakdown of it then visit these two pages: Page #1 Page #2.
Thanks --Kasimov 13:37, 15 June 2006 (UTC)
- If there's no specific libraries, binary read past the header and other nonsense, and somehow feed the rest to zlib and uncompress. I don't know about shell extensions in Windows, but you could write wrap/unwrap programs and then use Explorer to do all the manipulating. I've never used zlib so I can't be more specific, but if you can use zlib, then there you go. Dysprosia 00:11, 16 June 2006 (UTC)
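- A rough Python sketch (not C#, and only a guess based on the layout Kasimov describes above — a 2048-byte uncompressed header followed by zlib-compressed data; the file name is a placeholder) to test whether a plain zlib decompress gets you anywhere before committing to a shell extension:

import zlib

HEADER_SIZE = 2048  # per the description above; real .map files may differ

with open("example.map", "rb") as f:  # hypothetical file name
    header = f.read(HEADER_SIZE)
    compressed = f.read()

try:
    body = zlib.decompress(compressed)
    print(f"header {len(header)} bytes, decompressed body {len(body)} bytes")
except zlib.error as e:
    print("zlib could not decompress the body as-is:", e)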
June 15
[edit]Multi Choice exam strategies
[edit]Last night a friend and I got into a dispute. In a multi-choice exam with 4 choices (e.g A/B/C/D), where the answers are randomly selected among the 4 possibilities (so for any one question, a random guess at the answer has a 0.25 chance of being correct), what is the best strategy if you have to guess at an answer?
She said that sticking with one letter (e.g. Always guess "A") gives a 0.25 chance of getting the right answer, BUT that choosing at random between two letters (e.g. random guess between "A" or "B") gives a 0.125 (1/8) chance of being right instead of 1/4. Her reasoning: First there is a 0.5 choice between A and B, and then a 0.25 chance of being right. 0.25*0.5=0.125.
I'm sure that's only correct if the real answer is always the same letter.
I think that regardless of whether you guess randomly between A - D, or any two choices, or stick with just one, your chances of being right approach 0.25 in all 3 cases. Because if you always choose A, on average the answer will be A 25% of the time. If you guess randomly between say, A and B, on average each letter will be right 12.5% of the time, and 12.5+12.5=25% (because while A is correct 25% of the time, by choosing between 2 letters the number of A's chosen has been halved. Of course, this also applies to the choice of B, thus the total proportion of right answers is still 25%). Increase the guess to between A, B, C and D, and we get 6.25%*4 = 25%.
Who is correct here?--inksT 00:28, 15 June 2006 (UTC)
- You are correct. Most people just choose to stick with one letter just because of the mindset, but it really makes no difference. —Mets501 (talk) 01:30, 15 June 2006 (UTC)
- An interesting variant: let's suppose there's a trickster daemon (in the same meaning of the word as Laplace's or Maxwell's) that, whenever you try to make a random choice, changes it to the worst possible outcome (if the answer was A, it'll make you choose B). In that case, choosing randomly for each answer, even between two letters, will result in a 0 chance of being correct, while sticking with one letter (the daemon will make you choose the worst possible one) will result in a 0.25 chance of being correct, given enough questions. Of course, this only works if the daemon can't influence which answer was the right one, only the outcomes of your choices. --cesarb 02:41, 15 June 2006 (UTC)
- In case that's unclear, cesarb is suggesting a daemon who can affect luck—whenever you make a random choice, Lady Luck tries as hard as she can to screw you over. If you pick randomly every time, you give Lady Luck lots of opportunities to mess with your choices, whereas if you pick A every time she can't do anything. Tesseran 03:39, 15 June 2006 (UTC)
- I think I got that. Thanks all for the replies. I have since used Excel to verify this experimentally (generating 4000 "questions" and 4000 "guesses") and the odds are as I expected. She owes me an ice cream :)--inksT 04:05, 15 June 2006 (UTC)
- One unnoted flaw here is the assumption that the correct answers are evenly distributed, which is often not the case. My own anecdotal observations are that human-generated tests tend to avoid the first and last options being correct, so I would expect that in many real-world situations, picking "always B" is generally superior to rotating through the options. Also of interest is this PDF regarding multiple choice and "look random" generation, where (assuming you fill in what questions you are confident of first) strategies based on least-seen answers can raise an SAT score an average of 10-16 points over pure random guessing. — Lomn 15:55, 15 June 2006 (UTC)
- Alternating between two doesn't lower your chances. What she meant to say is that, there is a 0.5 chance of picking A and a 0.25 chance of it being right, PLUS a 0.5 chance of picking B and a 0.25 chance of being right, EQUALS 0.125 * 2 = 0.25, which is the same. Think about expected value. If you alternate between two choices (one of which is right), and the same one is always right, you'll get half the questions right. But if you pick only one, then you have a 0.5 chance of getting a perfect and 0.5 chance of a zero, so you expect half the questions right (again). Taking into consideration choices C and D, then you expect to only get 0.25 of the answers right. No guessing strategy can help you (if the answers are evenly distributed). --Geoffrey 22:53, 22 June 2006 (UTC)
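- The experiment is easy to re-run; here is a short Python version (my own, analogous to the Excel sheet mentioned above): 4000 questions with uniformly random correct answers, comparing "always A", "random between A and B", and "random among all four".

import random

choices = "ABCD"
answers = [random.choice(choices) for _ in range(4000)]

strategies = {
    "always A": lambda: "A",
    "random A/B": lambda: random.choice("AB"),
    "random A-D": lambda: random.choice(choices),
}
for name, guess in strategies.items():
    hits = sum(guess() == ans for ans in answers)
    print(f"{name:12s} {hits / len(answers):.3f}")  # all three hover around 0.25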
foray into linux
[edit]I'm off to college this fall, to major in Computer Science. I thought it would probably be a good idea to get a laptop, so, hearing that ThinkPad hardware is well-supported by Linux, I bought a very nice ThinkPad. I want to dual-boot windows and linux.
I've looked into Linux in the past (and even tried to install Slackware, though I was unable to resize my Windows partition so I gave up) and I've decided on SUSE Linux, primarily because of the easy setup (especially in partitioning) and the focus on ease-of-use.
I have three questions:
1) Which desktop environment should I choose, KDE or Gnome? I've had a pretty good experience with KDE trying out Knoppix, but I want to know if I'm really missing out on good stuff in Gnome. Can someone give me a comparison feature-by-feature of what they like about each? Which is used in this video?
2) Will YaST automatically configure my boot menu to dual-boot with windows if it detects OEM Windows XP installed, or am I just going to be stuck with Linux until I can figure out LILO or GRUB?
3) When I upgrade to Vista this fall, will there be any problem getting it to stay in the Windows partition and keeping it from taking over the whole hard drive when it installs? Will I have to rewrite the boot settings, or will Vista do this for me? Or will it rewrite the entire record and take out Linux? In that case, how do I modify it from within Vista to allow access to Linux again?
--Froth 01:46, 15 June 2006 (UTC)
- Kudos on buying a ThinkPad. Don't give those other closed-hardware people any of your money. I use GNOME because it "just works", and the panels are nifty, especially Workspace Switcher and Character Palette. The Nautilus file manager is a nice piece of software too, although I don't use it much. Not sure about number 2, but number 3 brings up the question of where you're going to store all your files. Linux doesn't like NTFS and Windows will have nothing to do with Ext2 or ReiserFS, so you'd better put your files on a FAT partition. —Keenan Pepper 02:22, 15 June 2006 (UTC)
- For number 3, the Windows installer will usually overwrite the MBR. That means that in order to restore a bootloader that will run linux, you may have to boot off the install CD and re-run the bootloader installer after you install Windows. As for Keenan's suggestion that you need a FAT partition to swap files between Windows and Linux, it's not really true. Linux has read support for NTFS, so you can always access your Windows files while you're running linux (write support exists, but is pretty spotty in my experience). Windows can't see the linux partitions though. -lethe talk + 02:43, 15 June 2006 (UTC)
- I'm not interested in swapping files between filesystems; I'm going to have one 60GB NTFS partition and let YaST play with the other 20GB. Also, how would I restore the MBR? Would it automatically be brought to my attention as a "repair" option or something instead of "install" or would I be better off getting a live-cd distro and using the copy of GRUB included? --Froth 15:00, 15 June 2006 (UTC)
- You can backup your MBR from a command shell opened from knoppix by typing:
- $ dd if=/dev/hda of=mbr-copy.bin bs=512 count=1
- I would recommend doing this with a usb disk/stick partitioned as FAT32 connected, with the current directory being on that removable drive, thus saving your MBR copy on a separate drive. To make sure that the copy is valid, do a hex dump, and verify that the last two bytes are aa55. (Magic signature at end of MBR).
- While you're at it, I would also suggest saving a copy of your laptop's main partition on the usb disk, using partition image, before repartitioning. It's on the knoppix CD, and you can also get it here.
- When reinstalling the MBR, you should keep in mind that the information about the main partitions on the disk is located at the end of the MBR. Therefore, if you have repartitioned the disk after making the MBR backup, and want to preserve the new partitioning, you would want to do
- $ dd if=mbr-copy.bin of=/dev/hda bs=446 count=1
- Otherwise, it's
- $ dd if=mbr-copy.bin of=/dev/hda bs=512 count=1
- NOTE: The last command will overwrite your partition table. Before restoring an MBR backup, it's a good idea so save the current setup (as described above), so that whatever you do is reversible. --vibo56 talk 17:34, 15 June 2006 (UTC)
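- A tiny Python check (my own addition) of the signature vibo56 mentions: in the 512-byte copy the last two bytes should be 0x55 then 0xAA, which read as a little-endian word is the 0xAA55 boot signature.

with open("mbr-copy.bin", "rb") as f:  # the file written by the dd command above
    mbr = f.read()

ok = len(mbr) == 512 and mbr[510] == 0x55 and mbr[511] == 0xAA
print("MBR signature looks valid" if ok else "MBR signature missing or file truncated")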
- If you can't decide between KDE and Gnome, I recommend getting an Ubuntu live CD (uses Gnome) and a Kubuntu live CD (uses KDE), and using each for a while. --Serie 22:09, 15 June 2006 (UTC)
- Reinstalling the MBR shouldn't be too hard. I'm not familiar with the distro you're using, but generally, yeah, it should be there in the repair or installation methods. Most distros with high level GUI installers have this. You can do it yourself from the command-line as well. If your installer previously configured a GRUB bootloader, then all that's required is the command "grub-install /dev/hda". This will read the grub.conf file and set up a bootloader in your MBR as before. You can also backup your MBR as vibo suggests. -lethe talk + 01:18, 16 June 2006 (UTC)
taxonomy of real numbers..
[edit]taxonomy of real numbers
- There are various ways of classifying the real numbers. One scheme starts with the integers, which are a subset of the rational numbers, which are in turn a subset of the real algebraic numbers. The rest (i.e. real numbers that are not algebraic numbers) are transcendental numbers. An alternative scheme divides the real numbers into positive numbers, negative numbers and zero. Gandalf61
- This shows that ordering by type instead of size is sometimes better. If you order real numbers by absolute size, beginning with 0, then alternating positive and negative ones, you'll never see an integer in your life. --DLL 21:59, 15 June 2006 (UTC)
DOM Inspector
[edit]In Mozilla Firefox, is it possible to install the DOM inspector after you've already installed the browser earlier without it? I didn't install DOM inspector because I thought I wouldn't need it, but now it looks like I do, and would like to avoid completely reinstalling and losing bookmarks, extensions and history info in the process. - 131.211.210.12 11:59, 15 June 2006 (UTC)
- You can download the installer again and install over it.. that's how updates used to work, and I assume the functionality is still there --Froth 15:05, 15 June 2006 (UTC)
I'm totally stumped!!!
[edit]Me and all my friends cannot get this one. It seems easy enough but there's always a part where we can't get any further..
2x = (x+1)(ln10)/lne
What is x???? —Preceding unsigned comment added by Gelo3 (talk • contribs) 13:07, 2006 June 15
- Please do not directly answer questions like this. Some people have a bad habit of mining the reference desks for homework answers. Stated clearly at the top of this page is the following:
- Do your own homework. If you need help with a specific part or concept of your homework, feel free to ask, but please do not post entire homework questions and expect us to give you the answers.
- The appropriate response to such a question is something like, "Show us what you have done, and explain why you get stuck." It is totally not appropriate to do the problem for someone and provide the answer. Not only is that unethical, it is educationally counterproductive. Everyone's cooperation in these matters is appreciated. --KSmrqT 14:13, 15 June 2006 (UTC)
This ISN'T homework, so PLEASE stop making assumptions. This was a question from a past exam paper I found on the internet for study. 220.239.228.252 14:43, 15 June 2006 (UTC)
- Sorry, we have no way to verify that. Either way, the appropriate response is not to give the answer, but to find where understanding fails and help bridge the gap. It's a trivial problem, and the intrusion of logarithms is mostly an irrelevant distraction. The equation might as well be written
- 2x = (x + 1)c.
- So, please, enlighten us. Show us concretely what you can do, and where you can't get any further. Then we can honestly help — with your understanding. That we're happy to do. --KSmrqT 15:16, 15 June 2006 (UTC)
Read this : http://wiki.riteme.site/wiki/Logarithm#Other_notations
Evilbu 15:44, 15 June 2006 (UTC)
JPEG image strangeness
[edit]I am feeling very daft, as I can't seem to figure this one out and I hope that someone smarter (and more awake!) than me will be able to help.
I scanned in a photo using my scanner, it returned a 150kB file. All well and good. I open it (in Paintshop, if that makes a difference), find out it's sideways, rotate it 90°, save and close. Imagine my surprise when the same file is now suddenly 750kB! What in the world is going on - I just rotated the image, surely it contains the same amount of information?
My guess (after much reading through JPG) is that my scanner is sending me a JPG which is already compressed somewhat, but when Paintshop saves it at 'no compression' the filesize obviously increases. Does this make sense? Or do you suspect something else may be at work?
Thanks in advance! — QuantumEleven 14:09, 15 June 2006 (UTC)
- It's unusual, but quite possible, for your scanner to be giving PSP a compressed jpeg. Rotate it in Paint and save as PNG, or rotate in PSP and save with a higher compression setting - be careful not to overdo it, it'll ruin your image. --Froth 15:08, 15 June 2006 (UTC)
- Because of the way JPEG works it's possible to rotate through multiples of 90° without decompressing and recompressing, thus not losing quality or increasing filesize. The library that does it is called jpegtran. See here for a list of applications which use this to provide lossless rotation. —Blotwell 18:18, 15 June 2006 (UTC)
a very basic but fundamental question : when is tensorproduct zero?
[edit]Hello,
I am studying tensor products and I think it would really be useful to think about this problem in general
let M be a right R module, and N a left R module
Now let us assume nothing about the ring (commutativity, division ring,...)
now consider $M \otimes_R N$ and an element in it of the form $m \otimes n$
Now when is $m \otimes n = 0$ ?
I know just saying "one of them must be zero at least" is simply not true, at least when I am working with non-division rings... But then what is the criterion?
Could this be it : one of them must be "divisible" by an element in the ring R such that the other one, multiplied with it, gives zero?
Thanks,
Evilbu 14:16, 15 June 2006 (UTC)
- I'm not sure what the full result is, but I'll say that all tensor products of all modules over a ring with zero divisors will have lots of such pairs. For example, if rs = 0 in your ring, then mr⊗sn = 0 for all m in M and all n in N. I can also say that for any torsion group G, the entire group G⊗Q = 0, where Q is the group of rationals. Thus every single tensored pair in that group is equal to zero. This is a consequence of the fact that the tensor product functor by a torsion group is not left-exact. -lethe talk + 15:15, 15 June 2006 (UTC)
Thanks. So what do you think? This criterion is not correct? I stress again that the ring (apart from having unity) can be as free as it pleases in all its weirdness, and so can the modules. Nobody seems to be comfortable answering this question. I studied constructing tensor products (with balanced products and all), which eventually implied taking a quotient (which is a result of the relations) of a free abelian group. So an element in the big free abelian group gives zero in the quotient if it is a finite sum of several elements given by those relations (I am talking of elements like ~. There can really be millions of terms in a sum like that, so I don't see how proving a criterion in this way can ever be done.
Anyway, as always, I stress my gratitude for the kind, quick and to the point I help I receive from this wonderful site.
Evilbu 14:25, 16 June 2006 (UTC)
- Yes, I think that's about as concise a description as you will find. The tensor product M⊗RN may be defined as the quotient of M×N by the ideal generated by terms (mr,n) – (m,rn); (m1 + m2,n) – (m1,n) – (m2,n); and (m,n1 + n2) – (m,n1) – (m,n2). Therefore any element of this ideal will tensor to zero. This doesn't really answer your question though, because none of the elements of this ideal can be represented by a single tensor product of two module elements. You wanted a criterion to tell you when two nonzero elements tensor to zero, which I don't know the answer to. For example, in general, given m1, m2, n1, and n2, we cannot assume that there are a, b such that a⊗b = m1⊗n1 + m2⊗n2. Thus we have no guarantee of nonzero elements which satisfy a⊗b = 0 (and indeed, in general, there may be no nonzero elements which satisfy this, for example if R is an integral domain). -lethe talk + 14:54, 16 June 2006 (UTC)
- Er, careful there. The tensor product is not in any real way the "quotient of M×N" (for a simple counterexample, note that if M and N are free modules of dimension m and n over the field with 2 elements, the tensor product has cardinality $2^{mn}$, while the cartesian product has cardinality $2^{m+n}$ — less in general).
- Maybe you meant "the quotient of the free module generated by M×N"?
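- A concrete worked instance of the phenomenon being asked about (my own illustration, not from the thread): take R = Z, M = Z/2 and N = Z/3. Then 1⊗1 = (3·1)⊗1 = 1⊗(3·1) = 1⊗0 = 0 in Z/2 ⊗_Z Z/3, even though both factors are nonzero: 3·1 = 1 in Z/2, and moving the 3 across the tensor sign kills it in Z/3. (In fact the whole tensor product is zero here.)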
Arc vs. curvature?
[edit]If one holds a piece of string between two points on a sphere, the string would be tracing the arc/curvature/perimeter/circumference——i.e., "great circle"——segment between the two points, which would equal the central angle, . To find the distance between the two points you would multiply the central angle by the sphere's radius, as the radius equals the radius of the circumference. With an ellipsoid, however, the radius of the body and the radius of its circumference is different. You have two principal curvatures, (north-south,east-west), and their corresponding radii,
. Curvature in a given geodetic direction, , is given as
. The corresponding radius of curvature ("in the normal section") is then given as
. But, if you take a minuscule distance (i.e., ≈ 0), then
not ! There was a stub for arc recently created. Would the second equation, equaling , be the "radius of arc", thus the equation of arc would be ?
If you divide any north-south distance by it equals the average value of M within that segment, and a minuscule east-west distance (since, except along the equator, east-west along a geodesic only exists at a single point——the transverse equator) equals N. So what is a minuscule distance, in a given geodetic direction, divided by , a radius of? Curvature? Arc? Perimeter? If I Google "arc" or "radius of arc" (or even "degree of arc"), all I find are simplistic spherical contexts, nothing elliptical, involving M and N! P=(
I understand basic, concrete geodetic theory (besides ellipticity, there is curvature shift towards the pole as the geodetic line grows, culminating in a complete shift to north-south for an antipodal distance, since north-south is the shortest path), so I know you can't simply take the spherical delineation, average all of the radii of curvature/arc along the segment and multiply it by to get the true geodetic distance (though, the difference does seem directly proportional to the polar shift involved——i.e., the smaller the distance, the closer this "parageodetic" distance is to the true geodetic one!). But I digress... P=) ~Kaimbridge~17:20, 15 June 2006 (UTC)
Markov-like chain bridge?
[edit]I am currently making a program to generate a random name using United States Census data and Markov chains. However, I want to have a little more flexibility in the process. So I want to be able to make a bridge between a given beginning, some given middle letters, and a given ending, so I can generate a name like Mil???r?a. Currently, I am using a three-letter window. Does anyone know how to generate this bridge? --Zemylat 21:35, 15 June 2006 (UTC)
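- One cheap way to get the bridge effect (a rough Python sketch of my own; census_names and the helper names are made up, and the rejection loop is crude but simple): build the usual three-letter-window model, then generate whole names and keep only those matching the fixed-letter pattern.

import random
import re
from collections import defaultdict

def build_model(names, order=2):
    # Map each `order`-letter context to the list of letters that followed it.
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=15):
    out = "^" * order
    while not out.endswith("$") and len(out) < max_len + order:
        out += random.choice(model[out[-order:]])
    return out.strip("^$")

def bridge(model, pattern, tries=20000, order=2):
    # Rejection sampling: '?' marks the free positions, e.g. "Mil???r?a".
    regex = re.compile("^" + pattern.lower().replace("?", ".") + "$")
    for _ in range(tries):
        candidate = generate(model, order)
        if regex.match(candidate):
            return candidate
    return None  # nothing matched; loosen the pattern or raise `tries`

# e.g. bridge(build_model(census_names), "Mil???r?a"), with census_names a list of names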
Word Puzzle
[edit]There is a certain word puzzle, that I have heard many times before. I have searched for said puzzle, but cannot seem to find it anywhere. I was wondering how the math works out this way in this puzzle:
Three men go to a motel and rent a room. The deskman charges them $30 for the room. The manager of the motel comes in and says that the deskman has charged them too much, that it should only be $25.
The manager then goes to the cash drawer and gets five $1.00 bills, and has the bellboy take the money back to the three men. On his way up to the room, the bellboy decides to give each of the men only one dollar apiece back and keep the other two dollars for himself.
Now that each one of the men has received one dollar back this means that they only paid $9.00 apiece for the room. So three times the $9.00 is 27.00 plus the $2.00 the bellboy kept comes to $29. Where is the other dollar?
Why does it come out to $29 and not $30? I always suspected that it was because you can't multiply the remaining money the men had to get the right amount, but I'm not sure... Just curious.
- The multiplication is fine. You are tricked by the ungrammatical run-on sentence where it says "... is 27.00 plus the $2.00 ...". The 27 dollars is what the men paid. The 2 dollars is what the bellboy took, so it is a "negative" payment. So the sentence should have gone: "So three times the $9.00 is 27.00 minus the $2.00 the bellboy kept comes to $25 which is now in the manager's cash drawer." --LambiamTalk 23:14, 15 June 2006 (UTC)
- Right. Of the $30 paid, $3 were returned, $27 kept. Of the $27 kept, $2 are in the bellboy's pocket, and $25 are in the cash register. Black Carrot 23:38, 15 June 2006 (UTC)
One-to-One Correspondence
[edit]I've been learning over the years, when I find myself in disagreement with nearly all experienced mathematicians in existence, to start with the assumption that I'm completely, shamefully, blasphemously wrong, no matter how it looks to me, and go from there. Because it pisses people off less, and because it's usually true. So, tell me how I'm wrong.
I don't get one-to-one correspondence as a way to measure infinite numbers. I understand (the nonrigorous version) how it works, and I can see how it's a natural extension of normal counting, but it's not how I think, and it's not how I've ever seen infinite numbers. Take the alleged one-to-one correspondence of, for instance, natural numbers and their subset, even numbers. That doesn't make sense to me. Even numbers are sections of an extent. It doesn't make sense to rip them off the number line, jam them together like the vertebrae of a crash victim, and shove them back on, while not doing anything of the sort to the numbers they're being compared to. Here's how I'd compare them, and here's where I need correction. They are each prespecified, patterned, easily identifiable sections of an extent of number line. It is guaranteed that no natural number can exist that is more than one away from an even number, and no even number exists that is not a natural number. They are already, inherently and inextricably, in a particular correspondence with each other. So, take a number x. x can be anything we want, a positive number of some amount. Now, count the number of whole numbers up to (and including, if possible) x, and the number of even numbers up to and including x. Keep doing this as x grows, and let x pass each and every natural number in turn. How many natural numbers will there be as it grows? floor(x). How many even numbers? floor(x/2). What, then, is the ratio of whole numbers to even numbers? 2:1, not 1:1. I think the rules of limits back me up on this. This is just a long (and hopefully clear) way of saying what seems so obvious to me: that there are many many whole numbers, and exactly half of them aren't odd.
This last bit is assuming I didn't screw up above. I can understand how the lack of one-to-one correspondence is excellent reason to separate different infinite numbers, but why do people seem to think that the presence of it proves they're exactly the same? Any help is appreciated. Black Carrot 23:34, 15 June 2006 (UTC)
- Cardinality has nothing to do with the structure that a set might have, just the "number" of elements. It doesn't matter if the elements are numbers, people, functions, etc. I think this is the main mistake you are making, thinking about what the elements "are". Another point is that if you believe a lack of one-to-one means they are "different infinities", then what do you think of this: if there exists a map f from A to B that is onto but not one-to-one, then #A >= #B. By this, the number of evens is >= the number of naturals. One final point -- it is not that mathematicians "seem to think" that the naturals and evens are the same size, it is that they do in fact have the same cardinality (by the definition of cardinality). (Cj67 00:25, 16 June 2006 (UTC))
- You seem to be using an inductive argument to show that the ratio of whole numbers to even whole numbers is 2:1. Keep in mind, though, that mathematical induction doesn't go all the way to infinity. An inductive proof generally shows that some property is true for any finite integer n; such a proof can't show that it's true if n is infinite. Your inductive statement is basically saying that of the first n whole numbers, half of them are even. This is true. And since it's true for some n, you can use an inductive argument to show that it's true for n+1, and so therefore it's true for any whole number n. But this argument doesn't show that the entire set of whole numbers has twice as many elements as the entire set of even whole numbers, because in that case n=∞. Your inductive argument doesn't apply, because you never show that your statement is true for n=∞−1, whatever "∞−1" means. This is a subtle point; I hope it makes sense. —Bkell (talk) 00:39, 16 June 2006 (UTC)
- I don't know if this will help or not, but maybe you can see why your "limit" idea (which I think is a form of mathematical induction) doesn't work when you try to prove a property about an infinite set based on properties of its finite subsets. Think about this: If I take any finite set of whole numbers, then I can find a real number that is larger than every element in the set. This is true if the finite set has one element, or two elements, or 18,372,364,735,078 elements, or even no elements at all (see vacuous truth). It's true for every possible finite set of whole numbers, so in the "limit" it seems that it should apply to the entire set of whole numbers. But it doesn't. I can't find a real number that is larger than every whole number. In the same way, in the set of the first n whole numbers (where n is a finite integer), there are twice as many whole numbers as even whole numbers. But this doesn't mean that this is true for the entire set of whole numbers.
- These comments might not convince you that there are the same number of whole numbers as even whole numbers, but I hope that they can help you see why your argument against the claim doesn't work. —Bkell (talk) 00:50, 16 June 2006 (UTC)
- As others have mentioned, comparing cardinalities is a very coarse kind of comparison. It is purely set-theoretic, and ignores any other information that a structure may have. For example, it's true that there is a bijection between the naturals and the integers, but there is no order isomorphism. As ordered sets, they are quite different. It's true that there is a bijection between the real line and the Cartesian plane, but as topological spaces, they're quite different. Comparing the underlying sets is important, but it's not everything. There are other kinds of comparisons that are important for other kinds of spaces. -lethe talk + 01:25, 16 June 2006 (UTC)
- Black Carrot, you need to tell your intuition to shut up, it doesn't know what it's talking about. Let me explain. We build intuition based on our experience, and extrapolate past events to present circumstances. This is not a bad thing; it has helped us survive as a species. Intuition is also valuable to a mathematician, but it must be used wisely, with caution.
- In mathematics we have definitions, axioms, and rules of inference that allow us to create new kinds of objects and worlds with new properties. We can create negative numbers, where adding b to a can create a quantity less than a. We can create fractions, where we must add b repeatedly just to get from a to a+1. These creations are remarkably useful, but also remarkably counter-intuitive — until we re-train our intuition to conform to the new definitions.
- Cardinal numbers, and especially infinite cardinals, are mathematical creations, just like negatives and fractions. They do not obey the old rules. One of the definitions of an infinite set is that it can be put in one-to-one correspondence with a proper subset of itself. Strange? Yes; but so is a number that can square to −1.
- In the words of the ancient Greek playwright Aeschylus,
- If your intuition says there should be half as many even numbers as all, congratulations: your intuition works just fine for the numbers it was trained on. Just don't expect it to apply to infinite cardinals. If you think there should be a way of counting infinite sets that distinguishes between equal cardinal infinities, go play with some axioms and see if you can make it work. Will the results, if any, be intuitive? Hmmmm. --KSmrqT 05:22, 16 June 2006 (UTC)
- Perhaps one thing should be emphasized: Cardinality is a very specific way of comparing sizes of sets. Saying that two sets have the same cardinality is not the same as saying they have the same size - The latter doesn't really mean anything by itself, and cardinality is one way to interpret it. For example, the sets [0, 1] and [0, 2] "have the same size" if you look at their cardinality, but have different sizes if you look at their Lebesgue measure. The great thing about cardinality is that it applies to any set whatsoever - But this is also its weakness, as it completely ignores the structure of the set. For example, your idea above basically assigns sizes to sets of natural numbers according to the value of:
$\lim_{n \to \infty} \frac{\#\{k \in A : k \le n\}}{n}$
- Which is fine, but it isn't hard to see that it uses specific properties of natural numbers in a specific, and rather arbitrary, way. This means that it only applies to sets of natural numbers, and not even to all of them (for some, the limit doesn't exist). Also, I could just as well give other definitions which would make, say, the odd numbers be twice as numerous as the even numbers. Perhaps your definition looks more "intuitive", but that doesn't make it more correct. So it is okay to have definitions of size specific to a given structure we wish to investigate, with properties we wish to have; But that doesn't discredit cardinality, which does exactly what it was meant to do - Measure the size of any set, without being influenced by the structure of this set. -- Meni Rosenfeld (talk) 08:15, 16 June 2006 (UTC)
That's a pretty impressive response. I've read them carefully, and I'd like to list the main points, so you can tell me if I've left any out. I'll respond to them as I can.
- 1.1) It doesn't matter to "cardinality" what the elements actually are.
- Yes. I said that.
- 1.2) One-to-one correspondence isn't the only way to measure "cardinality". There can be overlap and stuff.
- Good point. I take back what I said about lack of it meaning something.
- 1.3) They must have the same size, because they have the same "cardinality". And by the definition of "cardinality", they must have the same "cardinality" if they show one-to-one correspondence.
- WTF?
- 2.1) That looks like induction. If it is, it doesn't work, because induction can't handle infinity.
- I can see the resemblance, but it wasn't meant to be induction. Nothing that formal. I was just trying to make my thoughts as clear as possible so they could be hacked to pieces more efficiently. Since you mention it, though, why wouldn't that work as an inductive proof? That article says, "Mathematical induction is a method of mathematical proof typically used to establish that a given statement is true of all natural numbers." So, change "let x pass each and every natural number in turn" to "let x move up to each and every natural number in turn". It doesn't make a difference as far as my point is concerned. Now, at what point does that move beyond the natural numbers and the capabilities of induction?
- 3.1) Infinite sets and finite sets behave differently.
- Awesome. Anything more specific?
- 3.2) "I can't find a real number that is larger than every whole number."
- Dandy. That's not what I'm doing, though. That's a totally different kind of property from what I'm looking at. At least, I'm almost certain it is.
- 4.1) "As others have mentioned, comparing cardinalities is a very coarse kind of comparison. It is purely set-theoretic, and ignores any other information that a structure may have."
- I agree completely.
- 4.2) "For example, it's true that there is a bijection between the naturals and the integers, but there is no order isomorphism. As ordered sets, they are quite different."
- I'm blushing.
- 4.3) "Comparing the underlying sets is important, but it's not everything. There are other kinds of comparisons that are important for other kinds of spaces."
- All this agreement may turn my head.
- 5.1) Shut up.
- Bite me.
- 5.2) You don't know what you're talking about.
- Bite me twice.
- 5.3) I'm both thoughtful and sharp-tongued, and I can compare you to a baby playing with blocks! I must be brilliant and emotionally complex.
- Great. Anything else?
- 5.4) You're uneducated, untrained, and probably superstitious. These things are "counterintuitive", a pretentious way of saying they don't make sense. To you. They make perfect sense to me, of course. You may have said this at length at the beginning of your post, but it's worth repeating. To get to the point, "cardinality" is something we made up. As such, it can do whatever we want. It's self-consistent, and it's also consistent with a lot of other stuff we made up, and even with some of the stuff we didn't. You can't make up anything, though, because "we" (meaning people who'd died before my parents were born) did it first. It doesn't matter if it's consistent with the things you want it to be consistent with, like actual numbers, only if it's consistent with the things we want it to be consistent with.
- "You're a cowardly dumbass."
- -Pretentious Greek quote
- "You're a cowardly dumbass."
- Hmmmm.
- 6.1) "Cardinality" is not the same as size. There are lots of ways of measuring and interpreting the size of an infinite set, and they all look at it in different ways. "Cardinality" is good for what it does, which is divvy up all sets, regardless of what they contain, into basic types.
- See, that's what I kind of figured, but I couldn't find any other measures, and people seemed to think "cardinality" was all there was.
- 6.2) "[Your idea] uses specific properties of natural numbers in a specific, and rather arbitrary, way."
- Really? Leaving them exactly where they belong on the number line is "arbitrary"?
- 6.3) "Also, I could just as well give other definitions which would make, say, the odd numbers be twice as numerous as the even numbers."
- I'd like to see that.
BTW, please don't toss comments into the middle of my post. These discussions are a lot easier to follow if they stay chronological. I'd like to boil the answers down even further, to the things that seem most important.
- A)You don't understand the philosophy of mathematical proof. (false)
- B)The proof you gave, to use that word loosely, was slovenly. (Sorry.)
- C)"Cardinality" isn't the same as size. It's a way of dividing up infinite sizes with a broad stroke, in a convenient way, a bit like Big O notation. There are other things that could divide them up more. (That's what I thought, and what I tried to say at the end of my first post. But again, people seem to keep using the word (which I never brought up, if you'll read my post carefully) as the be-all and end-all of infinite sizeness.)
- D)The specific way you've shown of dividing them up may not be entirely valid. (Really? Are you sure?)
Using these responses and the language in them, I'd like to rephrase my original question in a clearer, more concise way: "Is the idea of one-to-one correspondence really the only valid way of showing infinite amounts are different from each other? Doesn't it seem like they could be separated out more than that? How about this way that makes sense to me; it seems just as common-sense as one-to-one correspondence, yet gives an answer that seems more right." As best I can tell from the responses, I was both right and wrong, which I see as a net success. All further comments and corrections are welcome. Black Carrot 19:42, 16 June 2006 (UTC)
- Don't listen to all those people above. You're spot on: there are various kinds of way to measure or quantify infinite sets depending on what you're trying to achieve, and all of them have drawbacks. The one based on bijection is pretty robust, but it's also, as you point out, pretty coarse. It's the one mathematicians associate with words like cardinality, number of elements, size of set, but that doesn't mean it's the only thing you can do no matter how many bigots tell you otherwise. What you're proposing is called natural density and, as was said, it has the problem that it's not defined for every subset of Z (because the limit need not exist). But it has perfectly good applications in number theory. —Blotwell 19:49, 16 June 2006 (UTC)
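- To see the idea numerically, a few lines of Python (my own addition; the cutoff values are arbitrary) computing the partial densities whose limit is the natural density Blotwell mentions:

def partial_density(in_set, n):
    # Fraction of the integers 1..n that belong to the set.
    return sum(1 for k in range(1, n + 1) if in_set(k)) / n

for n in (10, 100, 10000, 1000000):
    evens = partial_density(lambda k: k % 2 == 0, n)
    squares = partial_density(lambda k: int(k**0.5) ** 2 == k, n)
    print(f"n={n:>7}: evens {evens:.4f}, perfect squares {squares:.6f}")
# The evens tend to 1/2 while the perfect squares tend to 0, even though both
# sets have the same cardinality as the naturals.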
- I have a better idea - maybe he should listen to "all those people above". So should you. This may prevent you two from imagining people saying things that they actually didn't, and believing people didn't say things that they actually did (e.g. "There are other kinds of comparisons that are important for other kinds of spaces", "saying they have the same size ... doesn't really mean anything by itself, and cardinality is one way to interpret it", "your idea ... is fine", "it is okay to have definitions of size specific to a given structure"). -- Meni Rosenfeld (talk) 17:18, 17 June 2006 (UTC)
That's the kind of talk I like to hear. Thank you, and may the Invisible Pink Unicorn protect you and your family from the ravages of the Purple Oyster (of Doom). Especially, thank you for telling me the name of what I was grasping at (it's amazing how much of math is vocabulary), and for confirming that it's not a completely insane/uneducated/baby-with-blocks-like idea. Black Carrot 02:08, 17 June 2006 (UTC)
- You might want to tone down your attitude. Quite a number of folks thought your inquiry had enough interest to respond at length. Really awful posts tend to get very little response. So all the answers weren't to your liking? Get over it. How eager will we be to respond to future questions, knowing how graciously our past offerings were received?
- I'm intrigued to learn that some mathematicians did just what I suggested to you, namely invent measures (for integers) that distinguish between equal cardinal infinities; and I'm not at all surprised to discover that they, too, have "counterintuitive" properties. As I said before, it is routine for mathematicians that when we extend our intuition beyond the realms where it was trained, it fails. Yours, mine, anyone's intuition fails.
- And as I also said, there's little point in fighting it. Would you rather I quote Euclid or Shakespeare or John Lennon? Or perhaps I should remind you, apparently a devotee of the Invisible Pink Unicorn, that "Bob" of the Church of the SubGenius tells us we all need more slack? I know, let's try a quotation from Nasreddin.
- Nasrudin sat on a river bank when someone shouted to him from the opposite side:
- "Hey! how do I get across?"
- "You are across!" Nasrudin shouted back.
- 'Nuff said. --KSmrqT 06:07, 17 June 2006 (UTC)
- I agree that there is no need to be hostile. We are trying to help, and even if you disagree with what we say, there is no need to assume we mean to condescend. And I am afraid it seems you have misinterpreted a lot of what was said here. I'll address some of your points:
- 1.2) No, one-to-one correspondence (aka bijection) is the only way to measure cardinality. That is, there exists a bijection iff they have the same cardinality. But cardinality is not the only way to measure size.
- 1.3) No one said that (or didn't mean that, anyway). If they have the same cardinality, they have the same cardinality (which, indeed, by definition, means that there is a bijection). Anything else is an interpretation.
- 2.1) Your argument was that, since for every n, there are (roughly) half as many evens as naturals up to n (which is obviously true), then it must be the case that there are half as many evens as naturals up to ∞. That is a sort of an inductive argument. Inductive or not, you have probably already realized that the argument is invalid.
- 3.1) I am sure you know lots of examples. If you are asking about what is the conceptual difference between finite and infinite sets, I doubt there is a very good answer.
- 3.2) Indeed, it's a different thing, but it was given as an example of 3.1.
- 4.1-4.3) Agreeing is good, but it seems like you are expecting cardinality to mean more than it does. lethe reminds you what I have emphasized later, that cardinality is just one thing. There is no need to be surprised that two sets, one being seemingly larger, have the same cardinality.
- 5.1-5.2) Ksmrq was specifically referring to your intuition. I tell my intuition to shut up all the time. Not always, of course.
- 5.3-5.4) These don't seem to even remotely resemble what Ksmrq said. It is unfortunate that you chose to interpret it this way.
- 6.1) No, I don't think anyone believes that "cardinality is all there is". Cardinality is cardinality. It does happen to be the only way to compare sizes of any set, but this feature is not always relevant. Your question specifically addressed cardinality (even if you didn't call it that), so people focused their answers on it.
- 6.2) It is arbitrary in the sense that this is just one definition you could give among many other definitions. I agree completely that it makes more sense than other definitions, but again, it is important to understand the status of intuition in a mathematical argument. It can guide us, but we must never follow it blindly. It's great when a definition is intuitive - but that does not necessarily make it "better" than other definitions.
- 6.3) Easy.
- Artificial? Of course. Not elegant, doesn't make sense, counterintuitive? Probably. I wouldn't ever use it when investigating natural numbers? I guess so. But that doesn't make it any less valid, and it is in this sense that the one discussed earlier is arbitrary. Also, arbitrary is not such a bad thing - you shouldn't take it as an insult to your suggestion.
- A) I don't think anyone meant that.
- C) Again, no, no one was trying to argue that cardinality is the only thing.
- D) The most important of them all. No, I doubt anyone meant that. I certainly didn't. What we were saying is that this is just one possible way of assigning sizes (much like cardinality is), and just as you criticized us for implying that "cardinality is the only truth", so did you seem to imply that this definition is the only truth. That said, I agree completely that this definition (which, thanks to Blotwell, we now know by name) is interesting, probably useful (personally I don't know its applications), and worth investigating. Probably more so than my suggestion above. Needless to say, it is perfectly valid. As long as it is understood that this is just one definition (important as it is), and that it has little to do with the issue of 1-1 correspondence (which was, may I remind you, the title of this thread - even if you did not call it cardinality), we will all be doing just fine.
- I hope we have a better understanding of each other now. I'll be glad to answer any further questions. -- Meni Rosenfeld (talk) 08:36, 17 June 2006 (UTC)
- 2.1: "Mathematical induction is a method of mathematical proof typically used to establish that a given statement is true of all natural numbers." Note that this does not say that induction can be used to establish that a statement is true of the set of all natural numbers, only each and every natural number in turn. An inductive argument usually ends up proving a chain of statements that say "Since some property is true of the natural number n−1, it is also true of the natural number n". So to produce a proof of this property for any natural number N, you can follow a chain of reasoning from 1 to 2, from 2 to 3, from 3 to 4, and so on, until you get to N. The important thing is that you will eventually reach N. If you look at your argument as an inductive proof, at each step of the proof you are saying that "in the set of whole numbers less than or equal to n, the cardinality of the set of whole numbers is twice the cardinality of the set of even whole numbers." The problem is that you can never reach a point where "the set of whole numbers less than or equal to n" is the same as "the set of whole numbers". So there is no chain of reasoning that you can follow to establish that the cardinality of the set of whole numbers is twice the cardinality of the set of even whole numbers. That's the flaw in your argument, from the standpoint of mathematical induction.
- 3.2: You're absolutely right; you're not trying to find a real number larger than every whole number. I recognize this fact. What I was giving was a different example, so that you can understand why mathematical induction doesn't work when you try to use it to prove something about the set of all natural numbers. Note that this example certainly doesn't prove that the cardinality of the set of whole numbers is the same as the cardinality of the set of even whole numbers. It is only meant to show you the flaw in your inductive argument, not to prove the opposite statement. —Bkell (talk) 14:50, 18 June 2006 (UTC)
- Two more comments quick. First, I thought about your "limit" idea a little more, and I realized that maybe what you were saying was that
- lim (n→∞) E(n)/W(n) = 1/2, where E(n) and W(n) are the numbers of even whole numbers and of whole numbers less than or equal to n,
- which is true. But you have to set n = ∞ in order to be able to say this about the entire set of whole numbers, and lim (n→∞) doesn't say anything about the value at n = ∞.
- Second, if you accept the standard axioms and definitions of set theory, then it logically follows that the cardinality of the set of whole numbers is equal to the cardinality of the set of even whole numbers. That's because the definition of cardinality says that if two sets can be put in one-to-one correspondence, then they have the same cardinality. Now, the typical interpretation of "cardinality" is that it is a "count" of how many elements are in a set. This, I think, is where you disagree, because you say that naturally and obviously there are twice as many whole numbers as even whole numbers. Disagreeing with this interpretation, which is a mapping from mathematical language into common concepts, is perfectly reasonable, especially since it gives such a nonintuitive result in this case. So perhaps you want to investigate some other mathematical definitions, like "natural density", which Blotwell proposed. I don't know anything about these, so I can't help you there. But the fact remains that because of the way cardinality is defined, the cardinality of the set of whole numbers is equal to the cardinality of the set of even whole numbers. —Bkell (talk) 15:26, 18 June 2006 (UTC)
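For anyone who wants to see the natural density Blotwell named above in action, here is a minimal sketch (in Python; the helper name counting_density is my own choice): it counts how many of the first N whole numbers are even, and the ratio settles toward 1/2, which is exactly the "half as many" intuition made precise.

def counting_density(predicate, N):
    """Fraction of the whole numbers 1..N satisfying `predicate`."""
    return sum(1 for k in range(1, N + 1) if predicate(k)) / N

for N in (10, 100, 1000, 10000):
    print(N, counting_density(lambda k: k % 2 == 0, N))
# Prints ratios approaching 0.5: the evens have natural density 1/2,
# even though they have the same cardinality as the whole numbers.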
June 16
[edit]2 = 1
[edit]This is entirely accurate.
Simplify both sides two different ways.
Divide both sides by (x-x).
Divide both sides by x.
Look what I got. Can anyone explain? Political Mind 01:32, 16 June 2006 (UTC)
- x - x = 0. Division by zero. —Keenan Pepper 01:34, 16 June 2006 (UTC)
Brilliantly simple. So when I am at , it is really ? Ok, thanks! Political Mind 01:37, 16 June 2006 (UTC)
- Also, the first step is wrong. Did you mean instead of ? —Keenan Pepper 01:40, 16 June 2006 (UTC)
Thank you, will change. Political Mind 01:46, 16 June 2006 (UTC)
- There's more — you actually start from ! See that
- so
- and
- is equivalent to .
- Finally you wrote something equivalent to:
- which is obviously false.
- CiaPan 05:52, 23 June 2006 (UTC)
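For reference, the standard textbook form of this fallacy (the original poster's exact steps may have differed, so take this as an illustration rather than a transcript) is:

\begin{align*}
x^2 - x^2 &= x^2 - x^2 \\
x(x - x) &= (x + x)(x - x) && \text{(simplify the two sides in two different ways)} \\
x &= x + x && \text{(divide both sides by } x - x \text{, i.e. by } 0\text{)} \\
1 &= 2 && \text{(divide both sides by } x\text{)}
\end{align*}

The only illegitimate step is the division by x - x, exactly as Keenan Pepper points out.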
Derivatives and Laplace transforms commuting
[edit]In a differential equations textbook I'm working with, there's an exercise where the student is asked to compute the Laplace transform of the function f(t)=t*sin(ωt). Doing it from the definition, by integrating t*sin(ωt)*e^(-st) from 0 to infinity is tedious, but works. The book offers a hint for a simpler method: begin with the formula L[cos(ωt)]=s/(s²+ω²), and just differentiate both sides with respect to ω. This works out nicely enough, as long as you assume that differentiation w.r.t. ω commutes with the Laplace transform operator, but that seems like a highly unobvious thing. Can someone help me see why it's valid to say that d/dω[L[f(ω,t)]]=L[d/dω[f(ω,t)]]? -GTBacchus(talk) 02:58, 16 June 2006 (UTC)
- L[f(ω,t)](s) = ∫₀^∞ f(ω,t) e^(-st) dt; so long as the integral converges uniformly on some interval (which it does, for your f, for all ω and s, except at s = 0), you can interchange differentiation and integration (with respect to different variables, of course), so d/dω[L[f(ω,t)]] = L[d/dω[f(ω,t)]]. Hope that helps. --Tardis 03:26, 16 June 2006 (UTC)
- Yes, that does help. I'd like to make sure I'm clear about uniform convergence of the integral. Do you mean that there's some region in the ωs-plane over which, for each ε there is a b such that, for every (ω,s), the integral from 0 to b is within ε of the integral from 0 to infinity? -GTBacchus(talk) 04:20, 16 June 2006 (UTC)
- Sorry for the long delay. Your interpretation is correct except that I don't think it's necessary to consider a region with extent in s, as you are neither differentiating nor integrating with respect to it. It should be okay to consider it a fixed parameter, and just talk about varying ω. As long as s > 0, your integral obviously converges without difficulty for all ω (without even so much as a problem at infinitely-trending ω), and that's enough for exchanging the operations. --Tardis 03:41, 23 June 2006 (UTC)
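If you want to double-check the interchange on this particular transform, a quick sketch with SymPy (assuming it is available; the symbol names are my own) compares the direct transform of t·sin(ωt) against the ω-derivative of L[cos(ωt)]:

from sympy import symbols, sin, cos, diff, simplify, laplace_transform

t, w, s = symbols('t w s', positive=True)

# Direct transform of t*sin(w*t) ...
direct = laplace_transform(t*sin(w*t), t, s, noconds=True)
# ... versus using L[d/dw cos(w*t)] = L[-t*sin(w*t)], i.e. L[t*sin(w*t)] = -d/dw L[cos(w*t)]
via_derivative = -diff(laplace_transform(cos(w*t), t, s, noconds=True), w)

print(simplify(direct - via_derivative))   # 0: both equal 2*w*s/(s**2 + w**2)**2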
Proof that lim x->(infinity) of Ln(x) = infinity
[edit]Can anyone give me a rigorous proof based on the formal definition of limit? Thank you very much ;)
- Looks like a proof that can be found in any elementary calculus textbook. A good way to do it would probably be to use the definition of ln as an integral to show that the above limit is greater than the harmonic series, which diverges. -- Meni Rosenfeld (talk) 08:22, 16 June 2006 (UTC)
- You can prove that lim x->(infinity) of f(x) = infinity from first principles provided that two conditions are satisfied: (1) function f is monotonically increasing, and (2) f has a pre-inverse (or right inverse), that is, some function g such that f(g(x)) = x for all x. The logarithm function satisfies both conditions. --LambiamTalk 10:11, 16 June 2006 (UTC)
- That is the approach I would take, but you need to be a bit careful in formulating the conditions. For instance, the limit of the arc tangent as x goes to infinity is π/2.
- To the original poster: Start by writing down what you want to prove, then use the formal definition of limit, then use the definition of the logarithm, and then you're almost done. How far did you get? -- Jitse Niesen (talk) 10:34, 16 June 2006 (UTC)
- Recommended reading is George Pólya's 1945 book How to Solve It (ISBN 978-0-691-08097-0), whose guidelines are summarized in the cited Wikipedia article. Obvious questions are:
- "What is your working definition of the Ln function?"
- "What definition do you have for a function having a limit of infinity?"
- Almost certainly you have seen related problems. Try to imitate them.
- On an introspective note, a strange phenomenon in solving problems is that, often, the greater the struggle the sweeter the success. (Imagine how Wiles must have felt when he finally proved Fermat's last theorem!) Also, it seems that a struggle often indicates exactly where more insight is needed, so that after the dragon is slain, a post-mortem is especially revealing. Here follows some inspiration:
- ❝When asked what it was like to set about proving something, the mathematician likened proving a theorem to seeing the peak of a mountain and trying to climb to the top. One establishes a base camp and begins scaling the mountain's sheer face, encountering obstacles at every turn, often retracing one's steps and struggling every foot of the journey. Finally when the top is reached, one stands examining the peak, taking in the view of the surrounding countryside — and then noting the automobile road up the other side!❞ — Robert J. Kleinhenz
- ❝Since you are now studying geometry and trigonometry, I will give you a problem. A ship sails the ocean. It left Boston with a cargo of wool. It grosses 200 tons. It is bound for Le Havre. The mainmast is broken, the cabin boy is on deck, there are 12 passengers aboard, the wind is blowing East-North-East, the clock points to a quarter past three in the afternoon. It is the month of May. How old is the captain?❞ — Gustave Flaubert (as a young man, writing to his sister, Carolyn)
- Aren't quotations fun? ;-) --KSmrqT 15:17, 16 June 2006 (UTC)
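Following the hints above, here is one way the argument can be written out, assuming the working definition that ln is the increasing inverse of the exponential; with the integral definition, the same shape of argument goes through using the harmonic-series bound Meni mentions.

\[
\text{Given } M > 0,\ \text{take } X = e^{M}. \text{ Then for every } x > X:\quad \ln(x) > \ln(e^{M}) = M,
\]
since \(\ln\) is increasing; and that is exactly the formal definition of \(\lim_{x \to \infty} \ln(x) = \infty\).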
Sharing swap space between XP and Linux - reprise
[edit]What I would like my GRUB to do is the following:
- ask what OS to boot;
- check what type of partition is FOO;
- according to the OS chosen for booting, if FOO is "compatible" (swap for Linux, fat32 for Windows) then boot, otherwise quick-format it in a compatible way (same as above) and then boot.
All this comes from me wanting to use the same partition for the two OS's paging space. Does anyone know if it can be done? Thanks in advance. Cthulhu.mythos 09:46, 16 June 2006 (UTC)
- I wouldn't try to convince GRUB to do this, instead do it at boot time with the individual operating systems. It's very easy to do with linux, but I don't know windows enough to help with that. – b_jonas 11:51, 17 June 2006 (UTC)
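A minimal sketch of the Linux half of b_jonas's suggestion, meant to run early in the boot sequence; /dev/hda5 is only a placeholder for the shared partition, and it assumes the usual blkid/mkswap/swapon tools are installed. The Windows half (reformatting back to FAT32 and re-enabling the pagefile) would have to be done from a Windows startup script.

#!/usr/bin/env python
# Sketch: reuse a shared partition as Linux swap, re-creating the swap signature
# if the other OS left it formatted as something else (e.g. FAT32).
import subprocess

PARTITION = "/dev/hda5"   # placeholder: the partition shared between the two OSes

def fs_type(dev):
    """Filesystem type reported by blkid, or an empty string if unknown."""
    result = subprocess.run(["blkid", "-o", "value", "-s", "TYPE", dev],
                            capture_output=True, text=True)
    return result.stdout.strip()

if fs_type(PARTITION) != "swap":
    subprocess.run(["mkswap", PARTITION], check=True)   # quick "format" as swap
subprocess.run(["swapon", PARTITION], check=True)       # start paging to it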
continuation of question on derived functors, that actually aren't functors
[edit]
Hi, some time ago I asked this, I have given the link.
I wanted to know, if you had a covariant functor F from the category of R modules to the Ab category, how you could see the left derived functor L_n F as a functor from R-mod to Ab
Now there were people proposing I go to derived category but it still doesn't clear things up.
This is my proposal to understand this for myself :
see L_n F as a functor from the category R-modpwr to R-mod
R-modpwr is the category of which the objects are pairs (M, C), with M a left R module and C a positive complex over M
morphisms between them are morphisms between and , along with a chain morphism \alpha (for which everything commutes) I mean if :
then
So my module does really depend on the chosen complex over the module.
Is this the best approach? Or am I just way off with this? It seems to be the only way I can understand it.
Evilbu 17:30, 16 June 2006 (UTC)
Example of continuous not differentiable function
[edit]Please. I couldn't find one.
- Or if you're simply looking for a function that's continuous at a point but not differentiable there, take the absolute value function at zero. —Keenan Pepper 22:10, 16 June 2006 (UTC)
Polyomino tiling
[edit]What is the smallest simply connected polyomino that cannot be tiled to fill the plane using translation, reflections and rotations? -- SGBailey 23:18, 16 June 2006 (UTC)
- There are three heptominoes that satisfy this criterion: http://mathworld.wolfram.com/PolyominoTiling.html —Keenan Pepper 23:39, 16 June 2006 (UTC)
yo peeps, are hash functions just wild guesses, or is some bounds proven?
[edit]Yo,
So it seems that since MD4, MD5, SHA, SHA1 etc are all "broken", with a recommendation not to use in new infrastructure implementations, they must not have been "proven" in the first place. What I mean is that there was a day when MD5, for example, was thought secure, let's say in 1995, and this meant that some smug researcher could say: "You know, if every computer at this university were networked and at my disposal, there still wouldn't exist a set of instructions I could fill their memory with such that if left plugged in for 72 months, the array would be guaranteed to churn out two distinct files with the same MD5 checksum by the end of that time. Maybe within 100+ years, but not in 72 months." (The 100+ years is meant to allude to brute-forcing without reducing the bitspace, whereas the 72 months alludes to the fact that MD5 is in fact "broken" and does not require a full brute forcing).
So, in fact, this researcher would have been wrong, because even without newer technology (using his 1995 university equipment), we can now construct a set of instructions (program) that, were he to run it on all the computers at his university, would produce the collision in 72 months instead of 100+ years. So what I mean is that a mathematical proof must not have existed in the first place that no such program could exist.
So, now, I am asking, is there any hash function today that isn't just wild conjecture, but actually PROVEN not to reduce to fewer than x instructions on, say, an i386 instruction set to break?
As far as I understand it: does any hash have a mathematical proof that no program exists (the turing computer cannot be programmed to) to produce collisions in fewer than 2^x operations, where x is guaranteed to be at least a certain number?
I understand that quantum computing can "break" cryptography, but only in the sense of using a different physics. No program will make the computer in front of me turn into a quantum computer, but surely there is a hash for which there is a proof that no program exists that will turn the computer in front of me into a speedy collision-producer ???? —The preceding unsigned comment was added by 82.131.188.130 (talk • contribs) .
- I think any proof of this nature would first require a proof that P!=NP, which is worth a million dollars. —Keenan Pepper 00:17, 17 June 2006 (UTC)
- I thought hash functions, like finding large prime factors, aren't questions of P or NP, but just a dearth of algorithms. Finding large prime factors isn't hard because it's equivalent to other non-polynomial time problems, it's hard because we're led to believe no good algorithm exists for it. It's just a "social" trick, there's no NP equivalency. —The preceding unsigned comment was added by 82.131.188.130 (talk • contribs) .
- All hash functions run in polynomial time; if they took longer they'd be useless for practical purposes. Therefore, finding collisions is in NP. If you proved that finding collisions for a given hash function could not be done in polynomial time, that would prove P!=NP. —Keenan Pepper 00:43, 17 June 2006 (UTC)
I think that it doesn't follow that if hash functions run in polynomial time, finding collisions also happens in polynomial time (although of course you could not verify your findings). You could just print the two files that don't match. For example, a weakness could allow you to do some hand-waving that's barely more complicated than printing the resulting file. My problem is that hashes are all just hand-waving, and mathematicians don't even assure me that if it takes my computer ten seconds to produce a hash of a certain file, there cannot exist a program that can produce a competing file with the same hash in five seconds. (Although of course the program could not also verify its product). Since there's no math, it's all just hand-waving! I'm looking for some hash that has been mathematically proven to take a certain number of operations to reverse, on a Turing machine. (Obviously a quantum computer might be able to sidestep these numbers). It seems like one big social prank.
Okay, I think I misinterpreted. All hashes run in polynomial time, you said, but do any of them guarantee that a competing file with the same hash cannot be produced in, say, constant time for that length hash? (Of course constants are technically polynomials.) For example, here is an MD5 of a password I just chose: 47869165bfa3b3115426b0b235a2591e *- (Not sure why that's what the output looks like; this is produced with the line "echo -n "secret" | md5sum" in Bash on Cygwin, but secret is in fact something I chose.) So, can MD5 the algorithm even mathematically assure me that on this architecture I'm typing on (i386, Windows 2000) for that many bits of md5 hash there doesn't exist a CONSTANT time algorithm for producing a file to match it? I don't mean hand-waving social engineering, I mean mathematics.
If you proved that finding collisions for a given hash function could not be done in polynomial time, that would prove P!=NP What about proving that, although the time is polynomial, the factors mean that a theoretically optimal program on i386 architecture with unlimited RAM (as for an arbitrarily large cluster) could not produce an answer on average in fewer than 10^x operations, where the number of zeros (x) puts it out of the reach, at least, of THIS computer over the course of 24 hours???
Okay, I just realized that what you think I want is a proof that finding collisions will take more than polynomial time (for example, exponential time). However, I don't need this. It doesn't matter if a collision can be found in exponential time. It doesn't even matter to me if a collision can be found in CONSTANT time, as long as it guarantees that the constant value translates to more than x operations. I don't care how it scales theoretically, I care about the actual proven minimum. I don't want to hear "Hahaha, just kidding. Tricked you good we did, us mathematicians, because now you can crack [find a collision for] SHAxxx on a rusty laptop from 1987 in 35 minutes." That's essentially what happened to MD5 (read that article, and SHA). Now I want something that that won't happen to. At least, not on a normal computer.
- I think you have a misconception of what a "break" means in cryptography (especially in the context of hash functions). The basic threshold for getting the secret in a cryptographic system, which is usually the only thing that is hidden from outsiders (Kerckhoffs' principle), is the number of elements in the keyspace, which for a binary cryptosystem is usually 2^(number of bits). However, for cryptographic hash functions, the definition of a "break" is that a person can find a single collision (even for a nonsensical message) for the scheme, so the basic threshold for a binary cryptosystem is 2^(number of bits/2) because of the Birthday Paradox. However, if an attack can find the secret in less than that time, regardless of whether it is feasible within human lifetimes to do so, it's considered "broken" (because faster computers will eventually be developed, pulling the threshold into a human lifetime; DES is an example of this). However, the SHA1 and MD5 attacks are practically feasible, so those hashes are useless.
- Getting back to your question, is there any way to prove that a particular function cannot be broken in a certain timeframe? The only cryptosystem for which it can be proven is the one-time pad. There are proofs in provable security that reduce cryptographic systems to simpler problems (typically hard problems like discrete log and factorization, but other reductions are possible); those proofs depend on the complexity of the other problems, and discrete log, for example, has not been proven to be exponential time. And even when a system is provably secure, its implementation may not be. BMGL in [4], for example, was vulnerable to time-memory tradeoff attacks. There are also assumptions about the security model for security proofs which are being debated in crypto circles as well.
- Your request is almost the equivalent to saying "predict all future attacks on cryptosystems and account for them", which is practically impossible to do. --ColourBurst 03:17, 17 June 2006 (UTC)
- Well, let's try something simpler. Can mathematics guarantee that there could not ever exist a program that, were I to run it on my computer, in fewer than 10 days would analyze a 500KB executable and determine whether, if run, it would eventually terminate, or whether it would run forever (contains infinite loops, etc), scaling linearly forever. (For example, 20 days to analyze a 1MB executable, 40 days for a 2 MB executable, etc). As I understand it, the halting problem guarantees that I can rely on the non-existence of this program for my architecture within the timeframe I specified? In other words, is it true that computer science does make some hard mathematical guarantees, but other than the one-time pad, not one of these guarantees is in the field of cryptography? Mathematics guarantees nothing more than that a one-time pad is secure, and the "hard" problems other cryptographic mathematics depends on aren't hard in a real sense (guaranteed not to be solvable in fewer than ____________ instructions), they're just hard because hand-waving cryptographers call them hard. Is my dichotomy a false one? (halting problem, one time pad = actually guarantees a certain program could not exist, cryptography = guarantees nothing other than that if a hundred thousand mathematicians say "hey, this is hard for us. Not provably or anything -- we just haven't figured out how to do this in an easier way" then the gullible public will say "Well Golly! If it's hard for them, I guess I can rest assured no one will ever do it.") I guess I'm saying that I take exception to your phrase "Your request is almost the equivalent to saying 'predict all future attacks on cryptosystems and account for them'" -- it's not really ever an "attack", since no one said anything about it being impossible. It's like, if I base my security on no one ever living on Mars, then it's not really an "attack" if someone goes and lives there, in order to "break" my security, since obviously nothing ruled it out. It's not really an "attack" if you take a hard math problem and solve it. So, is it fair to call cryptography (other than OTP) all hand-waving? While saying that other computer science questions (e.g. halting problem) really do have guaranteed findings...
- I'm sorry, I'm not sure I understand. The halting problem doesn't really account for running time as far as I know, and I'm not sure you can define a problem in terms of an absolute running time like the way you phrase it. And as far as I know, P ≠ NP hasn't been proven either, so anything based on an exponential running time hasn't really been proven. Most computer scientists believe P ≠ NP, but that says nothing about whether it really is so. Other running time assumptions compress to those two classes in most cases, so I'm not sure there have been "other guaranteed findings" the way you suggest. Are you really going to call travelling salesman, etc. problems "hand-wavy"?
- I can tell you one thing though. Most good cryptographers will account for the fact that one day their ciphers may be broken (even if there's nothing but Moore's law). I mean, recently a lot of the "good at the time" stream ciphers were broken wide open by algebraic attacks, and now out of necessity cryptographers are exploring nonlinear boolean functions (but their properties are very poorly defined). In addition, most ciphers go through extremely rigorous testing by public bodies to get through, so while it isn't absolute, it's probably not hand-wavy either. I mean, I'm not sure why theories can't be invalidated - it's the basic tenet of science that it's possible (eg. Newtonian physics versus general relativity) As for the "gullible public", see Snake_oil_(cryptography) --ColourBurst 22:34, 17 June 2006 (UTC)
- I think the short answer is "No, no (non-trivial) bounds have been proved for any hash function." As to the (non) relationship with P=NP?, it is conceivable (at least in theory) that someone might prove for some hash function no algorithm exists that will invert it in fewer than 10^38 steps on most inputs. That would be reasonably safe, also for the foreseeable future, even if that algorithm runs in constant time. --LambiamTalk 13:02, 17 June 2006 (UTC)
- By "might", then, you mean then that it hasn't been done, and doesn't seem to be on the horizon yet? Black Carrot 21:13, 17 June 2006 (UTC)
- I believe it's an ongoing investigation right now in some crypto circles. I believe there are papers that try to prove the security of certain hash functions reduces to a hard problem. I remember hearing about a generalized transformation as well but I don't remember. --ColourBurst 03:05, 18 June 2006 (UTC)
- The original problem is clearly unsolvable; there are only 2^128 possible MD5 sums; therefore, if you consider the 2^256 files of length 32 bytes, there are at least two files that have the same MD5 sum (in fact, there are quite a bit more than that).
- Knowing that these two files, of length 32 bytes, exist, you can now put them in your lab's computer, and have a solution almost instantly; the instruction sequence to generate a given file that short is short, too.
- So the number x in your question is very small, bounded from above by a number that is not at all large, and it is provable that it cannot be large, for any hash function.
- Hope this helps?
- To RandomP: Knowing that these two files exist is not enough to put them in your lab's computer. In order to do that, you first need to figure out their actual contents. To do that you "only" need to consider about one half of 2^256 possibilities and in the process keep a record of all previous results. I am not sure that the universe contains enough elementary particles to store your intermediate results (but perhaps notched subquanta will do the job). In the meantime, I'll put my money on quantum computing. Back to the question of Black Carrot: for all I know, nothing even vaguely looking like a mathematical guarantee appears to be anywhere from here to the event horizon. The only kind of guarantee available is that a lot of very clever people, in spite of their assiduous efforts, couldn't figure out a way to compute it in a reasonable amount of time, so you need an awesomely real clever mind (or someone who just got lucky) to compromise your secret. --LambiamTalk 00:26, 18 June 2006 (UTC)
- The question was very specific. If you go back and read it, you'll find that my explanation above is precisely why there can be no such proof.
- How secure or insecure such hashes actually are is a much much harder question; I'm thus thankful I wasn't asked it :-)
- Keep in mind that all such hash algorithms do have collisions (that's easy to prove), we just don't know how to find them.
- Your best bet would probably be a hash function that is itself NP-complete, though I'm not sure such a thing could exist with a fixed-length output (if any integer is allowed, things are easier - just use the input to set up a network, "just" solve the TSP, then re-encode that cycle to get an integer).
- RandomP 09:48, 18 June 2006 (UTC)
- Or just Moore's law, which is essentially what happened to DES (there were breaks but not enough to compromise the security). Right now, quantum computing can only solve the two hard problems in public-key cryptography; they haven't been able to come up with algorithms that do anything but those yet. It's possible that quantum computing may solve other similar problems but right now those are the only two they can solve. --ColourBurst 01:30, 18 June 2006 (UTC)
Thanks for everyone's input! I'm putting the short answer in the section below. The only question I still have is whether RandomP's original answer referred to just putting in "precooked answers" into the laboratory computers, and therefore "getting" them instantly. Of course, I referred to getting them algorithmically, rather than just coming back from the future with precooked example answers. (Of which of course there are guaranteed to be many, as RandomP seemed to correctly point out). I wonder how to phrase this issue more rigorously, so that "coming back with precooked answers" is ruled out.... I'll ask this in a new question below. 82.131.190.200 18:45, 20 June 2006 (UTC)
- Yes, my answer referred to "precooked answers", because I see no way to phrase the problem in a rigorous way that would exclude those. There might be one, of course, but I really doubt it at this point, and I see no way to even start on it.
- Essentially, if you allow to get a bit philosophical, there are two parts to solving a problem mathematically: the first is to state it in rigorous terms and the second is proving whatever you want to prove about the rigorously-stated problem. (some would argue the first bit isn't mathematics, but that's another debate).
- The question you asked, as far as I'm aware, is at the very beginning of the first stage: we need something like a definition of complexity where a random 128-bit number is significantly more complex than a long and badly-written computer program, even though that computer program involves many more than 128 yes/no choices! That sounds like a hard AI problem to me.
- But, hey, maybe there's a really good way to do it that I'm just missing. Let us know when you find it? :-)
- RandomP 19:11, 20 June 2006 (UTC)
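As a concrete illustration of the birthday bound ColourBurst brought up (and of why "broken" is measured against 2^(bits/2) rather than 2^bits), here is a toy sketch that finds a collision in a hash deliberately truncated to 32 bits; it succeeds after on the order of 2^16 tries and says nothing at all about full 128-bit MD5:

import hashlib

def truncated_md5(data, nbytes=4):
    """First `nbytes` bytes (32 bits here) of the MD5 digest."""
    return hashlib.md5(data).digest()[:nbytes]

seen = {}
i = 0
while True:
    msg = b"message %d" % i
    h = truncated_md5(msg)
    if h in seen and seen[h] != msg:
        print("collision after", i + 1, "tries:", seen[h], "and", msg)
        break
    seen[h] = msg
    i += 1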
short answer
[edit]The short answer is "No, no (non-trivial) bounds have been proved for any hash function."
June 17
[edit]Mozilla Firefox Problems
[edit]OK, I've been having trouble watching some videos on Mozilla Firefox... I've tried downloading all the plugins on this page (https://pfs.mozilla.org/plugins/?action=missingplugins&mimetype=application/x-mplayer2&appID={ec8030f7-c20a-464f-9b0e-13a3a9e97384}&appVersion=2006050817&clientOS=Windows%20NT%205.1&chromeLocale=en-US).
And it still won't work and it won't give me a plugin recommendation.
This is what I get when the video won't work... http://www.esnips.com/doc/2968f507-91c0-4e20-b991-8bb90e9fd09a/Mozilla-Friefox-Problems.jpg
Thanks! ~Cathy T.~
It does this on my mom's pc too... ~Cathy T.~
- Sometimes I have problems with certain things on Firefox. Try viewing it in Internet Explorer. You may want to check out the IE Tab firefox extension as well. —Mets501 (talk) 14:08, 17 June 2006 (UTC)
Yeah, that's what I've been doing... using internet explorer. Going to check out that extension. Thanks! ~Cathy T.~
Thanks... I just downloaded the extension and it works! Thank you so much! :) ~Cathy T.~
- I remember that used to happen to me as well. Are you using the latest version of Firefox? Remember to keep it updated.
Yeah, it's updated. ~Cathy T.~
what is 'the fundamental theorem of homological algebra', if there even is one?
[edit]Hi,
I have been studying homological algebra and I have seen several important theorems, but I still haven't found any theorem (in my syllabus or on the web) that is called 'The fundamental theorem of homological algebra'.
Is there one? (because if there is I would feel a little silly studying all this and still not knowing what it is..:))
If so, my guess would be that it is the theorem about connecting morphisms, that allows you to create a long exact sequence of homology modules from a short exact sequence of chain morphisms?
Or is this the one, but in a much more general abelian category?
Thanks,
Evilbu 08:42, 17 June 2006 (UTC)
- A lot of fields don't have any single theorem which could be called the fundamental theorem. Even those fields that do, their theorems are wrongly named. The fundamental theorem of algebra, for example, is not, strictly speaking, a theorem of algebra at all. And what would qualify as the fundamental theorem of algebraic topology? Or of numerical analysis? Why should homological algebra be any different? -lethe talk + 09:13, 17 June 2006 (UTC)
- The fundamental theorem of algebra, for example, is not, strictly speaking, a theorem of algebra at all. What is it? 82.131.187.179 09:43, 17 June 2006 (UTC).
- Well, the theorem is not really of much importance in algebra these days, so it's not "fundamental". Also, its proof relies on topological properties of the complex plane, so it's not an algebraic theorem. -lethe talk + 12:08, 17 June 2006 (UTC)
Hmm, but once I read an article, I think here on Wikipedia, about math in popular culture, and I think it said something about a movie in which a professor 'demonstrates the fundamental theorem of homological algebra for an obnoxious student'.
It is a shame I can't find it back.
But you think there isn't one? If I had to force you to pick one which would it be?
Thanks,
Evilbu 11:12, 17 June 2006 (UTC)
- Uh... well I really don't feel comfortable calling any one theorem the fundamental theorem. It's not like the entire body of work rests on some result. The existence of a left-derived functor for every projective module is more preliminary than it is fundamental. The universal coefficient theorem is kinda nifty. -lethe talk + 12:08, 17 June 2006 (UTC)
- Here's one theorem given that name, found on the web.
- Theorem. Given a short exact sequence of differential spaces 0 → A → B → C → 0,
- there exists a linear mapping ∂: H^n(C) → H^(n+1)(A), called the connecting homomorphism, such that the following long sequence is exact: ⋯ → H^n(A) → H^n(B) → H^n(C) → H^(n+1)(A) → H^(n+1)(B) → H^(n+1)(C) → ⋯
- As guessed, it's about the existence of a connecting morphism. The restriction to "differential spaces" is not really part of the fundamental theorem, but merely a convenience for the source in question. --KSmrqT 13:36, 17 June 2006 (UTC)
Thanks. And it was also crucial in Mayer-Vietoris in topology, hmmm. But I mean, wouldn't the most general case involve abelian categories? But I was wondering, how would you go about categorically defining quotients (for homology modules)?
Evilbu 16:06, 17 June 2006 (UTC)
The "movie in which a professor 'demonstrates the fundamental theorem of homological algebra for an obnoxious student'" is probably a film called It's My Turn (look it up on imdb, I haven't seen it though I'd love to find a copy...) in which someone proves the Snake Lemma on a blackboard. This isn't really a good candidate for the "fundamental theorem of hom. alg." though. It's My Turn gets referenced in Weibel's book An Intro to Hom. Alg. --86.15.136.29 22:58, 24 June 2006 (UTC)
Limit of a complicated sequence
[edit]Hello. May somebody tell me how to find the limit of the following sequence?
lim 1/n * ( (n+1)(n+2)...(n+n) )^(1/n)
n->infinity
Thank you very much for your patience.
- Try finding first the limit of the natural logarithm of this expression. To do that, think about the result you get after some simplification as an integral. -- Meni Rosenfeld (talk) 21:11, 17 June 2006 (UTC)
- Another approach is to express the product within the outer set of parentheses in terms of factorials, then apply Stirling's approximation. Chuck 21:12, 21 June 2006 (UTC)
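Either hint leads to the value 4/e; as a numerical cross-check, a small sketch working with logarithms (to avoid overflowing the product) shows the convergence:

from math import e, exp, log

def a(n):
    # a_n = (1/n) * ((n+1)(n+2)...(n+n))^(1/n), computed via logs
    return exp(sum(log(n + k) for k in range(1, n + 1)) / n - log(n))

for n in (10, 100, 1000, 10000):
    print(n, a(n))
print("4/e =", 4 / e)   # approximately 1.4715, matching the trend above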
Integral of Sinc function
[edit]Is there a compact integration of the formula sin(x)/x? Black Carrot 21:03, 17 June 2006 (UTC)
- You mean, a closed form? Not with elementary functions, of course, but this is exactly the sine integral. -- Meni Rosenfeld (talk) 21:15, 17 June 2006 (UTC)
- The closed form is what I was going for. Ah, well. Thanks. Black Carrot 22:34, 17 June 2006 (UTC)
Confirm Derivative
[edit]Hi. If possible could someone please confirm the following relationship between function and the derivative I've found. I'm getting some answers right and some wrong with this equation, so I'm not sure if the textbook is wrong or I have an equation that works sometimes and not others.
Thanks, --DanielBC 22:46, 17 June 2006 (UTC)
- It looks right to me. Black Carrot 23:17, 17 June 2006 (UTC)
- Yes, it's correct. —Mets501 (talk) 02:20, 18 June 2006 (UTC)
- Often a good check is to compute a result in a different way. For example, factor the derivative expression as
- with λ = 1⁄50 and see if you get the same results. By inspection, at t = 0 both g and g′ are zero. The derivative is also zero at t = −100.
- In this case, three simple properties of derivatives confirm the relationship.
- the derivative of a product, d(u⋅v) = du⋅v + u⋅dv
- the derivative of a function composition, d(u∘v) = (du∘v)⋅dv
- the derivative of the exponential function, deu = eu⋅du
- One word of caution: If you are computing the results using a computer and floating point arithmetic, round-off error and other numerical effects may be an issue.
- Also, I hesitate to mention spelling errors because they are so common, but the correct spelling is "derivative", with an "a". --KSmrqT 08:58, 18 June 2006 (UTC)
Wave ID
[edit]If anyone here has a copy of Stalking the Riemann Hypothesis: the quest to find the hidden law of prime numbers, I need some help. He explains things very clearly for about the first third of the book, then starts leaving out more and more details. On pages 85-90,
- In Figure 12, what are those "primal waves" called? Do they have a formula?
- How are they derived from the zeta zeros?
- They're clearly not periodic, but he keeps comparing their superposition (which becomes the prime counting function) to Fourier analysis. I thought that only dealt with periodic functions, like sine.
- If it's really related to Fourier analysis, why can't the prime counting function be directly broken up into its "primal waves" from the information we already have?
- Why do these waves go from wiggling along a horizontal line (Figure 12) to wiggling very slowly along a diagonal line (Figures 13-15)?
- (Skip this one, I figured it out. Black Carrot 21:40, 18 June 2006 (UTC))
- The approximation he's got looks pretty accurate. Does it become a lot less accurate farther out? Given that a few trillion zeros have been worked out, according to our articles, why isn't the resultant function listed in any of the prime number pages? Black Carrot 23:30, 17 June 2006 (UTC)
June 18
[edit]How much does it cost to make a Plasma TV
[edit]The Plasma TV manufacturer Panasonic produces widescreen plasma TV model X. If they price it at $20000 then they can sell 50 units per week. If they halve the price to $10000 then they can sell 500 units per week. If they halve the price again to $5000, they can sell 5000 units per week. And finally at a price of $2500 they can sell 50000 units per week.
The actual selling price for the widescreen plasma TV model X is $2800. What is the actual cost of the TV?
My solution
[edit]I have calculated that the volume sold is
Vol = 5 * 10 ** ( 1 + (log(20000/price)/log(2)) )
The total profit is
Profit = ( Price - Cost ) * Vol
which we can plot as an X,Y graph.
Setting the Cost to $1000 produces a graph with the maximum y at x = 1426.
Refer to this graph.
My question is this. Apart from changing the value of Cost and replotting the graph, how else can I find the value of Cost which corresponds to a graph with the maximum value of y (Total Profit) at the point of x = 2800 (Price = $2800)?
Ohanian 02:41, 18 June 2006 (UTC)
- The function is at its maximum when its derivative is zero: d/dx [ (x - Cost) * Vol(x) ] = 0. Substitute x = 2800 and solve for the cost.
Your derivative is wrong. It produces the cost of 2189 instead of 1960. 211.28.122.175 09:56, 18 June 2006 (UTC)
So I need to find the derivative of this
211.28.122.175 10:02, 18 June 2006 (UTC)
- Yes the derivative is right, and it produces 1960. Conscious 12:26, 18 June 2006 (UTC)
Another approach
[edit]Not essentially different, but a bit less strainful in some respects.
Start by rewriting the expression for the profit in the form:
y = a (x − C) x^(−λ)
in which C is the cost. For the purpose of finding a maximum, the value of a is irrelevant, but λ = (log 10)/(log 2) ≅ 3.3219. Taking the derivative with respect to x yields:
dy/dx = a x^(−λ−1) ( (1 − λ) x + λ C )
which is zero for:
- x = λ C / (λ − 1), or equivalently C = (1 − 1/λ) x
or, plugging in the numeric value for λ,
- x ≅ 1.4307 C, or C ≅ 0.6990 x
The value 0.6990 is 1 – 0.3010, where 0.3010 is the base-ten log of 2. --LambiamTalk 13:12, 18 June 2006 (UTC)
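The same calculation can be handed to a computer algebra system as a check; a sketch assuming SymPy is available (the symbol names are mine):

from sympy import symbols, log, diff, solve, simplify

x, C = symbols('x C', positive=True)
vol = 5 * 10**(1 + log(20000 / x, 2))   # the volume formula from the question
profit = (x - C) * vol

best_C = solve(diff(profit, x), C)[0]   # the cost for which price x maximises the profit
print(simplify(best_C / x))             # simplifies to 1 - log(2)/log(10), about 0.6990
print(best_C.subs(x, 2800).evalf())     # about 1957, i.e. roughly $1960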
Integral Symbol
[edit]I've seen a symbol that looks like an integral symbol with an "o" in the middle of it. What is that symbol? --Shanedidona 03:27, 18 June 2006 (UTC)
- It refers to a path integral along a closed path. -GTBacchus(talk) 03:30, 18 June 2006 (UTC)
- Outside of Wikipedia, these integrals are not called "path integrals", because path integrals are something else entirely. Please visit Talk:Path integral to comment on a request to fix the problem on Wikipedia. Melchoir 04:23, 19 June 2006 (UTC)
- The symbol "∮" (Unicode U+222E) is also referred to as a "contour integral", meaning the same thing. Occasionally one sees fancier variations like "∲" (Unicode U+2232) and "∳" (Unicode U+2233) indicating a clockwise or opposite path, respectively. Contour integration is popular in complex analysis because of its vital relationship to residues of holomorphic functions. --KSmrqT 09:19, 18 June 2006 (UTC)
- And in the markup used on wiki pages, the code is \oint, which gives ∮. Confusing Manifestation 12:05, 18 June 2006 (UTC)
Computers
[edit]Actually i want to make a story and wanted to know how to make a simple virus.
- Try biennale.py. It's a very simple virus, in that it can only infect other programs written in Python to which it already has write access. It makes no attempt to exploit any security vulnerabilities, so it has no chance of spreading in the wild. —Keenan Pepper 21:37, 18 June 2006 (UTC)
Secant and method of false position
[edit]Hi, I am having problems understanding the differences between the secant method and the method of false position. I have an exam tomorrow, so any help would be appreciated. The formulas look pretty much the same and they seem to do the same thing, so what's going on?
chemaddict 13:20, 18 June 2006 (UTC)
- Both methods are based on linear interpolation, which is why the formulas are the same. The difference is in the numbers that the formulas are used on.
- Let's take an example. Suppose you want to solve and that you start with and . Both methods yield the same value for the next iterate:
- However, they differ in how they proceed. The secant method uses the last two points to calculate :
- False position uses the last two iterates on which f has opposite signs. Since , and , they are and , and hence false position uses:
- This is also explained at the start of this PDF extract from Numerical Recipes.
- Good luck with your exam. -- Jitse Niesen (talk) 13:57, 18 June 2006 (UTC)
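To see the two methods behave differently while sharing the same interpolation formula, here is a minimal side-by-side sketch; the test function f(x) = x^2 - 2 and the starting points 1 and 2 are my own choices, not the ones used in the worked example above:

def secant(f, x0, x1, steps=8):
    """Keep the last two iterates, whatever their signs."""
    for _ in range(steps):
        if f(x1) == f(x0):    # converged (or degenerate); avoid dividing by zero
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1

def false_position(f, a, b, steps=8):
    """Keep a pair [a, b] on which f changes sign."""
    c = a
    for _ in range(steps):
        c = b - f(b) * (b - a) / (f(b) - f(a))
        if f(a) * f(c) < 0:
            b = c    # the root stays bracketed in [a, c]
        else:
            a = c    # the root stays bracketed in [c, b]
    return c

f = lambda x: x**2 - 2
print(secant(f, 1.0, 2.0))           # tends to sqrt(2) = 1.41421356...
print(false_position(f, 1.0, 2.0))   # also tends to sqrt(2), keeping the sign change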
June 19
[edit]Ring theory
[edit]Hi
If I'm too dense to understand the Ring theory article, even by using the internal links to explain the explanatory terminology (words like "and" were OK... most of the others were beyond me) is there any way I'll ever understand what the heck Ring Theory is?
You may have gathered that I'm no mathematician... but I have a friend who is. He specialises in Ring Theory. And he is utterly incapable of explaining it.
- In short, a ring is the most general setting to study the properties of addition, subtraction, and multiplication together. Many of the familiar properties of addition and multiplication do not carry over to all settings, so ring theory is the body of knowledge that organizes the different properties and their ramifications and there is quite a body of knowledge. Integers can be factored into primes, and one thing you might study is what other rings have that property (unique factorization domains). You can add division to the integers and you get the rationals, so what other rings can that happen with (Ore condition). An important fact in all of mathematics is that we study objects by means of the functions on them. In ring theory, this manifests itself in the following way: much about a ring can be understood by knowing what kinds of modules over the ring there can be (a module is like a vector space with scalars from the ring instead of real numbers). -lethe talk + 14:38, 19 June 2006 (UTC)
- Hmmm ... the ring theory article is perhaps not the best place as it is quite concise and abstract - a fine summary for someone who already knows the subject. Try reading the ring (mathematics) and commutative ring articles - at least they give some examples of rings. Gandalf61 14:58, 19 June 2006 (UTC)
Based on those tips, let me try to give an idiots' definition: Simply put, a ring is a place where the "normal" rules of addition and multiplication work. Is that it? --Dweller 15:25, 19 June 2006 (UTC)
- Yes. -lethe talk + 15:48, 19 June 2006 (UTC)
- Remember, though, that there are some rules which you may consider "normal" but do not necessarily hold for every ring. The requirements are the associative and distributive laws, the commutative law for addition (not necessarily multiplication), the existence of 0, and the fact that for every number a there is also a number -a. -- Meni Rosenfeld (talk) 15:52, 19 June 2006 (UTC)
- Additionally, the requirement of the existence of a multiplicative identity (the number 1) is author dependent. Some authors require rings to have one, some authors don't. Authors who do sometimes call places without 1 rngs instead of rings. By the way, you might enjoy perusing Glossary of ring theory. You may not understand much of it, but it's still fun to get an idea of the dizzying variety of different kinds of rings. -lethe talk + 15:55, 19 June 2006 (UTC)
"Yes" I understood. Everything below it was wasted on me. I particularly enjoyed learning that "for every number a there is also a number -a." The whole thing seems inherently contradictory. What would be a workable idiots' definition? --Dweller 16:33, 19 June 2006 (UTC)
- Well do you know what commutative means? It means that a+b is always equal to b+a. Do you know what distributive means? It means that a(b+c) is always equal to ab+ac. The existence of an inverse, which Meni was talking about, but you didn't understand because he used formal mathematical language, might be more simply stated: every number has a negative. So a ring is a place where the normal rules of addition and multiplication hold. More explicitly, there is a zero number which does nothing when added, every number has a negative which adds to zero with the number, addition is commutative, addition is associative so that a+(b+c) is always equal to (a+b)+c, multiplication is also associative so that a(bc) is always equal to (ab)c, and multiplication distributes over addition. There may also be a number 1. Which of those 7 properties is bothering you? All of them?-lethe talk + 17:16, 19 June 2006 (UTC)
- The most important thing to know about rings (with multiplicative identity) is that the integers, denoted by Z, are a sort of "universal example". What can we do with integers? Add, subtract (additive inverse), and multiply, but not (usually) divide. This limited collection of operations happens to fit quite a number of situations besides integers. Abstract algebra is used to provide a short list of what properties we can and cannot expect. Examples of rings include
- Integers modulo n, {0, …, n−1}, denoted by Z_n
- Boolean arithmetic is a special case, with n = 2, denoted by Z_2
- 0+0 = 0, 0+1 = 1+0 = 1, 1+1 = 0; 0×0 = 0, 0×1 = 1×0 = 0, 1×1 = 1; −0 = 0, −1 = 1
- Polynomials with real coefficients in a variable t, a_0 + a_1·t + ⋯ + a_d·t^d, denoted by R[t]
- For example, (1−t)+(1+t+t^2) = 2+t^2, and (1−t)×(1+t+t^2) = 1−t^3
- Real-valued functions of real-valued arguments, denoted by R^R
- ∀t∈R, (f+g)(t) = f(t)+g(t); (f×g)(t) = f(t)×g(t); (−f)(t) = −f(t)
- Matrices of n rows and n columns with real-valued entries, denoted by R^(n×n)
- With examples like these, it's not hard to see that the concept of a ring is useful. The matrix example, a ring with identity but non-commutative multiplication, is especially important. We also have more abstract examples, such as a cohomology ring.
- Of course, a specific ring may have additional structure. For example, the rational numbers (Q), real numbers (R), and complex numbers (C) are all rings that allow division, which most rings do not. Likewise, in the integers we have unique factorization, something not available in many rings.
- We also have algebraic systems, such as groups, that provide fewer operations than rings do. For example, we can add and subtract the vectors of a vector space, but we cannot expect to multiply two vectors to get another.
- When we study any algebraic structure such as rings, we also study functions between them that preserve the structure, generically called homomorphisms. Examples include
- Mapping even integers to zero and odd integers to 1, Z → Z_2
- Mapping polynomial a_0 + a_1·t + ⋯ + a_d·t^d to the function defined by f(t) = a_0 + a_1·t + ⋯ + a_d·t^d, R[t] → R^R
- Mapping every real number to the integer zero, R → Z
- Mapping integer a to the diagonal matrix with a in all diagonal entries, Z → R^(n×n)
- The first and last examples illustrate a general construction, in that there is exactly one ring homomorphism from the integers to any ring (with identity). It is defined so that 1 maps to the identity and a maps to the identity added to itself a times.
- It's probably fair to say that most of the interest in rings is not because ring theory itself is so fascinating; it's because we use rings as a tool for so many other parts of mathematics. Naturally, a ring theorist might disagree. :-) --KSmrqT 18:12, 19 June 2006 (UTC)
- Actually I meant not to use a language too formal... Anyway, if you are able to fully appreciate KSmrq's explanation that's great - But my guess is that you may lack the mathematical knowledge to understand some parts of it. It would be difficult to give a definition which is both correct and very accessible - Your suggestion above seems, to me at least, to more accurately describe a field. Anything more correct than that would require at least some elementary background in abstract mathematics. The thing is that a "ring" is an abstract algebraic structure, and if you haven't had much experience with abstract mathematical structures, it will be difficult to understand what a "ring" means. But perhaps a few simple examples would help: The Natural numbers are not a ring, since they don't satisfy the condition "for every number a there is also a number (-a)." - for example, 5 is a natural number but -5 is not. The integers, or whole numbers, however, are a ring - You can see they pass all the criteria I mentioned. So are the Real numbers - the ordinary numbers we are used to, and the Complex numbers if you are familiar with them. If you are familiar with the concepts Ksmrq mentioned, these are additional examples. -- Meni Rosenfeld (talk) 18:22, 19 June 2006 (UTC)
If you are "too dense" to understand a given article, see if there is a Simple English version of it. Russian F 23:39, 19 June 2006 (UTC)
P2P connection when both peers are behind a proxy server
[edit]Here the problem -
There are two computers, each on its own LAN; they connect to the internet via their respective proxy servers
Computer A -- Proxy server1 <Internet> Proxy Server2 ---- Computer B
Now, is it possible for computer A to connect to computer B and transfer data?
It is possible for them to transfer data using a common server on the internet, which is what I think Yahoo Messenger and Google Talk do to transfer text from one computer to another. They can both connect to a single server and transfer data, but if they have to transfer large files it will eat up the bandwidth on the server.
I managed to find a technique called 'NAT traversal', but I think it is limited to cases where computers connect to the internet using NAT and not proxy servers.
Musing: I am behind a proxy server, and BitTorrent clients like the official client, BitTornado etc. don't let me download even a single byte; at most they have an option stating they can 'proxy the communication to the tracker using the proxy', which I think is extremely trivial. But with clients like BitComet, they can proxy even my data, so I can seamlessly download anything, with great speeds. (I downloaded FC5 at speeds around 70-100 KBps, which is great on my connection.)
This indicates it is possible to achieve this, but how? —The preceding unsigned comment was added by 203.145.128.6 (talk • contribs) .
- You may want to ask this question at the Science reference desk.
- Don't forget to sign your posts using "~~~~" and use the preview button!
- Actually, the math desk also encompasses "computer science", so this is the right place. —Keenan Pepper 18:40, 19 June 2006 (UTC)
- If they are both HTTP proxies, there's no way AFAIK to do it; what happened with your BitTorrent clients is that they connected (using the CONNECT method, which is also used for SSL/TLS) directly to other clients which were not also behind a proxy. --cesarb 02:10, 25 June 2006 (UTC)
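For what it's worth, a rough sketch of what that CONNECT tunnelling looks like from a client behind an HTTP proxy (the proxy and peer addresses below are made-up placeholders, and many proxies only allow CONNECT to port 443):

```python
# Minimal sketch of tunnelling a TCP connection through an HTTP proxy using
# the CONNECT method.  PROXY and PEER are hypothetical placeholders.
import socket

PROXY = ('proxy.example.com', 3128)
PEER = ('peer.example.org', 6881)

sock = socket.create_connection(PROXY)
request = 'CONNECT {0}:{1} HTTP/1.1\r\nHost: {0}:{1}\r\n\r\n'.format(PEER[0], PEER[1])
sock.sendall(request.encode('ascii'))
reply = sock.recv(4096).decode('ascii', 'replace')
status = reply.splitlines()[0] if reply else ''
if ' 200 ' in status:
    # From here on, sock behaves like a plain TCP connection to the peer --
    # but only if the peer is directly reachable.  If both peers sit behind
    # HTTP-only proxies, neither side can accept the connection, which is
    # exactly the problem described above.
    sock.sendall(b'hello from behind the proxy\n')
else:
    print('proxy refused the tunnel:', status)
```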
changing site based on IP address
[edit]I have a blog and want it to display a fake blog-page for a certain IP address. That way a certain computer won't be able to see what's really on my blog. I would like to know how I can do this.
- The webserver knows the requesting IP address. If you use apache/php, then you can check the client's IP address in the REMOTE_ADDR variable of apache, then have php choose which blog page to display accordingly. -lethe talk + 19:52, 19 June 2006 (UTC)
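A sketch of the same idea in Python/WSGI rather than PHP (purely illustrative; the blocked address and page bodies are placeholders, and REMOTE_ADDR is the same CGI variable mentioned above):

```python
# Serve a fake page to one IP address and the real blog to everyone else.
# BLOCKED, FAKE_PAGE and REAL_PAGE are hypothetical placeholders.
from wsgiref.simple_server import make_server

BLOCKED = {'203.0.113.7'}
FAKE_PAGE = '<html><body>Nothing interesting here.</body></html>'
REAL_PAGE = '<html><body>The actual blog posts go here.</body></html>'

def app(environ, start_response):
    addr = environ.get('REMOTE_ADDR', '')
    body = FAKE_PAGE if addr in BLOCKED else REAL_PAGE
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [body.encode('utf-8')]

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()
```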
- you should know that if you're planning to fake out Google's bot, this is called "cloaking"; Google will figure it out almost immediately; bye-bye to any semblance of page-rank for you. You'll get to the bottom of the list, even under 404 pages. :(
- Also, don't forget that some ISPs don't give out static IP addresses to their subscribers, but rather dynamically allocate them one when they log on. So if you're trying to block a particular physical machine, make sure its IP address won't change anytime soon! — QuantumEleven 09:04, 20 June 2006 (UTC)
Goedel's theorems
[edit]Goedel's completeness theorem, if I am not mistaken, proves that any logically consistent statement can be proven in first-order logic. Goedel's incompleteness theorem, if I am not mistaken, proves that there exist mathematical truths unprovable from any set of axioms in first-order logic. But surely a mathematical truth is also logically consistent and can be proven using first-order logic? How are these two theorems reconciled?
- I may be very mistaken, but I understand that first-order logic is complete, but Gödel's incompleteness theorem only applies to axiom systems that give enough structure to do arithmetic (I vaguely recall that the proof of the incompleteness theorem depends on setting up a self-reference through Gödel-numbering statements, theorems and proofs ...). Madmath789 21:37, 19 June 2006 (UTC)
- Gödel's incompleteness theorem only applies to proofs based on recursively enumerable axioms. These unprovable statements can be proved in accord with Gödel's completeness theorem, but only using more axioms. Generally only recursively enumerable axioms can be used by human beings to reason in finite time, so these proofs from longer axioms are not useful. -lethe talk + 21:56, 19 June 2006 (UTC)
- The article Gödel's incompleteness theorems is quite confusing in how it keeps referring to first-order logic, but actually the theorems apply to systems in which you can formalize the natural numbers, including induction, which cannot be done in FOL with a finite set of axioms. The reduction of the Peano axioms to "First order arithmetic" requires an infinitude of axioms, and although they are recursively enumerable, Gödel's completeness theorem handles only finite systems. --LambiamTalk 00:03, 20 June 2006 (UTC)
- Gödel's completeness theorem handles only finite systems? I seem to be under the impression that the soundness and completeness theorems apply to any FOL. -lethe talk + 00:08, 20 June 2006 (UTC)
June 20
[edit]How do you solve this question?
[edit]I have a polynomial of the form ax^2 + bxy + cy^2 + d,
and have a few questions.
- Is this considered non-homogeneous or homogeneous?
- I know how to factorise the "homogeneous" part by dividing out by y and using the quadratic formula, but I don't know how to solve it with the d, mostly because I think it will have a form which I would prefer not to solve. :(
Thanks for any help.
x42bn6 Talk 01:43, 20 June 2006 (UTC)
- Maybe there's something I don't see, but just attacking it with a piece of paper, here's something. If you write down (Ax + By + C)(Dx + Ey + F) = ax^2 + bxy + cy^2 + d, that turns into six equations, which you can reinterpret if you think of them as being about three 2-vectors: u=<A,D>, v=<B,E> and w=<F,C>. Then you're just looking for two vectors, u and v, with lengths determined by a and c, whose dot product equals b, and a third vector w normal to both, with length determined by d. Any three vectors satisfying those conditions should give you a factorization of your polynomial.
- Did that make sense? -GTBacchus(talk) 02:06, 20 June 2006 (UTC)
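For what it's worth, the six equations can be cranked out mechanically; a small sketch of the coefficient matching described above (it assumes sympy, which is just one convenient choice of computer algebra system):

```python
# Expand (Ax+By+C)(Dx+Ey+F) and equate coefficients with ax^2 + bxy + cy^2 + d.
import sympy as sp

A, B, C, D, E, F, a, b, c, d, x, y = sp.symbols('A B C D E F a b c d x y')

product = sp.expand((A*x + B*y + C) * (D*x + E*y + F))
target = a*x**2 + b*x*y + c*y**2 + d

# One equation per monomial in x and y -- these are the "six equations".
for coeff in sp.Poly(product - target, x, y).coeffs():
    print(sp.Eq(coeff, 0))
```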
- The definition of a homogeneous polynomial of degree n says that every term has total degree n. In the given polynomial, term ax^2 has degree 2 in x, term bxy has degree 1 in x and 1 in y for a total of 2, term cy^2 has degree 2 in y, and term d has degree 0; the conclusion is obvious.
- The fundamental theorem of algebra is highly misleading, because it only applies to polynomials in a single variable. Most polynomials in multiple variables cannot be factored; they are called irreducible polynomials, and lead to the concept of an algebraic variety and the discipline of algebraic geometry. A polynomial in two variables describes an algebraic set in the plane, generally a curving shape. If such a polynomial of degree 2 can be factored, it must be a product of two degree 1 polynomials. Geometrically this means that the zeros lie on two lines. But any conic section, such as an ellipse, is equivalent to a degree 2 polynomial in two variables, so non-degenerate conics are irreducible. --KSmrqT 09:44, 20 June 2006 (UTC)
Monstrous moonshine
[edit]I badly want to understand monstrous moonshine. I'm very good at math and I already know the basics of group theory and complex analysis, but it's still tantalizing gibberish to me. Here's the bulletin for the math department of the university I attend: undergraduate and graduate. What courses should I take in order to understand monstrous moonshine? —Keenan Pepper 04:53, 20 June 2006 (UTC)
- You probably need to know about the classification of simple Lie algebras and their representations, which is a warm-up for Kac-Moody Lie algebras. Some group cohomology is nice for understanding extensions. Then you've arrived at current algebras. You should probably know a little bit about elliptic curves and modular forms for when they tell you that the terms of the j-invariant give you the ranks of the representations. The main result is I guess that the monster group is the automorphism group of some module constructed out of the Leech lattice. It's a long construction and I still don't understand what it has to do with elliptic curves, but the point is, you have to know all these topics. It's hard to find courses that cover all these topics, some of them are quite specialized, so I think you should be prepared for a lot of self-study. For example, I don't see any courses in that list that look like they'll cover Kac-Moody algebras. On the other hand, a lot of those special topics seminars have a lot of freedom in topics, and if people demand it, the prof might cover it (but he'll make the students do half the lectures). -lethe talk + 05:18, 20 June 2006 (UTC)
agriculture
[edit]changes in agriculture in the world till today
- Try our agriculture article. It has a section which might help you.-gadfium 05:56, 20 June 2006 (UTC)
how to phrase one-way (hash) function requirements rigorously.
[edit]I would like to state requirements for a proof in terms such as "This architecture (i386) cannot produce collisions in fewer than approximately 100 billion operations" on average (as a simple bound), so, for example, 100 seconds for a 1 GHz CPU, or less time if you use higher speed or multiple cores/nodes. I'd like to phrase this like this: "Here is an idealized computer; it's a Pentium 4 with 1 core running at 2^32 hertz and accessing 2^32 bytes of RAM" (I guess to idealize further we could assume the full RAM is full-speed cache, with zero latency and more bandwidth than the CPU can consume in a cycle). "Does there exist any digest algorithm (hash) that is proven not to be 'breakable' in fewer than 1,000,000 such processor-hours for a certain length?" (This question was answered above, and the answer seems to be "no".) However, the issue is that in fact the answer as I asked the question is literally "yes", since you can get two files with the same checksum/MD5 sum etc. in 0.1 seconds if you just bring back files from the "possible" ones (the pigeonhole principle guarantees colliding pairs exist). Of course, for short files (like 4-byte files) you can just brute-force it on the spot. So I want to rule this out (brute-forcing based on the small file length) and also rule out bringing in home-cooked files. How do I rigorously phrase my algorithmic requirements?
- Note that if you make the length of the desired collision string a part of the input, and ask for an algorithm (the "cracking algorithm") that, given a natural number n, produces a hash collision of length n, it's conceivable (I don't see why not, at least) that, for a given hashing algorithm, any such "cracking algorithm" requires some minimum order of running time, or that there's no polynomial-time cracking algorithm, or something.
- However, the hash functions I'm aware of actually keep a bounded amount of state information about the file they're hashing; thus, again by the pigeonhole principle, there are two strings of length 128 bytes (I think) such that if you append any string to both strings, you get the same md5sum. It's then trivial to build a linear-time cracking function: just produce the two strings, and append zeroes as necessary.
- Thanks, RandomP!
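A tiny sketch of the append-a-common-suffix point (the two "colliding" blocks here are placeholders; genuine 128-byte MD5 collision pairs exist but are not reproduced):

```python
# MD5 (like other Merkle-Damgard hashes) carries a fixed-size internal state,
# so two equal-length colliding inputs keep colliding after any common suffix.
# p1 and p2 are hypothetical placeholders standing in for a real collision pair.
import hashlib

p1 = b'\x00' * 128
p2 = b'\x01' * 128

def still_collides(a: bytes, b: bytes, suffix: bytes) -> bool:
    """True if a+suffix and b+suffix have the same MD5 digest."""
    return hashlib.md5(a + suffix).digest() == hashlib.md5(b + suffix).digest()

# With a genuine collision pair this prints True for every suffix;
# with the placeholders above it simply prints False.
print(still_collides(p1, p2, b'0' * 1000))
```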
- What RandomP said is correct, though I have to add that, since a hash function is supposed to reduce state information (to enable efficient checks), a hash that produces a state space larger than the original state space is fairly useless, though it would fit your requirements. In addition, collision resistance isn't the only requirement of a cryptographic hash function - it needs to be efficiently computable (which is why most hash functions use lots of XORs, as they are relatively simple to compute), and there are also other properties in the cryptographic hash function article you should read. --ColourBurst 19:48, 21 June 2006 (UTC)
Dimensions
[edit]My friend found that in two dimensions, there can only be two lines that form a perpendicular at one point. In three dimensions, there can be a maximum of three lines intersecting and forming perpendiculars with each other. So he said that by continuing the pattern: In four dimensions, there can be four perpendiculars, with time as the fourth dimension. I understand what he's saying but I don't agree. Is this true or not? --Yanwen 20:15, 20 June 2006 (UTC)
- It's true! There's a lot of background involved, but you can start by checking out Dimension (vector space) and the surrounding articles. Melchoir 20:26, 20 June 2006 (UTC)
- [added after edit conflict. Melchoir is, of course, correct]
- I assume you're making the requirement that all lines intersect in a given point.
- For the extent to which time can be, and is, considered a fourth dimension, see spacetime. However, if you think of n-dimensional space, as is usually done, as Euclidean space, I'm afraid your friend is correct: there are at most 4 lines that are perpendicular to each other (furthermore, whenever you're given fewer than four perpendicular lines, you can extend that to a set of 4 perpendicular ones!) in 4-space, and, in n-space in general, there are n of those (and the extension property still holds). This is closely related to the concept of an orthonormal basis. There is even an infinite-dimensional generalisation of Euclidean space (or many of them), called a Hilbert space. In that space, you can find infinitely many lines all perpendicular to each other.
- Hope this helps a bit?
- RandomP 20:31, 20 June 2006 (UTC)
- I still don't see how that is possible. --Yanwen 20:45, 20 June 2006 (UTC)
- I agree it's hard (or impossible) to visualise — ultimately, it might be helpful to think of four-dimensional space not as spacetime, but as "something like" three-dimensional space where there can be four orthogonal lines all going through a single point, but no more. The truly great idea Euclid had was to write down axioms that tell you how two-dimensional space behaves. You could then modify those axioms to get a definition of four-space, though there are other ways to do it, too.
- No matter which exact definition you use, you will then find out that a set of four lines like that exists. For example, if you think of four-space as the space of four-tuples with real entries (so elements of four-space look something like (1,2,-3,17.251), except you can substitute any other real number for the numbers used here), you can then write down what the four lines look like: the first one consists of all points of the form (x, 0, 0, 0), the second of all points of the form (0, x, 0, 0), and the third and the fourth I'll leave to you to figure out :-)
- RandomP 20:57, 20 June 2006 (UTC)
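A quick numerical check of those four lines (a toy sketch added here, assuming numpy; it only confirms that the coordinate directions of 4-space are pairwise perpendicular):

```python
# The four coordinate axes of 4-space as direction vectors; every distinct
# pair has dot product 0, i.e. the lines are mutually perpendicular.
from itertools import combinations
import numpy as np

axes = np.eye(4)   # rows: directions of (x,0,0,0), (0,x,0,0), (0,0,x,0), (0,0,0,x)
for i, j in combinations(range(4), 2):
    print(i, j, float(np.dot(axes[i], axes[j])))   # prints 0.0 for all six pairs
```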
- (After conflict) Are you confused by the progression, or by the notion of a 4th dimension complementing length/width/height? If the latter, it may help to remember that you don't have to consider such things in physical terms; any data can be represented similarly. For instance, a batter's count in baseball can be considered a two-dimensional space of balls and strikes, and the fuller state of a game might be a 4-D space of balls, strikes, outs, and innings. You should be able to see how 4 "lines" of the form RandomP laid out above are necessary to specify an exact state of a game (a point of intersection, if you will). — Lomn | Talk 21:01, 20 June 2006 (UTC)
- Actually, I'm confused by how the timeline can make a perpendicular with the other three dimensions. I thought that the fourth dimension was used to comprehend a changing/moving 3-D object?--Yanwen 21:19, 20 June 2006 (UTC)
- "Perpendicular" (actually "orthogonal"), with respect to dimensions, doesn't refer to angles. Rather, it means that the particular dimension x can't be represented by any combination of any of the other dimensions present. For instance, no combination of width and height conveys information about length, and no combination of the 3 conveys any information about time/duration. However, your "comprehend a changing/moving object" appears to be basically the correct interpretation -- time is the fourth variable needed to distinguish a paper cup now from the same cup five minutes later, whether or not I've crushed it (changing its traditional dimensions) in the interim. — Lomn | Talk 21:40, 20 June 2006 (UTC)
- (hooray for edit conflicts) Well, mathematically speaking, there is no canonical "The Fourth Dimension". See the article Fourth dimension, for example; although there's some unnecessarily provocative language in there, it's mostly right. It's just that since spacetime is four-dimensional (on human scales), it's helpful to use spacetime as an intuitive crutch when thinking about 4D problems.
- One way to think of perpendicular lines is that they aren't moving along with each other. So a line pointing north is perpendicular to a line pointing east, because if you go north you're not going east; but a southwest line is not perpendicular to an east line. Well, if you move along a time-oriented line, you're neither moving forward, nor up, nor to the right, or any other spatial movement. So the "timeline" is perpendicular to the lines pointing in those directions.
- However, you won't find the perpendicular argument I just made in any respectable physics book, since relativistic spacetime turns out to be very, very different from Euclidean 4-space. Probably the moral is not to take this "time=4D" stuff too literally. Melchoir 21:47, 20 June 2006 (UTC)
- It is a common misconception among non-mathematicians that "time is the fourth dimension". The way mathematics defines dimension is much more flexible than measuring independent directions in physical space. We also use the word "space" in ways that have nothing to do with ordinary experience. Nor do we need any notion of perpendicularity to talk about dimension. Our best modern theories of physics need all this freedom to formalize how our universe works.
- You've probably seen a bit of plane geometry, with point and lines and so on. Think about all the possible circles we can draw. Each circle has a center that can be anywhere in the plane, and it also has a radius. Therefore we can select a circle with three numbers, (x,y,r). Different numbers mean different circles. We call the collection of all possible circles a three-dimensional "space".
- For all possible spheres, we need four numbers, (x,y,z,r); the collection of spheres is a four-dimensional space. Neither the spheres nor the circles suggest a meaning for perpendicularity, yet each collection is a "space" with a dimension.
- Or choose one specific sphere. The collection of points on its surface is another kind of "space". We can select any point using two numbers, like latitude and longitude on the surface of the Earth; so this collection is a two-dimensional space.
- Going back to the plane, instead of circles look at all the possible lines. This collection turns out also to be a two-dimensional "space"; in fact, we can pair a unique line with every point in the plane. Geometers call this duality.
- Notice that most of these "spaces" are not made of "points", nor can we say they are "flat", nor do we have any obvious sense of what "perpendicular" would mean for them. Yet they all have a dimension.
- Now consider a grain of sand wandering through the galaxy. At different times it is in different places. Some other grain could be at a different place at the same time as the first. The collection of all possible times and places is, like the collection of spheres, a four-dimensional space. We can select a member of the space with four numbers, (x,y,z,t).
- Or think of a room with beams of light shooting through it. The collection of all possible light beams, essentially lines, is another four-dimensional space. Time is not involved, nor is perpendicularity.
- But why stop at four? Picture an interstellar spacecraft. Forget about time; imagine not only all the places it could be, but also all the different ways it could be turned. This collection of "configurations", so to speak, is a six-dimensional space.
- So free your mind. There is nothing sacred and mystical about four dimensions.
- What, then, is perpendicularity? Good question, but I think this reply is long enough! --KSmrqT 23:34, 20 June 2006 (UTC)
- I'm impressed by your reply! I was going to write something of that sort, but it's hard to put into words :-) —Mets501 (talk) 23:39, 20 June 2006 (UTC)
(stolen from Martin Gardner) A square has two diagonals, which intersect at right angles. This corresponds to the fact that a square is two-dimensional. A cube has four spatial diagonals (each of the eight corners is joined to its opposite corner) which intersect at right angles. Hence a cube is four-dimensional :). (Exercise: find the fallacy in this argument.) —Blotwell 02:53, 21 June 2006 (UTC)
- Do the diagonals of a cube really intersect at right angles? That doesn't sound right to me. For example, two diagonals of a cube are also the diagonals of a rectangle of dimensions 1 and √2. Diagonals of a rectangle are perpendicular only in squares. -lethe talk + 00:37, 23 June 2006 (UTC)
- I didn't want to spoil anyone's fun by spilling the secret, but perhaps it's stale enough now. Yes, the fallacy is not really in the argument, but in the premise. A plane through diagonally opposite edges of a cube cuts a face diagonally, which indeed gives √2 for one side of a rectangle, while the other side is a cube edge, thus of length 1.
- Cube cuts can be difficult to visualize; for example, it is possible to cut a perfect regular hexagon as cross-section. How is this possible, and what is the side length and area (assuming a unit cube)?
- Since we're discussing dimension, consider the following. The Platonic solids are the five convex regular polytopes that are possible in three-dimensional Euclidean space. Only three of these generalize to all dimensions. One of these is the cube; what are the other two? --KSmrqT 02:19, 23 June 2006 (UTC)
- So this "paradox" was counting on my inability to visualize diagonals of a cube? For shame. Though I admit I'm having a hard time visualizing your hexagon.
- As for the higher dimensional polytopes, I have a vague recollection of John Baez writing in his This Week's Finds column about how the regular polytopes fit into an ADE classification like the simply laced Lie algebras. Without knowing the details, I'd guess that the polytopes that generalize to all dimensions are the root systems that generalize to all dimensions as well; An, Bn, and Dn. But as for which might be which, I've no clues yet. -lethe talk + 02:44, 23 June 2006 (UTC)
- TWF62 says that An is the symmetry group of a regular n-simplex (the analogue of a tetrahedron) and Bn is the symmetry group of the "hypercube" and "hyperoctahedron". Doesn't mention what polytopes the Dn groups are symmetries of. -lethe talk + 03:01, 23 June 2006 (UTC)
- But of course, there is no distinct Dn in 3 dimensions. SU(4) = SO(6). -lethe talk + 03:08, 23 June 2006 (UTC)
- Symmetry applies to both the polytopes and the original "paradox": What is the name of the symmetry group of the cube? Angles between vertices of the cube equal angles between faces of what? --KSmrqT 12:05, 23 June 2006 (UTC)
Just to make this more fun, let's change the question a bit:
- In CP3, how many lines meet 4 given lines?
- In CP4, how many lines meet 6 given 2-flats?
- In CP5, how many lines meet 8 given 3-flats? How many 2-flats meet 9 given 2-flats?
In all cases, the given flats are "in general position".
Answer: 2,5,14,42. The last is of course the answer to the universal question. The questions and answers exemplify Schubert calculus, by the way.---CH 22:31, 21 June 2006 (UTC)
- Not only that, but your answers are consecutive Catalan numbers. —Blotwell 19:44, 22 June 2006 (UTC)
- Since this is ranging all over the place, I'd like to toss in something I've been wondering about that's along somewhat similar lines. It seems to me that any two-dimensional thing (say, a plane) can be described using the coding system that keeps track of a similar one-dimensional thing (say, a line). (BTW, as the discussion above demonstrated, I don't yet understand set theory or anything related to it, so I've probably flubbed a very basic idea somewhere.) Imagine a square centered on the origin of a two-dimensional graph of the plane. This is considered the unit level. Now, break this square into nine equal squares. Number these squares 0-8, with 0 at the center. Any of the squares just formed can, using the same pattern, be broken up to locate any point on the plane exactly, using an infinite decimal (well, nonimal I guess) expansion. Any point beyond the first square can be found by first extending the pattern outward by stages, using the central square as zero each time, until the point is contained, then narrowing in again. A decimal point could be used to keep track of order of magnitude. How would this fail to transform any pair of real numbers into an equivalent single (positive) real number (without any nines in it), from which the original pair could again be deduced? Black Carrot 17:02, 23 June 2006 (UTC)
- Oh, I forgot to mention the borders. They would be treated the same as on the number line - multiple equivalent expansions are acceptable (1.000_ vs 0.999_), but whichever turns out to be the most convenient could be made standard. Black Carrot 17:05, 23 June 2006 (UTC)
- See Hilbert curve and space-filling curve. Topology is essential. --KSmrqT 20:39, 23 June 2006 (UTC)
- Your argument basically shows that the plane, R^2, can be put into a one-to-one correspondence with the line, R - that is, they have the same cardinality. This is a surprising, yet known, fact. There are also simpler constructions for showing it - for example, if you have the pair (x, y), you can match it with the single number whose digits are those of x and y interleaved. For example, you can match (1357.83333..., 79.64444...) with 10305779.8634343434... (actually this causes some problems when sequences ending with 9's are involved, but that's the general idea).
- This shows you, again, how rough cardinality is when assigning sizes to sets. A seemingly larger set has the same cardinality as the smaller one. As always, the one-to-one correspondence given is very far from preserving the structure of the sets. If you seek such a correspondence which satisfies a few minimal requirements, you will fail. For example, if you see these sets as vector spaces over R, the above correspondence (or any other bijection) will not preserve vector operations. -- Meni Rosenfeld (talk) 13:26, 24 June 2006 (UTC)
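A toy version of that interleaving trick (a sketch only, working on finite decimal strings; the real construction acts on the full infinite expansions and has to be careful about trailing 9's):

```python
# Interleave the digits of two decimal strings, reproducing the example above.
from itertools import zip_longest

def interleave(x: str, y: str) -> str:
    xi, xf = x.split('.')
    yi, yf = y.split('.')
    width = max(len(xi), len(yi))
    xi, yi = xi.rjust(width, '0'), yi.rjust(width, '0')   # e.g. 79 -> 0079
    int_part = ''.join(a + b for a, b in zip(xi, yi))
    frac_part = ''.join(a + b for a, b in zip_longest(xf, yf, fillvalue='0'))
    return int_part + '.' + frac_part

print(interleave('1357.8333', '79.6444'))   # -> 10305779.86343434
```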
- Awesome. Black Carrot 14:51, 25 June 2006 (UTC)
June 21
[edit]math equation?
[edit]Is there a quick way to make a formula that tracks and records visitors to a website and compares them to other websites?--Bee(y)Ti 01:32, 21 June 2006 (UTC)
- I don't understand the question. What would this formula calculate, and what information would you need in order to use it? —Keenan Pepper 02:38, 21 June 2006 (UTC)
- Based on this user's other contributions, in particular some questions on other reference desks (e.g. [5]), I don't think this is a serious question. --LambiamTalk
- I don't think so. You would have to use some kind of computer language. --Proficient 20:16, 23 June 2006 (UTC)
I am sure there is a quick way to make such a function: previous_hits + 1 = current_hits. It is called a hit counter. However, if you want something specific, the particular programming language you plan to use would be required to formulate the function. -- Freebytes
Residue Theorem Application
[edit]We know that to find the coefficient of the principal part in the expansion of a complex function f(z), we can use the residue theorem. But z = x + iy, so y = 0 gives f(x). So can we use this residue theorem to find the coefficients of any function in case we need to split them into partial fractions?
Just in case: the residue of a root a of order m is R(a,m) = (1/(m-1)!) · lim[z→a] (d/dz)^(m-1) {(z-a)^m · f(z)}.
- Yes we can, see examples in Laurent series (not sure that's really what you're asking for, though). Conscious 05:25, 23 June 2006 (UTC)
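For a concrete instance of reading partial-fraction coefficients off as residues (a small worked example, not from the thread):

```latex
f(z)=\frac{1}{(z-1)(z-2)}=\frac{A}{z-1}+\frac{B}{z-2},\qquad
A=\operatorname*{Res}_{z=1}f(z)=\lim_{z\to 1}(z-1)f(z)=\frac{1}{1-2}=-1,\quad
B=\operatorname*{Res}_{z=2}f(z)=\lim_{z\to 2}(z-2)f(z)=\frac{1}{2-1}=1.
```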
Complex Equation
[edit]Hi, can anyone tell me which is the most complicated or longest equation? I know that complicated equations need not be the longest, but I am just curious which could be the longest equation in existence.
- How long do you want it to be? 2 × 1111111111 = 2222222222. 2 × 11111111111111111111 = 22222222222222222222. 2 × 111111111111111111111111111111 = 222222222222222222222222222222. ... We can make it to order. We can also make it as complicated as you desire. --LambiamTalk 07:55, 21 June 2006 (UTC)
What I am asking about is any textbook mathematical or physical equation. A very simple example would be E = m×c^2.
- Again, the answer is arbitrary, because you can (and do) combine equations to solve problems, so the resultant gets longer. — QuantumEleven 09:38, 21 June 2006 (UTC)
- If you just want some examples of quite complicated or long equations, try
- Drake equation - long but not complicated
- Maxwell's equations and Navier-Stokes equations - complicated but not long (as long as you use differential operator notation)
- Prime number formula#Formula based on a system of Diophantine equations - a system of 14 Diophantine equations in 26 variables whose solutions can be used to generate prime numbers - both long and complicated
- Gandalf61 10:25, 21 June 2006 (UTC)
The Lagrangian (and therefore also the equations of motion) for the standard model are so long in most notations that they can't fit on a single page. -lethe talk + 11:31, 21 June 2006 (UTC)
In celestial mechanics, the literal expansion of the disturbing function in terms of orbital elements is an ugly beast, even to low order in the eccentricities and inclinations. In other news, how do we not have an article on that? Melchoir 19:31, 21 June 2006 (UTC)
- Knuth's Concrete Mathematics mentions an ugly (but not very long) equation with multinomials as a curiosity in chapter 5.1, numbered equation (5.31) in the translation (but the equation numbers will probably differ in another edition). I don't want to copy it here, sorry. I also like this equation I've derived myself, though it's not so ugly (I hope I don't make a mistake copying it):
- where S is the Stirling number of the second kind.
- – b_jonas 18:01, 22 June 2006 (UTC)
- Many equations in tensor calculus (particularly those used in General Relativity) tend to be deceptively complex. There is one (which I can't remember off the top of my head, but which I have in notes elsewhere) that can be written in a very small space (using about as many symbols as, say, Newton's Gravitational Formula), but because there are tensor terms involved there are technically several thousand individual components. Confusing Manifestation 02:52, 23 June 2006 (UTC)
- It looks as easy as G = 8πT (see Einstein field equations), but understanding what G and T mean requires some effort. Conscious 05:30, 23 June 2006 (UTC)
- I'm reading The Elegant Universe, and according to Brian Greene the equations of string theory are so complex that no one has ever been able to write them down :) Conscious 05:19, 23 June 2006 (UTC)
Macroeconomics
[edit]Could you please help me with the following table for macroeconomics? I understand what each column represents but am having trouble with the math. If someone could complete the first row I should be able to figure out the rest from there. I know I, G, and X are constant. Does AE have to equal Y? Thanks
Y     C     S     I     G     X     M     AE
100
200
300
400
500
600
Given the following:
C = 50 + 0.75Y
M = 40 + 0.15Y
I = 30
G = 20
X = 100
- You seem to have a way to calculate everything except S and AE (Average Earnings ?). I don't know what these represent or how to calculate them. If you can provide more info, such as what each of those variables represent, then maybe we can help more. Here is the best I can do with the info you gave:
Y     C     S     I     G     X     M     AE
100   125         30    20    100   55
You will get a bit further by noting the basic macroeconomic identity:
Y = C + I + G + X - M
AE stands for Aggregate Expenditure and is defined as
AE = C + I + G
Savings S is defined as what is left over after subtracting consumption C from disposable income (Income less Taxes):
S = Y - T - C
You do not have an expression for taxes, so it does not seem you will be able to fill out the S column. In a problem like this, taxes might typically be described by an equation such as T = t*Y, where t is a tax rate.
Vickrey 18:42, 24 June 2006 (UTC)
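A small sketch that fills in the table from the relations above (it uses the definition AE = C + I + G exactly as given; the S column is left out because, as noted, there is no tax equation):

```python
# Fill the table rows from C = 50 + 0.75Y, M = 40 + 0.15Y, I = 30, G = 20, X = 100.
I, G, X = 30, 20, 100

def row(Y):
    C = 50 + 0.75 * Y
    M = 40 + 0.15 * Y
    AE = C + I + G            # definition used above; S would need a tax rule
    return Y, C, I, G, X, M, AE

print('{:>5}{:>8}{:>5}{:>5}{:>6}{:>7}{:>8}'.format('Y', 'C', 'I', 'G', 'X', 'M', 'AE'))
for Y in (100, 200, 300, 400, 500, 600):
    print('{:>5}{:>8.0f}{:>5}{:>5}{:>6}{:>7.0f}{:>8.0f}'.format(*row(Y)))
```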
macroeconomics
[edit]Did anyone have any luck on my macroeconomics question?
- Yes, there was a reply. Please look again, and respond if that isn't what you need. Notinasnaid 15:52, 21 June 2006 (UTC)
"Half-square numbers"?
[edit]Has any name been given to the numbers x^2 + x (x integer)? I am interested in them because (x + .5)^2 = x^2 + x + .25, and these "half-integer squares" can also be used in Fermat's factorization method a^2 - b^2 = N to produce factors (a-b) and (a+b) of even numbers; as you can see (a-b) and (a+b) are still integers, with both a and b half-integers. --Walt 15:42, 21 June 2006 (UTC)
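Since the question describes a concrete procedure, here is a rough sketch of it (toy code added for illustration; it multiplies everything by 2 so the half-integers become odd integers, and for some inputs it only returns the trivial factorization 1 × N):

```python
# Fermat-style factorization of an even N using half-integer a and b:
# write 4N = (2a)^2 - (2b)^2 with 2a and 2b odd, so that N = (a-b)(a+b).
import math

def fermat_half_integer(N):
    assert N % 2 == 0, "this half-integer variant targets even N"
    M = 4 * N
    t = math.isqrt(M)
    if t % 2 == 0:
        t += 1                       # 2a must be odd
    while True:
        diff = t * t - M             # candidate for (2b)^2
        if diff >= 0:
            s = math.isqrt(diff)
            if s * s == diff:        # here s = 2b, automatically odd
                return (t - s) // 2, (t + s) // 2   # the factors a-b and a+b
        t += 2

print(fermat_half_integer(6))        # (2, 3)
print(fermat_half_integer(10))       # (2, 5)
```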
- The numbers of the form (x^2+x)/2 are the triangular numbers. I don't think that the numbers that you get by multiplying all of them by 2 have a special name. Kusma (討論) 19:36, 21 June 2006 (UTC)
- Actually, they do: Pronic number. Chuck 21:30, 21 June 2006 (UTC)
Thanks guys. Any opinions on the appropriateness of cross-referencing the article to/from perfect square? Or is the relationship only obvious from the problem I'm working on? --Walt 12:42, 22 June 2006 (UTC)
- As no one has mentioned this, there's an easy way to find out if there's a name: just calculate the first few values and search in Sloane's OEIS. It says "Oblong (or pronic, or heteromecic) numbers: n(n+1)" and then later " The word "pronic" (used by Dickson) is incorrect. - Michael Somos. According to the 2nd edition of Webster, the correct word is "promic" - Richard K. Guy (rkg(AT)cpsc.ucalgary.ca)". – b_jonas 17:37, 22 June 2006 (UTC)
- I'd recommend not a cross-reference, but a category might be appropriate: Mathworld suggests the term figurate number. (Also, Mathworld agrees with "pronic"--if it was just an error for "promic" at one point, I think it's become pervasive enough to be called correct now.) Chuck 21:30, 22 June 2006 (UTC)
June 22
[edit]e^x - e^(-x) = 3
[edit]How do I solve this? 88.153.89.18 09:52, 22 June 2006 (UTC)
- Multiply through by e^x, then solve as a quadratic in e^x. EdC 10:35, 22 June 2006 (UTC)
- Or if you were really clever you'd recognize the left-hand side as 2 sinh x. --KSmrqT 13:25, 22 June 2006 (UTC)
- Yes but evaluating the inverse hyperbolic functions is just the same as solving the quadratic and taking logs anyway. Except that maybe your calculator does it for you. Arbitrary username 16:10, 22 June 2006 (UTC)
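Worked through, as a quick sketch of the substitution suggested above:

```latex
e^{x}-e^{-x}=3 \;\Longrightarrow\; e^{2x}-3e^{x}-1=0
\;\Longrightarrow\; e^{x}=\frac{3\pm\sqrt{13}}{2}
\;\Longrightarrow\; x=\ln\frac{3+\sqrt{13}}{2}\approx 1.1949
\quad(\text{the minus sign would make } e^{x} \text{ negative, so it is rejected}).
```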
YALQ
[edit]Yet Another Linux Question... :)
I will soon take the plunge into Linux, and am planning to set up a dual boot Ubuntu / Windows XP system for the time being, so I can experiment with both OSs, and hopefully use the advantages of each as best I can. I have some Linux experience (from working with it at the lab), but not a huge amount.
My question: what do you recommend as filesystems? I have been trawling through the Wikipedia articles on the various filesystems to see what filesystem is compatible with which OS. I am trying to achieve a compromise between performance and compatibility, and will obviously need at least two different ones...
FAT is readable and writeable by both Windows and Linux, however, I detest it with a passion (I have long since stopped counting the number of defrags I've had to do on it), plus, the filesize and volume size limitations will probably come back to bite me. I would prefer to avoid it if I can.
How good is read / write support for NTFS in Linux? (the external links at the bottom of the article point to several projects, but I must admit my techie knowledge is insufficient to judge them).
Conversely, it seems that ext2/ext3 read/write support seems to exist under Windows (again, information gleaned from the articles) - does anyone have any experience with this?
And, to close, a somewhat more basic question - in your experience, how much do you find yourself needing read/write access between your different OS's partitions on a multiboot system? Do you have any advice for me in this regard? Thanks muchly in advance! — QuantumEleven 12:23, 22 June 2006 (UTC)
- NTFS read support is just fine with newer linux kernels. I have never found NTFS write support to be usable. Does anyone have any good experience with this? Also, I have never heard of ext2/ext3 support for Windows, so obviously I have no experience with that. I personally like to use ReiserFS under linux because I like the things I've read about it. I am probably not expert enough to actually have a qualified opinion about which FS is the best, but I can attest that startups after improper shutdowns are very fast with ReiserFS. I think that would be true with any journalled filesystem. I will also note that I think ext3 doesn't usually have full journalling enabled by default, it's somehow not a full-fledged journalled FS, despite what its ads say. That's what I've read, anyway, so I never used it. And as far as the need for write support, for me I've found that it's very possible to do without it. Usually I get myself comfortable working with one platform and that's where I get all my work done. Only once in a while do I need something that can only be done on the other side. If that requires writing a file, then I have to do something ugly like email it, but that happens seldom enough that I don't mind. A slightly less ugly solution might be to give yourself a FAT partition of a couple megs for those rare OS swapping occasions. I do keep my MP3s on the NTFS partition, because I always need those, whichever OS. HTH. -lethe talk + 16:18, 22 June 2006 (UTC)
- I've heard that fuse allows for good write support (userspace and all that). I'm Feeling Lucky gives me this - look under the "Updates" section for some stuff about fuse. Real Googling will probably give you better results. I haven't used it, as I store all my music on my ext3 partition, but I know it works a lot better than the kernel driver (which, incidentally, they have stopped marking as experimental - even though it doesn't work well). - Braveorca 23:02, 22 June 2006 (UTC)
- My recommendation is three partitions: an NTFS partition for Windows, an ext3 or XFS partition for Linux, and a FAT32 shared partition. The preferred size for the FAT partition depends on what you'll be using it for. If it's just going to be used to shuffle files between Linux and Windows, make it 4GB. If you're going to store your porn collection or your MP3s on it, make it as large as possible, while still leaving enough room on the Linux and Windows partitions for software. --Serie 19:28, 22 June 2006 (UTC)
- Thank you everyone! — QuantumEleven 16:52, 26 June 2006 (UTC)
Finding bicliques
[edit]Hello, How do we easily find a biclique in a bigraph, after applying bigraph crossing minimization heuristics (such as barycenter) to reorder the two vertex sets of the bigraph? This paper "SPHier: Scalable Parallel Biclustering Using Hierarchical Bigraph Crossing Minimization" suggests that we use Breadth First Search after this reordering procedure, but I'm not sure how exactly this is achieved. In their example Figure 1(C), supposing we started on node Y, BFS (ignoring loops) will yield nodes A, B, W, X, C, D, Z. I'm not entirely sure how we extract the bicliques {A,B} + {Y,W} and {B,C} + {W,X}. I'm sure the answer must be pretty obvious, but I'm missing something here.
Many Thanks ! --213.22.236.27 20:55, 22 June 2006 (UTC)
- You are referring to a specific paper, and apparently one not in a journal at that; you could help us help you by providing a link to the paper. I'm assuming this (PDF) is it, since page five corresponds to your description. The authors are W. Ahmad, J. Zhou, and A. Khokhar. --KSmrqT 16:52, 23 June 2006 (UTC)
- It's exactly that one I'm having trouble with. I should have provided a link directly to the pdf. Thanks! 194.65.141.26 17:16, 23 June 2006 (UTC)
June 23
[edit]the maths of Time
[edit]Hi folks; I am writing a dissertation on the nature of Time. Why ? Because we know nothing about it.
I have written about the human perceptions of Time, and have compiled 7 axioms. I now need the assistance of a mathematician to create and formulate the maths.
So this is more of an open request for assistance than a question. I live in BC, Canada.
Kind regards, Bruce >>>>>
- Can we assume that you have read about time and sense of time? Also, it may be helpful to a responder if you list your "7 axioms". --hydnjo talk 15:58, 23 June 2006 (UTC)
- "Why ? Because we know nothing about it." Sounds like quite a dissertation. Actually, you might want to read about special relativity to see that there are some things that we know about time. (Cj67 19:04, 23 June 2006 (UTC))
- Thank you both.
Yes I have read all of those references (and their sources), many times. Interestingly, Albert said very little about Time itself, other than to clearly define the (human) ways of measuring simultaneity and the consequences arising therefrom, across distances and with relative speeds of observers. But, Time is not measured by humans. Our clocks simply emulate time with great precision. Our science knows nothing about it. We need to sit back and think again. Cheers, Bruce >>>>>
- I'd like to point out that we (Bruce, Immanuel, and me) also know nothing about space. --LambiamTalk 20:31, 23 June 2006 (UTC)
- "Space is big. You just won't believe how vastly, hugely, mind- bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space." --LarryMac 20:44, 23 June 2006 (UTC)
- I too would be very interested in seeing these axioms, but I wouldn't much trust the article sense of time, it doesn't strike me as very accurate. While psychophysical experiments examining perception of time (e.g. interval production, duration estimation, etc) abound, our knowledge of the mechanisms underlying that perception is very incomplete. From the papers I've read it appears that there are a number of different regions involved in various aspects and timescales of time perception. Personally I expect any set of axioms relating to this subject to be a matter of philosophy and not neuroscience or math, given our current knowledge. Still, as I said, I'd be interested in seeing these axioms. 128.197.81.181 21:49, 23 June 2006 (UTC)
- The dissertation, including axioms, is presently about 15 pages. I cannot very well post it all here can I ? I am seeking assistance from willing mathematicians; please contact --email removed-- .
- Are you concerned with physics of time? Philosophical aspects of time? You said you came up with some axioms of time, may be you want to build some kind of logic system? (Igny 22:56, 23 June 2006 (UTC))
- As a logged-in user you could post all 15 pages on a user sub-page. --hydnjo talk 23:02, 23 June 2006 (UTC)
- As a student I studied Logic, Philosophy and Philosophy of Science at Royal Melbourne Institute of Technology (now a University). At present, I would prefer it if a suitably qualified mathematician(s) would take an interest in this quest. Let's face it, we have absolutely no idea what Time is. That worries me because everything comes from the future.
Yes, I am concerned about the physics of Time; the philosophy will take care of itself as we develop the mathematics. OK, I don't know whether it will be as simple as F=ma or it will be vastly more complex. Currently, I believe that there is some linkage with String Theory. Time-lines connect everything that we can perceive. Cheers, Bruce --email removed-- >>>>>
- Have you looked at Arrow of time? (Igny 23:34, 23 June 2006 (UTC))
- Samuel Beckett was right: "Time like a last oozing, so precious and worthless together". JackofOz 23:43, 23 June 2006 (UTC)
- Thanks to all. Jack, thank you for that lead (Arrow of time); I should add a few of those examples to my dissertation section (The vector of Time). I am a Melbourne expat; my wife and daughter live there. Cheers, Bruce --email removed--
>>>>> I am now a "logged in" user with the moniker "Time'sup". I am willing to post the draft dissertation as suggested by Igny. Perhaps you folks can point me in the right direction. Cheers, Bruce >>>>>
- I removed the email address you included in your last three posts. You definitely don't want to go posting your email address here unless you like spam. To post the draft on your user page, just click on the red link above that says "Time'sup" and post the content on that page. That's your user page and you can put whatever you like there. Expect comments on your user talk page (the discussion tab when you go to your page). Also, I for one tend to disagree with everything, so expect me to not necessarily agree with your claims. :) 128.197.81.181 17:22, 25 June 2006 (UTC)
- Well, thank you 128.197.81.181, whoever you are. I will go ahead and post the updated dissertation. And yes, I read the advice regarding appending one's email address, but you missed the point that I wished for contact from an interested mathematician. I am not overly bothered by the moronic junk emails.
Kind regards, Bruce bnb at ahausa.com Time'sup 01:34, 26 June 2006 (UTC)
- Not sure whether I fit your definition of a "suitably qualified mathematician" - I have a mathematics degree - but FWIW, here is some feedback on your seven axioms:
- Axiom 1 - Time exists independently of human perceptions.
- Probably a pre-requisite for any quantitative model of time, but it doesn't place many constraints on that model.
- Axiom 2 - Time is infinite; It neither started nor will It end.
- Okay as an axiom, although it appears to be at odds with some of the cosmological evidence.
- Axiom 3 - At any one place (in time and space) Time moves exclusively from the future into the past.
- But at "one place in time and space", time co-ordinate is fixed, so how can time "move" ?
- Axiom 4 - Time is comprised of an infinite number of time-lines.
- Is this a definition of "time" in terms of time-lines ? If so, then you need to give a definition of a time-line.
- Axiom 5 - Events that occur on the same time-line are always related events.
- This is very vague. It could be a definition of what "related" means when applied to events; it could be a definition of a "time-line"; or it could be a proposition about "related" events, which needs to be proved. Sounds as if your "time-line" might be similar to a world line, but it is impossible to be certain without more clarity.
- Axiom 6 - Given certain constraints (of physics), events on one time-line can trigger events on other, adjacent time lines.
- You need to define what "adjacent" means when applied to time-lines, and what "triggers" means when applied to events.
- Axiom 7 - The Universe cares nothing about human events.
- Possibly true, but not a mathematical axiom.
- Bottom line - your "axioms" need a lot of tightening up before anyone can even begin to translate them into a mathematical model. Hope this helps. Gandalf61 14:52, 26 June 2006 (UTC)
- Thank you Gandalf61. Yes, the axioms are deliberately tentative so as to not approach their final definitions until sufficient lateral thinking has accumulated - that may take months. The reason for Axiom 7 is to rebuff the concept that Time is actually measured by humans.
Cheers, Bruce Time'sup 18:07, 26 June 2006 (UTC)
Fast division algorithms?
[edit]I'm looking for a way to divide one multiple-precision integer by another. One of the things I was looking at was using a number-theoretic transform, but I can't figure out how to get the remainder separate from the quotient. Plus, it blows up if one of the coefficients in the number-theoretic transform of the quotient has a 0 in it. Can someone help? --Zemylat 17:11, 23 June 2006 (UTC)
- You should check out GNU MP Library (Igny 22:44, 23 June 2006 (UTC))
- Agreed. Also, consider the software and papers of David M. Smith, especially the paper A Multiple-Precision Division Algorithm. --KSmrqT 23:27, 23 June 2006 (UTC)
- I once thought about doing it the number-theoretic transform way, and I concluded that it doesn't work. I might be wrong though. If you want to write it yourself, a simple way to do it using only multiplications is by Newton's method. It's got quadratic convergence, so your complexity for computing a reciprocal is about O(M(n)), where n is the number of bits, assuming that multiplication of n-bit numbers is done in time M(n). Dmharvey 01:13, 25 June 2006 (UTC)
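To make the Newton idea concrete, a deliberately naive sketch (added for illustration; real libraries such as GMP do careful error analysis, work block-wise, and manage precision far better):

```python
# Integer division via a Newton-iteration reciprocal (sketch only).
def recip(d, k):
    """Return floor(2**(2*k) / d) for a k-bit d, i.e. 2**(k-1) <= d < 2**k."""
    if k <= 8:
        return (1 << (2 * k)) // d                 # tiny base case: divide directly
    h = (k + 1) // 2
    x = recip(d >> (k - h), h) << (k - h)          # half-precision estimate, rescaled
    # One Newton step x <- x*(2 - d*x), in fixed point, roughly squares the accuracy:
    x = (x * ((1 << (2 * k + 1)) - d * x)) >> (2 * k)
    while (x + 1) * d <= (1 << (2 * k)):           # small final correction
        x += 1
    while x * d > (1 << (2 * k)):
        x -= 1
    return x

def divmod_newton(a, d):
    """Quotient and remainder of a by d; assumes a < d*d or so (sketch only)."""
    k = d.bit_length()
    q = (a * recip(d, k)) >> (2 * k)               # first estimate of a // d
    while (q + 1) * d <= a:                        # adjust by a step or two
        q += 1
    while q * d > a:
        q -= 1
    return q, a - q * d

print(divmod_newton(10**40 + 12345, 10**20 + 7) == divmod(10**40 + 12345, 10**20 + 7))
```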
- The definitive reference here is the second volume of Knuth's The Art of Computer Programming. – b_jonas 09:13, 26 June 2006 (UTC)
Bootable USB Drive
[edit]I've done my basic google searching, but my only finds were way over my head. I currently have a SanDisk Cruzer Mini with 1.0 GB, and I'm interested in making it bootable.
I understand how to switch the order in BIOS, but I think there are some needed components to go on my stick, something about bootsectors?
I would be interested in doing this on a computer with XP, if that matters. --134.134.136.4 22:52, 23 June 2006 (UTC)
- This page seems relatively straightforward, but I speak geek as a second language. If there are specific steps on the linked page that you don't understand, post again with the particular item you have a question on and we'll go from there. --205.143.37.68 19:11, 27 June 2006 (UTC) (That was me, got logged off) --LarryMac 19:30, 27 June 2006 (UTC)
Just a few things: I don't have a floppy drive, didn't think I'd ever have use for floppies again. Also, I'm afraid to format something like a C drive without some sort of confirmation that it is the correct C drive. I'm not going and formatting my only system drive here, right? It's a lot easier and safer for me to format the drive from the Windows GUI with a simple right click. =P. Any Suggestions? -- 134.134.136.4 20:41, 27 June 2006 (UTC)
- If you don't have a floppy, do you have a bootable CD? If not, you should probably make one of them first. Regarding the safety of your hard drive, there are several steps that address this concern -- Methods 2 and 3 under step 1 should be more than sufficient. Subsequently, step 7 exists for safety: "This step is just to verify that the C: drive is actually the primary partition on the USB Drive."
- You might be more comfortable getting a local geek to do this for you; they can usually be bought for a pittance in snacks and/or beer. --LarryMac 15:47, 28 June 2006 (UTC)
- I'd just like to say (to LarryMac), thanks for the link; I've added it to the page Live USB which I'm working on. MichaelBillington 00:51, 30 June 2006 (UTC)
June 24
[edit]a message written in math/english
[edit]There is a math code at http://inthemath.com and I have two questions: 1) Is it a math code? 2) Did I make it up? —Preceding unsigned comment added by Lightprize (talk • contribs) 22:23, June 23, 2006
- First, I have no way of knowing if you are the person who created that website. If you instead mean "is it valid?" in some sense, then that's more or less the same as the first question, which I take to mean "is it mathematically significant?". I believe the answer to this question is "no; you can derive patterns from anything". In particular, with a small sample size (the English words "one" through "ten"), all sorts of patterns can be found that do not necessarily have any meaning.
- What would be significant, at least from a linguistic standpoint, would be if the names of numbers alternated ending in a vowel and a consonant, or had an even number of letters precisely whenever the described number was even. (Neither of these is true for English, of course.) To get real mathematical significance, you'd have to look for something in, say, the names of the prime numbers. But even then, if you found something it would be more relevant to the language than to math itself, as the numbers exist entirely outside of the language used to name or describe them. It would, for instance, be quite interesting if the sequential perfect numbers contained one, then two, then three, etc., vowels in their names. That, if it continued forever, would be evidence that knowledge of the perfect numbers was involved with the selection of the names, or else that a similar construction rule underlay the perfect numbers and the English number names.
- Much more interesting from a mathematics-of-language (rather than language-of-mathematics) standpoint are things such as Berry's paradox and Richard's paradox, where the nature of any language describing mathematics is explored, rather than the idiosyncrasies of one particular set of words (e.g., English). Hope this helps. --Tardis 02:52, 24 June 2006 (UTC)
- Well, we don't know if it was you, but whoever did that has a career ahead of them in numerology; I'm sure we can expect great things out of him or her in the field of proving that the next Secretary-General's name indicates that he or she is the Antichrist, or some such. - Braveorca 03:33, 24 June 2006 (UTC)
- Actually, as I was reading through the first section and then briefly skimming over the rest of them, I was quite reminded of the Time cube, especially in the last paragraph. If the Lightprize above is indeed the person who wrote this page, he or she needs seriously to consider making it intelligible, because it's too hard to follow. Maelin 14:43, 24 June 2006 (UTC)
That's a lot. First, whether I did it or not is a waste of time to argue about. If I didn't, the law will handle it.
I asked if it fit the description of a code, not whether or not "YOU" thought it was valid - you ain't seen the rest. But are you saying that if it were more complex it would be okay? I think that some say it can't be anything "because" it is too simple.
And I guess you are saying that I did find it, and that I didn't make it up? Whether it is valid in your eyes or not.
So I did find a code, didn't I?
- I will ignore all but the last line. These are words of a particular language that evolved over time from the way people used them. Yes, you found a pattern, but just because there is a pattern doesn't mean it actually means anything at all. If I keep track of the times I drink coffee and find that all of the times end with an odd number of minutes, then I have found a pattern, but does it mean anything? Patterns are only useful when they reflect the underlying process that gives rise to the observables and when they can be used to make predictions. Does your pattern tell you that the word for eleven must necessarily have 6 letters? Does it say that the next number after nine hundred and ninety-nine must have 11 letters? What does it say about the number words in other languages? If nothing, why only English? This is basically numerology, not mathematics or science. 128.197.81.181 16:21, 24 June 2006 (UTC)
Predictable? 12 and 3 were predicted. At least that is what I was looking for. And I would not have been able to find them if they were not there.
I have the advantage of having seen more of it than you have. From that, I say it has meaning, and I have provided more than enough. I didn't say it would repair the ozone layer, or bring home those MIAs from Vietnam. So don't expect everything. But can you imagine "if" it were a message, being ignored because the receiver didn't like how it was written?
"If." You see I am not married to the thing. But I am giving it the benefit of the doubt. Nothing I have heard has shown me why I shouldn't. I will keep looking till then, while I try and deliver what it does say.
- You do realize that this is precisely what numerology is all about, that is, looking for hidden messages in patterns of numbers. So your idea is subject to the very same criticisms. However, don't let this stop you, you might as well find something of use there, even though I doubt that...(Igny 20:48, 24 June 2006 (UTC))
- Maybe there *is* a pattern! The first important number found on that page is 12, right? But if you reverse the digits you get 21. Now, clearly 3 is an important number and 3 divides both 12 and 21. But here's the important part: In the "who sent" message, the word "who" has one word above it and one word below it that are not part of the message. "Sent" on the other hand has a number above it but not one below it, so we have to add one extra number: 11. So: 12 21 3 11... look familiar? Maybe you should check the alphabet.. letter 12 is L, letter 21 is U, letter 3 is C, and letter 11 is K. Think about *that* one. (Sorry, couldn't resist). 128.197.81.181 20:56, 24 June 2006 (UTC)
Those 3 words that you are talking about are 1, 5 and 6 and they add up to 12, that makes them part of the original code.
But do you see where you have gone? I had to look up numerology after it kept popping up in your writings.
The code I am talking about says nothing about "magic numbers" or predicting the future, or getting insight into anyone's personality.
You are trying to make it harder than it is, I said I found a math code, I did. The patterns are there, "in math." The number was 12; "what if I reversed it," you say? Why would I want to do that? I don't know anything about the meanings of numbers, other than what they add up to.
Look back at the way you sound, and the way I sound, which one of us is talking about numerology? You are trying to feed that to me, but you have seen the numbers and they are "invincible in their simplicity," why make them hard? Talk about what is there, not what you imagine should be there.
- Ok. As I read your webpage, it divides itself into two parts in my head (plus the ending paragraph where you go off about intelligent design). There is 1. the "who sent" part and 2. a series of poorly described patterns you have found in the *words* for numbers in English. It cannot be stressed enough that the simplest explanation for these words is that they are arbitrary letter sequences that have been chosen to represent the cardinality of small collections of objects. No one person chose these words and so the simplest explanation is that no one hid a code in the words. Human brains have evolved to find patterns because they considerably simplify the information we have to store to survive in the world. The fact that you can find a pattern in an arbitrary collection of words reflects this. If you prefer a mathematical argument, see Ramsey theory. I do not doubt that the patterns you describe exist: they are clear to see. I do doubt that it has any mathematical or scientific utility at all. Said again: I do not doubt that your observations are correct, I do doubt that your interpretation of the observations is the most reasonable one possible. Finally, as a side note: your arguments may be more convincing if words weren't randomly colored on the webpage in a large font and it didn't contain a reference to a Biblical figure using a laser beam. Do not let me discourage your research, but do be prepared for criticism. 65.96.221.107 01:24, 25 June 2006 (UTC)
That may be the simplest explanation, but it is not the most interesting. What if I thought it wasn't what it seems, and it was? I have to take it at face value, and then be proved wrong. The references I "added" were metaphors, and they sound mean because people were being mean to me. I asked a question about what I found and I got something else. I believe in a designer with evolution as the design - no more.
I looked at Ramsey Theory, and a couple of other theorems, and it occurred to me that science is addicted to complex math. And while I have no doubt about the success of scientific thinking, few outside science can understand it. I know I don't. Science is looking for a formula in math to "unite the forces" so they are looking for a message/code in math, and "know" it is there. How did it get there? And if there was a message/code meant for the "masses" (however it got there), would it be left to science to interpret? Or would it be easy enough for all to understand it? Like the simple, consistent patterns at my site? —The preceding unsigned comment was added by Lightprize (talk • contribs) .
- What if six turned out to be nine? I don't mind, I don't mind... KWH 07:01, 26 June 2006 (UTC)wh
The question was: since science "knows" that the unity formula is there in math (and I believe them), how did it get there?
Oh wait, I'm sorry. The mathematics behind the four of them would match up into some type of an equation, whatever it was. I was not trying to be funny.
- As best I can decipher your words, the answer might best be, "Because math, as it's been built, is descriptive." It's intended to mimic the natural order, whatever that turns out to be (quantum mechanics, counting, etc). It's just a matter of finding a system that mimics it properly, which is harder than it sounds. That's the unification you're talking about - the system we now use isn't enough, we need to develop a more descriptive one. Counting the number of letters in mostly arbitrary English words isn't going to do that, though it might give a linguist some fascinating dynamics to improve his field with. Black Carrot 17:08, 26 June 2006 (UTC)
I understand what you are saying, but it seems here that not only is science addicted to complex math, but it also has to communicate in the most complex language it can. A simpler way to say what you did would be that because the forces are united in nature, their math must come together in a math equation of some kind, once the method is found. I was looking for a simple code in math, since everything else was done. I found some patterns, and then some more. I thought they could not be anything, so I checked out the names in other languages, and nothing. So the code was unique. People have looked at where the numbers came from and dismissed them saying they can't mean anything. Where would science be if researchers listened to what people "thought" about what they were doing, without "proving" what they said?
Are the names of the numbers in English arbitrary or fact?
- To answer the last one first, both. It's a fact that the words are spelled that way, but it's mostly arbitrary that that's the way they turned out. I took a long time to say what I said because it had been said already, faster, but you'd had a lot of trouble grasping it. Yes, science has a lot of complexity, because it's a complex world. That doesn't mean they're "addicted" to it. In fact, much of mathematics (and by extension, the math used in science) is a quest for brevity and elegance. It is also a quest for accuracy, however, which won't come from applying the arrangements of letters to quantum physics. Black Carrot 16:33, 27 June 2006 (UTC)
So, the names in English are a fact then.
I don't see where I had a lot of trouble, but it took a lot of time for those there to say I found the patterns I did. And I thought it was a simple question. Whether or not anyone "thought" the numbers meant anything. Only I (right now) see where those numbers go. And time will tell.
Accuracy should also be sought in conversation. At which time did I apply the arrangements of letters to quantum physics? Stick to what the numbers say instead of what you "imagine" them to say.
Would you say that there is a certain "brevity and elegance" to the code at inthemath.com? Brevity and elegance is what helped me to see them for what they are.
- Sorry, but I saw neither brevity nor elegance. --LarryMac 15:50, 28 June 2006 (UTC)
- (edit conflict) I did. However, I only mentioned that because you claimed that "science [is] addicted to complex math", which is untrue. My problem with your idea is on the accuracy side. I mentioned quantum mechanics because you mentioned the "unity formula of science", which I took to be the Unified Field Theory or the related Theory of Everything. Perhaps you didn't have anything that specific in mind. And last, people skipped the actual content of the site when possible because it doesn't matter whether two does have three letters in it - that's indisputable. It matters whether that can be applied to, as you claim, contacting other races, solving the mysteries of god, or uniting the forces of the universe under one banner, none of which it can. Oh, and don't get snippy with me. And start signing your posts. Black Carrot 16:00, 28 June 2006 (UTC)
How long did it take you to get thru the site? Quick, right? That is brevity. All those symbols and numbers did what they did without fuss, and without anything being left out, or left over, that's elegance. The code will soon show a connection to the theory for everything, but not by counting the letters. Time will tell, and not a lot of time either. Accuracy, again. I asked what science was looking for in a message, and patterns and symbols, in math, is what they are looking for. [Lightprize.]
I know this is not the right place for this but, you know how children take on the characteristics of their parents? Well where did the characteristics of life come from? Electromagnetic radiation is 2 energies blended into 1. The characteristics of its electrical part are negative and positive, and those of its magnetic part are attraction and repulsion. That's No, Yes, I like you (love) & I don't like you (hate). That is the mental/emotional nature of life, and math to boot. Because negative is subtraction, positive is addition, attraction is multiplication and repulsion is division. Now this does not have to be right, but it is the best description of the soul of life that I have ever heard. And light is 1 force, made up of 2 parts. Sometimes some interesting things can come from something seemingly of no value. [Lightprize]
Everyone, from what I have seen from the responses, is saying that the issue with your 'discovery' is that it only applies to English, and that numerology consists of people that have found more fantastic 'discoveries' than you. English was not created. Here is an example of a constructed language: http://sipen.com/projects/language/ Notice the differences from a logical viewpoint? Words created in an ad hoc manner are done so in languages that are made by numerous people in most circumstances. If I were to change the name of the word "two" to "cianide" in your language, would your code still work? All it would take to do that is for someone to repeatedly use the word until others began using it. (An example of this is using the word "cool" to mean awe-inspiring or "bootylicious" to mean voluptuous.) -- Freebytes
June 25
SAT Questions
I recently took a practice SAT test from The Princeton Review and I didn't do very well on the math section. The following are the ones I didn't know how to do.
- if √y-3=7, then y+5=?
I got 57 on this, but I have no idea if it's right.
- if xΔy is defined as xΔy = (x+y)² - x for all values of x and y, what is the value of 2Δ5?
(A)6
(B)10
(C)16
(D)44
(E)47
- If the square root of a certain positive number x is equal to x divided by k, which of the following represents x?
(A)-k
(B)k
(C)2k
(D)k²
(E)-k²
In the correctly worked addition problem above, each letter represents a different digit. What is the value of MPN + NM?
(A)109
(B)190
(C)200
(D)201
(E)218
- If 8^(2/x) = 4, then x=?
(A)2/3
(B)3
(C)4
(D)8
(E)64 - if , then ?
- If and is equal to , then ?
I would really appreciate any help. Thanks. schyler 01:44, 25 June 2006 (UTC)
- You should make sure that math isn't just a bunch of formulas for you, and that you understand what things mean. For example, not knowing how to do the sixth problem means, I think, that you are manipulating symbols instead of understanding. (Cj67 02:44, 25 June 2006 (UTC))
- Someone about to attend college is going to need the skills to handle the kind of problem-solving these questions solicit.
- Should that be √(y-3) = 7 or (√y) - 3 = 7? Anyway, if we can solve for y+5, then we must know y. We are only told the square root of y. To get y alone, we must isolate the square root on one side and everything else on the other side of an equation, then square both sides.
- What is the problem? A formula is given, as are two values to substitute; just do it.
- Change the words to algebra and solve: √x = x⁄k. The square root is already isolated.
- The ones digits tell us N+P = N, so we know what P must be. The hundreds digit of the result is M, which comes from a carry alone, so we also know what M must be. The fact that we get a carry forces N.
- Brute force (try all answers) would work. Or notice that 8 = 2^3 and 4 = 2^2, so the laws of exponents give (2^3)^(2/x) = 2^(6/x) = 2^2; and since both sides now are powers of the same base, the powers must be equal: 6⁄x = 2.
- Solve for x in terms of y in the first, then substitute in the second.
- Put together the ideas learned from the other problems. Also notice this is a trick question: We don't care what x is (nor nx), we only need to know n.
- Think of it as playing a game where you need to look ahead. For example, this being World Cup time, imagine your team has the ball and is trying to score. Eventually the ball is going to travel from one of your players into the goal, but it's not going to work for you to take the ball straight towards the goal. That same kind of strategic foresight is required for solving these problems in algebra, and really for solving almost any problem, whether mathematical or not. --KSmrqT 04:12, 25 June 2006 (UTC)
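- For the first problem, a minimal worked sketch, assuming the intended equation is √(y-3) = 7 (which is consistent with the 57 obtained above):

```latex
\sqrt{y-3} = 7 \;\Rightarrow\; y - 3 = 49 \;\Rightarrow\; y = 52 \;\Rightarrow\; y + 5 = 57.
```

If instead the equation is (√y) - 3 = 7, the same isolate-and-square steps give √y = 10, y = 100 and y + 5 = 105.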
- One more thing: you're not looking for MPN in question 4, you're looking for MPN + NM, so remember to add up the answer once you get the digits. --ColourBurst 05:23, 25 June 2006 (UTC)
- On several (independent) occasions I have encountered undergraduates who couldn't do substitution. They claimed they were never taught to do such a thing, and didn't seem to know or understand it was a basic "rule of the game" that you are allowed and supposed to use this for the parameters of a definition. I couldn't check whether this dramatic lacuna in their knowledge and understanding was their fault or due to abysmal teaching practices, but having seen too much of those, the latter explanation for this sad state of affairs is entirely possible. --LambiamTalk 09:20, 25 June 2006 (UTC)
I am actually not about to attend college. I am just now going to be a sophomore. The course is specifically designed to have you see what to look for when you are learning the stuff in class, so some of it, I haven't even learned how to do yet. Thank you for yall's help though. schyler 14:12, 25 June 2006 (UTC)
- Learning to solve problems is like getting to Carnegie Hall; the answer is "Practice, practice, practice!" As always, I highly recommend George Pólya's guidance in How to Solve It.
- One thing that is often omitted is the emotional side of problem solving. For many people an SAT test is a stressful event, and an unfamiliar question topic can evoke mental paralysis. Fear is not helpful; to quote the famous cover of The Hitchhiker's Guide to the Galaxy, DON'T PANIC! Try to cultivate a calm and alert mental state with a dash of enthusiasm. Practice helps this, too.
- An overview of learning can also be helpful; for example, an old but popular description is the broad taxonomy of Benjamin Bloom.
- Subject knowledge makes a big difference, and it looks like a common theme in many of the problems you list is powers, also known as exponents. The essential facts (for real numbers) are:
- b^n = b×b×⋯×b, where b is repeated n times, for n a positive integer, and for all b
- b^0 = 1, for any nonzero b
- b^(p+q) = b^p×b^q, for all b, p, and q
- By implication, b^(−p) = 1/b^p
- b^(p×q) = (b^p)^q, for all b, p, and q
- By implication, b^(1/n) is the n-th root of b, for n a positive integer
- The "big picture" idea that makes these facts easy to remember is that powers are to multiplication as products are to addition.
- For example, the powers rule b^(p+q) = b^p×b^q corresponds to the products rule b×(p+q) = b×p+b×q, which should be more familiar.
- Just be careful not to overgeneralize; although b×a = a×b, it is not true in general that b^a = a^b.
- Lastly, you might find something useful in our article on problem solving. Have fun, and good luck. --KSmrqT 19:32, 25 June 2006 (UTC)
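- As an illustration of those exponent facts, here is a sketch of the fifth problem, assuming (as the hint about 8 = 2^3 above suggests) that the missing equation is 8^(2/x) = 4:

```latex
8^{2/x} = \left(2^{3}\right)^{2/x} = 2^{6/x}
\quad\text{and}\quad
4 = 2^{2},
\qquad\text{so}\qquad
\frac{6}{x} = 2 \;\Rightarrow\; x = 3.
```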
3D Gaussian Blur, with regards to a 3D Fluid Solver
(I do hope this question will not be too lengthy..) I am a 3D animator, so although I am fairly proficient with it, math is not my strong suit. But because I don't feel like spending thousands of dollars buying one, I have decided to code a highly simplified fluid dynamics solver for special effects in my animations.
Note that unlike CFD/engineering applications, my purpose is NOT to achieve complete accuracy, but to create a convincing image.
In my program, the Navier-Stokes Equations are solved on a 3D grid. The velocity and density of the fluid are sampled at the center of each grid cell. I do not fully understand the Wikipedia page explanation of the NSEs, however, I do have a vague understanding of the form Jos Stam presents them in a paper he wrote:
∂u/∂t = −(u·∇)u + ν∇²u + f
∂ρ/∂t = −(u·∇)ρ + κ∇²ρ + S
This involves two 3D grids: a vector field that stores the velocity of the fluid, and a scalar field that stores the density of the fluid.
In the first equation: u is a vector representing the velocity of the fluid at a given point, t is a timestep by which the simulation advances each frame, ν is viscosity, and f accounts for the addition of forces by the user.
In the second equation: ρ represents the density of the fluid at a given point, t is a timestep by which the simulation advances each frame, κ is the rate of diffusion, and S accounts for the addition of density by the user.
What I am currently concerned with is the diffusion term κ∇²ρ,
which simply states that the density diffuses over time. To implement this in the 3D grid structure I am using, it seems simple: we exchange the density with the six immediate neighbors of the cell we are trying to solve for. However, if the density in that cell must diffuse beyond its immediate neighbors, this is not accounted for, and the simulation "blows up." The only solution is a shorter ∂t, a smaller κ, or a finer grid with smaller cells. In any case, the effect of diffusion is either lost, or the simulation takes much longer.
Stam wrote a solution for this which is quite elegant--he uses Gauss-Seidel Relaxation to solve for the new value. It is stable and will not blow up. It is also patented, so I cannot use it.
Given that I am striving for visual appearance and not accuracy, it occurred to me that the effects of diffusion look strikingly similar to the Gaussian Blur filter in many graphics/image manipulation programs. It seems logical that a fast 3D gaussian blur would do the trick for me and work at any time step.
So I looked it up on Wikipedia. Apparently, to get a convolution kernel with Gaussian distribution in 2 dimensions, you use G(x,y) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²)).
Let me be frank--I am only attempting this project because of the simplicity Stam's paper provided. I do not know a whole lot about higher math, so bear with me if this seems like it should be obvious.
My question is: What would this Gaussian distribution function look like in 3D? My guess would be:
is this anything close to correct??
Jos Stam's website is http://www.dgp.toronto.edu/~stam/reality/index.html. The article I am basing my program on is at the publications link; it is entitled "Real-Time Fluid Dynamics for Games."
Although I suppose I could use Stam's code for personal use, I may distribute the code to some friends, and I do not want to risk anything, since I have no attorney... So I am fairly dead set on staying away from the patented section of the code.
Any help would be much appreciated --Loki7488 09:15, 25 June 2006 (UTC)
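- For concreteness, a minimal sketch of the explicit neighbour-exchange diffusion step described in the question (the names and grid layout here are illustrative, not the poster's or Stam's code); it also shows where the blow-up comes from, since the update is only stable while κ·∂t is small relative to the cell size:

```python
import numpy as np

def diffuse_explicit(rho, kappa, dt, h=1.0):
    """One explicit diffusion step for a 3D density grid (periodic boundaries).

    Each cell exchanges density with its six immediate neighbours.  The
    scheme is only stable while kappa*dt/h**2 stays small (roughly below
    1/6); push it further and the values oscillate and "blow up".
    """
    lap = (
        np.roll(rho, 1, axis=0) + np.roll(rho, -1, axis=0) +
        np.roll(rho, 1, axis=1) + np.roll(rho, -1, axis=1) +
        np.roll(rho, 1, axis=2) + np.roll(rho, -1, axis=2) -
        6.0 * rho
    ) / h**2
    return rho + dt * kappa * lap

# Example: diffuse a single blob of density on a 32^3 grid.
rho = np.zeros((32, 32, 32))
rho[16, 16, 16] = 1.0
for _ in range(100):
    rho = diffuse_explicit(rho, kappa=0.1, dt=0.1)
```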
- I think the denominator should have (2π)^(3/2)σ³. The idea of using Gaussian blur seems good to me; it's what you would get from the Laplacian diffusion if there were no other time-dependent influences on the field, so the simplification may result in the finesse of "second-order" effects being somewhat speeded up, which is negligible if the first-order effects are small, and probably not noticeable if they are strong. --LambiamTalk 09:37, 25 June 2006 (UTC)
- The sequence is as follows:
- In one dimension: G(x) = 1/(σ√(2π)) · exp(−x²/(2σ²))
- In two dimensions: G(x,y) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²))
- In three dimensions: G(x,y,z) = 1/((2π)^(3/2)σ³) · exp(−(x² + y² + z²)/(2σ²))
- You can write G(x,y,z) = G(x)·G(y)·G(z), where the function G on the left-hand side is the three-dimensional convolution kernel and the G on the right-hand side is the one-dimensional convolution kernel. This says that the diffusion in 3D can be seen as three independent diffusion processes in the x, y and z direction.
- I agree that the idea to use Gaussian blur is basically sound. It is more accurate than solving the linear system. However, a full convolution in 3D might be rather time-consuming. Perhaps it's possible to introduce a cut-off (set all elements of G which are smaller than some small value, say 0.001, equal to zero). This should speed up the computation. -- Jitse Niesen (talk) 11:34, 25 June 2006 (UTC)
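- A rough sketch of that separable, cut-off approach (the σ and truncation radius are arbitrary illustration values, not taken from Stam's paper):

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_blur_3d(density, sigma, truncate=3.0):
    """Blur a 3D scalar field with a separable, truncated Gaussian kernel.

    Because the 3D kernel factors as G(x)*G(y)*G(z), three 1D passes
    (one per axis) give the same result as a full 3D convolution, and
    cutting the kernel off at `truncate` standard deviations keeps each
    pass cheap.
    """
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()   # normalise so total density is conserved
    for axis in range(3):
        density = convolve1d(density, kernel, axis=axis, mode='nearest')
    return density
```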
- If you are using Jos Stam's work, be sure to go through his talk slides as well. Fluid simulations are well-known as challenging to do simply, efficiently, and plausibly. (Note I didn't say "accurately", which is harder still.) For games, you might explore nVidia, ATi, and Real-Time Rendering Resources. And while you can, you might want to download the paper in the January ACM Transactions on Graphics (TOG). However, be warned that computational fluid dynamics (CFD), even with computer graphics simplifications, naturally involves a great deal of mathematics and numerical algorithms. If Stam can make it seem accessible, that's a tribute to his abilities; but don't be fooled, it's hard work. --KSmrqT 13:59, 25 June 2006 (UTC)
I have the slideshow in PDF form, it expands on the paper I am currently reading from and I'm sure I will refer to it as I progress. I have also read many papers written by Ron Fedkiw, Nick Foster, and Dimitri Metexas. I am fully aware of the challenges fluid dynamics presents, and that professional outfits have huge physics teams who get paid top dollar to do this stuff to the extreme... on the flipside, I am also aware of numerous individuals or groups of individuals who have developed very simple fluid solvers. Thanks everyone for the helpful replies! --Loki7488 14:39, 25 June 2006 (UTC)
major revisions complete
The Half-life computation article has undergone substantial revision which has hopefully addressed everyone's concerns. If you have any further comments after looking at the article again, please list the items you do not like, make whatever comment you have and please be specific and allow time for further revision. If there is any reason I can not comply with your wishes then I will let you know the reason why. ...IMHO (Talk) 12:20, 25 June 2006 (UTC)
- How is this not original research? And what is the point? For any formula you can write an essay on how to compute it, which might be interesting for the Navier-Stokes equations or polylogarithms, but not for such an elementary formula. And even if you write about it, it should only be a few lines. The article never makes clear what the model is that is being computed or simulated. My preliminary impression is that it ought to be deleted. --LambiamTalk 16:43, 25 June 2006 (UTC)
what does this equality (doteq) sign mean in fuzzy logic?
Hello,
my sister, who studies informatics and has exams just like me, asked me if I recognized this symbol:
an equality sign with a dot above it
\doteq in LaTeX, but it doesn't seem to get rendered here.
I had seen and stuff like that, but never that.
It appeared as a sign between two membership functions in fuzzy logic (the syllabus is about artificial intelligence).
Can you help me?
Evilbu 16:45, 25 June 2006 (UTC)
- ["We employ the symbol $\doteq$] to indicate the unsymmetric relation between the item of input data on the left-hand side and the corresponding theoretical expression for that item as a function of the adjusted constants on the right-hand side. In general, this set of equations is overdetermined, so the left- and right-sides will not be equal, even for optimized values of the constants." (Remember, mathematicians do not speak or write as we mortals do). More help would be appreciated. --DLL 18:43, 25 June 2006 (UTC)
- If we had the example with a bit of context, we might perhaps venture a guess for the intended meaning. I doubt it is a standard notation. --LambiamTalk 19:20, 25 June 2006 (UTC)
- Is this the symbol you're looking for?
- --Yanwen 23:51, 25 June 2006 (UTC)
- In Unicode, U+2250 "≐" (may not display; try installing DejaVu fonts and using a Unicode-aware browser) is called APPROACHES THE LIMIT. I don't know what it means in a fuzzy logic context. EdC 00:15, 27 June 2006 (UTC)
- can be used in the context —Mets501 (talk) 12:33, 27 June 2006 (UTC)
Heaven
How many feet are in heaven? --63.170.208.190 17:55, 25 June 2006 (UTC)
- Please do not ask silly questions at the Reference Desk. —Mets501 (talk) 18:21, 25 June 2006 (UTC)
- Which religion are you talking about? For Christian heaven, I think the bible gives dimensions in cubits, which could be converted to feet. —Keenan Pepper 18:31, 25 June 2006 (UTC)
- Number of feet = number of two-footed good Christians * 2 + number of one-footed ones. For exotic heavens, maybe more-footed animals may enter, so please specify the exoticity of your heaven. --DLL 18:46, 25 June 2006 (UTC)
- This is a subjective question. But I'd say infinite feet. --Proficient 03:54, 26 June 2006 (UTC)
- Assuming that infinity exists, that the universe is infinite, that there are more than one planet with redeemable souls (linked with mortal feet, too!), that body parts go to heaven, and that heaven is the same for all ... that's too many assumptions (see Occam's gillette). --DLL 17:51, 26 June 2006 (UTC)
June 26
[edit]Does .999999...=1?
I've been grappling with this concept for a while. Now, while I realize calculus states that .999... theoretically equals one since it's infinitely close, it also states that it will never equal one, since there will always be an infinitely small gap between the numbers. However, consider this equation:
- x=.999...
- 10x=9.999...
- 10x-x=9.999...-x
- 9x=9.999...-x
Since it's been established that x=0.999...:
- 9x=9.999...-0.999...
- 9x=9
- x=1
Therefore,
- 0.999...=1
I know that practically, this is impossible, since adding no amount of decimals will ever cause 0.999... to actually reach one. How, then, does this equation work? If this problem's already been posted, please accept my apologies in advance. --Thetoastman 07:11, 26 June 2006 (UTC)
- Proof that 0.999... equals 1. Dysprosia 08:00, 26 June 2006 (UTC)
- It may seem illogical, but it's true. The actual infinity is not the easiest thing to understand. Conscious 12:05, 26 June 2006 (UTC)
- The premise in your second sentence is the source of your mental conflict; there is no infinitely small gap. Often mathematics uses phrases like, "the limit of xk as k goes to infinity is y", which evokes the image of a journey that never reaches its destination. Academic studies suggest that this kind of language does confuse students, so that they never quite understand limits properly. In fact, the ancient Greeks were similarly challenged by Zeno of Elea to explain the outcome of a race between Achilles and the tortoise. Yes, Achilles is faster than the tortoise, but the tortoise has a head start. Achilles first must run to the point where the tortoise begins; but when he reaches that point the tortoise has moved on, and Achilles next must run to the new position of the tortoise; and again, and again. It appears that Achilles can never reach the tortoise, much less pass; but, of course, he wins easily. Likewise, 0.999… equals 1 exactly; there is no gap.
- Unfortunately, many mathematics students and teachers still do not appreciate the confusion their language causes. We regularly have people trying to insert their habitual language into the proof article. It's a sort of reverse Zeno's paradox: no matter how many times we try to explain to them that it's unhelpful, they never reach the conclusion! Such a discussion is happening there on the talk page right now, if you'd like to see the process in action. Fortunately, so far we have been successful at gently (or not) removing the offending efforts, and you can see proofs of the equality without any kind of "journey to infinity". --KSmrqT 14:32, 26 June 2006 (UTC)
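- One of the proofs referred to above can be stated very briefly as a geometric series; a short sketch:

```latex
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
            \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
            \;=\; 9 \cdot \frac{1}{9}
            \;=\; 1.
```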
- If I may add, however, you already know that. In your proof, you assume that 10*x = 9.99..., then that 9.99...-0.999... = 9. In other words, though you use a semialgebraic way of saying it that makes it easier to ignore, you already know that what's sitting at the end of that infinite decimal (0.000...1, you might say) has no effect. Black Carrot 16:54, 26 June 2006 (UTC)
- Black Carrot, that's terrible! There's no "0.000...1" sitting at the end!!! There's a 9 to subtract from each 9. There's a nine in the tenths place (0.9), in the hundredths place (0.09) etc. There's no "1" sitting at the end. There's no difference between removing one nine and ten nines. Consider: If there's an infinite row of jelly donuts next to an infinite row of cream donuts, if I eat two jelly donuts and move all the jelly donuts down by two it's NO DIFFERENT from eating three jelly donuts and moving all the jelly donuts down by three. Each jelly donut will still have a cream donut next to it. None "lags" at the end. In other words, there is not one fewer number in the series 2, 3, 4, 5 ... than there is in the series 1, 2, 3, 4 ... The fact that you skipped the first number leaves no fewer.
- I don't know enough pure math to express this with appropriate rigor, but I think the guts of the argument are as follows:
- Let's assume that 0.999... ≠ 1. Then if we let d = 1 − 0.999..., it follows that d > 0, i.e. you can evaluate some non-zero difference from 1. But by taking enough digits I can always disprove any non-zero difference that you care to write down. So we can disprove the original assumption with a reductio ad absurdum type argument. Arbitrary username 17:48, 26 June 2006 (UTC)
your problem is that you don't picture infinitely many 9's, you just picture them repeating. They don't "repeat" one after the other (this repeating is the process you're imagining), mechanically or something, taking a bit of time to add the next one: just imagine them all there all at once. Imagine I make a computer game where you live in a town, and in one direction train tracks go off forever, since I don't program them. You can't follow them, because the game doesn't let you leave the town. Now, the tracks are "generally anchored perpendicular to beams (termed sleepers (Commonwealth except Canada) or railroad ties (U.S. and Canada) of timber, concrete, or steel to maintain a consistent distance apart." If you cast a spell "railroad beams to snakes" that turns EVERY railroad beam into a snake, it doesn't matter if you first add a beam where you can still walk to. If instead of "railroad beams to snakes" you just cast "infinite snakes 2 feet apart" and the beams are 2 feet apart, then if you point it parallel to the train tracks, each beam will get a snake. It doesn't matter if you remove the first beam yourself and take a step forward before casting the spell.
Likewise, there are as many positive numbers as negative and positive numbers together, since you can just count them like this: tracks: 1 -1 2 -2 3 -3 4 -4 5 -5 etc. snakes: 1 2 3 4 5 6 7 8 9 10 etc. It doesn't matter.
Likewise, if you start counting at 8 and I start counting at 1, we can count together forever. You won't "run out" 7 numbers before I do.
(This anonymous comment added by 82.131.190.16)
Making a bootable disk
Can you please help me to make a bootable disk?--Saksham Sharma 11:55, 26 June 2006 (UTC)
- You mean a bootable floppy? —Mets501 (talk) 13:39, 26 June 2006 (UTC)
- And on what platform? (Windows XP/Mac/Linux?) —Mets501 (talk) 13:39, 26 June 2006 (UTC)
On windows.--Saksham Sharma 03:55, 27 June 2006 (UTC)
- Windows 95, 98, XP??? Each would have a different way. Have you tried the help option in your start menu? - Mgm|(talk) 09:24, 27 June 2006 (UTC)
On windows XP service pack 1
- What do you need the bootable disk for? Usually the purpose as to why you need the disk is for a program or to install drivers, etc. So you can probably access the program and find the option that will allow you to create a bootable disk. I think there is also an option that exists when you are formatting the floppy that allows you to create the bootable disk. From my experience, bootable disks (you are referring to a floppy disk (diskette) right?) are usually used to install drivers or something of that nature when installing an OS. --Proficient 02:46, 28 June 2006 (UTC)
strobe stop
Rotating objects – as wheels on automobiles – appear stopped on TV screens.
What is the exact way to calculate the probable car speed?
Is it proper to count the “spokes”, assume a wheel diameter, assume a TV frame rate of 60 Hertz, or what?
Please show an exact step-by-step procedure to arrive at an estimate.
I hope this isn’t a stupid question but where will I find this answer if someone responds? --Itserp 14:52, 26 June 2006 (UTC)
- You'll probably find the answer on this page, but some users will post on your talk page too. EVOCATIVEINTRIGUE TALKTOME | EMAILME 15:05, 26 June 2006 (UTC)
- You can make some educated guesses if you know the following : d - The diameter of the wheel, h - the frame rate of the TV, T - the time it appears to take (on the TV) for the wheel to make a complete turn, and s - the symmetry order of the wheel (that is, if the wheel makes 1/s of a complete turn, it looks exactly like before - I think this is what you meant by "spokes"). Then, the formula for calculating the speed of the car is, if I'm not mistaken,
- Where n is an unknown whole number (integer). So you can put some small numbers instead of n, arrive at some guesses, and pick the one that is the most reasonable. -- Meni Rosenfeld (talk) 15:41, 26 June 2006 (UTC)
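- A rough numerical sketch of that guessing procedure (the aliasing relation used here is one plausible reading of the setup, not a formula quoted from above): if the wheel appears to take T seconds per on-screen revolution, any true rotation rate of the form n·h/s + 1/T produces the same sampled images, and each such rate gives one candidate speed.

```python
import math

def candidate_speeds(d_m, h_fps, T_s, s_fold, n_max=5):
    """List road speeds (km/h) consistent with the apparent on-screen motion.

    d_m    : wheel diameter in metres
    h_fps  : frame rate of the footage
    T_s    : apparent seconds per on-screen wheel revolution
    s_fold : rotational symmetry of the wheel (e.g. 5 spokes)
    """
    speeds = []
    for n in range(n_max + 1):
        f = n * h_fps / s_fold + 1.0 / T_s   # candidate revolutions per second
        v = math.pi * d_m * f * 3.6          # circumference * f, converted to km/h
        speeds.append(round(v, 1))
    return speeds

# Example: 0.6 m wheel, 24 fps film, one apparent turn every 2 s, 5-fold symmetry.
print(candidate_speeds(0.6, 24.0, 2.0, 5))
```

Picking the plausible value out of that list is then the judgement call described above.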
- Don't expect an exact step-by-step response for something that looks suspiciously like a homework problem. Meni Rosenfeld has highlighted some of the factors involved, but let's take a look at the underlying concepts. In digital signal processing, the fact that a moving wheel can appear not to spin on television is an example of the phenomenon known as aliasing. The spinning of the wheel is periodic in time, and so is the sampling of the television images. Different television standards sample time at different rates, and motion pictures adopt yet another rate. We need to know the sampling rate of the original recording; for example, if the scene was shot on film, the relevant sampling rate is 24 frames per second, even if we are now watching it on TV. Since this is common, let's work with 24 Hz. If the wheel completes one complete revolution at the same rate, 24 revolutions per second, then each time the shutter opens the position of the wheel is exactly the same. Thus, as captured on film, the wheel appears not to spin. The same is true if the wheel completes 48 revolutions per second, or any integer multiple of 24.
- Now that we understand why aliasing occurs, we must apply the theory more carefully to the problem at hand. We have four complications.
- We do not know the original sampling rate.
- We do not know the integer multiple of the sampling rate which is the rate of wheel revolution.
- The wheel may have rotational symmetry, so that it has the same appearance after a fraction of a revolution, which we do not know.[6]
- The rate of wheel rotation does not tell us the forward speed of the vehicle unless we know the diameter of the wheel, which we do not know.
- Naturally the forward speed will be limited by plausibility. Still, it could be a fun exercise. Enjoy. --KSmrqT 18:03, 26 June 2006 (UTC)
- To expound a bit, though:
- Half the sampling frequency is the most you can accurately determine, so 12 revs per second is your actual top end.
- More problematically, most wheels have radial symmetry, most commonly as pentamerism (a 5-way split). That further reduces your resolution.
- Taking a 15 inch 5-way symmetric (typical small car) wheel and a 24 fps movie camera, I calculate that you have no resolution beyond 7 miles per hour. I wouldn't call that in any way sufficient for a useful estimate. — Lomn | Talk 19:04, 26 June 2006 (UTC)
- A little knowledge is a dangerous thing. (I'm still trying to accumulate enough to be safe!) It appears Lomn's post makes a mistake on every line. Because the wheel appears stopped, there is no issue of positive versus negative frequencies, so no rate halving; study the Nyquist theorem more carefully. A link to biological symmetry is less helpful; rotational symmetry is exactly what we want, with correct link, already provided. A 15 inch tire diameter would fit a very small car indeed; that's the wheel rim diameter. (This calculator may help.) Five-fold symmetry is common today, but the page full of images I linked to shows many examples of higher symmetry; what an art form! Also note that the size of a wheel uses much smaller distance units than the speed, so don't forget to convert. (1 mi = 63 360 in) --KSmrqT 17:21, 27 June 2006 (UTC)
- I'm curious about the no halving bit, as I don't see any discussion of negative frequencies in the Nyquist theorem article that appear relevant to this. As for the wheel size, you're right: I thought 15" seemed small but blanked on why that was. Scale it up to 26 inches, then. And on radial vs rotational symmetry, I had to guess at the name (and somehow missed your wikilink). However, I think my main point still stands (and the unit conversions are correct, inches per second to miles per hour isn't much of a magnitude shift). Fix the wheel size and you've still got a resolution of only 12 miles per hour (24 if the halving thing holds), less for wheels with more symmetry. Core point: you can't usefully estimate car speed by this method. — Lomn | Talk 18:22, 27 June 2006 (UTC)
- (A note on nyquist): I'm going off audio sampling from my DSP work, and as best I can tell, there weren't negative frequencies involved with that and we certainly had to have a doubled sampling rate there. Is there a distinction I'm not seeing? — Lomn | Talk 18:32, 27 June 2006 (UTC)
- Recall that sin −ϑ = −sin ϑ, and that cos −ϑ = cos ϑ. The discrete Fourier transform is linear, so the minus sign passes straight through. In terms of frequency discrimination, we can't see a difference between a positive frequency and a negative one. This is one way of understanding the halving in the Nyquist theorem: the frequencies above half the sampling frequency "wrap around" to negative frequencies, which we've just pointed out cannot be discriminated.
- For video we must be more careful. Look at the valve stem on the tire as the wheel spins. If we shoot film at 24 frames per second and the wheel spins at 12 revolutions per second, then we get two different pictures of the wheel each time it turns, one with the stem at the top and one with the stem at the bottom. It may look weird, but it sure doesn't look still.
- Also compare a wheel spinning at 6 per second with one spinning at 18. With the slower turn we get a sequence of four different stem pictures in a single turn: top, right, bottom, left. With the faster turn we get also get a sequence of four different pictures, but not in a single turn: top, left, bottom, right. There is an apparent relationship, but the faster spin produces a visual effect of backwards revolution. --KSmrqT 06:47, 28 June 2006 (UTC)
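- A quick way to see the valve-stem effect numerically (a toy sketch, with the same numbers as above): sample the stem angle at 24 frames per second for wheels turning at 6 and at 18 revolutions per second.

```python
# Stem angle, frame by frame, when filmed at 24 frames per second.
fps = 24
for revs_per_sec in (6, 18):
    angles = [(360.0 * revs_per_sec * frame / fps) % 360 for frame in range(5)]
    print(revs_per_sec, "rev/s:", angles)

# 6  rev/s: 0, 90, 180, 270, 0   -> the stem steps forward 90 degrees per frame
# 18 rev/s: 0, 270, 180, 90, 0   -> the same four positions, apparently backwards
```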
- It also bears noting that the rate of 24 frames is not certain. 24p is the standard used in film--actual silver based media; the movies. On TV, there are numerous frame rates, most common are 30p and 60i. (Unless you live in a region where NTSC is not the standard). If it is 60i, that means that the frames are interlaced. Only half the lines are redrawn every time the screen refreshes. This decreases the resolution you get as well. If it is 24p film being shown on TV, it will be resampled to a TV compatible framerate; the motion will not be reproduced exactly (it's like a square peg in a round hole...) but close enough that it will fool our eyes in normal viewing. For the purposes of mathematical calculation, this may have some considerable impact. --Loki7488 00:14, 29 June 2006 (UTC)
- You might want to read the whole thread before responding to the last post. We've already noted the problem of not knowing which frame rate applies, though without explicitly listing all of the variations of NTSC versus PAL versus HDTV progressive. When projected through a film projector, each frame is actually shown twice (to avoid flicker). And if we were to get really technical, that 30 frames per second of NTSC is actually more like 29.97. Mix in the effects of interlace and 3:2 pulldown, and only a real hardcore video engineer will continue to follow the discussion. So let's just not even go there. --KSmrqT 01:40, 29 June 2006 (UTC)
Lightbulb
How many mathematicians does it take to change a lightbulb? --Dweller 20:32, 26 June 2006 (UTC)
- See lightbulb joke.-gadfium 21:11, 26 June 2006 (UTC)
- This question has been posted on all reference desks ( except /M) --hydnjo talk 20:54, 26 June 2006 (UTC)
- Without loss of generality, a back-of-the-envelope calculation shows that almost all lightbulbs can be changed by a countable number of mathematicians. Gandalf61 12:49, 27 June 2006 (UTC)
That's the spirit! If time is sufficiently curved, perhaps you're still in time to retrospectively beat the scientists. --Dweller 16:23, 27 June 2006 (UTC)
- Q: How many lightbulb jokes does it take to get hydnjo into vandal patrol mode?
A: One *blush*, only one! --hydnjo talk 13:29, 29 June 2006 (UTC)
- This page has a number of variations on that joke. It oddly omits the answer "None; it is left as an exercise to the reader." --George 23:11, 4 July 2006 (UTC)
Mathematical Novel
What is the title of the famous math novel involving shapes of varying dimensions (3d, 2d, 1d) visiting one another and being unable to comprehend anyone of a higher dimension than their own. I believe a rectangle visits a 1-dimensional world to tell them about two dimensions or something. I'm sorry I don't recall much more, but I saw an article on it on Wikipedia several months ago and was thinking about reading it now that I have more time on my hands. -Dave 04:17, 27 June 2006 (UTC)
- Flatland: A Romance of Many Dimensions.-gadfium 05:00, 27 June 2006 (UTC)
- ...and you can find the book online here: http://www.alcyone.com/max/lit/flatland/ Madmath789 06:21, 27 June 2006 (UTC)
- And it's still a good read. Do try to find a copy with illustrations.[7] :-D --KSmrqT 07:35, 27 June 2006 (UTC)
June 27
[edit]Power PC's versus Intel Processors
I am looking to get information on comparing PowerPCs with Intel processors under the following headings: architecture, cache, speed, power consumption, heat dissipation, future prospects for Intel. If you could give me a few useful links I would appreciate it.
June 28
[edit]trignometry
show that tan 15+cot 15=4
- Except it isn't. tan 15 + cot 15 = -2.024. --Zemylat 12:09, 28 June 2006 (UTC)
- It is if you work in degrees not radians. Madmath789 12:15, 28 June 2006 (UTC)
- Punch it into your calculator, write down the answer, hand in your homework, get a bad grade, go home and brood, stop using Wikipedia to try and do your homework for you. — QuantumEleven 12:41, 28 June 2006 (UTC)
- ... or prove the general relationship tan A + cot A = 2/sin 2A
- then plug in 15 degrees for A. Also think about why you should expect the general solution to be symmetrical about A=45 degrees (as indeed it is). Gandalf61 14:00, 28 June 2006 (UTC)
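- A short sketch of one way to prove that relationship, assuming the identity intended above is tan A + cot A = 2/sin 2A:

```latex
\tan A + \cot A
  \;=\; \frac{\sin A}{\cos A} + \frac{\cos A}{\sin A}
  \;=\; \frac{\sin^{2}A + \cos^{2}A}{\sin A \cos A}
  \;=\; \frac{1}{\sin A \cos A}
  \;=\; \frac{2}{\sin 2A}.
```

At A = 15 degrees this gives 2/sin 30° = 2/(1/2) = 4, and the symmetry about 45 degrees follows because sin 2(90° − A) = sin 2A.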
Real-time Remote Mac Software
Is there (Mac) software to allow the remote use of a computer over a fast local network at fast enough speeds to make no visual difference? For example, I want to use a desktop Mac via a (Mac) laptop, but at full refresh, large resolution and 32 bit colour with very little lag. This would be for general real-time use rather than admin purposes. Finally, it would not be X, but a streamed display.
In fact, is this even possible?
- Something like Timbuktu perhaps? I doubt that you could have a network that is as fast as a hardware bus, you'd have to decide if the lag is tolerable. --LarryMac 15:58, 28 June 2006 (UTC)
- You don't need to match the speed of the hardware bus, just the speed of the video connector. A 100Mbit ethernet connection is fast enough for small displays; for larger displays, you need gigabit ethernet. You'll have a latency of about two screen refreshes no matter what (one for the screen to be composed on the host, and one for it to be displayed on the client), and content such as video from capture cards or 3D graphics might not display, as those are drawn directly to the video card's buffer, not to a buffer in main memory. --Serie 23:55, 28 June 2006 (UTC)
- Why not use Chicken of the VNC? It is very full-featured, free, and open-source. -- 24.75.133.178 18:28, 10 July 2006 (UTC)
Programming in LISP
I am looking for an online LISP environment to do things in; I would like to be able to save programs as well. Can anybody give me a link?
- There's a Scheme implementation for your web browser here but it isn't really useful for writing code because it uses a simple text box for editing, and thus has no parenthesis matching. You can find links to more useful Lisp implementations here. 84.239.128.9 16:07, 28 June 2006 (UTC)
Gestalt Value
What is the principle of GESTALT VALUE? --59.161.8.150 15:09, 28 June 2006 (UTC) Roman Nagpur.
- I'm not sure if it is mathematical either. But mathematicians are so much at loss of proper words for fuzzy concepts ... --DLL 20:53, 30 June 2006 (UTC)
this may have to do with something that is 'holistic' and has to be viewed as a whole, or else it will not be comprehended. It can also refer to completing an experience... but I don't see how it can be applied to math because it's a psychological concept.--Cosmic girl 19:51, 10 July 2006 (UTC)
Sum
How do we calculate the product of ((r^2)-1)/(r^2), where r ranges over the prime numbers starting from 2?
- Um... the best non-giveaway hint I can think of is to simplify the fraction, invert the whole thing, and think about geometric series. Melchoir 18:25, 28 June 2006 (UTC)
- (Never mind, I should refresh before responding to these things!) Melchoir 18:26, 28 June 2006 (UTC)
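- For anyone curious where the hint leads: inverting each factor gives the Euler product for ζ(2), so the partial products should approach 6/π². A small numerical check (a sketch, not a proof):

```python
import math

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, math.isqrt(n) + 1))

# Partial product of (p^2 - 1)/p^2 over the primes p up to a bound.
product = 1.0
for p in range(2, 10000):
    if is_prime(p):
        product *= (p * p - 1) / (p * p)

print(product)            # close to 0.6079...
print(6 / math.pi ** 2)   # 1/zeta(2) = 6/pi^2 ≈ 0.6079
```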
June 29
[edit]Windows XP Volume Muting Question
Hello,
I am in need of an automated process (registry file, script, etc.) that I can run on several Windows XP computers that will perform two actions:
1) Add the volume control speaker icon to the taskbar system tray.
and
2) Mute the master volume.
Repeated searches of Wikipedia have yielded no assistance in solving this for me. I've also done some Google searching but have yet to find a solution. Your help would be greatly appreciated!
Thank you.
- A Google search for the terms visual basic muting volume brought me to this. If you download it and view the class, you can see that it uses API calls to winmm.dll. You would have to make calls to that DLL through whatever programming language you are using. Changes to the Windows Registry would probably not be reflected immediately in the interface and would be overwritten at shutdown. I found another program that will mute the volume called Wizmo if you aren't up to the required programming task. —Bradley 19:58, 29 June 2006 (UTC)
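- As a rough illustration of the winmm.dll route (a sketch only; it silences the wave-output volume through waveOutSetVolume rather than ticking the master "Mute" box, and it does not add the tray icon):

```python
import ctypes

# waveOutSetVolume(device, volume): low word = left channel, high word = right.
# Device 0 with volume 0 silences wave output on the first wave device.
winmm = ctypes.windll.winmm
result = winmm.waveOutSetVolume(0, 0)
if result != 0:
    print("waveOutSetVolume failed with error code", result)
```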
- An answer to your first question might be on this site, specifically #320, but I make no guarantees. My Google search was registry setting volume control system tray. —Bradley 20:19, 29 June 2006 (UTC)
about percentage
Hi Guys and Gals! Wikipedia defines percentage as:
"...a way of expressing a proportion, a ratio or a fraction as a whole number,.
what if a percentage is expressed in decimal form? (e.g. 2.45%)
Is there an exact term to call it? Thanks for your help.
iTunes/outdoor speaker set up
In August I'm having a large outdoor gathering, a party of sorts, at my house. I'd like to have iTunes playing music for the roughly 100 guests who will be just outside the house (i.e. not very spread out). What I'm looking for is ways to have good quality sound played outside with equipment that I can use for some other purpose once the gathering is over. Like using it for a stereo set up that I don't yet have or using it for an improvement over the stock speakers on my television or playing music from the computer throughout the house etc.
What I'm working with is a dual 1 Gig G4 tower. If there's a need for wireless, I'm using a Sonnet internal card for wireless internet with a Linksys wireless router. I also have an iPod (and iTrip) which will be able to hold the desired playlist if need be.
What's the best (while not going into thousands of dollars into debt) way to do this? Dismas|(talk) 07:52, 29 June 2006 (UTC)
- Doesn't Apple have AirPort Express which permits easy music streaming? Dysprosia 12:00, 29 June 2006 (UTC)
- If you have the music on your computer, and you already have a stereo, it's just a matter of taking the output from the computer to input into the stereo. All you need is the right cable. Note, though, that "all in one" music players do not necessarily have any inputs. If you do have a CD player that is suitable, almost certainly the most cost effective thing to do is to just cut CDs of your music choice. If you have a stereo player supporting MP3 playback, a single CD can easily contain the whole night's music. If you do want to go down the wireless route, be sure to test it thoroughly, and make sure the signal is strong; filling an area with people will block off a lot of the signal, and you don't want your oh-so-high-tech wireless system to stop just as it gets crowded. Also not what you asked for, but bear in mind that for a one-off party it may make more sense to hire a good quality/loud system that is louder than you will ever be able to use it for again. Oh, and if you are using your computer to drive it, make sure people understand this, so they don't reboot it, or start playing games that kill iTunes. Notinasnaid 12:07, 29 June 2006 (UTC)
- The computer does have all the music on it and will be inside where it should be safe from the curious guests who think that they can pick and choose their own selections. The last thing I need is one of the younger guests to start playing Nine Inch Nails' "Closer" while Grandma is telling Auntie how nice the weather has been.
- I knew a bit about AirPort Express but wanted other ideas as well.
- I don't currently have a good stereo (just a little bookshelf system) but that may be what I end up getting to meet my needs for this event as well as just because I want an actual stereo.
- Thanks for the ideas! Dismas|(talk) 12:34, 29 June 2006 (UTC)
- I play CDs through the DTS receiver I have attached to my television and DVD player. Mine has additional input, so I imagine I could hook up the line-out from the computer to the line-in of the receiver. Get some speaker cable and drag the speakers outside (and hope it doesn't rain). Otherwise, they do make outdoor speakers that are water-proof. Perhaps this is the time for you to upgrade your home theatre system. Personally, I'd skip any wireless options as they would cost more for a solution I don't need—running cheap speaker wire outside is not an issue for me. —Bradley 20:43, 29 June 2006 (UTC)
June 30
[edit]Concepts of Non-linear optimization
Hi, I have looked at the optimization, Lagrange multiplier and quadratic programming articles. I have some confusion here. Suppose we consider optimization in Euclidean space. Since the use of Lagrange multipliers can reduce our problem from a constrained to an unconstrained one, and can also be used for all non-linear problems, why do we have to invent some other techniques such as linear/quadratic programming, or some heuristics such as simulated annealing? -- 131.111.164.110 11:05, 30 June 2006 (UTC)
- From what I can tell, your question is a general "why are there different approaches to different problems" question. The simple fact of the matter is that mathematical solutions to problems require several stages: First, the problem must be well formed; next, the problem must be translated into a mathematical formalism suitable to capture all the necessary parameters; and finally, the solution that the mathematical formalism yields must be re-translated into the original problem's setup (e.g., 'plain English'). Of course, different problems have different best approaches. Do you have more specific questions about particular fields? Nimur 20:34, 30 June 2006 (UTC)
- General non-linear optimization is difficult, uncertain, and expensive. A mathematician may use a general result to assert the existence of a solution, but that is not the same as a practical algorithm to find a specific solution.
- Let's take a simple example that most people can follow. We can write a closed-form solution in surds for the roots of any univariate polynomial of degree four with complex coefficients. This form is horrendously complicated. If we have a linear polynomial, the general machinery applies in principle, but we would be crazy to use it. For higher-degree polynomials we cannot write a closed form, but we do have an extremely sophisticated and expensive method called cylindrical algebraic decomposition that allows us to isolate roots to some extent. To use this for a routine quadratic polynomial, rather than one of the two quadratic formulae, would again be madness.
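To get a feel for how quickly the general closed forms blow up, here is a small Python sketch using sympy; the depressed cubic is chosen purely for illustration and is not part of the discussion above.

# Even the closed-form roots of a depressed cubic x^3 + p*x + q are a tangle of
# nested radicals, while the quadratic formula stays short. Illustrative only.
import sympy as sp

x, p, q, a, b, c = sp.symbols('x p q a b c')
cubic_roots = sp.solve(x**3 + p*x + q, x)          # Cardano-style formulas
quadratic_roots = sp.solve(a*x**2 + b*x + c, x)    # the familiar quadratic formula
print(len(str(cubic_roots)), "characters of nested radicals")
print(quadratic_roots)   # [(-b - sqrt(-4*a*c + b**2))/(2*a), (-b + sqrt(...))/(2*a)]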
- One of the methods used to find the solution of some complicated non-linear optimizations is sequential quadratic programming, locally approximating a difficult objective function by a quadratic objective subject to safety bounds. An analogous procedure for polynomial roots is Newton's method, which solves a series of linear approximations.
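A minimal Python sketch of Newton's method, the one-dimensional analogue of this successive-approximation idea; the test polynomial and starting guess are arbitrary illustrative choices.

# Newton's method: repeatedly solve a linear approximation of f(x) = 0.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: a root of x^3 - 2x - 5 = 0, a classic test polynomial.
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(root)   # approximately 2.0945514815423265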
- To optimize a quadratic objective function subject to linear equality constraints, we have an option that is much simpler and more efficient than using Lagrange multipliers: we can project the problem onto a space of lower dimension, with each constraint effectively removing one variable. For example, to minimize x²+y² subject to x−y = 1, we substitute y = x−1 and need merely consider 2x²−2x+1. Equating the derivative to zero, we immediately obtain the solution x = 1⁄2, y = −1⁄2.
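The same elimination can be checked mechanically; a small sympy sketch of the substitution (the symbol names are illustrative).

# Minimize x^2 + y^2 subject to x - y = 1 by substituting y = x - 1 and
# minimizing the resulting one-variable quadratic. Closed-form answer: x = 1/2, y = -1/2.
import sympy as sp

x = sp.symbols('x')
objective = x**2 + (x - 1)**2          # y eliminated via the constraint y = x - 1
x_star = sp.solve(sp.diff(objective, x), x)[0]
y_star = x_star - 1
print(x_star, y_star)                   # 1/2 and -1/2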
- Thus as a practical matter we seek the most restrictive classification of our optimization problem, not the most general. --KSmrqT 23:35, 30 June 2006 (UTC)
- Additionally, the method of Lagrange multipliers does not work if the problem has inequality constraints (more precisely, it does work but it does not get rid of the constraints). Even if the problem has only equality constraints so that the method of Lagrange multipliers yields an unconstrained problem, we do not have a good method for solving unconstrained nonlinear problems. I think that the people working in the field have found that the unconstrained problem is harder to solve in practice than the original constrained problem. It is an interesting question, though, and I'm not sure of my last point; it may be just that problems arising in practice always have inequality constraints. -- Jitse Niesen (talk) 04:23, 1 July 2006 (UTC)
- Inequality constraints have been accommodated by treating them as temporary equality constraints (the "active set" idea), and by penalty/barrier functions, just to name two options. In the latter case, the constraint is replaced by a term added to the objective function that gives a prohibitive increase upon approach to the constraint wall. This sounds promising on paper, but is difficult to get working well; it can easily turn a nice problem ugly.
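A rough sketch of the quadratic-penalty idea in Python, using scipy's general-purpose minimizer for the unconstrained solves; the toy problem and the penalty schedule are illustrative choices, not a recipe.

# Quadratic penalty for: minimize x^2 + y^2 subject to x + y >= 1.
# The constraint is replaced by a term that grows when the iterate violates it;
# mu is increased until the penalized minimizer effectively respects the constraint.
import numpy as np
from scipy.optimize import minimize

def penalized(v, mu):
    x, y = v
    violation = max(0.0, 1.0 - (x + y))   # zero whenever the constraint holds
    return x**2 + y**2 + mu * violation**2

v = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    v = minimize(penalized, v, args=(mu,)).x
print(v)   # tends toward the true solution (0.5, 0.5)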
- One level up, it is possible to express a variety of problems as optimizations; and again, it is often not wise to do so.
- The literature of optimization takes time to penetrate. Because so much of the work has grown up as applications in industry, the field evolved its own language, its own terminology and world view. It also takes time to build a mental model of the geometry of non-linear optimization complications, especially since the typical problem lives in a space of many dimensions (many variables). As an introductory example, try to understand why conjugate gradient is more efficient than steepest descent in a long narrow valley. --KSmrqT 19:06, 1 July 2006 (UTC)
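A small numpy sketch of that comparison on a two-variable quadratic whose level curves form a long narrow valley; the matrix, starting point and iteration counts are illustrative.

# Steepest descent vs. conjugate gradient on min 0.5*x'Ax - b'x with an
# ill-conditioned diagonal A (condition number 100: a narrow valley).
import numpy as np

A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)

def steepest_descent(x, iters):
    for _ in range(iters):
        r = b - A @ x                      # negative gradient
        alpha = (r @ r) / (r @ A @ r)      # exact line search
        x = x + alpha * r
    return x

def conjugate_gradient(x, iters):
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

x0 = np.zeros(2)
print(np.linalg.norm(steepest_descent(x0, 2) - x_star))    # still far from the solution
print(np.linalg.norm(conjugate_gradient(x0, 2) - x_star))  # essentially exact after 2 steps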
Thank you for all the answers. Can I then conclude that we have many methods for attacking specific classes of problems mainly for reasons of efficiency? 131.111.164.226 13:50, 3 July 2006 (UTC)
- Yes, mainly. An overly-general algorithm may also give less accurate results, or even fail to find a solution. --KSmrqT 04:37, 4 July 2006 (UTC)
Parabola?
[edit]I was thinking of a pattern in which you start with two numbers, say 1 & 10, then repeatedly add one to the first number and subtract one from the second, taking the product of each pair. For example:
1*10=10
2*9=18
3*8=24
4*7=28
and so on.
I decided to graph this, except using a larger range (-10*21 to 21*-10) and a smaller interval (one tenth). The result is the graph you see on the right.
I am guessing this is a parabola, simply by what it looks like, but is there any way to tell if it is one?
Thanks for any help.
--Tuvwxyz 21:53, 30 June 2006 (UTC)
- The second differences are constant, so it is a quadratic polynomial and hence a parabola. 128.197.81.181 22:06, 30 June 2006 (UTC)
- Oh, to clarify second differences: 18-10 = 8. 24-18 = 6. 28-24 = 4. Taking the differences of those: 8-6 = 2. 6-4 = 2. The number of times you have to do this until they all come out the same tells you the order of the polynomial. 128.197.81.181 22:07, 30 June 2006 (UTC)
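A quick way to automate that check, sketched in Python; the number of sample values is an arbitrary choice.

# Repeatedly differencing a sequence until the values are constant reveals the
# degree of the generating polynomial.
import numpy as np

values = [1 * 10, 2 * 9, 3 * 8, 4 * 7, 5 * 6, 6 * 5]   # 10, 18, 24, 28, 30, 30
seq = np.array(values)
level = 0
while not np.all(seq == seq[0]):
    seq = np.diff(seq)
    level += 1
print(level)   # 2, so the data come from a quadratic, i.e. a parabola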
- Thanks for the response, I think I've heard of the second differences before. --Tuvwxyz 22:13, 30 June 2006 (UTC)
- You can also put it into an equation of a parabola, namely y = x(11 − x) = 11x − x², where x is the first number of each pair. StuRat 22:16, 30 June 2006 (UTC)
- Good eye! Shall we try a little algebra? Call the starting numbers p and q. After n steps these have become p+n and q−n. Their product is (p+n)(q−n) = pq+(q−p)n−n². Since the graph depicts the product versus n, it is the graph of a quadratic polynomial in n. Letting n be fractional makes no essential difference.
- The graph of a quadratic polynomial, y = ax²+bx+c, is always a parabola. If a is positive the "arms" go up; if negative, down. The linear term, b, shifts the vertex left or right, to x = −b⁄2a; the height of the vertex, the lowest point (arms up) or highest point (arms down), is c − b²⁄4a, which reduces to the constant term c when b = 0.
- To see the proposed graph in simpler terms, take the starting values p = 0, q = 0. Then the second number, q−n, takes negative values, but the product reduces to −n², the quintessential parabola inverted. --KSmrqT 00:12, 1 July 2006 (UTC)
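A quick symbolic check of the expansion above, sketched with sympy; purely illustrative.

# Expanding (p + n)(q - n) confirms it is a quadratic in n, hence a parabola.
import sympy as sp

p, q, n = sp.symbols('p q n')
product = sp.expand((p + n) * (q - n))
print(product)                 # prints -n**2 - n*p + n*q + p*q, i.e. pq + (q - p)n - n^2
print(sp.degree(product, n))   # 2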