
Wikipedia:Reference desk/Archives/Mathematics/2010 April 24

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 24


Linear / ... / logarithmic scale


On a linear scale, equal distances between points represent equal differences in value, i.e., the difference between 8 and 9 is exactly equal to the difference between 0 and 1, and (20-10) = (30-20) = 10.

On a logarithmic scale, such as decibels, incrementing by ten represents an augmentation by an order of magnitude: the step from 10 to 20 takes the underlying value from 10 to 100, and the step from 20 to 30 takes it from 100 to 1000, so (9-8) != (1-0) and (20-10) != (30-20) in terms of the quantities being measured.

What do I call a scale on which incrementing by ten represents an augmentation by two-thirds (or some other fraction) of an order of magnitude, and how would I write down and calculate actual values? --92.116.6.112 (talk) 14:09, 24 April 2010 (UTC)[reply]

That's still logarithmic. The decibel scale is the case where the fraction is 1. Logarithmic scale has a lot of detail on this.--RDBury (talk) 14:29, 24 April 2010 (UTC)[reply]
Thanks. "Where the fraction is 1", sorry, I don't understand. Logarithmic scale: thanks, but that article is a bit long... maybe I can read it one section at a time... and hope that I won't have forgotten the beginning by the time I get to the end (smile).--92.116.6.112 (talk) 14:57, 24 April 2010 (UTC)[reply]
The defining feature of a logarithmic scale is that incrementing the scale by some constant amount always corresponds to multiplying the actual quantity we're measuring by the same factor. If the quantity we're measuring is x, then a log scale measures log_b(x) for some base b; b is equal to the factor x has to increase by in order for log_b(x) to increase by 1. We can choose whatever b we want. So for example using b = 10, when log_10(x) increases by 1 then x increases by a factor of 10. With decibels, incrementing log_b(x) by 10 has x increasing by a factor of 10, so incrementing by 1 corresponds to a factor of 10^(1/10), or in other words the base being used is b = 10^(1/10). The example you asked about would be incrementing log_b(x) by 10 increasing x by a factor of 10^(2/3), so the base is b = 10^(2/30). Rckrone (talk) 20:26, 24 April 2010 (UTC)[reply]
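As an illustration of the conversion Rckrone describes (a small sketch added for concreteness, not part of the original thread; the names to_scale and from_scale are just placeholders), here is how you could compute readings on a scale where +10 units means a factor of 10^(2/3), i.e. a log scale with base b = 10^(2/30) = 10^(1/15):

    import math

    def to_scale(x, fraction=2/3, step=10):
        # Reading for a positive quantity x, where adding `step` units to the
        # reading multiplies x by 10**fraction (here two-thirds of an order of
        # magnitude). Equivalent to a log with base b = 10**(fraction/step).
        return step * math.log10(x) / fraction

    def from_scale(reading, fraction=2/3, step=10):
        # Inverse: recover the quantity from a scale reading.
        return 10 ** (reading * fraction / step)

    x = 50.0
    print(to_scale(x))                        # about 25.48
    print(from_scale(to_scale(x) + 10) / x)   # about 4.64 = 10**(2/3)

Setting fraction=1 recovers the decibel-style scale, where +10 on the reading is a full order of magnitude.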

Schrodinger Eigenvalues


Hi all :)

I've just finished a self-taught course on Quantum Mechanics and in the appendix a few of the exercises are about numerically solving Schrodinger's equation in cases where we can't analytically solve for eigenfunctions. I'm trying to solve a (simplified) version of the 1-dimensional quantum harmonic oscillator equation, -ψ''(x) + x^2 ψ(x) = E ψ(x), first: it seemed more sensible to try and solve an equation for which the bound states are known already. I calculated that E takes odd positive integer values, and as far as I can tell the internet confirms this. However, solving the differential equation numerically (using a Runge-Kutta method) I find that when plotting my solution, it becomes unbounded as x tends to infinity: my book points out that this happens even if you put an exact eigenvalue for E into the differential equation solver, and asks what the solution behaviour indicates to us about the eigenvalues. Problem is, there doesn't really seem to be much mention in the book of classifying eigenvalues in any particular way, particularly with regard to numerical rather than analytic methods such as using a differential equation solver, and I'm presuming that this fact tells us the eigenvalue is of some particular class... Why does this even happen, when our solutions are meant to be bounded? Are there such things as 'unstable eigenvalues' or something like that? I've looked through the book and the Wikipedia article and neither gives me much of a clue about what we can deduce about the eigenvalues from the fact that numerical solutions are unbounded even for exact eigenvalues... Am I right in thinking it's something about instability? Or would that be more relevant to the case 'even when you get arbitrarily close to the eigenvalue, our numerical solution diverges', rather than to putting in the exact value and then solving the differential equation? Is there something deeper going on here that I'm missing, which this behaviour indicates? I'd really appreciate any help or suggestions you could give, even if it's just a book to refer to or another website link.

Many thanks, Simba31415 (talk) 14:48, 24 April 2010 (UTC)[reply]

It seems to me that the problem is simply that you need to impose the boundary condition at infinity: at infinity the wavefunction should be zero. But the second-order differential equation has two linearly independent solutions, and using numerical methods like Runge-Kutta all you can do is choose the function and its derivative at some point, say x = 0. You will then be approximating a solution that is some linear combination of the correct one that tends to zero and one that blows up exponentially at infinity. Count Iblis (talk) 22:04, 24 April 2010 (UTC)[reply]
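To see the point numerically, here is a minimal sketch (added here, not the OP's code, and assuming the dimensionless form -ψ'' + x^2 ψ = E ψ, whose ground-state eigenvalue is E = 1): even with the exact eigenvalue, a forward Runge-Kutta integration from x = 0 picks up a tiny admixture of the exp(+x^2/2)-growing partner solution through round-off and truncation error, and that admixture eventually dominates.

    def rk4_psi(E, x_max=10.0, h=1e-3):
        # Integrate y = (psi, psi') with y' = (psi', (x^2 - E) * psi) by classical RK4,
        # starting from psi(0) = 1, psi'(0) = 0 (the even solution).
        def f(x, y):
            return (y[1], (x * x - E) * y[0])
        x, y = 0.0, (1.0, 0.0)
        samples = []
        while x < x_max:
            k1 = f(x, y)
            k2 = f(x + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
            k3 = f(x + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
            k4 = f(x + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
            y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
            x += h
            samples.append((x, y[0]))
        return samples

    for x, psi in rk4_psi(E=1.0)[999::2000]:
        # psi tracks exp(-x^2/2) at first, then the growing solution takes over
        print(f"x = {x:5.2f}   psi = {psi: .3e}")

One common fix is a shooting method that integrates inwards from both ends (or stops at a matching point) and adjusts E until the pieces join smoothly, instead of integrating outwards far past the classical turning point.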
That makes sense - but then does that tell us anything about the eigenvalues? I was under the impression that you could solve the differential equation by a Frobenius series solution, which terminated if and only if E took on an odd integer value, and in that case we had a bound-state solution; but I guess we must have a second unbounded solution too! So can we actually say anything about the eigenvalues? Thanks very much for the help; unfortunately I don't have anyone to teach me this! :) Simba31415 (talk) 02:24, 25 April 2010 (UTC)[reply]
If you use the series method, then you would first look at the asymptotic behavior of the function, which will be x^p exp(±x^2/2). You then choose the minus sign in the exponent and write the full solution as x^p exp(-x^2/2) times a series expansion. So the other solution, which blows up at infinity, has been eliminated right from the start in this approach. Even then, the solution you find blows up at infinity, unless the energy satisfies a condition which terminates the series.
I don't see a relation with the nature of the eigenvalue spectrum here... Count Iblis (talk) 02:54, 25 April 2010 (UTC)[reply]
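For reference, here is the termination condition made explicit (again assuming the dimensionless equation -ψ'' + x^2 ψ = E ψ; this sketch is not from the original thread). Writing ψ(x) = h(x) exp(-x^2/2) turns the equation into Hermite's equation h'' - 2x h' + (E - 1) h = 0, and a power series h(x) = sum a_k x^k gives the recursion a_{k+2} = (2k + 1 - E) a_k / ((k + 1)(k + 2)), which terminates exactly when E is an odd integer of the right parity:

    def series_coefficients(E, n_terms=12, even=True):
        # Coefficients a_k of h(x) = sum a_k x^k for Hermite's equation
        # h'' - 2x h' + (E - 1) h = 0, using
        # a_{k+2} = (2k + 1 - E) a_k / ((k + 1)(k + 2)).
        a = [0.0] * n_terms
        a[0 if even else 1] = 1.0
        for k in range(n_terms - 2):
            a[k + 2] = (2 * k + 1 - E) * a[k] / ((k + 1) * (k + 2))
        return a

    print(series_coefficients(E=5.0))   # only a_0 and a_2 are nonzero: h is a multiple of H_2
    print(series_coefficients(E=5.1))   # never terminates, so h grows like exp(x^2)

If the series does not terminate, a_{k+2}/a_k behaves like 2/k for large k, the same ratio as the coefficients of exp(x^2), so h grows like exp(x^2) and the prefactor exp(-x^2/2) is overwhelmed: psi still diverges.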
Again, that makes perfect sense, thank you :) That's odd then; I wonder what they were referring to with 'what does the solution behavior tell you about the eigenvalues'? The previous parts of the question seem to all refer to the behavior at infinity, and the fact that the solutions always tend to infinity no matter what value we put in for E - but I can't seem to figure out what, if anything, it does tell us. Thank you so much for the help anyway, it's made a lot of things much clearer! :) Simba31415 (talk) 03:20, 25 April 2010 (UTC)[reply]