Wikipedia:Reference desk/Archives/Mathematics/2009 April 1

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 1

Nonlinear ODEs with linear symmetries

What does it mean for a nonlinear ODE to have linear symmetries? Maple tells me that the ODE

y''' + a*y' + b*cos(y) = 0

has linear symmetries. The following command

DEtools['odeadvisor'](diff(y(s),`$`(s,3))+a*diff(y(s),s)+b*cos(y(s)),y(s),'help');

produces the output

[[_3rd_order, _missing_x], [_3rd_order, _with_linear_symmetries]]


I don't know what either of these mean. Any help will be very greatly appreciated. deeptrivia (talk) 02:18, 1 April 2009 (UTC)[reply]

If y(t) is a solution, then also -y(-t) and y(t)+2kπ are solutions. There are constant solutions y=(2k+1)π, and odd solutions y(-t)=-y(t). Maybe the outputs allude to this? --pma (talk) 08:38, 1 April 2009 (UTC)[reply]
Also, if y(x) is a solution, then y(x + c) is a solution for any constant c, because x does not appear in the expression. This may be what the "missing x" means. — Emil J. 13:55, 1 April 2009 (UTC)[reply]
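Both symmetries mentioned above (the translation y(t) → y(t+c) from the missing independent variable, and the reflection y(t) → -y(-t)) can be verified numerically. A minimal sketch, assuming SciPy is available; the parameter values a = b = 1 and the initial conditions are chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 1.0  # arbitrary parameter choices (assumption, not from the thread)

def rhs(t, u):
    # u = (y, y', y''); the ODE y''' + a*y' + b*cos(y) = 0 as a first-order system
    y, yp, ypp = u
    return [yp, ypp, -a * yp - b * np.cos(y)]

u0 = [0.3, 0.1, -0.2]  # arbitrary initial state (y, y', y'') at t = 0
sol = solve_ivp(rhs, (0, 2), u0, dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(0, 1, 11)

# Translation symmetry: restarting from the state at t = 1 reproduces y(1 + t),
# because the independent variable does not appear in the equation ("missing x").
sol2 = solve_ivp(rhs, (0, 1), sol.sol(1.0), dense_output=True, rtol=1e-10, atol=1e-12)
shift_err = np.max(np.abs(sol2.sol(ts)[0] - sol.sol(1.0 + ts)[0]))

# Reflection symmetry: z(t) = -y(-t) is also a solution, since cos is even.
# Its initial state is (-y(0), y'(0), -y''(0)).
solb = solve_ivp(rhs, (0, -2), u0, dense_output=True, rtol=1e-10, atol=1e-12)  # y on [-2, 0]
solz = solve_ivp(rhs, (0, 2), [-u0[0], u0[1], -u0[2]], dense_output=True,
                 rtol=1e-10, atol=1e-12)
refl_err = np.max(np.abs(solz.sol(ts)[0] + solb.sol(-ts)[0]))
```

With the tight tolerances above, both errors come out at roundoff level, confirming that the transformed functions solve the same equation.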
Thanks a lot! Do these facts help solve the equation in any way? deeptrivia (talk) 02:31, 2 April 2009 (UTC)[reply]
Further questions: What's linear about these symmetries? Maple lists the symmetries as [_xi = 1, _eta = 0]. What does this mean? deeptrivia (talk) 02:33, 2 April 2009 (UTC)[reply]
The maps y(t)→-y(-t) and y(t)→y(t+c) are both linear maps on the space of possible solutions (whatever you decide that is: the space of smooth functions, or of three-times-continuously-differentiable functions, or whatever). Algebraist 11:35, 2 April 2009 (UTC)[reply]

Subgraph isomorphism in both directions implies isomorphism?

If I have two directed graphs annotated with vertex labels, and there exists a subgraph isomorphism from the first to the second and another from the second to the first, does this imply that the graphs are isomorphic? 88.104.220.119 (talk) 22:36, 1 April 2009 (UTC)[reply]

No, it does not. Algebraist 22:39, 1 April 2009 (UTC)[reply]
Could you give a counterexample? 88.104.220.119 (talk) 22:42, 1 April 2009 (UTC)[reply]
Yes. Let's make the (nonstandard) definition that a 'star' is a digraph consisting of a number of paths connected at a single point, which for the sake of argument are all directed outward from that point. It's clear that a star is determined up to isomorphism by the number of paths of each length (for the sake of definiteness, the length of a path is the number of edges, not the number of vertices). The length of a path can of course be any natural number or infinity. It's easy to see that non-isomorphic stars can sometimes be embedded in each other: for example, any two stars with a countable infinity of length two paths, countably many length one paths, and no paths of length >2, are mutually embeddable. That gives a countably infinite family of mutually embeddable nonisomorphic examples. Similarly, any two graphs with countably many paths, all finite, of unbounded lengths are mutually embeddable. That gives a cardinality-continuum family. The one thing I couldn't do when I thought about this last year was produce a finite family: i.e. a graph G (or rather a digraph, but last year I was thinking about graphs) which is mutually embeddable with some nonisomorphic graph, but such that there are only finitely many isomorphism classes of graphs mutually embeddable with G. Algebraist 22:55, 1 April 2009 (UTC)[reply]
Right, interesting...I'd actually not considered infinite graphs at all, but your example makes sense. I'd tried proving it for finite graphs, but my maths is a bit too rusty :( 88.104.220.119 (talk) 23:17, 1 April 2009 (UTC)[reply]
For finite graphs, the question is totally uninteresting. If G embeds in H, then G has at most as many vertices as H, and at most as many edges. But similarly H has at most as many vertices and edges as G. So G and H both have the same number of vertices and edges, and so each embedding is in fact an isomorphism. This is the usual situation for 'does both-ways embeddability imply isomorphism?' questions: the finite case is true by a trivial counting argument, and the infinite case is false, but not quite obviously so. This is the situation for graphs, groups, linear orders, and topological spaces, for example. Three cases where the result does hold are pure sets (this is the Cantor–Bernstein–Schroeder theorem), vector spaces over a fixed field (which reduces to the pure sets case by a dimension argument), and measurable spaces (the Schroeder–Bernstein theorem for measurable spaces).
Sometimes the result can be recovered if we make additional assumptions. For example, if in the case of linear orders we assume that the image of one embedding is an upper set and the image of the other is a lower set then we can mimic a standard proof of Cantor–Bernstein–Schroeder to obtain the result. I haven't thought yet about whether the case for graphs can be weakened in this fashion to get a true theorem. Algebraist 23:27, 1 April 2009 (UTC)[reply]
Apologies for being slow-witted and boring. I did note that if you have a subgraph isomorphism f from G to H, and g from H to G, then in the finite case trivially both must be bijections between vertices. f by definition satisfies the property that if (u, v) is an edge of G then (f(u), f(v)) is an edge of H. What I got stuck on was trying to show that (f(u), f(v)) being an edge of H implies that (u, v) is an edge of G, which I believe is required to show f is a graph isomorphism(?). 88.104.220.119 (talk) 23:41, 1 April 2009 (UTC)[reply]
Like I said, count edges. Since f is a subgraph isomorphism, G has at most as many edges as H. Since g is also, H has at most as many edges as G. Thus they have the same number of edges. We have that if (u, v) is an edge of G then (f(u), f(v)) is an edge of H (and these edges (f(u), f(v)) are distinct, since f is injective on vertices), so the number of edges of the form (f(u), f(v)) is equal to the number of edges of G. Since that's the same as the number of edges of H, H has no edges left over and we're done. Algebraist 23:58, 1 April 2009 (UTC)[reply]
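The counting argument can be illustrated by brute force on small digraphs. A sketch (the 3-cycle example, the helper names, and the exhaustive search are my own illustration, not from the thread):

```python
from itertools import permutations

def embeddings(G_nodes, G_edges, H_nodes, H_edges):
    """All injective vertex maps f with (u, v) in G_edges => (f(u), f(v)) in H_edges."""
    found = []
    for image in permutations(H_nodes, len(G_nodes)):
        f = dict(zip(G_nodes, image))
        if all((f[u], f[v]) in H_edges for (u, v) in G_edges):
            found.append(f)
    return found

def is_isomorphism(f, G_edges, H_edges):
    # f is an isomorphism iff the image of E(G) is exactly E(H)
    return {(f[u], f[v]) for (u, v) in G_edges} == H_edges

# Two directed 3-cycles under different labellings
G_nodes, G_edges = [0, 1, 2], {(0, 1), (1, 2), (2, 0)}
H_nodes, H_edges = ['a', 'b', 'c'], {('a', 'c'), ('c', 'b'), ('b', 'a')}

fs = embeddings(G_nodes, G_edges, H_nodes, H_edges)
gs = embeddings(H_nodes, H_edges, G_nodes, G_edges)

# Embeddings exist both ways, and (finiteness + edge counting) every one
# of them is in fact an isomorphism.
assert fs and gs
assert all(is_isomorphism(f, G_edges, H_edges) for f in fs)
```

For these two graphs the search finds the three rotations of the cycle in each direction, and each is a full isomorphism, exactly as the counting argument predicts.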
Got it, that makes sense. Thanks for taking the time to spell it out. — Matt Crypto 00:10, 2 April 2009 (UTC)[reply]

White noise

Is it possible to prove, mathematically or logically, that adding two white noise signals of equal power will result in a signal with a flat spectrum but greater level than either component signal: exactly √2 times the amplitude, which is a factor of 2 in power, or 3 dB, depending on the scale used? I can show it approximately with a spreadsheet, but I'd like a formal proof. 4.242.232.11 (talk) 23:57, 1 April 2009 (UTC)[reply]

Maybe it helps that the Fourier transform is a linear transformation. 207.241.239.70 (talk) 01:46, 2 April 2009 (UTC)[reply]
(ec) Thanks, 207. This is 4.242... (original poster) checking in. I think I've found my answer, or an explanation that's formal enough for my purposes. "White noise" as I used the term above has a rectangular probability distribution (RPDF), i.e. every value in the range has an equal probability of appearing in the signal. However, when two independent RPDF signals are added together as in my example, the result is a signal with a triangular probability density function (TPDF), since the PDF of a sum of independent variables is the convolution of their PDFs. For an RPDF signal with a max of +1 and a min of -1, the RMS value is 1/√3 ≈ 0.577, because the mean square is ∫ from -1 to 1 of x²·(1/2) dx = 1/3. For a TPDF signal with a max of +2 and a min of -2 (the expected result of adding two such RPDF signals), the variances of the independent components add, so the mean square is 2/3 and the RMS value is √(2/3) ≈ 0.816, which is exactly √2 times the RMS of one component, i.e. 3 dB more power. 4.242.238.145 (talk) 02:25, 2 April 2009 (UTC)[reply]
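These RMS figures are easy to check empirically. A minimal sketch, assuming NumPy; the sample size and seed are arbitrary, and theory gives 1/√3 ≈ 0.577 for one uniform signal and √(2/3) ≈ 0.816 for the triangular sum, a 3 dB power increase:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(-1, 1, n)  # RPDF noise on [-1, +1]
y = rng.uniform(-1, 1, n)  # an independent RPDF signal of equal power
s = x + y                  # TPDF noise on [-2, +2]

rms_x = np.sqrt(np.mean(x**2))          # theory: 1/sqrt(3) ~ 0.577
rms_s = np.sqrt(np.mean(s**2))          # theory: sqrt(2/3) ~ 0.816
db_gain = 20 * np.log10(rms_s / rms_x)  # theory: 10*log10(2) ~ 3.01 dB
```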
I am not sure about that. You are talking about a power spectrum, not a probability density function. There is no random variable here. There are some nice analogies between the two representations, but be careful not to push them beyond what can be justified. Baccyak4H (Yak!) 03:40, 2 April 2009 (UTC)[reply]
The power at each frequency and each time is a random variable, and they are all independent, so adding two lots of white noise together should give white noise with a total power equal to the sum of the original powers (the power couldn't be anything else, by conservation of energy). --Tango (talk) 21:32, 2 April 2009 (UTC)[reply]
Actually that is correct, I was thinking along another vein entirely. Baccyak4H (Yak!) 03:21, 3 April 2009 (UTC)[reply]

As a radio engineer I observe that if the two noise sources are uncorrelated, their powers add together, giving twice the power, or 3 dB relative to a single source. But if the two sources were waves identical in amplitude and timing, then the amplitude of each sample is doubled, which gives a fourfold increase in power, or 6 dB. Incidentally, here is a warning about testing an expensive communications satellite using multiple signals: make sure the signals are not correlated, or risk damaging the satellite by excessive power. Cuddlyable3 (talk) 19:52, 3 April 2009 (UTC)[reply]
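The 3 dB (uncorrelated) versus 6 dB (identical) distinction is also easy to demonstrate numerically. A sketch assuming NumPy, with unit-power Gaussian sources chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(0, 1, n)  # one noise source
y = rng.normal(0, 1, n)  # an uncorrelated source of equal power

def power(s):
    return np.mean(s**2)

# Uncorrelated sources: powers add, so the gain is 10*log10(2) ~ 3.01 dB.
uncorr_db = 10 * np.log10(power(x + y) / power(x))

# Identical sources: amplitudes add, so power quadruples, 10*log10(4) ~ 6.02 dB.
corr_db = 10 * np.log10(power(x + x) / power(x))
```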