Talk:Fourier transform/Archive 5
This is an archive of past discussions about Fourier transform. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Some "META-TALK"...
- ... concerning practices on this talk page as a whole (rather than any single section of the t.p., or any single section together with its tree of subordinate sections).
--JerzyA (talk) 23:45, 13 September 2019 (UTC)
Retro-active categorization of old or continuing discussions
So many discussion sections, so few of them subordinate to another. Apparently (when last I noticed) not all pages get automated archiving, tho hopefully those pages are protected against WP:V for the sake of a reliable archive. (Hmm, is there an established procedure for "reactivating" an archived page or section whose relevance unanticipably revives beyond some threshold (... and annotating the fact in the archive)...?) My plan is to identify similar and/or related sections, and move them, within this talk page, into appropriately named sections (note this requires preserving uniqueness within the talk page, since uniqueness within a subtree and uniqueness within a nesting depth are (jointly and severally) inadequate for preserving original addressability). Hmm....
JerzyA (talk) 23:45, 13 September 2019 (UTC)
Relocated top-level talk sections of this talk page
Issues about adding, removing, or modifying this article's content
Currently resolved wiki-technical matters
Citation issue
FYI: I'm seeing a red warning message: Harv error: link to #CITEREFHewittRoss1971 doesn't point to any citation.
I don't have time to fix it now, but I thought I'd post a notice. --Noleander (talk) 21:37, 24 April 2012 (UTC)
- Done Fixed: the in-text citation used the year 1971, but the reference itself at the bottom of the page was 1970. I changed the in-text reference to 1970, which seems to be the correct year. This was a book in a multi-volume set, so I checked that the reference at the bottom is the one actually being referenced in the text, and it is (since chapter 7 is in volume 2). Good spot! This has been wrong ever since the citation was added (in-text citation added, reference added). Quietbritishjim (talk) 12:49, 25 April 2012 (UTC)
- Tnx, QBJ, for yr diligence, & for being neither LBJ nor LBJ. (Any thots on LDT?)
--JerzyA (talk) 21:22, 13 September 2019 (UTC)
Overemphasis on time as a variable
The article begins:
- "expresses a mathematical function of time as a function of frequency, known as its frequency spectrum" [italics mine]
Throughout, time is emphasized, although in fact the transform applies to a function of any variable whatsoever. Most noticeably, the Fourier transform applies to functions of distance, and allows expansions in terms of components characterized by wavevector.
The article does contain a section on space, but this minor consideration does not convey the generality of the method, which should be made apparent at the beginning. Brews ohare (talk) 15:15, 26 April 2012 (UTC)
A very general discussion is found here in terms of distributions and test functions. Brews ohare (talk) 15:47, 26 April 2012 (UTC)
- It is not an absolute requirement that an article must start in the maximum possible generality. In fact, there are generally good reasons for not doing this, and I believe this is the case here. Sławomir Biały (talk) 16:18, 26 April 2012 (UTC)
- It is well known that different minds react differently to material, with some responding best to development from the particular to the general, and others the reverse. However, it is not desirable to allow the initial impression that an introductory example is the entire subject, and that impression is easily avoided by a clear statement at the outset. Brews ohare (talk) 15:39, 18 May 2012 (UTC)
- Agree with Brews. This should be first defined in the most general way (that can be any variable). After that, one can tell something like this: "For example, if defined as a function of time ..." and so on (and mostly keep the current text in introduction as not to cause anyone's objections). Moreover, it tells "Fourier's theorem guarantees that this can always be done". It would be better to tell: "Fourier's theorem guarantees that this can always be done for periodic functions". My very best wishes (talk) 01:38, 20 May 2012 (UTC)
- So, per you, the Fourier transform should be defined in the lead as follows: "The Fourier transform is the decomposition of a tempered distribution on a locally compact group as an integral over the spectrum of the Hecke algebra of the group." Sławomir Biały (talk) 12:17, 20 May 2012 (UTC)
- No, quite the opposite. It would be much easier for me just to fix the text instead of discussing reductio ad ridiculum, but I suggest that Brews should do it, just to see whether there are any problems with his editing, or whether this is something else. My very best wishes (talk) 13:13, 20 May 2012 (UTC)
- Well, I disagree quite strongly with the assertion that the Fourier transform "should be first defined in the most general way". I don't have a problem mentioning in the lead of the article the extension to Euclidean spaces, or indeed to other locally compact groups, nor indeed including an entire paragraph about that. Sławomir Biały (talk) 13:24, 20 May 2012 (UTC)
- And I do not even see any reason to mention time so many times in introduction. In fact, the introduction could be completely rewritten for brevity and clarity.My very best wishes (talk) 02:00, 20 May 2012 (UTC)
- The lead is written to be read by someone with no prior background in the subject (WP:LEAD, WP:MTAA). It's true that it could be shortened substantially, thereby rendering it useless to such an individual. Sławomir Biały (talk) 12:22, 20 May 2012 (UTC)
- I am extremely surprised that such an obvious matter (the Fourier_transform is not about time) becomes a matter of discussion. My very best wishes (talk) 13:13, 20 May 2012 (UTC)
- To be sure. But the issue that I bring up is not about whether the Fourier transform is about time, but how to explain it to a lay person. Indeed, this is not an easy question! Sławomir Biały (talk) 13:25, 20 May 2012 (UTC)
- An explanation is most likely to be understood when it starts with the familiar and particular and gently proceeds to the alien and abstract. For this reason it is good (when possible) to assume that time and frequency are the independent variables (so not for the multidimensional cases). Perhaps a second paragraph in the lede could say that the transform can be between the domains of any two reciprocal variables, but that for simplicity the article will assume that these are time and frequency - as is commonly the case. --catslash (talk) 15:16, 20 May 2012 (UTC)
Catslash: Why not put your disclaimer "the transform can be between the domains of any two reciprocal variables, but that for simplicity the article will assume that these are time and frequency - as is commonly the case" at the outset, and then proceed. That would satisfy me. Brews ohare (talk) 16:18, 20 May 2012 (UTC)
- I would prefer the case of n dimensions to be handled in a separate paragraph after the first paragraph. (I'm generally against disclaimers such as the one you suggest.) This paragraph can also mention the case of locally compact groups. Sławomir Biały (talk) 19:42, 20 May 2012 (UTC)
- But please remember that it must be understandable for a lay person like myself. My very best wishes (talk) 02:09, 21 May 2012 (UTC)
- I'm confused how n-dimensions entered this discussion. The point under discussion is that time and frequency are not the universe of applicability. A statement to this effect is not a disclaimer: it is simply pointing out that the example to follow is selected for the sake of keeping the discussion simple. The transform can be between the domains of any two reciprocal variables, but for simplicity the article will assume initially that these are time and frequency - as is commonly the case Brews ohare (talk) 03:51, 21 May 2012 (UTC)
- First, disclaimer is your word, not mine. Second, if you're not objecting to focusing on the one-dimensional case in the lead, then I must admit that I'm quite baffled by your objections. I thought we were talking about using space as a variable instead? Sławomir Biały (talk) 12:26, 21 May 2012 (UTC)
- Hi Sławomir. It really hadn't occurred to me to stress multidimensional Fourier transforms, although maybe that should be part of the thinking here. I am unsure just how to express the generality of the Fourier integral. But at the moment I am not alone in feeling the emphasis on time and frequency appears overstated. Some explanation at the outset that the use of time and frequency is only as a very common example would fix this impression. Can you propose some wording that you would accept? Brews ohare (talk) 15:03, 21 May 2012 (UTC)
Fourier's theorem
The article refers to Fourier's theorem as follows:
- The Fourier transform is a mathematical operation with many applications in physics and engineering that expresses a mathematical function of time as a function of frequency, known as its frequency spectrum; Fourier's theorem guarantees that this can always be done.
This wording is incorrect. Fourier's theorem applies to periodic functions and Fourier series, and what is needed here is a theorem regarding arbitrary functions, not restricted to periodic functions. The more general theorem came later with Dirichlet and others. Brews ohare (talk) 17:26, 20 May 2012 (UTC)
- Dean G. Duffy (2001). Green's Functions With Applications. CRC Press. p. 391. ISBN 1584881100.
The Fourier Transform is the natural extension of Fourier series to a function f(t) of infinite period.
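A sketch of the standard heuristic behind Duffy's remark (not taken from the cited source; purely formal, in the ordinary-frequency convention used by the article): write the Fourier series of a function of period T, set $\xi_n = n/T$ and $\Delta\xi = 1/T$, and let T grow, so the Riemann sum becomes an integral:
$$
f(t)=\sum_{n=-\infty}^{\infty}\left(\frac{1}{T}\int_{-T/2}^{T/2} f(\tau)\,e^{-2\pi i n\tau/T}\,d\tau\right)e^{2\pi i n t/T}
=\sum_{n=-\infty}^{\infty}\left(\int_{-T/2}^{T/2} f(\tau)\,e^{-2\pi i \xi_n\tau}\,d\tau\right)e^{2\pi i \xi_n t}\,\Delta\xi
\;\xrightarrow[T\to\infty]{}\;\int_{-\infty}^{\infty}\hat f(\xi)\,e^{2\pi i \xi t}\,d\xi .
$$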
- That's bothered me as well. Should the reference to Fourier's theorem be removed? Sławomir Biały (talk) 19:44, 20 May 2012 (UTC)
- We are not moving anywhere. Brews, could you please post here, on this talk page, your new version of the introduction. And please do not be shy, rewrite everything that needs to be rewritten. Thank you, My very best wishes (talk) 19:52, 20 May 2012 (UTC)
- According to Sean A. Fulop (2011). Speech Spectrum Analysis. Springer. p. 22. ISBN 3642174779. the equation:
- was derived by Fourier in 1811 and is called the Fourier transform theorem. Apparently its examination and various conditions upon the functions involved were pursued into the 20th century. I guess that is what the article should refer to. Brews ohare (talk) 20:42, 20 May 2012 (UTC)
- The source you found identifies this as the "Fourier integral theorem". I will change the lead to reflect that. Sławomir Biały (talk) 21:27, 20 May 2012 (UTC)
Sorry I don't have time to weigh in on this properly right now, but I just want to point out that the theorem in Brews ohare's comment is the Fourier inversion theorem (an article that desperately needs clarifying). I've never heard it called anything other than the Fourier inversion theorem before, which might be why you're having trouble finding references. The above statement is not correct: one of the exponentials should have a minus sign in the exponent (it doesn't matter which), otherwise the left hand side would be s(-t) instead of s(t). I'd be surprised if Fourier proved it in 1811 rigorously by today's standards; instead it seems more likely that he just gave a heuristic argument, but this is just my guess. Even if I'm wrong about that, he certainly wouldn't have proved the most general case (i.e. with the weakest assumptions on the function s); he would almost certainly have assumed that s ∈ L1 (i.e. it's absolutely integrable) and probably that it's also infinitely differentiable with compact support (the simplest case). Quietbritishjim (talk) 23:49, 20 May 2012 (UTC)
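For reference, the correctly signed statement described above, under the simplest assumptions mentioned (e.g. s continuous with s and its transform both absolutely integrable), is:
$$
s(t)=\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} s(\tau)\,e^{-2\pi i \xi\tau}\,d\tau\right)e^{2\pi i \xi t}\,d\xi .
$$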
- Quietbritishjim: Your comments strike me as accurate. I added a few links about this in the text. However, if these matters deserve more attention, perhaps a subsection about these matters is better? Brews ohare (talk) 03:22, 21 May 2012 (UTC)
The label "Fourier's Theorem" is used even though Fourier did not prove the theorem. This is a little unusual in maths, but it is a fact. Whittaker--Watson, for example, explain it this way: "This result is known as Fourier's Integral Theorem", they are referring to Fourier inversion for the Fourier integral transform. Earlier in their chapter, when they are talking about Fourier series, they call "A theorem, of the type described in §9.1, concerning the expansibility of a function of a real variable into a trigonometric series is usually described as {\it Fourier's Theorem}." (p. 163.) Later they even have a subsection titled "The Dirichlet--Bonnet proof of Fourier's Theorem." They carefully give credit to Fourier for proving many special cases: "Fourier ... shewed that, in a large number of particular cases, a Fourier series {\it actually converged to the sum f(x).}" — Preceding unsigned comment added by 181.225.234.8 (talk) 18:14, 27 November 2014 (UTC)
Fourier integral theorem as an historical note
I moved the material on the Fourier integral theorem to a separate section Fourier transform#Historical note. It seems pertinent to me to state this theorem explicitly and to note its easy derivation using modern analysis. The links to other WP articles also helps the reader. Brews ohare (talk) 15:32, 21 May 2012 (UTC)
- The article already does state the theorem in the Definition section, and includes the attribution to Fourier (with I think a more authoritative source). Sławomir Biały (talk) 15:34, 21 May 2012 (UTC)
- Sławomir Biały: I see that you are quite determined to avoid the introduction of Fourier's integral theorem in the double integral form that naturally leads to the Dirac delta function. This may be a matter of aesthetics? To the uninitiated the use of the Dirac delta function and its representations is a very straightforward approach to much of the article, and the needed mathematical rigor that completely obscures the meaning can be relegated to the specialist articles on distributions.
- In any event here are a few observations:
- Reference to Théorie analytique de la chaleur is not very helpful without a page reference where the theorem can be found. Even if that is done, Fourier's notation may defeat an attempt to connect his work to the article. The reference I provided may be less authoritative, but it is way more understandable.
- The article now refers to what is commonly called a Fourier transform pair as the "Fourier integral theorem". Although the pair obviously are connected by the theorem, they are not themselves the theorem. The common usage is the double integral form that results when one of the two is substituted into the formula for the other. If you insist upon mentioning only the single integral formulas, the Fourier integral theorem consists of stating in words the result of the substitution of one into the other. See, e.g. this.
- Through some error, (Titchmarsh 1948, p. 1) is not provided in the citations.
- I'd suggest that some changes in presentation would be a service to the community. However, as we seem to be at odds, I won't pursue these matters. Brews ohare (talk) 16:23, 21 May 2012 (UTC)
- The definition section says that under appropriate conditions, the function can be recovered from its Fourier transform, and then gives the integral formula for the inverse transform. Later the article discusses some sufficient conditions under which the theorem is true. Sławomir Biały (talk) 16:46, 21 May 2012 (UTC)
- Sławomir Biały: The issue here is clarity of exposition, not whether the thing is said somehow, somewhere. Maybe of interest: The presentation of Myint-U & Debnath suggests the exponential form of the Fourier theorem originates with Cauchy. Brews ohare (talk) 17:06, 21 May 2012 (UTC)
- Well, it does say exactly the same theorem in the definition section as you would have it say. Actually what's there now is more technically correct than your version, which in no way alludes to there being any conditions on the function whatsoever. Sławomir Biały (talk) 17:59, 21 May 2012 (UTC)
- You have not provided a page number to refer to Théorie analytique de la chaleur, but it appears that the formulation of Fourier is:
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\alpha \, f(\alpha) \int_{-\infty}^{\infty} dp \, \cos(px - p\alpha), $$
- which is of the double integral form, but not of exponential form. Brews ohare (talk) 17:44, 21 May 2012 (UTC)
- Excellent. This seems to be progress. Sławomir Biały (talk) 17:59, 21 May 2012 (UTC)
- See, for example, the English translation of Fourier. Brews ohare (talk) 18:10, 21 May 2012 (UTC)
Proposal
How about an historical section along these lines, with maybe more on the modern developments, and with the references properly formatted, of course?
- Historical background
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form (see this):
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\alpha \, f(\alpha) \int_{-\infty}^{\infty} dp \, \cos(px - p\alpha), $$
which is tantamount to the introduction of the δ-function (see this):
$$ \delta(x - \alpha) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dp \, \cos(px - p\alpha). $$
Later, Augustin Cauchy expressed the theorem using exponentials: Myint-U & Debnath; Debnath & Bhatta
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ipx}\left(\int_{-\infty}^{\infty} e^{-ip\alpha} f(\alpha)\, d\alpha\right) dp. $$
Cauchy pointed out that in some circumstances the order of integration in this result was significant. Grattan-Guinness; Des intégrales doubles qui se présentent sous une forme indéterminée
Full justification of the exponential form and the various limitations upon the function f necessary for its application extended over more than a century, involving such mathematicians as Dirichlet, Plancherel and Wiener (some background, more background), and leading eventually to the theory of mathematical distributions and, in particular, the formal development of the Dirac delta function.
As justified using the theory of distributions, the Cauchy equation can be rearranged like Fourier's original formulation to expose the δ-function as:
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\alpha \, f(\alpha) \int_{-\infty}^{\infty} dp \, e^{ip(x-\alpha)}, $$
where the δ-function is expressed as:
$$ \delta(x-\alpha) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dp \, e^{ip(x-\alpha)}. $$
Brews ohare (talk) 18:49, 21 May 2012 (UTC)
- Seems quite good, although really it's the theory of tempered distributions that is important in the development of the Fourier integral. More needs to be fleshed out in this later development, if you're up to it. Sławomir Biały (talk) 19:35, 21 May 2012 (UTC)
- Here is a quote:
- To this theory [the theory of Hilbert transforms] and even more to the developments resulting from it - it is of basic importance that one was able to generalize the Fourier integral, beginning with Plancherel's pathbreaking L2 theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945)...
- and here is another:
- "The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) in order to insure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
- Query: do these quotes seem to you to cover the subject adequately for this historical discussion, or what else would you suggest? Brews ohare (talk) 20:05, 21 May 2012 (UTC)
- The historical discussion looks good, but the last paragraph seems questionable or out of place. This is roughly how Cauchy proved the formula. (I have read Cauchy's account myself many moons ago.) The issue wasn't a lack of a notion of Delta function—Cauchy even had such a gadget—but a lack of appropriate function space on which the Fourier transform was defined. That is, it's the f in the formula that mathematicians subsequently worked so hard to clarify, not the δ. The emphasis on the Dirac delta seems misleading/wrong.
- Also a general remark is that this content seems like it might ultimately be more suited to the Fourier inversion theorem article rather than here. At present, this article lacks any kind of history section, so it has to start with something I suppose. Sławomir Biały (talk) 18:50, 22 May 2012 (UTC)
- I am happy you have read Cauchy on this topic. Perhaps you can supply a source?
- The delta-function was used also by Fourier as noted in the proposed text. The point to be made is not the notion of a delta function, which is inevitable in any double-integral relation relating a function to itself, but an explication of the historical events that led to its solid formulation as a distribution, among which are elucidation of exactly the points you raise: the correct function space and the role of distributions. Perhaps you might indicate what you consider to be the benchmark events? Brews ohare (talk) 14:58, 23 May 2012 (UTC)
- Your last suggestion leaves me somewhat confused as to your recommendation. Are you saying Fourier transform needs a history section and maybe this proposal is a start that can be built upon? Brews ohare (talk) 14:58, 23 May 2012 (UTC)
- I've elected to put this historical matter in the article Delta function. Brews ohare (talk) 19:54, 23 May 2012 (UTC)
Name of the theorem
To reiterate one of the things I said in my last comment the name of this theorem is the "Fourier inversion theorem", NOT the "Fourier integral theorem". This naming convention is essentially universal: I have seen myriad references to that name over many years, but I've never heard the name "Fourier integral theorem" before this discussion. What's more, as I said before, Fourier inversion theorem is already an article (admittedly one in need of some attention). Obviously it's a critical theorem relating to the Fourier transform, so perhaps it should have a short section in this article, with one of those notes at the top like "for more information see Fourier inversion formula". The history section above seems to be exclusively about that theorem, so it belongs in that article. Quietbritishjim (talk) 21:57, 22 May 2012 (UTC)
- I'm not sure what makes you think that the naming convention is universal. Most of the classical literature in the subject uses "Fourier integral theorem" or some variant thereof to refer to the theorem, including the now referenced textbook by Titchmarsh—at one time required reading in the subject. I believe this convention remains in engineering and related areas. Google books bears this out: 34,000 hits for "Fourier integral theorem" versus 13,000 hits for "Fourier inversion theorem". Google scholar, which does not index older literature, gets about 1000 each. Sławomir Biały (talk) 22:19, 22 May 2012 (UTC)
- Sorry about that. "... engineering and related areas" This seems to be the reason, I'm a pure mathematician, and looking at the top results in Google Books it seems Fourier inversion theorem is used in pure maths and Fourier integral theorem is used in science. The Google results aren't an accurate test because a lot of the Fourier integral theorem results (even in the top 20) are just results that have Fourier, integral and theorem anywhere in the name, but I agree that the name is in use. Quietbritishjim (talk) 23:23, 22 May 2012 (UTC)
- Your Google results seem to differ from mine. The links I gave search (for me) for the exact phrase "Fourier integral theorem" and the exact phrase "Fourier inversion theorem". All of the top twenty links seem to be about the theorem we're discussing. Sławomir Biały (talk) 00:02, 23 May 2012 (UTC)
- The term Fourier integral theorem is used in many authoritative works. A Google count is a poor way to find accepted usage because many, maybe even most, authoritative works are not searchable, and so do not appear in a Google search. A compromise position is that either name can be used, and both are widely understood. Brews ohare (talk) 17:48, 23 May 2012 (UTC)
ξ vs ν ?
Since it bothers me, maybe it bothers others as well. Instead of:
I would prefer:
Examples:
--Bob K (talk) 12:48, 7 June 2012 (UTC)
- There are of course various conventions on where the 2π goes and what letters to use. The one here is that found in most harmonic analysis texts such as Stein and Weiss, and Grafakos. This is also the convention used, e.g., by Terence Tao [1] in his entry to The Princeton Companion to Mathematics. Your preferred version seems to be more common in the dispersive PDE community (e.g., Hormander). In any event, I don't really know if there is any good reason for preferring one convention over the other, besides individual familiarity and tastes. Sławomir Biały (talk) 15:56, 11 June 2012 (UTC)
Thanks. I don't have a familiarity preference, because I use for hertz, myself. What it comes down to for me is that, for the sake of beginners, I don't like to make anything look any more intimidating than absolutely necessary. And I think this
- and
are a little friendlier looking than this
- and .
And indeed, as of 9/24/2008, the Hz convention was represented here by .
- --Bob K (talk) 20:40, 12 June 2012 (UTC)
- Most of the references use ξ and not ν. This is consistent with the book of Stein and Weiss (which is a canonical textbook in modern Fourier analysis) and Hormander (a canonical textbook in dispersive PDE). Sławomir Biały (talk) 22:19, 19 June 2012 (UTC)
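Written out, the two notational variants being compared are (both in the unitary, ordinary-frequency normalization; only the symbol used for the frequency variable differs):
$$
\hat f(\xi)=\int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x\xi}\,dx
\qquad\text{versus}\qquad
\hat f(\nu)=\int_{-\infty}^{\infty} f(t)\,e^{-2\pi i \nu t}\,dt .
$$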
Confused section on LCH spaces
That short section is confused; it simply contains some trivial statements. The context for the Gelfand-Pontryagin-Fourier transform is unitary representations of topological groups. No group structure, no transform.
The Gelfand transform is the same as the Pontryagin transform in this context: given a locally compact topological group G, one forms the convolution algebra L^1(G). This algebra comes with a natural involution, given by taking inverses of group elements. Its enveloping C*-algebra is the group C*-algebra C*(G). The one-dimensional representations of C*(G) can be identified with the Pontryagin dual G^, i.e. one-dimensional representations of G. The Gelfand transform is then an isomorphism from C*(G) to C_0(G^).
In the special case G = the real line R, the Gelfand transform is exactly the Fourier transform, but extended by continuity to all of C*(R). It says C*(R) is isomorphic to C_0(R), not the vacuous statement currently in the section. Mct mht (talk) 19:13, 28 June 2012 (UTC)
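A sketch of the chain described above (the notation here is mine, not taken from the section under discussion):
$$
L^1(G)\;\hookrightarrow\;C^*(G)\;\xrightarrow{\;\Gamma\;}\;C_0(\widehat G),
\qquad
\Gamma(f)(\chi)=\int_G f(g)\,\overline{\chi(g)}\,dg\quad\text{for }f\in L^1(G),\ \chi\in\widehat G .
$$
For $G=\mathbb{R}$ one has $\widehat{\mathbb{R}}\cong\mathbb{R}$ via $\chi_\xi(x)=e^{2\pi i\xi x}$; on $L^1(\mathbb{R})$ the map $\Gamma$ is the ordinary Fourier transform, which extends to the isomorphism $C^*(\mathbb{R})\cong C_0(\mathbb{R})$.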
Legibility of example images in Introduction
I have some comments that may help improve legibility of the images in the introduction (http://wiki.riteme.site/wiki/Fourier_transform#Example).
- Make the graphs shorter (by a factor of 2). The plots just look like a bunch of lines as they are and take up too much vertical space.
- Move the vertical text to the captions under the figures. Vertical text is hard to read.
- Label the figures (Fig. 1, Fig. 2, etc.) and center the caption text. The figure numbers should be referred to in the text above to assist the reader.
- In the figure "The Fourier transform of f(t)" increase the font size of the boxed text. It is too hard to read.
Putnam.lance (talk) 08:58, 4 November 2012 (UTC)
Until a very recent edit these graphs were so small that the text you talk about was just incoherent squiggles. I'm not sure that that was such a bad thing, since the captions (outside of the images) and the text that refers to them seems to be descriptive enough. So maybe we should just shrink them back, assuming no one can be bothered to make versions with the text removed entirely. I agree with your first and third points though. Quietbritishjim (talk) 11:55, 4 November 2012 (UTC)
- I have no objection to the Oct 26 version. The "thumbnails" are not meant to be legible. They are links to the large size versions.
- --Bob K (talk) 13:54, 4 November 2012 (UTC)
Suspect wrong equations in section 'Square-integrable functions'
The equations in the table below the section 'Square-integrable functions' are supposed to be taken from the references: "The Fourier transforms in this table may be found in (Campbell & Foster 1948), (Erdélyi 1954), or the appendix of (Kammler 2000)." I would suggest that someone who has these double-check the equations (equations 201 to 204).
Here's the suspected problem. There are three columns in the table: unitary ordinary frequency, unitary angular frequency, and non-unitary angular frequency. I think the Fourier transforms in the last two columns should not have π in their denominators, while the first-column results should, because there is no 2π in the exponent of the last two types of Fourier transforms, so doing the integral there is no way to generate a π coefficient in the denominator. I did the calculation for the rectangular function, which showed the equations are wrong. I also corrected the equations on the page 'Rectangular function' under the section 'Fourier transform of rectangular function'. But I'm not a math student and don't have enough resources, so I'm not confident enough to make changes here. Anyone familiar with Fourier transforms, please take a look at these equations.
--Allenleeshining (talk) 05:29, 16 November 2012 (UTC)
- You don't say which transform you're talking about but it sounds like you're talking about the Gaussian (at least mostly that). Maybe this fact will help:
- so that might explain where the pi comes from. Quietbritishjim (talk) 21:26, 30 December 2012 (UTC)
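A sketch of where the π enters for the Gaussian (assuming the standard completed-square computation, with α > 0):
$$
\int_{-\infty}^{\infty} e^{-x^2}\,dx=\sqrt{\pi},
\qquad
\int_{-\infty}^{\infty} e^{-\alpha t^2}\,e^{-i\omega t}\,dt=\sqrt{\frac{\pi}{\alpha}}\;e^{-\omega^2/4\alpha},
$$
so a √π (and hence a π after normalization) appears in the angular-frequency columns, while in the ordinary-frequency convention the self-dual case $\int e^{-\pi t^2}e^{-2\pi i\xi t}\,dt = e^{-\pi\xi^2}$ has no prefactor at all.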
- Our normalization of the sinc function differs from the one in those references, and this explains the discrepancy. Sławomir Biały (talk) 22:04, 30 December 2012 (UTC)
I'm sorry I didn't make it clear. I was talking about equations 201 to 204, about the π in the denominator. Sławomir Biały, could you explain more about how the normalization is the problem? Thanks. Allenleeshining (talk) 17:30, 4 January 2013 (UTC)
- Our convention for the sinc is
$$ \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}. $$
- This is mentioned in the "Comment" column of the table in the article. For more information, please consult the article sinc function. The convention in the links you gave ([2], [3]) is:
$$ \operatorname{sinc}(x) = \frac{\sin x}{x}. $$
- Obviously there's going to be an extra 2π to account for if you use our conventions. Sławomir Biały (talk) 18:52, 4 January 2013 (UTC)
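A worked check with the rectangular function (a sketch, with rect(t) = 1 for |t| < 1/2 and 0 otherwise, and sinc normalized as in the article):
$$
\int_{-1/2}^{1/2} e^{-2\pi i\xi t}\,dt=\frac{\sin(\pi\xi)}{\pi\xi}=\operatorname{sinc}(\xi),
\qquad
\frac{1}{\sqrt{2\pi}}\int_{-1/2}^{1/2} e^{-i\omega t}\,dt=\frac{1}{\sqrt{2\pi}}\,\frac{\sin(\omega/2)}{\omega/2}=\frac{1}{\sqrt{2\pi}}\operatorname{sinc}\!\left(\frac{\omega}{2\pi}\right),
$$
so with the normalized sinc a 2π necessarily appears inside the argument of the angular-frequency entries.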
Rewrite of Fourier inversion theorem article - request for comments
Since people who have this article on their watchlist are likely to also be interested in the Fourier inversion theorem article, this is just a note to let everyone here know that I'm proposing a rewrite of that article. Please leave any comments you have over on that talk page so that they're kept together. Thanks! Quietbritishjim (talk) 01:26, 31 December 2012 (UTC)
Plancherel theorem equivalent to Parseval's theorem?
According to the article, Plancherel theorem and Parseval's theorem are equivalent. So how do you go from Plancherel theorem to Parseval's theorem?
Maybe there should be a proof of some kind in the article that they are in fact equivalent, because to me it's not obvious. —Kri (talk) 11:44, 3 February 2013 (UTC)
- Both assert that the Fourier transform is unitary, and it's well known that these are equivalent characterizations of unitarity. To get from one to the other, apply Plancherel's theorem to with t a complex parameter. Sławomir Biały (talk) 13:41, 3 February 2013 (UTC)
- If I substitute for h in , I eventually reach the expression
- How do I get from there to Parseval's? —Kri (talk) 12:56, 4 February 2013 (UTC)
- Differentiate with respect to t. (Alternatively, take .) Sławomir Biały (talk) 13:05, 4 February 2013 (UTC)
- If I differentiate with respect to t (which is the same in this case as just taking t = 1) I just get
- which is not Parseval's. The other alternative seems very complicated. Could you go ahead and show me what you mean by taking ? —Kri (talk) 13:51, 4 February 2013 (UTC)
- You haven't taken the derivative correctly. See complex derivative for how to take the derivative with respect to a complex parameter. For the alternative suggestion, collect t and to obtain
- This is true for all t, and in particular it is true when :
- or
- as required. Sławomir Biały (talk) 14:06, 4 February 2013 (UTC)
- Ah, okay. I forgot that the derivative becomes a bit more complicated when the expression contains a complex conjugate involving the active parameter. This was actually a very nice proof. Thank you very much. —Kri (talk) 14:57, 4 February 2013 (UTC)
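A standard way to carry out the derivation discussed above (a polarization argument, sketched under the stated assumptions; not a reconstruction of the formulas missing from the archive): apply Plancherel's theorem, $\|h\|_2^2=\|\hat h\|_2^2$, to h = f + tg with t a complex parameter, writing $\langle f,g\rangle=\int f\,\overline g$:
$$
\|f\|^2+2\,\mathrm{Re}\!\left(\bar t\,\langle f,g\rangle\right)+|t|^2\|g\|^2
=\|\hat f\|^2+2\,\mathrm{Re}\!\left(\bar t\,\langle \hat f,\hat g\rangle\right)+|t|^2\|\hat g\|^2 .
$$
Plancherel applied to f and to g separately cancels the first and last terms on each side, leaving $\mathrm{Re}(\bar t\langle f,g\rangle)=\mathrm{Re}(\bar t\langle\hat f,\hat g\rangle)$ for every complex t; taking t = 1 and t = i gives the real and imaginary parts, hence $\langle f,g\rangle=\langle\hat f,\hat g\rangle$, i.e. $\int f\,\overline{g}=\int \hat f\,\overline{\hat g}$, which is Parseval's formula.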
There are analogies between the theory of Fourier series and that of the Fourier transform. Parseval's theorem for Fourier series has an analogue for the Fourier transform, which was proved by Plancherel as part of the Plancherel Theorem. If the periodic function $f(x)$ has a Fourier series expansion $f(x) = \sum_{n=-\infty}^{\infty} a_n e^{2\pi inx}$ then Parseval proved that $$ \int_0^1\vert f(x)\vert^2\, dx = \sum_{n=-\infty}^\infty\vert a_n\vert^2,$$ and so the $L^2$ norm of $f$ is equal to the $L^2$-norm of the sequence $a_n$. (Parseval only proved this under the assumption that the infinite series could be integrated term by term, which is an unnecessarily strong assumption. In 1896, Liapounoff, and later, but independently, Hurwitz, proved this in greater generality, so it is often called the Hurwitz--Liapounoff theorem.)
Plancherel, in 1910, proved a continuous analogue of this for the purpose of first extending the usual definition of the Fourier transform in terms of an integral to all square-integrable functions by means of passing to the limit in the $L^2$ norm, and then proved Fourier inversion for all square-integrable functions:
Provided that $f$ is square-integrable, $\int_{-A}^A f(x) e^{-2\pi i\xi x}\,dx$ converges in mean-square as $A$ goes to $\infty$ and can thus be used as the definition of $\hat f$ as an $L^2$ equivalence class of functions. Then the continuous analogue of Parseval's formula holds: $$\int_{-\infty}^\infty \vert f(x)\vert^2\, dx = \int_{-\infty}^\infty \vert \hat f(\xi)\vert^2\, d\xi.$$ From this and the lemmas used in its proof, he proved the $L^2$ form of the Fourier inversion formula: $$ f(x) = \underset{L\rightarrow\infty}{\mathrm{l.i.m.}} \int_{-L}^L \hat f(\xi)e^{2\pi ix\xi}\, d\xi.$$ For locally compact Lie groups, the measure on the unitary dual of the group that makes the analogue of Fourier inversion hold good is referred to as "Plancherel measure" in his honour. From the general perspective of Fourier analysis on a locally compact abelian group, the Parseval formula and Plancherel's continuous analogue of it are both special cases of the same general theorem.
Comments
Engineering texts and other textbooks violate accepted usage and terminology about the Parseval theorem, the Parseval formula, and the Plancherel theorem. Parseval's theorem is only about Fourier series. (And it was first proved by Liapounoff, 60 years later.) Parseval wrote two formulas, which in inner product notation are $(f,f) = (\hat f, \hat f)$ and $(f,g) = (\hat f , \hat g)$, where here the hat means the set of Fourier coefficients. Those formulas hold good for the F.T. as well, simply by re-interpreting all the symbols. So "Parseval's formula" or "Parseval's relation" are phrases which are also used for the parts of Plancherel's Theorem which Plancherel proved in 1910. Mostly, the first formula is called Plancherel's, and it is the second formula which is called Parseval's formula.
Plancherel's theorem is both those, plus the fact that the definition of the F.T. can be extended from L1 functions to L2 functions by unitarity, and then he proves that the F.T. is one-to-one on L2 equivalence classes of functions and so has an inverse, and proves Fourier Inversion for L2 functions (up to these equivalence classes). Parseval's work has nothing in it which is even remotely similar to proving that one can extend the definition of Fourier series to L2 periodic functions (for Fourier series, that is the Riesz--Fischer theorem).
Many no-name textbooks can be found which will be all over the map. Well, they hardly count. Rudin is "a reliable source", but he is way contradicted by lots of other, equally reliable sources, and decisively contradicted by much weightier sources: Whittaker--Watson, Kolmogoroff, Feller, Wiener, etc.
Why does not everyone respect this exact usage? Well, one reason is that from the higher viewpoint of Hilbert spaces and orthogonal expansions, all of this boils down to Riesz--Fischer. But from the viewpoint of concrete functions, the distinction is still valid, and ought to be maintained in this article in spite of the widespread confusion in textbooks.
Take-away: Parseval's theorem is only for Fourier series, and Plancherel's theorem is about the F.T. on L2(R). But Parseval's formula makes sense in both contexts, so occasionally (e.g., Feller, Probability Theory vol. 2) one sees Plancherel's results given Parseval's name. — Preceding unsigned comment added by 181.225.234.9 (talk) 17:51, 28 November 2014 (UTC)
- Yes, this seems like it is worthwhile to clarify. Sławomir Biały (talk) 21:15, 28 November 2014 (UTC)
Proposal: Add complex conjugation to "Tables of important Fourier transforms" under "Functional relationships"
Even though it might not be in Erdélyi (1954), the following Fourier transform should be added as a generalization of relationship 110:
Function | Fourier transform unitary, ordinary frequency | Fourier transform unitary, angular frequency | Fourier transform non-unitary, angular frequency | Remarks
---|---|---|---|---
$\overline{f(x)}$ | $\overline{\hat f(-\xi)}$ | $\overline{\hat f(-\omega)}$ | $\overline{\hat f(-\omega)}$ | Complex conjugation, generalization of 110
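A quick check of the proposed entry in the ordinary-frequency convention (a sketch; the angular-frequency cases are analogous):
$$
\int_{-\infty}^{\infty}\overline{f(x)}\,e^{-2\pi i x\xi}\,dx
=\overline{\int_{-\infty}^{\infty} f(x)\,e^{2\pi i x\xi}\,dx}
=\overline{\hat f(-\xi)} .
$$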
- I don't have a problem with adding it. It's almost certain to be in Erdélyi (or at least one of the books listed). But someone should check just to be sure. Sławomir Biały (talk) 16:55, 9 August 2013 (UTC)
Done Just checked it. It's in Erdélyi (1954) on page 117, number (2). I'll add it as 113 at the end of the list.
- Ok, good work. Sławomir Biały (talk) 23:44, 9 August 2013 (UTC)
Example appears to be incorrect
The example appears to be incorrect. The result of ƒ̂(ξ) shows maxima of around -3 and 3. The result of ƒ̂(ξ) according to WolframAlpha (which is likely to be correct) shows maxima of around -20 and 20. Source. Could someone confirm/refute this? If I'm correct then new images should be made. Jdp407 (talk) 20:31, 21 August 2013 (UTC)
- Careful. Wolfram Alpha uses conventions that differ from those in the article. (In fact, the Fourier transform convention used by Wolfram is extremely unusual: apart from normalization they have a +i where most have a −i.) Sławomir Biały (talk) 22:52, 21 August 2013 (UTC)
- Ah ok. Thanks for clearing that up! Jdp407 (talk) 17:05, 28 August 2013 (UTC)
- The peaks in [our example] are at f = ± 3 hertz. The peak at [wolframalpha] is at ω = 2πf ≈ 18.8 radians/sec. It's just the frequency units that differ.
- The definition at [wolframalpha], after clicking on "more Details", is this one from our table:
- including the "-i" convention.
- But the height of the peak is lower than our example by the factor So they are apparently using this one, also from our table:
- --Bob K (talk) 22:17, 28 August 2013 (UTC)
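For reference, the relations between the three conventions in the article's table (a sketch; the subscripts are mine, denoting ordinary frequency, unitary angular frequency, and non-unitary angular frequency):
$$
\hat f_1(\xi)=\int f(t)\,e^{-2\pi i\xi t}\,dt,\qquad
\hat f_2(\omega)=\frac{1}{\sqrt{2\pi}}\int f(t)\,e^{-i\omega t}\,dt=\frac{1}{\sqrt{2\pi}}\,\hat f_1\!\left(\frac{\omega}{2\pi}\right),\qquad
\hat f_3(\omega)=\int f(t)\,e^{-i\omega t}\,dt=\hat f_1\!\left(\frac{\omega}{2\pi}\right),
$$
so a peak at ξ = 3 Hz moves to ω = 6π ≈ 18.8 rad/s in either angular-frequency convention, and its height is smaller, by a factor of √(2π), only in the unitary one.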
Fourier transform of an integral
The Fourier transform of the derivative of f(t) is listed, but not the Fourier transform of the definite integral of f(t). I think that would be helpful as well, especially since naively putting -1 into the "n'th derivative" formula to find it would overlook a few subtleties. --AndreRD (talk) 16:23, 21 August 2014 (UTC)
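For reference, the standard result the comment alludes to, in the ordinary-frequency convention and interpreted distributionally (a sketch, not a quotation from the article's table):
$$
g(t)=\int_{-\infty}^{t} f(\tau)\,d\tau
\quad\Longrightarrow\quad
\hat g(\xi)=\frac{\hat f(\xi)}{2\pi i\xi}+\frac{1}{2}\,\hat f(0)\,\delta(\xi),
$$
which reduces to the naive "n = -1" case of the derivative rule only when $\hat f(0)=\int f=0$.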
Animated example not representative of transform
The animation in the Introduction section creates confusion between the Fourier Transform and Fourier Series. It shows a short section of the partial sum of the Fourier Series for a square wave, but then uses the notation for the transform to label a diagram that is essentially a visualization of the series coefficients. The Fourier Transform of the section of function shown is a continuous function that spans the entire real line, which can be thought of as the linear combination of the transforms of each sinusoidal component, each of which is a scaled, translated version of the sinc function. The transform would be shaped more like this (ignore units, as none are given in the example graphic): [4] The expression shown is also very general and does not add anything to the example, especially since the function shown would normally be represented as the sum of sines (or cosines, but not both) and only with odd values for n and coefficients in the form of . Adamsmith (talk) 16:19, 2 September 2014 (UTC)
- The Fourier transform of a periodic function (shown) is a linear combination of delta functions (also shown). What's wrong with this visualization? (Incidentally, I don't know what went wrong with the image at the link you gave, but as the Fourier transform of a periodic function, it is very clearly not correct.) Sławomir Biały (talk) 23:56, 2 September 2014 (UTC)
- Periodic functions do not have a Fourier transform in the ordinary sense. In the distributional sense, yes, but the article starts out by saying the Fourier transform is "an integral transform" and the F.T. in the distributional sense is not an integral transform. So, yes, this example is inappropriate and creates confusion between the Fourier series and the Fourier transform. The Fourier Series and the F.T. can indeed be unified from a higher perspective, but then the value of the F.T. at those points is a Dirac delta function, not a finite coefficient, so the example is *still* wrong even in that generalised sense.
- I don't understand what you mean "the value of the F.T. at those points is a Dirac delta function". Surely it is a linear combination of Dirac delta functions with finite coefficients. If this is how the last frame of the animation is interpreted (representing a linear combination of delta functions), then surely it is a "correct" visualization. But in any event, regardless of whether it is strictly technically correct on a literal interpretation of the ordinate axis of the last frame, this does convey a useful intuitive idea of the Fourier transform nevertheless. The article should clarify that the last frame is meant to be a visualization of a linear combination of delta functions, but otherwise it seems quite useful. Sławomir Biały (talk) 21:54, 21 November 2014 (UTC)
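Written out, the distributional statement under discussion (a sketch, in the ordinary-frequency convention, for a period-T function with Fourier coefficients c_n):
$$
f(t)=\sum_{n=-\infty}^{\infty} c_n\,e^{2\pi i n t/T}
\quad\Longrightarrow\quad
\hat f(\xi)=\sum_{n=-\infty}^{\infty} c_n\,\delta\!\left(\xi-\frac{n}{T}\right),
$$
so the heights in the last frame of the animation are naturally read as the weights c_n of the delta functions rather than as ordinary function values.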
- Let me point out that many readers of this page will simply be interested in obtaining sufficient understanding to use Fourier transforms in contexts that might be relatively straightforward compared to more advanced issues related to integration theory. I know that many mathematicians consider some of the profound issues of whether or not a Dirac delta function is actually a function, but many physicists don't worry about it, and they still end up using Fourier transforms anyway. As always, those same scientists need to check their results and ask if they make sense. I don't know if this provides a way forward here, but I hope it does. Sincerely, Grandma (talk) 22:10, 21 November 2014 (UTC)
- These remarks seem to miss the point of an encyclopedia article. No one, unless unusually gifted, can learn how to use the F.T. from reading an encyclopedia article. At least a textbook...or a month's worth of taking a course with a teacher...is necessary. An encyclopedia article is for other purposes: it can give someone with no knowledge some idea of where the F.T. is situated in the wider world so they can decide whether to read a book or attend a course.
For someone with a little learning already, it can supplement their course material, for example, by correcting idiosyncratic or sloppy or parochial presentations which may work in a class but which a minority of interested students might question and want to check up on. It can serve as a reference, and the bibliography section should be carefully chosen to be authoritative works, not someone's old electrical engineering text, or whatever free on-line text could be found in a hurry to back up a formula. — Preceding unsigned comment added by 181.225.234.104 (talk) 19:52, 8 December 2014 (UTC)
Lead paragraphs
Thanks to @Slawekb and @Bob K for assistance in fixing up the lead paragraphs of this important article. One of my complaints about some of the Wikiarticles on applied mathematical subjects is that their introductions are not always very clear. Why is the subject useful? Why is it interesting? And, then, there is the inevitable drift in the content as editors tweak this or that. I think, now, the lead paragraphs of the FT article are much better. Can I also ask interested persons to look at the lead paragraphs of other corresponding articles like, say, the Laplace Transform and other similar articles? Thanks, and sincerely, DoctorTerrella (talk) 13:46, 13 October 2014 (UTC)
There has been a certain amount of confusion in the lead paragraph, amounting to false statements. In the lead paragraph, one must be very clear which of the two (main) attitudes one is taking: is the Fourier Transform the usual undergraduate integral transform, to begin with, and later in the article its generalisations (in several directions, not all compatible: Fourier--Stieltjes, locally compact Abelian groups, and distributional) are given, or do we start right out with the viewpoint of distributions and Dirac combs?
IMHO, the former. And the lead paragraph as I found it started out by clearly saying the F.T. is "an integral transform". It didn't say "can mean any one of a number of related integral transforms", it didn't say "can refer to any one of a number of related concepts", etc. That commits one to the usual integral, which is, unless one specifies, either the Riemann integral or the Lebesgue integral. (Which of these does not need to be specified in the lead paragraph, of course.) But it cannot mean the distributional F.T., since that is not given by an integral. Therefore, it cannot include the Fourier series and the delta function as special cases.
Therefore the example of a musical chord was false. The Fourier transform in the ordinary sense is only for transient phenomena. It would be better not to give any examples at all than to give a false example. It is possible to give a simple example in words, but someone X-d it out, saying it was too long. (I took the structure and length of the lead paragraph as I found it, simply trying to eliminate the mistakes, so I included an example since the existing paragraph did.) Well, fine, but then kindly do not re-introduce a false example, equally too long.
If there is really a consensus for the second approach, it should be made clear to the reader even in the lead paragraph that the lead paragraph is talking about an extended, generalised meaning of the F.T. because such usage is not universal and is not what one always runs into in undergraduate textbooks, for example. And this is why the second approach necessarily would involve a more tortuous lead paragraph, and one much more carefully written.
And the Fourier Transform is not "reversible", whatever that is supposed to mean. There is a very similar integral transform going the other way, but it does not quite succeed in reversing the effects of the first transform, except e.g. on the Schwartz space and for "nice" functions. At jump discontinuities it gives a value different from the original value.
- Anonymous IP: Thank you for your edits, especially straightening out the inverse FT. I suppose one could just consider the FT as just an integral transform, but whether or not the result of integration either exists or is "well behaved" is something that needs to be considered in some circumstances. In my experience, most scientists don't worry too much about these sorts of things, possibly getting away with it most of the time, but also possibly occasionally making some mistaken interpretations. Yours, Grandma (talk) 17:29, 21 November 2014 (UTC)
- The purpose of the examples of the lead paragraph is not to strive for mathematical precision, but to convey an intuitive idea of the Fourier transform to readers of the article who have zero mathematical background, and possibly very little background in the physical sciences. I don't think that such a person is misled in any way by some version of the notion that the Fourier transform is "like" decomposing a musical chord into its constituent notes. I am open to the idea of a different way to express this, but you seem to be hostile towards having non-technical things in the article at all when such things are not exactly precise, and require interpretation on suitable function spaces in order to be strictly true. That attitude is not going to result in an encyclopedia article that is useful to anyone who is not a mathematician, since as you know the general definition of the Fourier transform, including even cases that typical undergraduate engineering students encounter, already requires comparatively advanced mathematics to make precise. Sławomir Biały (talk) 22:08, 21 November 2014 (UTC)
- I'm not sure who you are referring to, but I am certainly not hostile to material that introduces intuition. I do find the description in terms of musical sound to be interesting, but I also kind of find it distracting for the lead section, but I won't belabor my opinions on this. Looking after things, Grandma (talk) 22:32, 21 November 2014 (UTC)
- There is a difference between precision and accuracy.
Nor should an encyclopedia article concern itself primarily with the priorities of practical scientists. Accuracy is very important.
But the difference between a function and the phenomenon it models is more than just accuracy or precision, and your changes have reintroduced serious error: a function of frequency cannot be represented by a function of time. Both functions, here, are mathematical models of the same physical phenomenon, but they are not representations or models of each other.
Periodic and transient phenomenon
This edit by User:I'm your Grandma. changed a sentence from the lead, which said:
- "The Fourier transform is a continuous function, suitable for analyzing both transient and periodic phenomena. "
to
- "The Fourier transform is continuous function, most suitable for analyzing periodic phenomena."
While the former sentence is not ideal (since the Fourier transform has certain deficiencies when dealing with transient phenomena, which inspired the development of Wavelet transforms and the whole field of Time–frequency representation) it is certainly accurate, since the FT can be and is widely used for analysis of non-periodic functions. The latter statement, on the other hand, throws away the baby with the bathwater, because if the FT were only "most suitable for analyzing periodic phenomena", it would hardly ever be preferred over the good old Fourier series.
Pinging @Bob K: who I believe formulated the original version. Abecedare (talk) 18:17, 24 November 2014 (UTC)
- Three points: (1) I am often mystified that people so often apply Fourier Transforms to transient functions when they could also use, say, Laplace Transforms that are, actually, specifically designed for transient functions. (2) And, then, this sentence that @Abecedare finds objectionable doesn't actually exclude the transient case, it just says that Fourier methods are best used for those that actually are periodic. (3) Finally, please read the sentence in its context! It follows a sentence that IS about a periodic signal. So what follows needs to be compatible with that which preceded it. Sentences are not to be read in isolation. Still ticking, Grandma (talk) 18:40, 24 November 2014 (UTC)
- "Transient" in its ordinary English meaning does not mean it starts suddenly and then dies down. It just means it will pass away, so it does not precisely rule out, say, the Hermite functions. IN electrical engineering usage, it means zero for negative time and good decay.
But in the general usage, it does not have to be zero for negative time. transient is Wiener's word, it is just an ordinary word to refer to the facdt the the signal must have finite energy and zero averagae power. It is in contrast to periodic and in contrast to stationary. The F.T. cannot be used for stationary or periocic phenomena, not without using distributions or the Fourie--STieltjes integral, which come later.
- All this is about a small part of the use of Fourier transform. In quantum mechanics, it relates coordinate and momentum representations of the quantum state. In probability theory it is of great help for the Central limit theorem. In both cases it is not just one-dimensional. But even in the one-dimensional case, a quantum state is not periodic; and a probability distribution is not periodic. Boris Tsirelson (talk) 14:26, 26 November 2014 (UTC)
- @Tsirel, the discussion, here, is specifically about what belongs in the lead. Grandma (talk) 14:30, 26 November 2014 (UTC)
- The discussion below has nothing to do with "periodic and transient phenomena". Rather Grandma is there complaining about the general state of the lead. It is appropriate to break this out as a separate section to encourage outside input. Boris's reply in this section has apparently missed the point, thanks to your insistence that the discussion below be forced into this thread. Please do not merge the two sections again. I have placed an anchor to the next section at WT:WPM, because I want people to comment there rather than respond to "transient versus periodic" issue raised here, which seems to be now irrelevant. Sławomir Biały (talk) 15:44, 26 November 2014 (UTC)
- @Slawekb, thank you for your recent work on the lead. Feeling better every day, Grandma (talk) 15:02, 25 November 2014 (UTC)
- @Slawekb, would you consider moving the material about integration theory and other technical (but true!) details to a subsequent section? Grandma (talk) 11:45, 26 November 2014 (UTC)
Lead
Comments copied from above.
:@Slawekb, thank you for your recent work on the lead. Feeling better every day, Grandma (talk) 15:02, 25 November 2014 (UTC)
@Slawekb, would you consider moving the material about integration theory and other technical (but true!) details to a subsequent section? Grandma (talk) 11:45, 26 November 2014 (UTC)
- I did not write these comments here, in this section. Instead, Slawekb, inserted a new section divider. I have, therefore, struckthrough my comments in this new section. Grandma (talk) 16:44, 26 November 2014 (UTC)
- It's unclear exactly what you want the lead to look like. The recently-added second paragraph that concentrated on the periodic case went into inappropriate detail, yet you were apparently fine with that. But the periodic case is only appropriate for the lead if it is used to illustrate the general principle that defining the Fourier transform is often not a straightforward integral. Rather it requires the theory of distributions to make precise, even in simple day-to-day cases. But if we remove the material about "integration theory" as you call it, then the rest of that paragraph should also be removed. Regardless, since a lot of the article already concentrates on what the appropriate definition of the Fourier transform is in different function spaces, there is no need to move it elsewhere in the article. The lead here is merely summarizing what is already in the article. (See WP:LEAD.) Sławomir Biały (talk) 11:54, 26 November 2014 (UTC)
- @Slawekb, note that there are several editors that would prefer technical material to be removed from the lead. I'm trying to find a compromise. Hence, my suggestion to keep the technical material, but in a subsequent section. In good wiki spirit, Grandma (talk) 12:05, 26 November 2014 (UTC)
- @Slawekb! I'm afraid that the lead is now out of control. Please play along, Grandma (talk) 12:12, 26 November 2014 (UTC)
- I don't know what you mean "the lead is now out of control". The lead now almost exactly summarizes the main points of the article, in accordance with WP:LEAD. Sławomir Biały (talk) 12:21, 26 November 2014 (UTC)
- @Slawekb, while I know that this message was for me, in general it would help if you address editors by their name. In some of your previous communication, it was not clear whom you were addressing. Now, as for the lead, it is now too long, too technical, and not well organized. I will allow other editors to weigh in, but please be prepared for some paring! Enough for now, I have to bake some pies! Grandma (talk) 12:28, 26 November 2014 (UTC)
- In response to the claim that the lead is "not well organized", the structure of the lead roughly follows the structure of the article itself. Paragraphs 1 and 2 summarize the main points of the earlier sections of the article. Paragraph 3 summarizes later sections of the article concerning the well-definedness of the Fourier transform on various function spaces. I have left in the example of periodic functions in order to make that section more accessible. We do not mention the Poisson summation formula in connection with this, although perhaps we should. We also leave out the details of the Lp theory, although this is a substantial part of contemporary mathematical research on the Fourier transform, as reflected in the article itself. The final paragraph concerns the various generalizations of the Fourier transform, which includes many cases of practical interest (e.g., discrete Fourier transform and spherical harmonics). Is there some other way that you would prefer this content to be organized? Sławomir Biały (talk) 12:51, 26 November 2014 (UTC) Also, customarily on Wikipedia replies directed at a person are distinguished by their indentation level. See WP:TALK.
- @Slawekb, yes, of course, we can, and do, indent. But you've been referring to "you" when I'm not the only editor disagreeing with your preference for inserting too much technical material in the lead. You are clearly an accomplished mathematician, but Wikipedia has a broader readership than those who might appreciate issues of integration theory. Also, there doesn't need to be a one-to-one correspondence between the lead and the content that follows. Really, an introduction need not be so tightly constrained in that way. While I might have been able to accept a much earlier version of what you were proposing for the lead, before it got loaded up with all this stuff, in general, we will all have to accept some compromise, including me and including you. Okay, now back to baking pies, Grandma (talk) 13:46, 26 November 2014 (UTC)
- What other editor is disagreeing with me? You have several times referred to such an editor, but as far as I know this has strictly been a dialogue between the two of us. Sławomir Biały (talk) 15:11, 26 November 2014 (UTC)
- @Bob K, please weigh in. Thank you, Grandma (talk) 15:34, 26 November 2014 (UTC)
- It's true that I would prefer not to see the discussion of integration in the lead. But I'm pretty happy overall. I'm not going to quibble about it. I'm making jalapeno & bacon deviled eggs. --Bob K (talk) 17:00, 26 November 2014 (UTC)
- Well, the lead is much more succinct now. Thank you, @Slawekb, @Bob K, and @D.Lazard. Now I can go back to baking pies. Grandma (talk) 17:04, 26 November 2014 (UTC)
- Hold on, the lead seems to be growing again, getting more technical, when the technical material should probably appear in subsequent sections. Grandma (talk) 17:09, 26 November 2014 (UTC)
I am not a specialist of the subject, but this lead appears to me confusing, too long, and too technical. For example, the first sentence implicitly supposes that the Fourier transform applies only to functions of time. This first sentence is
The Fourier transform, named for Joseph Fourier, is an integral transform that operates on the mathematical representation of a function in time and produces a function in frequency.
I would have written something like (too much "function" in this sentence)
The Fourier transform, named for Joseph Fourier, is an integral transform that, when applied to a function expressing an amplitude as a function of the time, produces a function of the amplitude as a function of the frequency.
The first paragraph describes an application to physics, while the second paragraph begins by "The Fourier transform has many applications in physics and engineering". Surprisingly, the remainder of this second paragraph is absolutely not about physical applications.
The second sentence of the second paragraph asserts
Many functions of practical interest are unchanged when subjected to the transform and its inverse
I had to think a lot (and to search on formulas in the body of the article) to understand that this means
The composition of the transform and its inverse leaves unchanged many functions of practical interest.
The remainder of the paragraph requires similar clarifications: as it is, it may be understood only if one has already used the Fourier transform.
IMO, the lead deserves to be completely rewritten, to better distinguish the mathematical framework from the applications (the body of the article must also be edited in this direction). It must be stated that the Fourier transform is the starting tool of signal theory (all examples of this lead may be considered as belonging to signal theory). Moreover, the last two paragraphs must be summarized in two sentences, like:
For some functions, the classical Riemann integral does not work and one has to consider Lebesgue integration and distributions. The Fourier transform may be generalized to functions on a space of higher dimension and to functions on a space acted on by a group other than the group of translations.
This is sufficient for the interested readers and not confusing for the others. D.Lazard (talk) 16:35, 26 November 2014 (UTC)
- I do not think the lead paragraph should ignore the usual distinction between Fourier series and the Fourier transform. I admit that from the higher point of view, the Fourier series can be viewed as an example of the Fourier transform, but we would be doing our readers a great disservice to start off by ignoring the usual distinction. Furthermore, there seems to be some confusion about the higher point of view anyway. There are two "higher" points of view: using distributions, and using locally compact abelian groups. Take the distribution point of view, for example. Then the F.T. of the slowly increasing function sin(x) exists as a distribution, and is a delta function. But it is still wrong to say the amplitude of the Fourier transform of sin(x) gives the strength of that frequency: the amplitude is infinite, but the strength is finite. And from this point of view, the Fourier series has finite coefficients, and so is still not the Fourier transform, which is a delta function. I suppose one can say that the amplitude of the F.T. gives the relative strengths of the frequencies: for a transient function, the amplitudes are all finite, and for a periodic component in some noisy function, the amplitude is infinite. Since the strengths of the frequencies in a transient function are all zero and the strength of a periodic component is finite, the relative strengths are well represented by this distributional F.T.
- The second "higher" point of view is to consider the F.T. on the real line as one example, and the F.T. on the circle as another example: the F.T. on the real line is an integral transform, and is undefined for periodic functions, while the F.T. on the circle takes values on the dual group of the circle, which is the integral multiples of the fundamental frequency, and it takes finite values, so these finite values are equal to the Fourier coefficients of the Fourier series and are equal to the strengths of the frequency components.
- These two ways of uniting the F.T. with the Fourier series are not compatible...so I recommend we not rely even implicitly on either of these higher points of view while writing the lead.
- The lead should be simple, as many have urged, and the simplest approach to the lead is that the F.T. is an integral transform for functions and is different from the Fourier series. The senses in which one can speak of a "Fourier transform" of a periodic function should be postponed until later sections of the article.
- The example of the musical chord is really and truly not a good thing to mention in the lead: because it is misleading, I suggest it go missing from the lead.... and be migrated to the article on Fourier Series, where it belongs. — Preceding unsigned comment added by 181.225.234.8 (talk) 18:28, 27 November 2014 (UTC)
- Let me just respond to the claim that "The example of the musical chord is really and truly not a good thing to mention in the lead: because it is misleading." I disagree very strongly with this statement, and have already replied to you (or another IP address making the same point) in an earlier thread. Please stop raising the same point in new places, ignoring others' replies. I will say once again, it is not misleading to say that resolving a function into frequencies is like decomposing a musical chord into its constituent notes. The Fourier transform does decompose a function into its spectrum. This is indeed "like" decomposing a musical note into its spectrum. While it is true that there are nuances that need to be explained, the word "like" certainly does not mean that these are literally the same thing. If you think this is misleading, then I suggest that you need to put Rudin away for a little while and start thinking about what the Fourier transform means, rather than how various harmonic analysis textbooks define it. It seems that you are hung up a bit too much on the mathematical formalism, and are missing the forest for the trees. Sławomir Biały (talk) 20:43, 27 November 2014 (UTC)
- Let me explain a little more carefully. Consider a sum of two cosine waves with different frequencies. Its F.T. as a distribution is a sum of two delta functions. So what are the "values" of the F.T.? At most frequencies, zero, and at those two points, infinity. Never is its value a finite non-zero Fourier coefficient.
- The animation conveys an intuitive idea of the Fourier coefficients. It belongs in that article. The animation is inaccurate about the F.T. The words in the lead, apart from the animation, about the chord, where one says "like", escape falsity by being vague, but are still misleading even though not actually false. But the animation is actually false. — Preceding unsigned comment added by 181.225.234.104 (talk) 18:57, 8 December 2014 (UTC)
- You need to integrate over frequency. Recall that, given a delta function of x, say, the integral over x gives 1. Grandma (talk) 19:06, 8 December 2014 (UTC)
- According to the caption of that animation "Dirac delta functions, shown in the last frames of the animation". That seems to agree with what you have written. How is that wrong? Also, regarding "escape[s] falsity by being vague", this is true in a sense, the intuitive idea can be made very precise and correct using distributions, so it just as well escapes exact correctness by being vague. So I don't really see what your point is. Just because intuitive explanations of mathematical concepts cannot immediately be made precise in the lead paragraph of an article is not a reason to eschew them. Formal definitions follow in the article. Sławomir Biały (talk) 19:11, 8 December 2014 (UTC)
- It is often said: mathematicians, do not forget that a lead should say what is this good for, and how does this relate to other things, not only what this is. Now a mathematician (Slawomir) wants to do so, and some others object! Rather strange. Before explaining the distinction between Fourier series and Fourier transform, it should be mentioned that they have something in common, and explained, what is this "something". In this context the musical chord fits nicely. And moreover, it could be mentioned in the "Fourier analysis" as well. Boris Tsirelson (talk) 21:54, 27 November 2014 (UTC)
More input: I still think we need input and comment on the lead section from a mixture of editors, including those concerned with using the Fourier Transform (users such as engineers and scientists), as well as mathematicians. — Preceding unsigned comment added by I'm your Grandma. (talk • contribs) 18:05, 29 November 2014 (UTC)
- The lead has probably been rewritten a hundred times, over the years. Any consensus you achieve will likely be a transient one, so you might have better things to do than this... just sayin'. --Bob K (talk) 16:27, 30 November 2014 (UTC)
- Indeed, @Slawekb has already reverted your change. I'd just like to promote the idea of working cooperatively on this, as it is an important Wikipage. Trying my best, Grandma (talk) 16:30, 30 November 2014 (UTC)
Opening example: I'm not sure that the present lead example (and illustration) of the Fourier transform of a Gaussian function is ideal. Why not, instead, concentrate on a more conventional example: the Fourier transform of a sinusoid? Can this be discussed? Grandma (talk) 01:43, 7 December 2014 (UTC)
- The Fourier transform of a Gaussian is much more fundamental than the Fourier transform of a sinusoid. It connects the Fourier transform to diverse areas such as statistics, diffusion, quantum mechanics, and heat transfer, as well as classical harmonic analysis. It is in this setting that the fact is mentioned, not merely to serve as an example of the Fourier transform, but to connect it to its significant applications. Incidentally, as has been observed already, the Fourier transform of a sinusoid may not be the best starting example since, in order to make such an example correct, one needs to invoke the theory of distributions (and delta functions). You yourself already objected strongly to a lead written along those lines. Sławomir Biały (talk) 15:16, 7 December 2014 (UTC)
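(For reference, the self-duality being alluded to, written in the article's e^{−2πixξ} convention and included here only as an illustration:
<math display="block">\int_{-\infty}^{\infty} e^{-\pi x^2}\, e^{-2\pi i x \xi}\, dx = e^{-\pi \xi^2},</math>
i.e. the Gaussian e^{−πx²} is its own Fourier transform, which is what links the transform to the heat kernel and to the normal distribution.)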
- You are right that I don't think we should mention the theory of distributions in the lead, I also don't think we need to mention Riemann integration. What I do think we need in the lead is material that accommodates what is likely to be the typical reader. Many readers will be engineers and scientists interested in time series analysis. They will be using delta functions, but possibly not fully appreciating that they are distributions, and, of course, many readers will be contemplating the Fourier transform of a sinusoid, if only because it will be related to some problem they will be working on. I certainly supported the illustrative discussion of the FT of a sinusoid when it was more prominently seen in the lead. Mostly, at this point, however, I think this article needs to receive the input of other editors (not just you and not just me). Grandma (talk) 15:34, 7 December 2014 (UTC)
- I've no objection to mentioning the role of Fourier transform methods in time series analysis. But you need to realize that this is just one among many many applications of the Fourier transform. It's really just weird that you think the lead should not mention that the Fourier transform is an integral transform. Open any textbook on the Fourier transform, and this is one of the first things you see. Sławomir Biały (talk) 15:50, 7 December 2014 (UTC)
- I have no objection at all to mentioning that the FT is an integral transform. Indeed, in previous incarnations of the lead, I worked with such text. And, hello? yes, of course, I recognize that time series analysis is just one of many applications of the FT. I never said otherwise. This seeming lack of our communication is why outside input from other editors is needed. Grandma (talk) 15:56, 7 December 2014 (UTC)
- You seem to be contradicting yourself. The present version of the lead mentions in one sentence that the Fourier transform is an integral transform. It also mentions in that same sentence the need for more sophisticated integration theory (e.g., distributions) without going into details. It mentions the Gaussian in connection with important applications to statistics, diffusion, and heat transfer. Many engineers may not care about these things, like many physicists may not care about time series, or many mathematicians may not be concerned with the physical applications. The article should not take a position with regard to what kind of applications its readers have in mind. I do not understand what your objection is. I think you are just wasting everyone's time. Perhaps you should stand aside and let someone who knows something about the Fourier transform participate instead of this continued trolling? Sławomir Biały (talk) 16:19, 7 December 2014 (UTC)
- The preceding comment is another example of why this important (very important) article on the Fourier transform needs input from a diversity of editors. Thank you, Grandma (talk) 16:27, 7 December 2014 (UTC)
- Agreed. We need more knowledgeable editors, and fewer incoherent trolls. The signal to noise ratio is too low. Sławomir Biały (talk) 16:30, 7 December 2014 (UTC)
- Now, it is one thing to say "integration theory", but to most people, that means things like the difference between a Riemann and a Lebesgue integral, the difference between Lp and L1, and so on. Nothing to do with distributions. And, anyone using software to calculate an integral has noticed that some integrals diverge, so whether an integral converges or not is not a mere detail of "integration theory". You cannot input a polynomial into the software package R, for example, for calculating the F.T., and get a meaningful result. To put this another way, using a software package's numerical integration algorithm to calculate a Fourier integral has nothing to do with integration theory, as complained about here, and has everything to do with scientific practice: it returns an error, like, "this integral probably diverges", or, "maximal number of subdivisions exceeded". Or, worse, sometimes does not return an error, but does return a meaningless result which will only be detected as meaningless if you try to check it with higher accuracy parameters. — Preceding unsigned comment added by 181.225.234.104 (talk) 19:02, 8 December 2014 (UTC)
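A minimal sketch in R of the numerical point just made (this is an illustration added here, not the script referred to above; the frequency xi = 1 and the polynomial t^2 are arbitrary choices):
<syntaxhighlight lang="r">
# Ask base R's integrate() for the Fourier integral of a polynomial: the
# integral diverges, so the routine typically stops with an error such as
# "the integral is probably divergent" rather than returning a transform value.
xi <- 1
f  <- function(t) t^2 * cos(2 * pi * xi * t)   # real part of t^2 * exp(-2i*pi*xi*t)
res <- try(integrate(f, lower = -Inf, upper = Inf), silent = TRUE)
print(res)   # prints the error message (or a result flagged with a huge error estimate)
</syntaxhighlight>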
- Well, most readers probably wouldn't even know about the Lebesgue integral, let alone the differences between Lebesgue integration, Riemann integration, and distributions. But, yeah, there's lots of stuff hidden in the "integration theory" catch-all that it would be inappropriate to go into in the lead. The rest of the article has details. If you're interested in such details, may I suggest reading further in the article? Sławomir Biały (talk) 19:16, 8 December 2014 (UTC)
- Umm, you have totally misunderstood me. I am objecting to Grandma's overly vague and slightly pejorative use of that moniker. I have other objections to your contributions. I want to say: distribution vs. function is *nicht* an issue of integration theory, because distributions don't get integrated. So if someone objects to your including distributions because they object to integration theory, that is just a misguided objection to your attempts to contribute. Secondly, whether an integral converges or not is in no way a "detail" of integration theory, but something of practical concern to every scientist or number cruncher. On this, I would have thought, you and I are in agreement. — Preceding unsigned comment added by 181.225.234.104 (talk) 20:00, 8 December 2014 (UTC)
- 181.225.234.104: I encourage you to please consider editing the lead. Thank you, Grandma (talk) 00:22, 9 December 2014 (UTC)
- The consensus has decidedly been against the sort of edits that the IP is proposing. (See Boris' reply above, despite your own attempts to prevent others from commenting here.) Why do you not practice what you have already suggested and allow those editors that know something about the subject to discuss things? Are you going to continue this unconstructive style of kibitzing? Sławomir Biały (talk) 01:27, 9 December 2014 (UTC)
- Sławomir Biały, I'm not yet sure what the IP is proposing, not until he/she actually makes edits. More generally, it is good to get input from others so that the consensus you mention emerges. Let's see. Thanks, Grandma (talk) 01:45, 9 December 2014 (UTC)
- "Grandma", I have disagreed with many of the points raised by the IP, as has Boris. You admittedly lack the capacity to understand what the IP has been talking about. So, why in the world would you imagine that it is your duty to encourage such edits? Your continued presence on this discussion page is not constructive (and has been so pretty much since your tantrum here last week). Please cease your kibitzing, before this winds up at the administrator's noticeboard. Thanks, Sławomir Biały (talk) 08:02, 9 December 2014 (UTC)
My 2c: The example in the lead is fine, at least until a better substitute for it is found. A musical chord is, in practice, a transient phenomenon. It is also not periodic, even if it is made non-transient by artificial means, unless the instrument is perfectly tuned. Thus an actual chord should be ideal for analysis using the Fourier transform. The technical issues with integration need attention in the lead; they cannot go out. The reason is that delta functions (and worse) appear immediately in simple applications. Perhaps the relevant paragraph could be rewritten to introduce delta functions explicitly and specifically instead of vaguely referring to integration theory and distributions. The reference (Vretblad) I added, by the way, is ideal for the physicist/engineer that wants a rigorous understanding, without delving deeply into functional analysis, of what underlies the (more or less formal) delta function manipulations when dealing with Fourier transforms so that he or she can use them with confidence. YohanN7 (talk) 10:28, 9 December 2014 (UTC)
This sentence in the lede needs help: "The result is a function of frequency whose values are complex numbers, encoding both the real amplitude (labelled 1 in the diagram) and the phase offset (labelled θ) of each contributing frequency." (1) The "result" is due to Fourier transformation, something that might be said. (2) Reads like "frequencies" are complex numbers, but they are actually real (Laplace transforms give complex frequencies, but that is not the subject, here). (3) Amplitudes are complex. Anyway, one of the involved editors needs to fix this up a bit. LadyLeodia (talk) 21:23, 19 December 2014 (UTC)
- Something like this? Incidentally, the "values of a function" is the idiom in use. I do not know if a comma clarifies the referent. Sławomir Biały (talk) 21:49, 19 December 2014 (UTC)
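(Stated here for convenience, since several comments above turn on it: the complex value of the transform at a frequency ξ can always be written in polar form,
<math display="block">\hat f(\xi) = \bigl|\hat f(\xi)\bigr|\, e^{i\theta(\xi)},</math>
where the non-negative number |f̂(ξ)| is the amplitude of that frequency component and θ(ξ) is its phase offset. This is standard and is only added as a reading aid.)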
The examples
There are two examples; this entry is about the one in the body, not the lead. It is a correct example, one that is nearly Gaussian, but it is not that enlightening. What is the point of giving an example where the transform is much like the original function? I much prefer the example of the box-car function (unity on an interval, zero elsewhere), not only because it is very enlightening as to what the F.T. does to functions, but even more because it is very physical: its FT is the diffraction pattern formed by light's passing through a slit of that width. I am working on a graph of that example, in R, so it will be open-source and not copyright. The parameters can be chosen so that it is real-valued. — Preceding unsigned comment added by 181.225.234.104 (talk) 19:00, 8 December 2014 (UTC)
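A rough sketch in R of the kind of figure described above (an illustration added here, not the IP's actual script; it assumes the article's e^{−2πixξ} convention and a box-car of half-width a = 0.5, i.e. 1 on [−a, a] and 0 elsewhere, so that the transform is real-valued):
<syntaxhighlight lang="r">
# The transform of the box-car on [-a, a] is sin(2*pi*a*xi)/(pi*xi), a sinc
# shape with value 2a at xi = 0 and zeros at multiples of 1/(2a).
a    <- 0.5
xi   <- seq(-6, 6, length.out = 1201)
Fhat <- ifelse(xi == 0, 2 * a, sin(2 * pi * a * xi) / (pi * xi))
plot(xi, Fhat, type = "l",
     xlab = "frequency", ylab = "Fourier transform of the box-car")
abline(h = 0, lty = 3)   # zero line, to make the side lobes visible
</syntaxhighlight>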
- The example was selected because it illustrates a number of things that the lead paragraphs discuss. First, it illustrates something "nearly Gaussian" going to something else "nearly Gaussian", because the lead refers to these features. Also I wanted something with a non-trivial complex phase, which a (symmetric) boxcar function would lack, again because that is something that we try to explain. I don't necessarily think the idea should be to illustrate what the "Fourier transform does to functions", since really this will involve quite a lot of explanation which would be inappropriate for the lead. (E.g., like in the boxcar example, the boxcar is compactly supported, so its Fourier transform is smeared out across all frequencies. That's an important feature of Fourier transforms, but is not something that can be succinctly explained in the lead paragraphs. That kind of explanation belongs in the article proper.) Sławomir Biały (talk) 19:28, 8 December 2014 (UTC)
- To me this discussion is interesting. I had always heard that Fourier used the theta function and the use of the Gaussian was due to Laplace. Now when I browse through Fourier's book, I see that he never actually takes the F.T. of the Gaussian. He does use it as an alternative to his theta function for a convolution kernel (but for all we know, he is copying Laplace here, who has priority). What he takes the F.T. of can be written with a Gaussian as one of its factors, but so can any function (e.g., the function sin can be written as the product of a Gaussian with another function). And he does not take its F.T. Do you know what is the very first function in the book that he takes the F.T. of? The box-car. I didn't know that... his original memoir is, btw, not included in his book, so if you can look at his original memoir and see if he ever takes the F.T. of the Gaussian, let me know.
- But more to the point: it doesn't matter how much more fundamental the Gaussian and the Hermite functions are... it matters how instructive it is to a reader in the lead paragraph to see the graphic. And this graphic makes no real impression... its importance is all "inner". So it is a poor choice, from the standpoint of writing style. — Preceding unsigned comment
- Well, it's not really a big deal either way, but a boxcar is a poor illustration for the lead because it lacks a nontrivial complex phase. It may be a better illustration of what the Fourier transform "does to functions" or whatever, and so better off in a more specific section. Sławomir Biały (talk) 11:48, 10 December 2014 (UTC)
- From the standpoint of good writing, it is bigger than you might think. And, underlining the complex numbers is just a mistake for the lead paragraphs. The F.T. can be done without complex numbers, they are not a very important thing to mention, and since they get in the way of illustrating an example, it is just a strategic mistake to be concerned with them in the lead. — Preceding unsigned comment added by 181.225.234.43 (talk) 19:50, 12 December 2014 (UTC)
- Ok, fair enough. Sławomir Biały (talk) 20:44, 12 December 2014 (UTC)
- IP 181.225.234.104, you might know that Bracewell's book contains a very nice visual list of different functions and their corresponding Fourier transforms. It is very enlightening just to look through those pages. Cheers, Grandma (talk) 01:55, 9 December 2014 (UTC)
I was unhappy with the example of the Gaussian, and the fluffy style of rhetoric in the lead paragraphs about the Gaussian, but now I am convinced you are factually incorrect. Both historically and really. I just read Fourier, and he does not use the characteristic properties of the Gaussian. For him, it is just another function. From which secondary source are you copying (and, perhaps, misunderstanding)?
Fourier, like others before him (and he calls it well-known) is using the operational calculus on the polynomial PDEs he is interested in. It happens, by coincidence, that for the heat equation you get the Gaussian that way, but later in the same chapter he does some random equations the same way. That is, he puts D=(d/dx) into the power series for exp as an infinitesimal generator. Then he does this for D squared. Yes, he gets the Gaussian that way but he doesn't use any of its remarkable properties. Later, he does this for more complicated polynomials, and everything goes through. The other polynomials do not have any particular properties.
You, or perhaps even your source, are getting mixed up with Wiener's proof of Fourier inversion relying on Hermite functions. Yes, the Gaussian has remarkable properties but Fourier doesn't use them and they don't belong in the lead paragraphs, either. The Fourier transform does not really tie in with the Gaussian much. The heat equation does, certainly. But the F.T. is much broader than the heat equ., so your fluffy comments are not even true. — Preceding unsigned comment added by 181.225.234.6 (talk) 23:52, 12 December 2014 (UTC)
- I'm not absolutely married to the idea of the Gaussian. It is a convenient pivot to bring in various applications, where the Gaussian is fundamental (diffusion, heat transfer, and probability), nothing more. There are certainly other ways to do that. The article does not actually assert that Fourier was concerned primarily with the properties of Gaussians, rather that Fourier used the integral transform to study heat transfer, so perhaps you are reading too much into the "fluffy" prose. Yes, it's also true that there is a well-known proof of Fourier inversion using Gaussian functions. But I am puzzled why that would count as a strike against including them in the lead, as you seem to be arguing. Regardless, I do think connections to probability and diffusion deserve to be mentioned, although the precise context and prominence are certainly negotiable. Sławomir Biały (talk) 01:08, 13 December 2014 (UTC)
- The lead discusses, in the first paragraph, the idea of a musical chord. But then this idea is not developed. Instead, a more complex example, Gaussian function, is discussed. Not much light is being provided by all of this. — Preceding unsigned comment added by 166.137.118.113 (talk) 08:49, 13 December 2014 (UTC)
- I don't think there is space in the lead to really develop examples. We should summarize important points of the article, and give context for the transform by discussing its various applications. This is probably why you feel the "examples", such as they are, are undeveloped and poorly selected. As I have said, the purpose of the sentence about Gaussians is just to glue together some applications of the transform (as you say "fluff"). I have tried to be a little less fluffy, saying that the Gaussian is the critical case in the uncertainty principle.
- An earlier version of the lead did develop on the musical chord idea, but this was almost universally disliked by editors, and I think the current lead is much better than that earlier one anyway.
- I will not insist much on the current lead image, if that's what you are still concerned about. Go ahead and replace it if you think you have a better one. Sławomir Biały (talk) 10:13, 13 December 2014 (UTC)
I have added an image of a rectangular pulse instead, following Fidel's suggestion. @Fidel, something like this you had in mind? Sławomir Biały (talk) 15:33, 13 December 2014 (UTC)
- Yes, but it seems unnecessary clutter to include two examples, one of which has complex numbers, thus making three images in all. Less is more.
The discussion of how a delay = mult. by an exponential factor is excellent, but does it really belong in the lead? (no....) 200.55.128.88 (talk) 03:44, 16 December 2014 (UTC)
Probability theory?
Hi. Looking over the introduction, I see a reference to probability theory in the context of discussion of the Gaussian function. This seems confusing. In probability, the Gaussian distribution (which describes a discrete statistical process, not a continuous process) results from the central limit theorem, not, as far as I know, from a Fourier superposition. Maybe I'm wrong, maybe there is a connection, but I've not heard of it. I recommend removing the reference to probability theory, unless there is actual reason to have such. 166.137.136.116 (talk) 18:00, 13 December 2014 (UTC)
- Well you learn something every day! I did a Google on Fourier transform and central limit theorem, lots of connections! 166.173.187.155 (talk) 19:13, 13 December 2014 (UTC)
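(The connection, for anyone else landing here: the characteristic function of a random variable X with density p, φ_X(t) = E[e^{itX}], is, up to the sign and 2π conventions, just the Fourier transform of p,
<math display="block">\varphi_X(t) = \int_{-\infty}^{\infty} e^{itx}\, p(x)\, dx;</math>
sums of independent variables have densities that convolve, hence characteristic functions that multiply, and the standard proof of the central limit theorem analyses that product. This note is added only as a pointer.)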
Lead again
I have
- simplified the language and got rid of unnecessarily complicated constructions of sentences,
- (as I perceive it) clarified in several spots by spelling things out instead of having mystifying parentheses,
- illustrated several of the spelled out clarifications by including basic images.
I should probably not have done this without prior discussion in light of the semi-heated postings here, but I honestly think the lead is much improved with these edits. There is always revert and rollback (and the edit button). YohanN7 (talk) 08:58, 16 December 2014 (UTC)
Paragraph 4:
- Elaborated a little on how the FT is generalized to groups and, in the examples, which groups. I hope I got this right
- FFT mentioned as an algorithm, not a special type of FT
- Dirac Delta now present
YohanN7 (talk) 11:33, 16 December 2014 (UTC)
- Looks mostly ok to me. The first several sentences were a little hard to understand, so I rewrote and reorganized them. Sławomir Biały (talk) 11:44, 16 December 2014 (UTC)
- That's further improvement. Looks rather good now IMO. Curiosity: the Gibbs phenomenon was known experimentally by physicists before it was understood mathematically (ref Vretblad). Think I'll put that in the Gibbs phenomenon article. YohanN7 (talk) 12:39, 16 December 2014 (UTC)
Is there a way to write f̂ in HTML ({{hat|f}} is not a good idea)? YohanN7 (talk) 13:10, 16 December 2014 (UTC)
- I vaguely remember some solution being implemented somewhere (perhaps circa 2008), but the consensus at the time was that it was too kludgy to make it work properly. But the web has "improved" since those days. It might be worth looking into. Sławomir Biały (talk) 15:26, 16 December 2014 (UTC)
- ƒ̂ , f̂ , , , ?
- Not very nice. YohanN7 (talk) 17:34, 18 December 2014 (UTC)
- Vector-valued functions may not belong in the lead, but the FT is in use for them, e.g. for spinors. The Feynman propagator for a particle with spin has two indices, both in position and momentum space (I can provide references). But I insist on complex-valued functions being mentioned. This is the norm in QM. YohanN7 (talk) 14:03, 16 December 2014 (UTC)
- All the functions are complex-valued by default. It seems wrong to emphasize that in the last paragraph as a "generalization". So, the Fourier transform of a spinor field gives the propagator. This seems interesting and useful to mention, but perhaps not in the context of a generic "vector-valued function". Not sure. Sławomir Biały (talk) 14:40, 16 December 2014 (UTC)
- No, the FT of a spinor is only that, a momentum space representation, but with (typically) 4 components. The Feynman propagator in spacetime representation describes how these components (are expected to) go into each other as the particle propagates from x to y in spacetime. (It rather expresses a probability-related quantity for this to happen.) It is a function of x − y and endowed with two Dirac indices. It is instrumental in computing the S-matrix elements. These vector-valued or even operator-valued functions are all over the place in QFT, and are usually simpler in the momentum domain.
- A very good example would be the expression of the free field operator (function of spacetime) in terms of creation and annihilation operators in momentum space (creates and destroys particles of well defined momenta). This expression is precisely a Fourier transform, provided the chosen basis wave functions are plane waves (which is not necessary, any complete set would do). Give me a min and I'll display an example here. YohanN7 (talk) 15:11, 16 December 2014 (UTC)
- Yes, a clear account of this would be very useful I think. Sławomir Biały (talk) 15:19, 16 December 2014 (UTC)
- From Greiner & Reinhardt, Field quantization: The free field operator for a spin 1/2-particle can be given by
- Here,
- destroys a particle/antiparticle with the given momentum and appropriate spin z-projection when applied to the Hilbert space of states, the four values of α represent spin up/down and particle/antiparticle,
- is (for each α) a solution of the ordinary free field Dirac equation (obtained by applying a boost + rotation from
the spin rep, the (1/2, 0) ⊕ (0, 1/2) rep of the Lorentz group, to a trivial solution in the rest frame (essentially (1, 0, 0, 0)^T e^{iω_p t} etc., if I remember correctly),
- is a sign factor (particle and antiparticle parts differ, originally the problem of "negative energy solutions" in RQM),
- is the energy obtained from the energy-momentum relation. The field operator has a Hermitian adjoint involving the creation operator. The relation can be inverted, expressing the creation and annihilation operators in terms of the field operator and its adjoint using the inverse Fourier transform. (Weinberg defines things slightly differently, with both creation and annihilation operators in the expansion of the field operator.)
- The Feynman propagator is defined as
- where T is the time-ordered product. It is essentially the amplitude for a particle created at x traveling to y. It is associated to an internal line in a Feynman diagram and is a factor in the expression for an S-matrix element. Its expression is less nasty in the momentum domain than in the spacetime domain.
- It is worth mentioning as well that the S-matrix too (a huge thing with both "continuous indices" and discrete indices) is usually Fourier transformed. The FT version contains, among other things, delta functions expressing energy and momentum conservation in its elements. YohanN7 (talk) 17:46, 16 December 2014 (UTC)
- From Greiner & Reinhardt, Field quantization: The free field operator for a spin 1/2-particle can be given by
- Naturally, this is not intended for the article, but might motivate the presence of vector valued functions (with reference to Weinberg's QFT vol I (brilliant but difficult) or Greiner & Reinhardt (many explicit computations).) in the lead. YohanN7 (talk) 16:08, 16 December 2014 (UTC)
- Off topic, but fascinating: Weinberg builds all this up from scratch, and in the process derives the Dirac equation (for the field operator) by considering the required LT properties + little else (parity inversion is required to be a symmetry of the theory + microcausality, i.e. [ψ(x), ψ(y)] = 0 for spacelike x − y). The same method yields the Klein-Gordon, Proca, Maxwell, Einstein and, generally, the Bargmann-Wigner free field equations for the operators. These can be "inverted" to apply to the states, and are, in most cases, formally identical to the operator versions. YohanN7 (talk) 16:49, 16 December 2014 (UTC)
- This is just elephantiasis — Preceding unsigned comment added by 181.225.234.64 (talk) 20:12, 16 December 2014 (UTC)
- No, it is a response to a request for a clear account of operator-valued functions with a Fourier transform. Have your spelling checked in the future. YohanN7 (talk) 05:37, 17 December 2014 (UTC)
representation theory
There is a confusion of thought and language here. The coefficient function is one thing, the integrand is another, and the integral is yet another. The integral represents the original function (that is Fourier inversion). But the transform is only a coefficient within the integrand. This is an equally valid representation of the physical phenomenon, but it is not a representation of the original function. It is an alternative to the original function, an alternative way of representing the phenomenon.
In
<math display="block">\int_{-\infty}^{\infty} \hat f(\xi)\, e^{2\pi i \xi t}\, d\xi = f(t),</math>
the left hand side represents the right hand side. Both are functions in the time domain.
The integrand inside the left hand side is a frequency component of f; it is a simple harmonic motion in the time domain with a frequency parameter.
The coefficient function within the integrand is the Fourier transform. It does not represent f, it is a function in the frequency domain.
It is marvellous how some people insist on getting this all wrong. — Preceding unsigned comment added by 181.225.234.64 (talk) 20:22, 16 December 2014 (UTC)
- I don't think the condescending rhetoric is helpful. This is an abuse of language that is used all over the place. It is strictly true that a function and its Fourier transform are different functions in different function spaces, but the Fourier transform is usually thought of as a "change of variables" from one space to the other. They are "the same function" in "different coordinates". We can try "signal" instead of "function" or "phenomenon", ok?
- Actually, I think that cuts the Gordian knot. Sławomir Biały (talk) 21:27, 16 December 2014 (UTC)
- It is debatable whether it is even abuse of language. You cannot infer from g represents f that g equals f. The term equals has a universal well-defined mathematical meaning. The term represents does not, and can be used like in plain English in informal mathematics as is done in the article. A function and its Fourier transform certainly represent each other in an obvious way, since they constitute a Fourier transform pair. No external references to "signals" needed. YohanN7 (talk) 05:21, 17 December 2014 (UTC)
- These remarks are not true. There is a relatively well established usage for "representation", as in Cauchy's integral representation, and things like that. It almost always means rewriting the function in some other way. This is a maths article and maths usage should be followed, with notice taken (in the body of the article) of other parochial or ghetto usages.... — Preceding unsigned comment added by 181.225.234.25 (talk) 19:45, 17 December 2014 (UTC)
- Generally, it is in the body of the article that we should strive for maximal precision. The lead of the article is an introduction, meant to be read by non-specialists. We need to get the main points across without getting too hung up on formal considerations (an exercise which I humbly submit may not be your calling.) "Parochial", "ghetto" — more unhelpful rhetoric. But I think this is now a moot point, so I suggest that you stop arguing it. Sławomir Biały (talk) 20:01, 17 December 2014 (UTC)
While we are on the subject of parochialism, your statements about unified language are a) false, and b) not suitable for a lead anyway. As to b), if you have to apologise and explain, then it doesn't belong in the lead ... it isn't really necessary to use the terms time-domain and frequency-domain in the lead; just because they happen to be ways of expressing things that helped you in the past does not make them, necessarily, helpful to the stray reader. You must keep in mind that we do not know where the reader is coming from.
As to a), only within signal processing and within statistics are the terms time-domain and frequency-domain really that standard. In physics, one says "reciprocal space" for the domain of the transform. Or "momentum space" if the dual variables are momentum variables. So it is false to even suggest that this is a unifying language. And in maths, one says "dual space" or "dual variables". So although there is a great deal to be said in favour of using time and frequency in the lead for the sake of concreteness, this must not be explained and amplified and it must not be even insinuated that the terms "time-domain" and "frequency-domain" are the norm. If we cannot figure out how to use the terms "time-domain" and "frequency-domain" without explanations, they should be omitted since they are just incidental aids.
The emphasis on complex numbers is similarly misplaced. If one has to explain and illustrate, then they don't belong in the lead. They are not the default for the values of f except in Quantum Mechanics and electrical engineering. In the study of the heat equation the default is that f is real-valued. In the statistical analysis of time-series, f is real-valued. That material should be moved to the Introduction.
And the use of the word "amplitude" instead of simply, say, "value" of f̂, is very ill-advised. In elementary physics courses, and later in this very same article, the amplitude of a wave or complex number is the absolute value of the complex coefficient. So you are needlessly putting an obstacle in the way of many readers. The word "value" should be used: it is normal and good. "The value of f̂" is the normal way to say this sort of thing. The average reader is not going to understand your language of "complex amplitude" and "real amplitude" etc. And the whole explanation of complex numbers is so needless. Just say something like what I already put up some time ago, "the value taken by the F.T. at a frequency gives the relative strength of the contribution of that frequency to the original function." All normal words, with their normal meanings. — Preceding unsigned comment added by 181.225.234.25 (talk) 20:07, 17 December 2014 (UTC)
- I find all of these arguments rather strange and unconvincing. They seem to reflect a kind of functional fixedness, that things must always be nailed down 100% the way certain mathematicians have done it, and anything less than a Bourbaki-style treatment should be trashed. As a general attitudinal bias, that is not very helpful for writing the lead of an article. But in case you're interested, here are the refutations, point by point.
- For the simplest case of the Fourier transform, as one encounters it for the first time in, say, an undergraduate differential equations or engineering course, the variables are time and frequency. So these designations are almost universally understood, even if not actually employed in particular applications where more natural terms are available.
- The article defines the Fourier transform for complex-valued functions, as do most of the references in the article, very few of which explicitly concern quantum mechanics. The Fourier transform itself is complex-valued, even if we are only concerned with real functions. (Yes, yes, I know... we can get away with sine and cosine transforms. But this is not the way it is usually done.) So there can be no good reason, other than personal taste, not to mention complex numbers early on. Both the modulus and argument of the Fourier transform contain information that needs to be explained, and by far the most natural way to do that is using complex numbers. The arguments against mentioning them are thoroughly unconvincing.
- Amplitude has a very standard meaning in mathematics as the height of a waveform. That's exactly the way the term is being used here. Sławomir Biały (talk) 21:13, 17 December 2014 (UTC)
- To echo what Sławomir just said, value ≠ amplitude. In this case both terms have well-defined mathematical meaning. It is clear that you are trying to push through your personal POV that is just that, a personal, slightly eccentric and inconsistent, POV. You want to be exact where the lead is not and explains why (time domain, frequency domain), and you want to be whimsical where the lead is exact (amplitudes, complex numbers). Are you trolling? YohanN7 (talk) 05:54, 18 December 2014 (UTC)
Recently expanded sections
While the recently expanded sections (three of them, DE, QM, SP) seem acceptable (needing copy editing), I don't see a place in this article for everything. Why have a longish tutorial on QM here? All of these should be split off into main articles of their own, leaving a summary here. YohanN7 (talk) 06:21, 18 December 2014 (UTC)
To motivate this beyond size arguments, this article is (supposedly) B-class. The addition of big chunks of newly written unreferenced material, usually somewhere between start-class and (rarely close to) C-class, is not beneficial for the article. It should serve as an overview of the subject together with discussion about generally applicable definitions, theorems, properties, and history etc. YohanN7 (talk) 06:29, 18 December 2014 (UTC)
- I think they are valuable additions that we should do our best to curate. There are references in at least some of it, but in-text references rather than footnotes, so it needs editing for style as well as additional references. I think a long term goal would be to find a separate home for much of this (Fourier transform applications, maybe), observing summary style here.
- As an aside, I do not think that the article was ever B class, although I often find these ratings rather arbitrary. Sławomir Biały (talk) 11:50, 18 December 2014 (UTC)
- I have moved most of the recently-added material out to the appropriate sub-articles (Poisson summation formula and sine and cosine transforms). The section "Units and duality" does not seem to be well-placed, and suffers from the extreme glut of wordiness of the other additions. Also, it seems to be largely redundant with the existing section "Other conventions" (a section which, by the way, it seems much easier to extract useful information from). More editing is needed. Sławomir Biały (talk) 12:55, 23 December 2014 (UTC)
Figures in lede
The first figure, showing sinusoidal functions (complete with labels), is not explained; no caption is given. LadyLeodia (talk) 15:16, 20 December 2014 (UTC)
Also, I see that the sinc function figure in the lede is partly redundant with that in the section entitled "Uniform continuity and the Riemann–Lebesgue lemma". LadyLeodia (talk) 15:59, 20 December 2014 (UTC)
- I removed the figure. LadyLeodia (talk) 14:13, 22 December 2014 (UTC)
- The figure illustrated the concepts amplitude and phase. I don't know why the references to it in the lead were removed. YohanN7 (talk) 06:30, 25 December 2014 (UTC)
- Hmm... that is strange. There were captions in the wiki source, but apparently none in the final output. Sławomir Biały (talk) 22:26, 26 December 2014 (UTC)
- Ok, I've fixed the captions, fwiw. Sławomir Biały (talk) 22:37, 26 December 2014 (UTC)
- The way it was first was intentional. No visible captions, but references to (1) and θ in the main text. It had a better visual appeal and was even clearer imo, but this is too little to argue about. Amplitude and phase are kind of important concepts in Fourier analysis, and I am happy to see these illustrated again. YohanN7 (talk) 08:12, 27 December 2014 (UTC)
Error on a formula
When it says:
"but only $L^2$, "
i guess that's an error.
Whoever knows how to fix it (or explain that it's not an error) please help! — Preceding unsigned comment added by 190.139.56.100 (talk) 21:26, 28 January 2015 (UTC)
- It means square integrable function. I've fixed the formula, but the whole text of that section would benefit from some rather substantial copy editing and clarifying. Sławomir Biały (talk) 21:34, 28 January 2015 (UTC)
New addition to lead
Maybe the new addition to the lead actually means something, but I can't figure out what. YohanN7 (talk) 08:09, 14 March 2015 (UTC)
1/s=cycles/s?
Is it correct to use the expression "cycles per second" to refer to 1/s? Would we say cycles per meter or just m^-1? Are radians a unit, or is it just rad=1? --193.146.9.132 (talk) 08:32, 27 May 2015 (UTC)
- Suppose that t has units of seconds. If we were to write e^{iωt}, then ω has dimensions of 1/time, but the units are radians per second. In contrast, if we were instead to write e^{2πiξt}, then ξ still has dimensions of 1/time, but the units are cycles per second. Since most of the Fourier transforms on this page use the e^{2πiξt} convention, ξ has units of cycles per second (assuming t is in seconds). Sławomir Biały (talk) 12:14, 27 May 2015 (UTC)
Operators and mathematical constants are upright, not italic
I must have pressed the wrong key because my edit was implemented before I had finished typing my edit summary - hence this expanded explanation here. It is not just the 'd' operator that should be upright, but also the constants e, i and pi. For further details, see ISO 80000-2 Quantities and Units Part 2 - Mathematics. Dondervogel 2 (talk) 11:57, 24 July 2015 (UTC)
- Nothing in Wikipedia's guidelines demands that we must use style guides published by third parties. Our style guideline recommends using italic d. This is in agreement with most sources on Fourier analysis. See, for example, J. Fourier "Analytical theory of heat", Stein and Weiss "Fourier analysis on Euclidean space", Dym and McKean "Fourier integrals and their applications", Titchmarsh "Introduction to the theory of Fourier integrals", etc. Sławomir Biały (talk) 12:09, 24 July 2015 (UTC)
- Can you show me the place where the guidelines state this? I will take it up there. Dondervogel 2 (talk) 14:16, 24 July 2015 (UTC)
- "for the differential, imaginary unit, and Euler's number, Wikipedia articles usually use an italic font" ... Also the proposed edit broke consistency within the article: "what is most important is consistency within an article." Moreover, note the reference to WP:RETAIN: "It is considered inappropriate for an editor to go through articles doing mass changes from one style to another. This is much the same principle as the guidelines in the Manual of Style for the colour/color spelling choice, etc." Sławomir Biały (talk) 14:24, 24 July 2015 (UTC)
Question on the example images
This image gives the Fourier transforms of a unit pulse function. Its source code uses the Mathematica function FourierTransform, which, however, by default uses the definition
- $\hat f(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{i \omega t}\, dt,$
which is different from the one used throughout this article. The graph of the imaginary part can look quite different between the two conventions.
For a translated unit pulse function, the transform obtained with the definition in this article and the transform obtained with the default definition of FourierTransform have different imaginary parts. The plot below gives the two versions of the imaginary part.
Note that here the purple curve agrees with the red curve in the example image.
Could we place it in a more consistent way?
--Shiyu Ji (talk) 15:49, 12 August 2015 (UTC)
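A numerical version of the comparison above (a sketch of my own; I assume a unit pulse supported on [0, 1], since the exact translation used in the plot is not recorded here). It evaluates the imaginary part under the article's $e^{-2\pi i x \xi}$ convention and under Mathematica's default FourierTransform convention $\frac{1}{\sqrt{2\pi}}\int f(x)\,e^{+i\omega x}\,dx$:

```python
import numpy as np
from scipy.integrate import quad

def pulse(x):
    # Assumed translated unit pulse: 1 on [0, 1], 0 elsewhere.
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def ft_article(xi):
    # Article's convention: integral of f(x) * exp(-2*pi*i*x*xi) dx
    re = quad(lambda x: pulse(x) * np.cos(2 * np.pi * x * xi), 0, 1)[0]
    im = quad(lambda x: -pulse(x) * np.sin(2 * np.pi * x * xi), 0, 1)[0]
    return re + 1j * im

def ft_mathematica(w):
    # Mathematica default: (1/sqrt(2*pi)) * integral of f(x) * exp(+i*w*x) dx
    re = quad(lambda x: pulse(x) * np.cos(w * x), 0, 1)[0]
    im = quad(lambda x: pulse(x) * np.sin(w * x), 0, 1)[0]
    return (re + 1j * im) / np.sqrt(2 * np.pi)

for s in (0.25, 0.5, 1.0):
    # The imaginary parts differ in both sign and scale, which is the
    # discrepancy discussed above.
    print(s, ft_article(s).imag, ft_mathematica(s).imag)
```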
- But the plot in the article does not indicate the unit on the horizontal axis. Yes, the two definitions lead to scales that differ (by a factor of 2π ≈ 6.28). So what? Not seeing the unit, how can you say that the curve in the article is wrong? Boris Tsirelson (talk) 16:37, 12 August 2015 (UTC)
- Though, the two curves also differ in sign. This is indeed an issue. Boris Tsirelson (talk) 16:39, 12 August 2015 (UTC)
- I have made a replacement edit. Hope by doing this, we could fix the issue. --Shiyu Ji (talk) 04:48, 13 August 2015 (UTC)
- I am confused by the edit summary that an image lacking scale is confusing. If anything, introducing scale is bad at this stage, precisely because there are different conventions for the Fourier transform. This way, we don't commit to any one choice. Also, I disagree with the implication that the images should show scale, even if we could universally agree on what "Fourier transform" means. Including a minuscule scale adds to the visual complexity of an already very complicated image, with almost no added benefit. The images simply are not meant to show the scale. To do so is to include needless complication. A good image should convey precisely the information it is meant to, and no more. Sławomir Biały 02:23, 14 August 2015 (UTC)
- I strongly agree with the no scale recommendation. Sorry I did not notice this Talk information!
Now I have removed the scales and ticks. Please improve my image rather than replace it with a PNG. Thanks. --Shiyu (talk) 02:33, 14 August 2015 (UTC)
- Both images lack axes. These are needed to show at least the horizontal offset of the second unit pulse. Sławomir Biały 02:42, 14 August 2015 (UTC)
- Good point. I will do that later. --Shiyu (talk) 02:49, 14 August 2015 (UTC)
- The images look good now. The annotations look very smart! Sławomir Biały 11:37, 14 August 2015 (UTC)
- Done Thank you for your very helpful comments! I also added you as the first author of the plot. --Shiyu (talk) 01:21, 15 August 2015 (UTC)
History
Can it have a small history section on how Fourier found it? — Preceding unsigned comment added by 2A01:E35:8A8D:FE80:DDE7:8821:696:22EC (talk) 07:00, 24 October 2015 (UTC)
Inconsistency in formula with definition given in Wolfram
The definition of the Fourier transform of the negative exponential function given in the table "Square-integrable functions", row 207, column "Fourier transform unitary, ordinary frequency", is NOT consistent with the same Fourier transform given by Wolfram: http://mathworld.wolfram.com/FourierTransformExponentialFunction.html
The $(2\pi)^2$ is missing from the denominator in the Wolfram article.
Could someone please clarify this? — Preceding unsigned comment added by Asweatherbee (talk • contribs) 17:12, 27 January 2017 (UTC)
- Wolfram uses conventions for the Fourier transform that no one else uses. That turns out not to be the problem here, though. Our article is consistent with that one: $\frac{2a}{a^2 + 4\pi^2\xi^2} = \frac{1}{\pi}\,\frac{k_0}{k_0^2 + \xi^2}$ holds, with $a = 2\pi k_0$. Sławomir Biały (talk) 17:25, 27 January 2017 (UTC)
- Thank you for the response. I have found that there is an error on the Wolfram page http://mathworld.wolfram.com/FourierTransformExponentialFunction.html in that there should have been a $2\pi$ included in the exponent of the first three instances of the exponential at the very top of the page. The $2\pi$ suddenly "appears" in the exponent after the second equals sign! The $2\pi$ is also included on Wolfram's main Fourier Transform page in the table at the bottom of the page: http://mathworld.wolfram.com/FourierTransform.html. I will notify Wolfram of the error. — Preceding unsigned comment added by Asweatherbee (talk • contribs) 18:37, 27 January 2017 (UTC)
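For the record, a worked check of the consistency claim above (my own algebra; it assumes the article's row 207 entry $e^{-a|x|}\mapsto \frac{2a}{a^2+4\pi^2\xi^2}$ and MathWorld's closed form $\frac{1}{\pi}\frac{k_0}{k_0^2+k^2}$ for $e^{-2\pi k_0|x|}$):

```latex
% Substitute a = 2\pi k_0 and identify \xi with MathWorld's k:
\frac{2a}{a^{2}+4\pi^{2}\xi^{2}}
  = \frac{2\,(2\pi k_0)}{(2\pi k_0)^{2}+4\pi^{2}\xi^{2}}
  = \frac{4\pi k_0}{4\pi^{2}\,(k_0^{2}+\xi^{2})}
  = \frac{1}{\pi}\,\frac{k_0}{k_0^{2}+\xi^{2}},
% which is MathWorld's expression, so the two tables agree.
```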
Carleman or Carleson?
It says "Carleman and Hunt". Is that supposed to be "Carleson and Hunt"? —Bromskloss (talk) 01:27, 15 March 2017 (UTC)
the Dirac delta function _is_ a function
- To editor D.Lazard: Hello D.Lazard,
I'd appreciate it if you undid the reverts of my contribution to this particular article and the 4 others, please. Right now this article states that the Dirac delta function is not a function, which is certainly false. A Dirac delta function is a distribution and thus a linear functional. Being a linear functional implies being a real-valued map. A real-valued map is a function by definition. Could you please discuss my contributions on the corresponding talk pages first before reverting them. And thank you for understanding. Konstantin Pavlovskii (talk) 13:51, 10 August 2017 (UTC)
- Not quite so; see Talk:Dirac delta function/Archive 2#Is Dirac delta function a function? (my edit there of 14:03, 11 August 2017). Boris Tsirelson (talk) 15:01, 11 August 2017 (UTC)
- The Dirac "delta function" is indisputably not a function, according to the (extremely important) strict definition of a function: the assignment to each point of its domain exactly one point of its codomain.
- It can, however, be rigorously viewed as a linear functional on a function space, taking the function f to the value f(0). 2601:200:C000:1A0:FDA6:8A26:FCCA:2C1B (talk) 23:50, 22 September 2022 (UTC)
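To make the "linear functional" reading concrete, a toy Python sketch (mine, not from the thread): the delta is the map sending a test function f to the number f(0), which is well defined even though no ordinary function on the reals produces it by integration.

```python
import math

def delta(f):
    """The Dirac delta viewed as a linear functional: f -> f(0)."""
    return f(0.0)

def shifted_delta(a):
    """The translated functional f -> f(a)."""
    return lambda f: f(a)

print(delta(math.cos))                # 1.0, since cos(0) = 1
print(shifted_delta(2.0)(math.exp))   # e**2, since the functional returns f(2)
```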
On simplifying expressions in 502
Given the definition , the coefficients in the transforms of 502 can be written as . The three transforms can then be written as
which are, arguably, slightly simpler and more handy for practical use since you don't have to keep track of the subtractions in the exponents and subscripts.
While we're at it, I would also flip the ordering of and in the definition of (previously ) since is more general.
-- Mortenpi (talk) 00:36, 25 March 2018 (UTC)
- I'm not sure what I thought was wrong with this. I could not verify it at the time, though. Very often edits to these tables introduce errors, though. Ideally, we should have a reference for all of these. Sławomir Biały (talk) 11:41, 25 March 2018 (UTC)
Angular frequency Fourier transform and cyclic frequency Fourier transform plot values
We need somebody to explain how or why the Fourier transform values in angular frequency plots are scaled up by a factor of 2π relative to values in the cyclic frequency plots. That is, G(ω) is always 2π G(f). Obviously, such scaling is necessary for linking g(t) to both G(ω) and G(f). For example, the Fourier transform of a basic cosine signal cos(ωt) translates to impulses scaled by π when angular frequency ω is involved, while cos(2πft) leads to impulses scaled by 0.5 when cyclic frequency f is involved. It will be very nice indeed if the Wikipedia Fourier transform article can show or provide information that demonstrates how we can arrive at the relationship G(ω) = 2πG(f). At the moment, I am assuming that the transformation to the angular frequency domain and the transformation to the cyclic frequency domain are simply defined as they are, and the relation G(ω) = 2πG(f) comes with the package. KorgBoy (talk) 05:33, 14 January 2019 (UTC)
- First of all, excusing your abuse of notation, it's not true that "$G(\omega)$ [i.e. $\hat f(\omega)$] is always $2\pi G(f)$ [i.e. $2\pi \hat f(\xi)$]". It isn't true for any of the functions at Fourier_transform#Square-integrable_functions,_one-dimensional. You appear to be talking about some of the functions at Fourier_transform#Distributions,_one-dimensional, generally the ones whose transform is some form of Dirac delta, and specifically row 304, which indicates that:
- $\hat f_1(\xi) = \tfrac{1}{2}\left[\delta\!\left(\xi - \tfrac{a}{2\pi}\right) + \delta\!\left(\xi + \tfrac{a}{2\pi}\right)\right]$ and $\hat f_3(\omega) = \pi\left[\delta(\omega - a) + \delta(\omega + a)\right]$ for $\cos(ax)$,
- which is neither $2\pi \hat f_1(\omega)$ nor even $2\pi \hat f_1(\xi)$. It's just $\hat f_3(\omega) = \hat f_1\!\left(\tfrac{\omega}{2\pi}\right)$, which is obvious by comparing the two integral expressions, but is also explicitly stated in the table at Fourier_transform#Other_conventions. The equivalence of the closed-form expressions for $\hat f_1$ and $\hat f_3$ is not so obvious in this case because of the scaling property of the Dirac delta, which has nothing to do with the Fourier transform, except that this particular example you chose has Dirac deltas. The scaling property says: $\delta(\alpha x) = \tfrac{1}{|\alpha|}\,\delta(x)$.
Table formatting
Hey, so at least on my laptop, the "Formulas for general n-dimensional functions" table extends off the right side of the page – because of this, when I'm scrolling on my track pad, I have to make sure I'm using strictly up-and-down movements, because moving side-to-side will move the page, too, which is pretty annoying. I don't really know how to fix it, but surely it can be fixed? Seeing as how none of the other tables have that problem. PaulodiCapistrano 03:27, 1 February 2019 (UTC)
- Maybe this info will help: When I change my Chrome browser "zoom" setting to 125%, a horizontal scroll bar appears, allowing me to "move the page" to the left when I want to see the right-hand side of that table or the other tables that don't fit. That is a desirable feature, not a bug that needs fixing. Also, when I do that, I can see that the "Formulas for general n-dimensional functions" table is simply the widest one. There is no other difference between it and the other tables. Thus, I assume they will also trigger horizontal page-scrolling when your track pad movement wanders too far from the vertical.
- --Bob K (talk) 14:51, 1 February 2019 (UTC)
Animations of properties
I've added an animation showing the time-shift properties of the Fourier transform. If people could leave their feedback on whether it's helpful or not, that would be great.
Davidjessop (talk) 09:15, 24 February 2019 (UTC)
- OK. Since I started this, I will follow up here. But I don't intend to get into a prolonged debate, and I am not going to revert the current version again. And I appreciate the good effort that went into making the gif. After staring at it for a while, it says two things (and the caption is helpful):
- 1. A periodic signal is a sum of discrete frequencies (sinusoids); i.e. a Fourier series: $s_P(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{i 2\pi k t / P}$.
- 2. The vectors rotate clockwise at rates proportional to k, as the time-shift, T, increases (a numerical check of this appears after item C below).
- A. The caption is good, but I would prefer to see it in conjunction with formulas similar to those above. The gif is harder (for me) to comprehend.
- B. Instead of the fast-moving gif, I would prefer something that steps along each time I click on it, at my own rate.
- C. But let me also compliment you on the fact that it's a great example of the potential of a digital encyclopedia/textbook. My college days would have been so much easier if my textbooks had had things like this, plus unlimited space and Wikilinks.
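A quick numerical check of observation 2 (a sketch of my own; the test signal is arbitrary and not the one in the gif): delaying a periodic signal by T multiplies its k-th Fourier coefficient by $e^{-i 2\pi k T/P}$, i.e. rotates it clockwise at a rate proportional to k.

```python
import numpy as np

P = 1.0                                   # period
n = 128                                   # samples per period
t = np.arange(n) * P / n
signal = lambda t: np.cos(2 * np.pi * t / P) + 0.5 * np.sin(6 * np.pi * t / P)

T = 0.2                                   # time shift
c_orig = np.fft.fft(signal(t)) / n        # coefficients of the original signal
c_shift = np.fft.fft(signal(t - T)) / n   # coefficients of the delayed signal

k = 3                                     # the sin term lives at |k| = 3
expected = c_orig[k] * np.exp(-2j * np.pi * k * T / P)
print(np.allclose(c_shift[k], expected))  # True: the coefficient just rotated
```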
FTs, Pixel 4, and better Dab'n Technology
A little more specifically, the Web version of an NYT article (fresh, presumably today), concerning IIRC a new Polaroid and titled "The Reason Your Photos Are About to Get a Lot Better", and it reeks, to this physics/math/computing guy, of Monsieur Fourier, the FFTs he's effectively graced us with, and the clumsy manual methods we are applying, on WP, to The Disambiguation Problem. (My main thot, until reading that article, was that our old (and perhaps, for all I can see, abandoned) mechanism of user-specifiable parameters for at least sorting tabular data has faded away. Nevertheless, they can surely be reintegrated with the newer features. Yes, indeed: many aspects of dab'n are not very amenable to data-base methods. But it seems to me that Wikidata should leave us with a strong impression that we may not yet have plumbed the potential of treating additional aspects of our tasks as machine assistable.) I'd like to imagine that folk (of more wit and more command of resources than I) have gone down such roads and returned with at least scouting notes. I hope it's not just a matter of us proles reaching beyond our real grasp, and would welcome hearing others' takeaway.
--JerzyA (talk) 16:47 & :54, 18 October 2019 (UTC)
Applications/Analysis of differential equations
In the above section, "initial value problem" and "initial conditions" should be used instead of "boundary problem" and "boundary conditions", respectively.
--Gozo032 (talk) 06:44, 6 September 2020 (UTC)
- Initial value problems are boundary value problems where the boundary is given in the time domain. While, upon a brief examination, the examples given use the time domain, the FT can be used on other domains. Constant314 (talk) 16:15, 6 September 2020 (UTC)
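For concreteness, a worked sketch of my own (using the article's $e^{-2\pi i x\xi}$ convention) of the prototypical problem being discussed, where the data is prescribed at t = 0:

```latex
% Heat equation \partial_t u = \partial_x^2 u with u(x,0) = u_0(x).
% Transforming in x turns \partial_x^2 into multiplication by (2\pi i\xi)^2:
\partial_t \hat u(\xi,t) = -4\pi^{2}\xi^{2}\,\hat u(\xi,t)
\quad\Longrightarrow\quad
\hat u(\xi,t) = \hat u_0(\xi)\,e^{-4\pi^{2}\xi^{2} t},
% and inverting the transform (a convolution with the heat kernel) recovers u.
```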