Talk:Discrete Fourier transform/Archive 1
This is an archive of past discussions about Discrete Fourier transform. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Question re spectral analysis
I would like to do a spectral analysis, but I have no clue how to interpret the results I get after the transformation. Can anyone explain it please? --Marco 17:54, 18 February 2006 (UTC)
- Your original data is in the time domain (what you'd see on an oscilloscope or something) and the result of taking the DFT gives you the spectral content of the time data for that window.
- Take N samples of your data and do the math. What you get in the end is N complex numbers representing the spectral content at frequencies from 0 (DC) up to (N−1)/N of your sampling rate, in steps of (sampling rate)/N. So if you do N=10 and your sampling rate is 20 Hz then your 10 values resulting from the DFT correspond to the frequencies 0 (DC), 2, 4, ..., 18 Hz; the bins above the Nyquist rate (half the sampling rate, here 10 Hz) represent negative frequencies. Cburnett 18:37, 18 February 2006 (UTC)
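A quick numerical illustration of the mapping from DFT bins to frequencies (a minimal sketch using numpy; the 4 Hz test tone is an arbitrary choice, not part of the example above):

 import numpy as np

 fs = 20.0                          # sampling rate in Hz, as in the example above
 N = 10                             # number of samples
 t = np.arange(N) / fs              # sample instants
 x = np.cos(2 * np.pi * 4.0 * t)    # a 4 Hz test tone (arbitrary)

 X = np.fft.fft(x)                  # N complex DFT coefficients
 freqs = np.arange(N) * fs / N      # bin k corresponds to k*fs/N Hz
 for k in range(N):
     print(f"bin {k}: {freqs[k]:5.1f} Hz  |X[k]| = {abs(X[k]):.3f}")
 # Only bins 2 and 8 are large; bin 8 (16 Hz) is the alias of -4 Hz, since
 # bins above the Nyquist frequency (fs/2) represent negative frequencies.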
Unknown character
I can't see the character ℱ, even when I select Unicode encoding from the "view" menu of explorer (unless it's supposed to be an empty box). -rs2 21:25, 24 Mar 2004 (UTC)
- It's U+2131 ℱ SCRIPT CAPITAL F = Fourier transform (see Unicode letterlike symbols). You can't see it because you don't have a font installed that contains that character. It's nothing to do with the encoding of the Wikipedia page, which was ISO-8859-1. — Gdr, 21:30 2004-03-31.
Normalization constants
This is wrong. The discrete Fourier transform is given by
<math>f_j = \frac{1}{N} \sum_{k=0}^{N-1} x_k \, e^{-2\pi i jk/N}.</math>
The inverse DFT is given by
<math>x_k = \sum_{j=0}^{N-1} f_j \, e^{2\pi i jk/N}.</math>
In both equations, N is the total number of sampling points.
- This is purely a matter of convention, as noted in the article. The convention in the Wikipedia article is extremely common (for example, it is the one used in Matlab [1]). —Steven G. Johnson 03:19, Jan 7, 2005 (UTC)
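For concreteness, the convention in question (no scaling on the forward transform, 1/n on the inverse, as described later in this discussion) is:
<math>f_j = \sum_{k=0}^{n-1} x_k \, e^{-2\pi i jk/n}, \qquad x_k = \frac{1}{n} \sum_{j=0}^{n-1} f_j \, e^{2\pi i jk/n}.</math>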
Hello Steven - about the reason for the 1 and 1/n convention - I see your point that it avoids a multiplication for the DFT but I don't think that an actual algorithm would put the 1/n inside the summation and multiply all n times. It would just perform the summation and then multiply by 1/n. Shouldn't we say that it's a combination of avoiding a multiplication for the DFT, AND not having to spend the time calculating the square root? Paul Reiser 18:58, 25 Jan 2005 (UTC)
- You have n outputs, therefore you have to multiply n times. You only have to compute the square root once, and furthermore the square root is independent of the data so it can be precomputed. (In principle, there are ways to absorb the scaling into the twiddle factors of the FFT, but this seems rarely done in practice—one has to accommodate the different conventions desired by users, so it is easier to do no scaling at all in an FFT code and leave it to the user. Often, the normalization factor can be absorbed into some other part of a calculation at negligible cost.)
- Even putting aside computational considerations, the η=1 scaling is sometimes more convenient than a unitary scaling. For example, in deriving FFT algorithms everyone drops the normalization because it is just an annoyance that can be tacked on later. Or, if you want to interpret your f_j as sinusoid amplitudes it is nice to have an unscaled IDFT. Etcetera...unitarity isn't sacrosanct. —Steven G. Johnson 01:41, Jan 26, 2005 (UTC)
- I think I'm going to go back to the old definitions without the η's. The η factors are confusing to a new reader, are not conventional notation, are inconsistent with the FFT articles, and we had already stated equivalent information in words. —Steven G. Johnson 01:44, Jan 26, 2005 (UTC)
Ok, I get it about the n times thing. I think we are coming at this thing from two different directions. Your idea of "best" normalization seems to be from an algorithmic point of view, mine is from a mathematical point of view. Casting the DFT as a unitary operator is absolutely the best thing to do from my point of view. All of physics is based on things that are invariant under some transformation or other. When the DFT is a unitary transformation, the dot product, the angle between two vectors, the length of a vector, all are preserved. Viewing the DFT as a unitary operator is like saying it is a simple rigid rotation in space. With the 1, 1/n normalization, it is like saying it's a rigid rotation plus converting from meters to inches. It's kludgy from a mathematical point of view. Ok, that's my rant. I agree that using the η symbols is not the best thing for a new reader to see, and we have to come to a compromise here. How about we do it in 1, 1/n normalization, and I put in a section about the unitary normalization and how neat it is. I was about to say I needed to correct some things, but I see you are editing the page as I am writing this so I will check it tomorrow. Paul Reiser 02:51, 26 Jan 2005 (UTC)
- I agree that unitarity is often a convenient property (I'm a physicist myself). Stating that it is "absolutely the best thing to do" from a "mathematical perspective" is going too far, though—conventions are there for convenience, and convenience varies with the mathematical context. (For example, with the unitary normalization the convolution theorem would have an extra constant factor of √n. Or, consider the trigonometric interpolation polynomial, for which the most convenient thing is arguably to have a coefficient of 1 in the IDFT so that the amplitudes f are the sinusoid amplitudes.)
- With regard to what to do with the article, it already mentions the unitary normalization in several places. I'm not sure whether adding a whole new subsection would be an improvement. (If any inconsistencies have crept into the article, however, by all means correct them.) —Steven G. Johnson 03:30, Jan 26, 2005 (UTC)
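To spell out the convolution-theorem point above: writing ℱ for the unscaled transform and ℱ_u = n^(−1/2)ℱ for the unitary one, with cyclic convolution (x∗y)_m = ∑_j x_j y_{m−j}, one has
<math>\mathcal{F}(x * y) = \mathcal{F}(x)\,\mathcal{F}(y), \qquad \mathcal{F}_u(x * y) = \sqrt{n}\;\mathcal{F}_u(x)\,\mathcal{F}_u(y),</math>
so the unitary convention trades a clean convolution theorem for a clean Plancherel theorem.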
Hello Steven - I put the unitary material into one section; I think it works, but I took the matrix out of the orthogonality section. I'm not sure if you thought it was serving a purpose there (I couldn't see one), but I did see a purpose for the unitary section, in pointing out that it is a unitary matrix (with the proper normalization). Paul Reiser 11:58, 27 Jan 2005 (UTC)
Merge?
Should this article be merged with discrete-time Fourier transform? Michael Hardy 01:34, 2 Jun 2005 (UTC)
- I think not. They are as distinct from each other as the Fourier series is from the continuous Fourier transform. (See Fourier_transform#Family of Fourier transforms.) PAR 09:44, 2 Jun 2005 (UTC)
- However, I do not see anything wrong with merging discrete-time Fourier transform with Fourier series since the forward discrete-time Fourier transform is just the reverse Fourier series. PAR 10:03, 4 Jun 2005 (UTC)
- Actually this issue could be clarified somewhat if somewhere it is explained how sampling (or lack of sampling), and periodicity (or lack of periodicity) relates all four Fourier transforms. --HappyCamper 3 July 2005 22:08 (UTC)
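Roughly, the bookkeeping works out as follows (sampling in one domain corresponds to periodicity in the other):
- continuous aperiodic time ↔ continuous aperiodic frequency: continuous Fourier transform
- continuous periodic time ↔ discrete aperiodic frequency: Fourier series
- discrete aperiodic time ↔ continuous periodic frequency: discrete-time Fourier transform
- discrete periodic time ↔ discrete periodic frequency: DFT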
External Reference to Consider
I have an online book about the DFT that is aimed at the beginning undergraduate level: Mathematics of the Discrete Fourier Transform (http://ccrma.stanford.edu/~jos/mdft/) I am looking into splitting out some of the material into articles for the Wikipedia. However, since there is already a fine DFT article at the Wikipedia, I would merely recommend the book as an external reference, especially for "beginners". -- Julius
Also, it would be nice to have pointers to "live" Fourier transform software, perhaps:
... is there a good place on Wikipedia or Wikibooks to put public-domain source code?
Involutions
Hi StevenJ. You reverted my changes because you liked your own better. Well, I don't. Shall we discuss the matter and collaborate to improve instead of reverting each other's changes?
- When counting, don't stop after 'secondly'. You state 'several' and leave it to the reader to finish counting.
- I've added the word "third" since you feel strongly about it. —Steven G. Johnson
- You choose your definition of the DFT / IDFT pair arbitrarily and then talk about 'tricks' to connect them. Our readers deserve better than that. Why not take the involution as the basic definition and avoid further trickery?
- No, I choose the Wikipedia definition. And readers are not served by Wikipedia adopting a transform convention that conflicts with what the rest of the world does. These things are conventions—there is no right or wrong, only standard and nonstandard. The Wikipedia definition is very standard, while the involutary transform is not. —Steven G. Johnson
- The DFT transforms the vector x - not the component x_k. There is no such thing as DFT(x_k). You may talk about DFT(x)_k.
- Notation is neither right nor wrong; it can be unclear, inconsistent, misleading, or nonstandard, but not wrong per se. The question is, is DFT(x_k) clear in this context, or do we need to write DFT((x_0, ..., x_{n−1}))_k or something similarly verbose? In this case, I think the meaning is hard to mistake. However, I've used the explicit tuple notation in the page to make you happier. —Steven G. Johnson
- You may use the left arrow to signify the function x ← f(x). Thus x ← x^a is a power function, x ← a^x is an exponential function, x ← x^(1/a) is a root, and x ← log_a(x) is a logarithm. Thus you may talk about f ← DFT(x) and revert indexes.
- Using ← is needlessly technical here, and without it you suffer the same ambiguity in meaning.
- The notation 1^(k/n) is simpler than e^(2πik/n). See root of unity. Bo Jacoby 08:08, 26 September 2005 (UTC)
- For the DFT, the standard notation in the literature is ω^(jk) (with ω = e^(−2πi/n) a primitive root of unity). A couple of places in the article already use this notation where the space/clarity gains are significant, since those places are already discussing more advanced concepts. It doesn't make sense to use it in the introductory sections, however, because the minor space gains are outweighed by the barrier they impose on readers unfamiliar with that notation. —Steven G. Johnson 18:16, 26 September 2005 (UTC)
- Hi Steven. Wikipedia may be the reader's first meeting with a subject. So it is more important that the text is understandable than that it conforms to the 'rest of the world', meaning the technical or scientific literature. We have three competing notations for a concept which is very important in science: 1^(k/n), e^(2πik/n), and ω^k. The occurrence of the symbols e, π and i is scary to the beginner, and perfectly unnecessary. The omega-notation furthermore does not indicate that it depends only on k/n and not on k and n separately. However, everybody knows the number one, and the argument k/n is indispensable, so while the complexity of the notation 1^(k/n) is sufficient, the additional complexities of the other notations are unnecessary. Furthermore 1^(k/n) connects to other branches of mathematics such as the theory of cyclic groups. This insight was new to Linas and he may not be the only one.
- The complex conjugation of the input vector connects the Fourier transform to the theory of the inner product. The fact that it also makes the transformation involutary is merely another hint that it is the right thing to do.
- The Fourier series and the Fourier integral are limiting cases of the discrete Fourier transform for large values of n. This connection is historically not the first insight, nor is it the approach of the 'rest of the world', but it is the insight that the beginner might appreciate because it is coherent and simple. I'd like to edit Wikipedia to show it, but of course I won't do it if you revert it afterwards. I can leave it for you to do, since you feel so much about the control over these Wikipedia articles. Bo Jacoby 12:39, 27 September 2005 (UTC)
- I don't think StevenJ (or myself for that matter) wants to control the article so much as to create and maintain a high-quality article. The article as it stands is a compromise between a number of people; in its present form it's not the way any of us would write it, but that's life on Wikipedia. I think the 1^(k/n) insight is good and should be included, but I think that the Wikipedia article will only be one of many sources that someone learning the subject will use. I don't know of any source that uses the 1^(k/n) notation and so the learner will be constantly needing to translate between the Wikipedia article and everyone else. With regards to the "Expressing the inverse DFT in terms of the DFT" I hope I can edit that to conform to the notation of the rest of the article, such as using script F to denote DFT rather than the letters DFT. I also want to remove all "computer tricks" and put them where they belong, in the FFT article, for example. PAR 17:49, 27 September 2005 (UTC)
- Regarding the involution-based DFT definition, if you think it is the "right" thing then by all means publish a paper and convince the rest of the world (although you will have a very hard time of it). In all cases, Wikipedia is not the right place for promulgation of new standards, however persuasive you find them (see Wikipedia:No original research). —Steven G. Johnson 23:30, 27 September 2005 (UTC)
- PAR, regarding the "computer tricks" — ways to compute the IDFT from the DFT are simply useful general properties of the transform. They have nothing to do with FFTs per se, although they are sometimes used to obviate the need for programming separate inverse FFTs. Because of that, I don't see any better place to put them than in the DFT article. (Regarding script F vs. DFT, I agree that it should be changed and I will do so ASAP...I had totally forgotten that the article had already defined a notation for this, duh!) —Steven G. Johnson 23:30, 27 September 2005 (UTC)
Cleanup
- Somewhere the Fourier transform of x was called f, and elsewhere it was called X. I have changed f to X. I also changed DFT into script F in accordance with Stevenj.
- Most Wikipedia pages on DFT-related subjects currently use the notation that the DFT of x is f; changing it everywhere would be pointless. Therefore, we should primarily stick to that for consistency. On the other hand, when we have the transforms of x and y, we need two letters, and X and Y are fine—there's nothing mathematically wrong with using a different letter, as long as it is clearly stated in the text. Script F is useful in equations, but using "DFT" as the name of the transform in English is preferable. —Steven G. Johnson
- I agree with StevenJ PAR 20:12, 28 September 2005 (UTC)
- I should also say that I, personally, would greatly prefer the x -> X notation. I would also prefer to use k as the output index, and j as the input index...or better, N as the transform size and n as the input index (to avoid confusion between j and i). I tried to do this early on but was outvoted, and at this point changing it everywhere would be too much hassle. —Steven G. Johnson 17:18, 28 September 2005 (UTC)
- I agree with that and I would be willing to do the changes. PAR 20:12, 28 September 2005 (UTC)
- PAR, take a closer look at the sheer number of articles that would need to be changed. (See Category:Fourier analysis.) —Steven G. Johnson 22:40, 29 September 2005 (UTC)
- I think that the idea of cyclic indexes should be introduced generally at an early stage.
- I agree that the implicit periodic boundary conditions should be discussed explicitly, probably under "Properties". —Steven G. Johnson
- I don't agree that the trigonometric polynomial is unique. The function e^(2 pi i(n-1)x) differs from e^(-2 pi i x) even if they match at the sampled points. This phenomenon is called 'aliasing'.
- Note that it says the unique function of this form, so the article is correct as stated. The aliased polynomials are of a different form. —Steven G. Johnson
- On the other hand, I suspect that it is not the same as the trigonometric interpolation polynomial used in practice. In practice, people generally use j from -n/2+1 to n/2, which has the important property of interpolating real inputs to a real function, and also uses the minimum-frequency interpolation. So, probably we should give this form instead and make a note of the different possible formulations. —Steven G. Johnson
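The aliasing example above is easy to check numerically (a sketch; n = 8 is an arbitrary choice):

 import numpy as np

 n = 8
 x = np.arange(n) / n                     # the n sample points k/n
 f1 = np.exp(2j * np.pi * (n - 1) * x)    # e^(2 pi i (n-1) x)
 f2 = np.exp(-2j * np.pi * x)             # e^(-2 pi i x)
 print(np.allclose(f1, f2))               # True: equal at every sample point
 # As functions they differ, e.g. halfway between samples at x = 1/(2n):
 print(np.exp(2j * np.pi * (n - 1) / (2 * n)), np.exp(-2j * np.pi / (2 * n)))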
- The unitarity section still uses a different definition of script F.
- It could use F_u or something like that to distinguish...however, it is probably preferable for the section to simply clearly state that it is using a different normalization convention than the main transform definition in the article, but to emphasize that it is essentially the same transform. —Steven G. Johnson
- I agree with this and will change it, maybe to U for "unitary matrix"
- The problem I have with U is that there is no visual connection with F, even though it is the same transform (essentially), which is why I suggested F_u or just saying in words that it is a different normalization. —Steven G. Johnson 22:17, 29 September 2005 (UTC)
- I think that this article is very important and should be made perfect. Bo Jacoby 08:42, 28 September 2005 (UTC)
- Everyone else thinks it should be a terrible article, so you are outvoted. —Steven G. Johnson 17:09, 28 September 2005 (UTC)
- AT LAST! I was getting tired of being so agreeable. I disagree, Bo Jacoby is right. PAR 20:12, 28 September 2005 (UTC)
Hi Steven. The article is terrible: confusing, inconsistent, incomprehensible and far too long. Even if it is not mathematically wrong to change notations and definitions in the middle of an article, it is no good to the reader. I'd like very much to help in cleaning it up, but when you just revert everything I do, you cannot be helped. I'm really sorry. Bo Jacoby 07:29, 29 September 2005 (UTC)
- Bo - why don't you pick out one particular correction, maybe the biggest one in your mind, that you would like to make. Then we can discuss it. If you convince me, then Steven will probably accept it. If you don't, then hopefully you will accept it. Then we can go on to the next problem. What is the most "confusing, inconsistent, incomprehensible" aspect of the article? PAR 18:42, 29 September 2005 (UTC)
- PS - Please realize that this is one of a "collection" of related articles, which include Continuous Fourier transform , Fourier series and discrete-time Fourier transform. These articles should have consistent notation as much as possible, and anyone who suggests a radical change in one better be ready to do the work required to change all, and in a relatively error free manner. That implies a thorough understanding of all four articles. PAR 18:52, 29 September 2005 (UTC)
- Don't forget all of the fast Fourier transform articles (which includes several articles on particular algorithms). Not to mention discrete cosine transform, discrete Hartley transform, and probably others I am forgetting. —Steven G. Johnson 21:43, 29 September 2005 (UTC)
Thank you for your suggestion, PAR. I'll pick out one particular correction. It is certainly not the biggest in my mind, for that would be the union of all the changes that I have in mind, but it is the first one that I stumble on while reading. It was stated that the word 'sequence' would be used in the meaning of 'vector'. The reader then has the burden of remembering this and translating 'sequence' into 'vector' while reading. I think this suggests the simple edit of changing 'sequence' to 'vector', taking the burden from the shoulders of the reader. Let me know if anyone disagrees. Bo Jacoby 07:55, 30 September 2005 (UTC)
Vector or sequence
Bo - Before I revert some of your edits, let me explain myself. A sequence is an ordered set of numbers, and that's about it. A vector is a mathematical object which has associated operators etc., and by a logical process it can be deduced that if you define a set of basis vectors (a coordinate system), you can represent a vector in terms of this basis set by a sequence. The sequence is not the vector, at least from a mathematician's or a physicist's point of view. It is from a computer programmer's point of view, I think, but this is a mathematical article. That means the statement:
- The vector of n complex numbers x0, ..., xn−1 is transformed into the vector of n complex numbers f0, ..., fn−1 by the DFT according to the formula:
is not correct, "vector" should read "sequence". There is no such thing as a "vector of complex numbers", only a sequence of complex numbers that may or may not represent a vector, depending on whether a coordinate system and all the vector operators have been defined. The bottom line is that the DFT operates on sequences. It is often very helpful (and interesting) to assume that the sequences are representative of vectors. The DFT then can be thought of as a linear vector operator, etc. etc. The article is a bit sloppy in this regard and needs to be fixed. PAR 10:38, 30 September 2005 (UTC)
- Hi PAR. That subtle distinction between sequence and vector was not at all supported by the original version of the article. On the contrary, it was explicitly stated that 'sequence' means 'vector'. C^n is a vector space over the field C. So the elements x of C^n are vectors. These vectors are subject to discrete Fourier transforms, which are linear (or, preferably, antilinear) transformations of the vector space. The limiting cases of the DFT where n → ∞, such as the Fourier integrals and the Fourier series, are still linear or antilinear transformations of complex vector spaces. So a distinction between 'sequence' and 'vector' is neither necessary, nor useful, nor easy to explain. The better way to get rid of the sloppiness is to abandon the concept of 'sequence' in this context. Bo Jacoby 12:21, 30 September 2005 (UTC)
I agree, that distinction should be made more clear. However, I still say that the DFT fundamentally operates on sequences which may or may not be sequences of vector components. I don't need to know whether a sequence of complex numbers represents a vector in order to perform a DFT on that sequence. A sequence of complex numbers is not necessarily a vector. PAR 14:38, 30 September 2005 (UTC)
- PAR, as soon as you say that the DFT is a linear operator, this immediately implies that you are talking about an operator on a vector space. So, I agree with Bo in that I don't think it's very useful to talk about DFTs that do not operate on a vector space. On the other hand, we are not talking about any vector space in general; we are talking about the particular vector space formed of a sequence of complex numbers (where addition and multiplication are defined in the natural way). Moreover, the abstract (non-spatial) definition of a vector space may be unfamiliar to many readers, and because of that I suspect it is a gentler introduction to begin by calling it a "sequence" of inputs. (There's certainly nothing mathematically incorrect about calling it a sequence.) So, while I disagree in part with PAR's reasoning, I agree with his conclusion (that the previous wording is arguably better). —Steven G. Johnson 17:43, 30 September 2005 (UTC)
Ok, I see your point, once you have the transform, you've implicitly defined the operators that make the sequence a vector space. It still grates on my sensibilities, I mean there are so many properties of the DFT that aren't of a vector nature, like the shift property. Should we be using "vector" in this instance? I don't think so. Should we only use vector when it's a property of a vector space or some higher-ordered space (e.g. metric) and use sequence otherwise? The previous wording mixed it up, but was there a method to it? PAR 21:08, 30 September 2005 (UTC)
- A cyclic shift is just a unitary operator (i.e. a "rotation") on the vector space, as is multiplication by a linear phase. I'm not sure what you mean by properties "of a vector nature". I still don't see the problem with calling it by the more-familiar term of "sequence" in addition to the more-technical term of "vector", though. As George Orwell pointed out in 1984, synonyms (and their slightly differing shades of meaning) make language more expressive, not less. —Steven G. Johnson 01:55, 1 October 2005 (UTC)
Ok, I see your point. It's between you and Bo, then, but if I have to break a tie, I would say the old way was good, except a little longer note than "sequence and vector are used interchangeably" would be needed. PAR 14:27, 1 October 2005 (UTC)
I agree with Steven that the discrete Fourier transform operations are vector operations. Any finite-dimensional vector space over the field of complex numbers can be subject to the discrete Fourier transform by choosing a basis, so PAR's feeling that the discrete Fourier transform applies to sequences rather than to abstract vectors is incorrect. Though I generally agree with Orwell about synonyms, the connection between concepts and words is more precise in mathematics than in everyday language. I agree with Steven that one concept is logically sufficient, but we disagree about using two words. To me using two words for one mathematical concept is an unnecessary complication, and this very discussion confirms that it does introduce misunderstanding. Bo Jacoby 08:14, 3 October 2005 (UTC)
The shift theorem
A second suggestion. Why not simply write: F(x e^(2πijm/n))_k = F(x)_{k−m} and F(x_{j−m})_k = F(x)_k e^(−2πikm/n) instead of: if F(x)_k = f_k then F(x e^(2πijm/n))_k = f_{k−m} and F(x_{j−m})_k = f_k e^(−2πikm/n)? The f is unnecessary. Bo Jacoby 09:02, 3 October 2005 (UTC)
- I am against that - it's a true statement, but its significance is not as immediately obvious. A reader is liable to look at the first one and wonder "so what?" but looking at the same thing involving f, it's immediately clear: multiplying the input by the exponential factor amounts to shifting the output. PAR 20:50, 3 October 2005 (UTC)
Different readers look at things differently. I just see the f as a useless complication which the author should have eliminated himself instead of leaving it to the reader. Bo Jacoby 09:27, 4 October 2005 (UTC)
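For reference, the theorem under discussion, in the convention with forward exponent −2πijk/n and f = ℱ(x), reads:
<math>\mathcal{F}\bigl((x_j e^{2\pi i jm/n})_j\bigr)_k = f_{k-m}, \qquad \mathcal{F}\bigl((x_{j-m})_j\bigr)_k = f_k \, e^{-2\pi i km/n},</math>
with all indexes taken modulo n.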
The DFT as a unitary transformation
This section introduces a 4th notation for the Fourier transform: the primed index, x_{k'}. Why not reuse one of the 3: f_k, X_k, F(x)_k? Further cleanup may reduce the number even more, hopefully to one. Bo Jacoby 09:25, 3 October 2005 (UTC)
- I agree. The point of the primed index rather than the primed vector is to emphasize that the vector has not changed, only its components have, as a result of a coordinate system change. This is in the flavor of Einstein notation for tensors, but it shouldn't be introduced here out of the blue. Either introduce it with explanation or stick to the old notation. (Yes, I am the one that wrote it that way) PAR 20:50, 3 October 2005 (UTC)
Any linear bijection defines a change of coordinate system. That is an important point. It should be explained somewhere in WP, but not necessarily here. Bo Jacoby 09:39, 4 October 2005 (UTC)
Introduction. The only important thing
Says: "The only important thing is that the DFT and IDFT have opposite-sign exponents, and that the product of their normalization factors be 1/n." The words "The only important thing" indicate that only one thing is important. The explanation gives two things. It is depressing to the reader to accept the fact that the author has difficulties in counting to two.
The opposite-sign exponent thing is not important, because the involutary DFT has DFT=IDFT, with equal exponents. Bo Jacoby 11:52, 3 October 2005 (UTC)
- The "involutary DFT" is a different operator than the usual DFT. The two things remain important for the usual DFT, and I don't think we should make this an article about the involutary DFT disguised as the usual DFT. PAR 20:50, 3 October 2005 (UTC)
Consider the important special case of a sequence of real numbers x being transformed according to the formula f_k = n^(−1/2) ∑_j x_j e^(2πijk/n). How is this formula generalized to a sequence of complex numbers x? There are two options: f_k = n^(−1/2) ∑_j x_j e^(2πijk/n) and f_k = n^(−1/2) ∑_j x_j* e^(2πijk/n). Both formulae generalize the real case. They are not that different. The lazy mathematician chooses the first one because then he doesn't need to modify the formula. The careful mathematician is guided by algebraic simplicity: Because the second one is involutary and the first one is not, the second one must be chosen and the first one sacrificed on the altar of Occam. PS. Note that I resisted the temptation to write 1^(jk/n) :-) Bo Jacoby 08:42, 4 October 2005 (UTC)
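For what it is worth, the involution property of the second option can be verified in one line. Writing T(x)_k = n^(−1/2) ∑_j x_j* e^(2πijk/n),
<math>T(T(x))_m = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{j=0}^{n-1} x_j \, e^{-2\pi i jk/n} \, e^{2\pi i km/n} = x_m,</math>
using the identity ∑_k e^(2πik(m−j)/n) = n when j = m and 0 otherwise.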
- I don't think the DFT as it stands is the result of early muddled thinking being carved in stone. The DFT is a linear operator while the involutary version is not. If T is the involutary operator then T(ax + by) = a*T(x) + b*T(y) rather than aT(x) + bT(y). The linearity of the DFT is vitally important. PAR 13:08, 4 October 2005 (UTC)
"Early muddled thinking being carved in stone" it is. I love your poetry! The better generalization of the concept of a linear operator on a real vector space is that of an antilinear operator on a complex vector space. The antilinearity gives precisely what you need. (The name 'antilinear' is ill chosen, however, because it seems to imply that it is the opposite of linear, which it is not.) Bo Jacoby 13:28, 4 October 2005 (UTC)
- Well, it doesn't matter, we are not talking about this article any more, we are talking about another article "Involutary DFT" or something. This article is about the DFT, which is linear. But I don't understand what you mean by "what you need" - could you explain that. Also, do you have any references I could look at? PAR 20:42, 4 October 2005 (UTC)
- Bo, please go read Wikipedia:No original research. Then read it again. If you are so convinced that the involutary definition is a good thing, then great! But Wikipedia is not the place to push new standards, or anything new for that matter. Go publish an article or two, convince people that it is worthwhile (i.e. see whether your article is ever cited), and then come back and write Involutary DFT. This has nothing to do with whether the involutary thing is a good idea, so that argument is pointless here. —Steven G. Johnson 00:44, 5 October 2005 (UTC)
Steven, after studying No original research I think you are completely wrong. I quote: Research that consists of collecting and organizing information from existing primary and/or secondary sources is strongly encouraged... "No original research" does not mean that experts on a specific topic cannot contribute to Wikipedia. Indeed, Wikipedia welcomes experts and academics. This means that a new calculation or a new notation or an alternative definition is not considered 'original research' in this context. It does not rely on information which is inaccessible to you or to anybody else. I am very flattered that you consider the involutary Fourier transform to be original research. If I claim that I have made an invention, I expect everybody to object against it saying that it has been done before, or that it is trivial, so I should not consider myself an original scientist. Now the situation is upside down. I personally don't believe that the involutary Fourier transform is original research on my part. I consider it fairly trivial. I merely point out a minor error in one of your definitions of the Fourier transform: one missing asterisk. It can easily be corrected. It is no big deal. I don't consider this to be original research. But you do! Thank you very much, I'm honored and pleased. Bo Jacoby 08:07, 5 October 2005 (UTC)
- The definition of prohibited original research, from that page, is:
- The phrase "original research" in this context refers to untested theories; data, statements, concepts and ideas that have not been published in a reputable publication; or any new interpretation, analysis, or synthesis of published data, statements, concepts or ideas that, in the words of Wikipedia's founder Jimbo Wales, would amount to a "novel narrative or historical interpretation".
- The involutary DFT definition is a concept or idea that has not been published in a reputable publication; or, if you prefer, it is a new interpretation or synthesis of the standard DFT concepts. Hence "original research". (This has nothing to do with whether it is a good idea, or whether you could get it published in a reputable journal.) It is certainly an undisputed fact that one can define such a transform, and it is (somewhat) reasonable to mention this fact in passing (as we do), if only to highlight the connection with the discrete Hartley transform. However, promoting it as a new transform definition, especially replacing the standard DFT definition as you seem to want, goes beyond what is acceptable practice in Wikipedia. —Steven G. Johnson 16:56, 5 October 2005 (UTC)
PAR, what you need is expanding the expression T(ax+by). In the real case it is aT(x)+bT(y). In the complex case it can be either aT(x)+bT(y) or a*T(x)+b*T(y). In both cases T(ax+by) is evaluated, and that is what is needed. Bo Jacoby 08:07, 5 October 2005 (UTC)
Introduction. Definition
The introduction should give a short answer to the short question: "What is the discrete Fourier transform?". Small talk should be avoided here. Using the involutary DFT as the definition will do this, and also save the Plancherel theorem from being mentioned twice in the article. Bo Jacoby 11:52, 3 October 2005 (UTC)
- Some history on this article: I changed the definition to include two general normalization constants, constrained only by the fact that their product be 1/n. StevenJ then argued that this was too much of a jolt for someone trying to use Wikipedia as one source among many, an argument which I now agree with. On this basis, we should use the most common definition in the introduction, then expand from there. So that means, in my mind, introducing with either (1,1/n) (the present normalization) or (1/√n,1/√n) (the unitary normalization). The present normalization is appealing to algorithmic types, because you don't need to calculate square roots. The unitary is appealing to physics types because it is symmetric, and analogous to a rigid rotation in Euclidean space, all sorts of things are conserved, dot product, angles, etc. But, given the present normalization, the Plancherel theorem needs to be stated in the present normalization, and then restated to illustrate the efficacy of the unitary normalization. PAR 20:50, 3 October 2005 (UTC)
- (PAR, I'm a physicist myself. In physics, you rarely use the discrete version of the transform at all for theoretical work, so your argument is a bit strange. The DFT in my experience is mainly used for computation, and occasionally for expression of discrete convolutions, in which cases the unitary normalization is actually often not the most convenient form as I mentioned below. —Steven G. Johnson 04:42, 5 October 2005 (UTC))
If your programmer wants to spare his computer from computing the square root, by all means, let him code a fast Fourier transform subroutine called SQRT_FFT computing √N·DFT(X). It is no excuse for the mathematician to make non-unitary or non-involutary definitions. The unitarity and involutarity are appealing to every user, who neither wants to remember two versions of the Plancherel theorem, nor to bother about the difference between DFT and IDFT. Bo Jacoby 12:57, 4 October 2005 (UTC)
- Unitarity is one convenient property. The fact that you apparently think it is the only convenient property tells me that you've never done much digital signal processing. For example, putting the 1/n factor in the idft (or only on the dft) makes the convolution theorem come out in a simpler form. (This is probably why Matlab, for example, uses the same convention as in Wikipedia.) From my own personal experience, in talking about FFT algorithms to compute the DFT (which themselves are a rich source of interesting mathematics and not just a practical "programming" matter), the normalization factors are an annoyance that is universally omitted. (See above re: involution form.) —Steven G. Johnson 00:28, 5 October 2005 (UTC)
- PS. From a programming perspective, the computation of the square root is irrelevant as it is only done once. More expensive are the 2N multiplications. —Steven G. Johnson
Actually I do have FFT experience. I completely agree on your PS that the time for taking the square root is epsilon. But the time for the 2N real multiplications is negligible too, because the inner loop of the FFT requires N·log(N)/log(2) complex multiplications, so this inner loop is more time-consuming when N is a large number. When N is not a large number everything is fast, so nothing matters. Of course you know all that. The convolution theorem asserts that the transform of a convolution is the product of the transforms, so the convolution is the transform of the product of the transforms. Can that really be simplified? Please show me. I don't think that unitarity is the only convenient property. Involutarity is nice too, because it is time-consuming to debug the error of having chosen the wrong one of IDFT and DFT. Not having this to worry about makes programming safer. Bo Jacoby 08:39, 5 October 2005 (UTC)
- That statement about the convolution is no longer true if you use the unitary normalization. You either have to change the definition of convolution, or have an extra sqrt(N) factor in the convolution theorem. But of course, you knew that? —Steven G. Johnson 16:23, 5 October 2005 (UTC)
I was mistaken. Thank you for telling me. Bo Jacoby 11:54, 6 October 2005 (UTC)
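The convention dependence is easy to check numerically. numpy's fft happens to use the same scaling as discussed here (no factor on the forward transform, 1/n on the inverse), so the convolution theorem holds with no extra factor (a sketch with arbitrary random data):

 import numpy as np

 n = 16
 rng = np.random.default_rng(0)
 x = rng.standard_normal(n)
 y = rng.standard_normal(n)

 # cyclic convolution computed directly from the definition
 conv = np.array([sum(x[j] * y[(m - j) % n] for j in range(n))
                  for m in range(n)])
 # and via the convolution theorem, F(x*y) = F(x) F(y)
 conv_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
 print(np.allclose(conv, conv_fft))   # True
 # With the unitary scaling (1/sqrt(n) on each transform) the identity
 # would instead read F_u(x*y) = sqrt(n) F_u(x) F_u(y).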
- By the way, in practical circumstances the scaling is not quite so negligible as you imagine, especially in the common case where it is implemented as an extra pass over the array (due to the memory implications more than the arithmetic cost), because the number of passes over the array is quite small already (even though it grows logarithmically) for an optimized FFT. It is possible to absorb the normalization into the twiddle factors, but this has other disadvantages (e.g. the same twiddle factors can't be re-used for the forward and inverse transforms). It is just as efficient (or more efficient), and more flexible, to absorb it into the user's computations. —Steven G. Johnson
Implement any variation of the discrete Fourier transform to suit your needs of saving computer time, but stick to one definition and explain your subroutine in terms of that. Bo Jacoby 11:54, 6 October 2005 (UTC)
- As another example of a Fourier variant where the unitary normalization is not the most convenient, by the way, consider the Fourier series (where almost no one uses a normalization corresponding to the unitary FT). Your "one true normalization" crusade is just not sensible, I think. —Steven G. Johnson
Yes. Here the symmetry is broken. The coefficients of the Fourier series are integrals. Yet my point is the same as before: define one discrete Fourier transform and express everything else in terms of that, instead of confusing the reader regarding the meaning of the word. My 'crusade' is for simplicity and clarity, and you are not my opponent in that respect. Bo Jacoby 11:54, 6 October 2005 (UTC)
- Initially the article did use one definition, the one at the top. (In my experience this definition is by far the most common in actual practice. It makes things like the convolution theorem come out more nicely.) Then one section was added to say how a unitary normalization has some advantages in expressing certain things in a nice form. I think this is reasonable — indeed, different conventions have different advantages and disadvantages, and it is a service to readers to point this out. It does not make the article "unreadable", especially since the one place where a different convention is used is confined clearly to one sub-section. The alternatives are (a) don't present the unitary representation at all, which misses out on those nice expressions or (b) adopt the unitary representation everywhere, which is at odds with common practice and is inconvenient for actual usage. —Steven G. Johnson 15:50, 6 October 2005 (UTC)
Connection to the continuous Fourier transforms
My next suggestion for improvement of the article. Some people consider the discrete transform to be a special case of the continuous transform, when generalized to distributions like the Dirac delta 'function'. I think it is better to consider the continuous transformations as limiting cases of the discrete transformation, because then the reader doesn't need to know about integration but can just consider a large value of n. Consider the ring of integers Z, the ideal nZ of multiples of n, and the quotient ring Z/nZ. The vector space in question is C^(Z/nZ). The indexes can be represented by integers from 0 to n−1, or from 1 to n; for dimension 2n they can be represented by integers from −n to n−1. Substituting x_j = (2n)^(−1/2) f(j/(2n)) into f_k = (2n)^(−1/2) ∑_j x_j* e^(2πijk/(2n)) gives the limit f_k → ∫_{−1/2}^{1/2} f(x)* e^(2πikx) dx. Bo Jacoby 12:04, 5 October 2005 (UTC)
- That's a lot of unnecessary technical notation just to say that one possible limit of the sum is the integral for the Fourier series (note that you haven't gotten to the continuous Fourier transform yet, for which you have to be much more careful with your limit process...even here one really needs some caveats about convergence). It is reasonable to add a sentence saying that, for many applications, the DFT is used as an approximation for the discrete-time Fourier transform (which in turn is used as an approximation for the continuous Fourier transform of band-limited signals) or the Fourier series, which are related by appropriate limits for well-behaved signals. (This was already there, somewhat, in discussions of spectral approximation and partial differential equations, and I've expanded it slightly.) —Steven G. Johnson 16:33, 5 October 2005 (UTC)
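In outline (glossing over the convergence caveats mentioned above): for a smooth function f of period 1 sampled at t_j = j/n, the DFT sum is a Riemann sum for the Fourier-series coefficient,
<math>\frac{1}{n} \sum_{j=0}^{n-1} f(j/n) \, e^{-2\pi i jk/n} \;\longrightarrow\; \int_0^1 f(t) \, e^{-2\pi i kt} \, dt \qquad (n \to \infty).</math>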
For the continuous transform, choose the dimension to be an even square number: n = 4m². Substituting
<math>x_j = n^{-1/4} f(j\,n^{-1/2})</math>
into
<math>f_k = n^{-1/2} \sum_j x_j^* \, e^{2\pi i jk/n}</math>
gives the limit
<math>F(y) = \int_{-\infty}^{\infty} f(x)^* \, 1^{xy} \, dx.</math>
Note that π does not appear in this expression for the unitary involutary continuous Fourier transform. (The asterisk signifies complex conjugation). Bo Jacoby 12:46, 6 October 2005 (UTC)
- Oh, I wasn't doubting that you can do this. Or, once you have the Fourier series for a length L you can take L to infinity. The problem with positing this as a formal limit process, however, is that it is a little too glib. More care is required to ensure (and even to define) convergence of the limits. It would be great to have a nice article on this in Wikipedia, but it would really be an article in its own right. (And the 1^{1/n} notation, of course, should not be used because it is very nonstandard. Worse, it redefines standard symbols because 1^{1/n} = 1 in the usual branch cut. But that issue has, I hope, been settled in Root of unity.) —Steven G. Johnson 15:58, 6 October 2005 (UTC)
1. In the context of the discrete Fourier transform it is interesting to compute a finite sum approximately by means of a known integral. That is uncomplicated. 2. In the context of integration theory, the integral is defined as a limit of a sequence of finite sums. That is where all the complication belongs. 3. Regarding 1^(1/n): You are not expected to change your mind, but the mathematicians following you may not feel the same way as you do. Today's nonstandard original research may be tomorrow's standard notation. When contemplating your life's net value to humanity you may count your battle against progress as a negative contribution, but while there is death, there is hope. 4. Surely the usual branch cut is redefined. That was no holy cow. z^(1/n) is multivalued, and that cannot be helped in general. The usual branch cut leaves unpleasant and unnatural discontinuities. The conventional choice that z^(1/n) is nonnegative real for nonnegative real z is natural when working with real numbers. The conventional choice that 1^x = 1 is utterly unimportant. It is not used in practice. Why write 1^x instead of 1? The notation is free to be redefined to something useful. The expression e^(2πik/n) is repeated endlessly in technical and mathematical literature, and many readers get confused. Where did e and π and i come from? Because e^(2πi) = 1, then 1^x = e^(2πix) is a natural and useful convention. Bo Jacoby 11:27, 7 October 2005 (UTC)
- 1. Regarding the limits, you are oversimplifying; even things like the Fourier transform of 1 correspond to integrals that are not absolutely convergent and must be defined with care. (Or then there are famous examples like the function that is 1 on the rationals and 0 on the irrationals; if you treat its integral as the limit of a discrete sum, you'll get the wrong answer.) Handwaving is fine, of course, to get a general intuition, but a proper treatment will require an article in its own right. (Books have whole chapters on this stuff.)
- 2. Of course, conventions are merely that and can be redefined. But not by Wikipedia. For the umpteenth time, please take your notational revolutions elsewhere. —Steven G. Johnson 15:45, 7 October 2005 (UTC)
- Bo - I have to agree with Steven - and I am definitely not used to being on the conservative side of a revolution, and I am not used to agreeing with Steven so often in so short a space of time.
- 1^x is used implicitly when the single-valued definition of exponentiation is given as a^x = e^(x·ln(a)), where ln is the usual single-valued definition of the natural logarithm. 1^x is not free to use.
- A branch cut leaves unpleasant discontinuities no matter where you make the cut.
- I agree an article explaining how the various transforms can be seen as limiting cases of each other would be a good, but separate, article.
- Again, this article is about a particular subject - the usual DFT, not the involutary DFT. It should not be presented as anything but.
- A good revolutionary has to know how and where to join battle to maximum effect. If you were an American revolutionary, you would not sail to the north pole and howl at the moon. This ain't the place, we're not the enemy. You need to publish papers, GOOD papers, and introduce your notation memes there, and if they are robust their adoption will be inevitable.
- Just in case you succeed, I have this idea - no joke - the metric tensor is really just the unit tensor, so its covariant components should be represented in Einstein notation as 1_{μν}. Unitary transformations are really just the metric tensor with mixed coordinates, so all unitary tensors should have their covariant components written like 1_{μ'ν}, where primed coordinates signify a different coordinate system from unprimed. These tensors are simply index changers, so you have expressions like x_{μ'} = 1_{μ'}^{ν} x_{ν}. It's obvious that 1 is the right thing to use. Using g or whatever is a less natural and informative notation. Come the revolution, I'll be dead and gone, but can I count on you to implement this as well? It would be such a comfort when contemplating my life's net value to humanity. PAR 16:07, 8 October 2005 (UTC)
to Steven. The discrete Fourier transform of x(t) = 1 is f_k = √n·[k = 0] (using Iverson bracket), while the continuous Fourier transform of 1 requires a definitional extension of the concept of a function. The indicator function of the rationals, [x ∈ Q], does not appear as a limit of discrete functions. The complications of definition belong to the article Distribution (mathematics). The complications of integration theory belong to the article Lebesgue integration. No such complication belongs to discrete Fourier transform. Bo Jacoby 10:27, 10 October 2005 (UTC)
- Sigh. My point is that expressing the continuous Fourier transform as the limit of the DFT inevitably involves such advanced concepts. It isn't obvious or elementary and one should be cautious about presenting it as such. —Steven G. Johnson 16:38, 10 October 2005 (UTC)
- I respectfully disagree. What can be easily done should be easily done. Advanced cases may be postponed. Problems should not be solved until they appear. Bo Jacoby 11:06, 11 October 2005 (UTC)
to PAR.
- The 'usual single-valued definition of the natural logarithm' is not that usual. See Natural_logarithm#Derivative.2C_Taylor_series_and_complex_arguments. Until you provide a reference where the interpretation 1^x = 1 is used, the interpretation 1^x = e^(2πix) is free to be used without misunderstanding.
- I'll work on it, but your statement is wrong. It's not until I provide a reference that it's free to use. I'm not that much of an authority. PAR 22:45, 10 October 2005 (UTC)
- You will not misunderstand it. Who will? That is the question. Bo Jacoby 11:06, 11 October 2005 (UTC)
- The concept of branch cut is outdated and being replaced by Riemann surface. So the argument, that 1^x = e^(2πix) violates the 'standard branch cut', is not decisive. Note that the definition a^x = e^(x·ln(a)) leads to a single-valued interpretation of e^x only after choosing one of the values of ln(e), namely 1, rather than for example 1 + 2πi. Likewise a single-valued interpretation of 1^x is made by choosing one of the values of log(1), namely 2πi, rather than for example 0.
- I don't think that's true. The idea of the branch cut being a sort of artificial thing to yield a single-valued function from a multi-valued function is very old. Also, I specified to begin with that the logarithm was the single-valued logarithm, so that log(e) = 1 automatically. PAR 22:45, 10 October 2005 (UTC)
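To make the two competing conventions explicit: both are instances of a^x = e^(x·log a), differing only in which value of the multivalued log 1 is chosen:
<math>\log 1 = 0 \;\Rightarrow\; 1^x = 1, \qquad \log 1 = 2\pi i \;\Rightarrow\; 1^x = e^{2\pi i x}.</math>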
- The limits leading to Fourier series and continuous Fourier transforms could be given in a few lines, which I might write myself when I feel more confident that it will not be removed immediately. I encourage your writing separate articles, of course.
- It certainly won't be removed as long as its correct, yet not revolutionary. PAR 22:45, 10 October 2005 (UTC)
- There is no such thing as the usual DFT but a mess of related definitions, even in the present WP article.
- As the article notes, a few slight variations are common. However, some variations are definitely not a "usual" variant of the DFT. For example, anything which is not linear. —Steven G. Johnson
- Are you aware that the restriction of the involutary Fourier transform to real functions is linear? Bo Jacoby 11:06, 11 October 2005 (UTC)
- No good revolutionary sits back and waits for the revolution to come. He creates it himself with vision and vigour. No, you are no enemy of mine, nor am I an enemy of yours. I enjoy your comments. I listen to your opinions with respect, and to your arguments with attention. To oppose an idea just because it is new notation and original research is the lamest argument in all history of science, and those who used it are mercifully forgotten, while their victims are honored: Hippasus, Galileo, Giordano Bruno ...
- Bo, no one is opposing it because it is new. They are opposing it here, on Wikipedia, because it is new. Please understand the difference. —Steven G. Johnson 16:38, 10 October 2005 (UTC)
- Is 'new' the only objection you have left against it by now? Have you really become positively enthusiastic about it? Will you spread the word elsewhere? Bo Jacoby 11:06, 11 October 2005 (UTC)
- Bo - I agree with you completely. COMPLETELY. DO IT. If you need my help let me know, I'll help. But not here, this is the north pole, and the decisive battles of the revolution are being waged elsewhere. This is where we analyze and explain past, present, and maybe future revolutions, but no engagements should be fought here.
- Thank you very much. I need all the help I can get. I don't think I need to do more writing myself, however. Refer this WP discussion to the authors of the books that use non-involutary Fourier transforms and write e^(2πik/n) instead of 1^(k/n). Bo Jacoby 11:06, 11 October 2005 (UTC)
- I love your idea of using 1 instead of g for the metric tensor. It is in accordance with Occam. Go ahead and use it. I'm with you. Sharing an idea is implementing an idea. If it creates trouble somebody will tell you. (The idea might have been used by Jean-Marie Souriau in his fine book Géométrie et Relativité, but I am not sure.) Bo Jacoby 10:27, 10 October 2005 (UTC)
- If I ever write a refereed paper involving the metric tensor, I will use it and hope it catches on. If I ever speak to anyone who is interested, I will lobby them, and hope it catches on. But I won't try to get it into a Wikipedia article where it doesn't belong, at least not one that is nominally describing something as established as the DFT. PAR 22:45, 10 October 2005 (UTC)
- I was wrong about Souriau using '1' for the metric tensor. He uses the vertical bar | with indexes in the lower right and the upper left, to indicate covariant and contravariant components of the basis vectors, les clefs matricielles (the matrix keys). But he writes the inner product of two vectors without the letter g. He uses 1_E for the identity function on the set E (1_E(x) = x), not as the indicator function of E ([x ∈ E] using Iverson bracket). You may enjoy reading Souriau. Bo Jacoby 11:06, 11 October 2005 (UTC)
Redo unitary
I should practice what I preach, so I redid the unitary section to get rid of the primed indices. The problem is, I replaced it with primed vector symbols, and this is still not in conformity with previous use of capitals as transformed quantities. I want to emphasize with the notation as much as possible the idea of having a constant vector object, and its components are the only things that change, the DFT being simply a coordinate transformation. That was the idea of the primed coordinate indices rather than the primed vector symbols. However, to express the unitary stuff using capital letters would further obscure this idea. Is there any objection to changing the capital letters to primed letters in the rest of the article? PAR 15:56, 8 October 2005 (UTC)
- Capital letters are more conventional, and are used in most of the other Fourier transform pages (other than the discrete transforms, sigh). They are easier to distinguish visually than primed letters. And, most importantly, the frequency domain is sufficiently distinct, in practice, from the time domain that I don't agree that we want to minimize the differences between the two representations. I would rather you changed to capitals in the unitary section. —Steven G. Johnson 16:25, 8 October 2005 (UTC)
For those who think of the Fourier transform as a time-frequency pair, yes, you want to emphasize the distinction. For those who think of it as a coordinate transformation between the momentum and space representations of a quantum wave function, you want to emphasize the lack of distinction. We should not favor the time-frequency POV. PAR 18:54, 8 October 2005 (UTC)
- First of all, the discrete Fourier transform is not typically used for quantum wave functions, and the article should favor the contexts in which the DFT is typically used. Second, even in quantum mechanics your prime notation is not common either. (When talking about wave functions, the most common notation I've seen is ψ(x) and φ(p) for distinguishing the position and momentum-space representations...or rather, in QM the most common representation is probably Dirac notation, which is basis-independent, and then using <x|ψ> etc.) The fact that it is a linear unitary transformation is already emphasized enough; we don't need to add an unusual notation on top of that. —Steven G. Johnson 22:27, 8 October 2005 (UTC)
Ok, I'm focusing on the concept and not the particular calculation technique. You know that the study of invariants under linear transformations is fundamental in physics. Also, you know I agree we don't need any unusual notation. But Parseval's and Plancherel's theorems are just so much mathematical gobbledygook to a beginner, and I am trying to convey an intuitive understanding of these theorems. Will you help me do that? PAR 23:30, 8 October 2005 (UTC)
Involutions 2
I separated the involutions from the tricks. Is the Hartley transformation really an involution? Shouldn't the divisor '2' be under the square root sign? Shouldn't the formula be found in the Discrete Hartley transform article? Bo Jacoby 13:13, 13 October 2005 (UTC)
- Yes. Yes. Already mentioned.—Steven G. Johnson
Dear friend. The Discrete Hartley transform says: "Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers," at variance with your statement: "taking the real part is necessary to get the Hartley transform (even for the real inputs, the outputs of H(x) are complex)." Please make up your mind. That "the involution form is basically an aside; it does not merit its own section" is your opinion, but not your decision. You know that the matter is controversial. Nobody prevents you from expressing your view based on your experience, nor should you prevent others from expressing their view based on their experience. My experience-based view is that this involution has the simpler algebraic properties. Please restore my edit and stop acting like a censor of the Spanish Inquisition. Bo Jacoby 08:56, 14 October 2005 (UTC)
- H(x) is not the Hartley transform. Its real part is, for real inputs. (The imaginary part is the Hartley transform with a different sign convention.) Personal attacks à la the Inquisition aside (nobody expects the Spanish Inquisition!), anything more than an aside is inappropriate because promotion of your personal non-standard DFT variant, whatever its merits, is original research, as has been repeatedly explained to you. If you prefer, we can delete mention of it entirely. —Steven G. Johnson 15:19, 14 October 2005 (UTC)
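For readers following this exchange, the close relationship between the DFT and the Hartley transform can be checked numerically. A minimal sketch in Python with NumPy, assuming the cas-kernel definition of the DHT from the Discrete Hartley transform article (cas θ = cos θ + sin θ) together with NumPy's e^{-2πink/N} sign convention for the DFT:

import numpy as np

N = 8
x = np.random.default_rng(1).standard_normal(N)  # real input
n = np.arange(N)

# Direct DHT: H[k] = sum_n x[n] * (cos + sin)(2*pi*n*k/N)
theta = 2 * np.pi * np.outer(n, n) / N
H = (np.cos(theta) + np.sin(theta)) @ x

# Via the complex DFT F[k] = sum_n x[n] e^{-2i*pi*nk/N}:
# Re(F) = sum x*cos and Im(F) = -sum x*sin, so H = Re(F) - Im(F).
F = np.fft.fft(x)
assert np.allclose(H, F.real - F.imag)

So for real inputs the DHT is obtained from the real and imaginary parts of the complex DFT, which is the kind of relationship being debated above.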
You claimed that the notation is considered 'original research' and I didn't pursue it. The involutary Fourier transform was accepted by you. Nobody called it 'original research' until now. I just changed the division into sections. I trusted PAR that you would not remove my edits, but you did. "Spanish Inquisition" refers to your actions, not to your person. You remove anything that you personally don't like, as heresy. This is getting ridiculous. I must give up my attempt to clean up your messy article on the discrete Fourier transform, because you are working against it. Yes, you may delete it entirely. You are on your own. Good bye. Bo Jacoby 11:26, 17 October 2005 (UTC)
- Bo - NO - I need help when I'm right and Steven J. is wrong. It's just that he's more or less right on this. PAR 14:50, 17 October 2005 (UTC)
Periodicity and aliasing
What does this have to do with the DFT? Aliasing is caused by periodic sampling. We don't need a DFT to see that e^{2πi jk/n} and e^{2πi (j+n)k/n} are indistinguishable. --Bob K 04:14, 13 December 2005 (UTC)
- Yes, aliasing is caused by discrete sampling. And discrete sampling is part and parcel of the DFT as it is usually applied, and is part of what makes it distinct from the continuous Fourier transform. Yes, it's equally true for the DTFT (where it should also be mentioned), but in this case it's true for both forwards and inverse transforms. What exactly is your objection? —Steven G. Johnson
- My objection is that I don't really know what you are trying to say that is actually useful information about the DFT. I have a feeling that there is a much better way to say it. I would like to help, but first I have to figure out what the point is. --Bob K 07:50, 13 December 2005 (UTC)
What purpose is served by the word finite in the sentence?: "and it means that in a finite discrete signal one cannot distinguish between frequencies j that differ by n". --Bob K 04:14, 13 December 2005 (UTC)
- You're right, that should be removed. —Steven G. Johnson
What is the point of this?: "If one simply evaluates the DFT formula at j+n then one finds that f_{j+n} = f_j," I.e., it appears that we are assuming the sequence is periodic in order to "prove" that the DFT thinks it is periodic. --Bob K 04:14, 13 December 2005 (UTC)
- The effective periodicity of x is not used in this statement, so I don't know what you are referring to. —Steven G. Johnson 06:25, 13 December 2005 (UTC)
- OK, I see that now. (I'm not used to your choice of symbols.) All you are saying is that the DFT is periodic in frequency. But the conclusion is that: "in a discrete signal one cannot distinguish between frequencies j that differ by n", which is true, but it isn't what you seem to have proved. You've at least skipped a step, haven't you? Furthermore, wouldn't it be simpler to just define y_k = x_k e^{2πi nk/n} (whose frequencies all differ from those of x by n)? And then it is trivial to see that y = x. It has nothing to do with the DFT. --Bob K 07:50, 13 December 2005 (UTC)
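For what it's worth, the step in question can be checked directly: the DFT sum itself is periodic in the frequency index, with no assumption about the input. A minimal sketch in Python with NumPy (sizes and seed are arbitrary):

import numpy as np

def dft_at(x, j):
    """Evaluate X_j = sum_k x_k exp(-2*pi*i*j*k/N) at any integer j."""
    N = len(x)
    k = np.arange(N)
    return np.sum(x * np.exp(-2 * np.pi * 1j * j * k / N))

x = np.random.default_rng(2).standard_normal(16)
N = len(x)
# The formula gives the same value at j and j + N, for any input x:
for j in range(N):
    assert np.allclose(dft_at(x, j), dft_at(x, j + N))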
Hi Bob, your basic issue seems to be that aliasing "has nothing to do with the DFT". Although aliasing can also be derived in other circumstances and in other ways, I can't agree with you entirely. Aliasing is still implicit in the definition of the DFT, and the implied periodicities are an intrinsic consideration in almost any application of the transform; it would be remiss not to mention it here.
I don't think you can avoid using the DFT definition in deriving the aliasing in this case, moreover, because the DFT definition defines what you mean by "samples" and "frequency" in this context. Because this corresponds to the usual definition for uniformly spaced samples, you don't even think about it, but it is there. For example, you could certainly apply the DFT to a sequence of numbers corresponding to non-uniformly spaced samples of some signal — the resulting DFT would not correspond to amplitudes of "frequencies" of the original signal in the usual sense, but would still have an aliasing property of a different sort. In general, the aliasing property can arise in contexts that have nothing to do with signals and sampling, e.g. many FFT algorithms exploit the implicit periodicity. Abstractly, there is also a close relation between the DFT and representation theory for the irreducible representations of the cyclic group (which would be a nice connection to mention in the article, although currently the representation theory articles in Wikipedia are painful to read).
You can quibble about the precise way in which the property is proved, I suppose, but any sufficiently short and clear proof seems fine to me. I'm not sure where you think the current article skips a step. —Steven G. Johnson 17:25, 13 December 2005 (UTC)
- What you actually show is that the DFT of x has period n. And what the conclusion says (paraphrased) is: "If y is another signal whose frequencies differ (by n) from those of x, then one cannot distinguish the DFT of y from the DFT of x. (And therefore one cannot distinguish y from x.)" I am suggesting that we actually say those things, rather than trust them to the readers' intelligence. Moreover, when we do say them, we realize that the condition "y is another signal whose frequencies differ (by n) from those of x" means simply this: y_k = x_k e^{2πi nk/n}. If you must talk about aliasing, then I suggest you at least make it clearer that it is not caused by the DFT and does not depend on the DFT in any way. My sensitivity probably stems from the misconceptions in the spectral leakage (stub) article (before I just re-wrote it). --Bob K 20:01, 13 December 2005 (UTC)
- This discussion is conflating a statement and its converse; can we please be more precise? I think we can both agree that aliasing does not imply the DFT — aliasing can occur whenever you sample a signal. But it seems that you are also insisting upon the converse, with which I do not agree — you seem to also be saying that the DFT does not imply aliasing. —Steven G. Johnson 06:48, 14 December 2005 (UTC)
- What I am trying to say, precisely, is that I think many people will get another misconception about the DFT. That misconception is that the DFT causes aliasing, regardless of what the article actually says. I am not asking you to retract your treatment. All I am now suggesting is that you explicitly refute that potential misconception. It can't hurt, and it will probably help somebody. --Bob K 21:55, 14 December 2005 (UTC)
- You aren't making any sense to me. If the DFT implies aliasing, which you seem not to contest, then in that sense the DFT causes aliasing...but the latter statement you call a "misconception." Note that this is not the same thing as saying that the DFT is the sole context in which aliasing phenomena occur—you still seem to be confusing a statement with its converse. —Steven G. Johnson 23:14, 14 December 2005 (UTC)
- I don't think I really care if the DFT implies aliasing or not. But maybe you will enjoy thinking about a signal defined this way:
- x(k) = Σ_{m=−∞}^{∞} w(k) e^{i2π(f+m)k},
- I.e., the spectrum is periodic, by definition. So the periodicity of the DFT does not imply aliasing in that case. (Just a thought.) --Bob K 21:55, 14 December 2005 (UTC)
- This equation makes no sense. Did you mean w(m) instead of w(k)? (It looks more like a Fourier series; the w(m) amplitudes are only uniquely determined if k is a continuous periodic coordinate.) What point are you trying to make? —Steven G. Johnson 23:14, 14 December 2005 (UTC)
- w(k) is just about anything you wish. What I actually had in mind was some sort of finite window function. Its spectrum gets mixed (heterodyned) up to frequency f and periodically extended thereafter. Of course that doesn't make it look any different than before. But the point is that I defined it to comprise all the components that you would normally infer are aliases. Since they are defined as part of the signal to begin with, they aren't aliases. In that case, the periodicity of the DFT merely implies that the original, defined input was discrete to begin with. (A trivial point, but perhaps not to a mathematician.) --Bob K 00:50, 15 December 2005 (UTC)
- I can't really agree with your interpretation here. The only case in which you can say for certain whether there is "really" aliasing is when you are sampling a continuous signal and you know the spectrum of the continuous signal and whether it is sufficiently bandlimited. If you start with a discrete signal there is an inherent ambiguity. —Steven G. Johnson 01:38, 15 December 2005 (UTC)
- When you start with a discrete signal, aliasing is either undefined (because there was never any "true" frequency to begin with [as you just said]) or impossible (because all the frequency components are already periodically extended). In either case, ambiguous does not seem to be the right description, because it implies that you started with a continuous signal. --Bob K 02:09, 15 December 2005 (UTC)
Would you be happy if the article stated something like, "More generally, aliasing phenomena also occur whenever one is analyzing the frequency components of a discrete signal (or discrete samples of a continuous signal), such as in the discrete-time Fourier transform." I have no problem with this. —Steven G. Johnson 23:14, 14 December 2005 (UTC)
- Not really (sorry), but thank you. Let's just leave it and move on. There are much bigger and easier fish to fry. --Bob K 00:50, 15 December 2005 (UTC)
"stacked" fractions in superscripts?
I like the stacked version better than the unstacked version. IMO, that's the whole point of having a cool math editor. --Bob K 21:11, 15 December 2005 (UTC)
- Me too, but I don't have strong enough feelings about it to go through and change it. Knock yourself out if you want, but first you might ask User:Michael Hardy to explain his reasoning on this edit. —Steven G. Johnson 21:23, 15 December 2005 (UTC)
A major point of having a cool math editor is versatility: you can make it do what you want it to. "Stacked" is appropriate in some cases, but seldom within superscripts. Bob K, did you think I was saying "stacked" should never be used? If so, go back and read what I wrote, since you missed it entirely. Michael Hardy 21:53, 15 December 2005 (UTC)
- PS: I'm betting you two will get outvoted on this one if it comes to that. Michael Hardy 21:53, 15 December 2005 (UTC)
Stacked notation does not work well for inline formulae. Using stacked for display and unstacked for inline places an extra burden on the reader. For a fraction with both numerator and denominator complicated, stacked display can be a great help to clarity; here, the benefit seems small. Outside of TeX, k/n can be marked up to appear as k⁄n; but that's of little help in this case, partly because of Wikipedia's LaTeX limitations. Finally, stacking causes shrinking in an already-small superscript, cancelling some of the benefit of stacking in general. --KSmrqT 01:57, 16 December 2005 (UTC)
- So is this the voting process? How does everyone else know that it's happening? When is it over? --Bob K 02:18, 16 December 2005 (UTC)
Michael, in general I agree with you about stacking in superscripts. However, in this particular case, the stacking serves a useful purpose: to clearly separate the 2πi/n from the jk index product. If you don't do this, the ijk is somewhat confusing because all of the letters look similar. —Steven G. Johnson 04:31, 16 December 2005 (UTC)
- Quite true. But I have to point out that my preferred (linear) notation is e^{i2πjk/n}, or e^{i2π(jk/n)} if you insist, and I never have the problem you are worried about. I just think stacking looks great, compared to all the ugly computer code that has passed before my eyes. --Bob K 06:02, 16 December 2005 (UTC)
- That is an improvement. But it still is clearer, I think, to keep the 2πi/n as a single unit (since mathematically most of the important properties of the DFT follow from the fact that it is a root of unity to the jk-th power). I also agree that it looks nicer stacked in this particular case. I'm changing it back. —Steven G. Johnson 20:07, 16 December 2005 (UTC)
we should change notation from subscripts to index in brackets.
hi guys, i've only recently started taking an interest in all of the Fourier/Laplace/Z transform pages and had no idea that there was such recent activity on this one.
i want to suggest something (that i am willing to do the grunt work for) that i feel really strongly about. while the Laplace Transform, Fourier transform, Continuous Fourier transform, and Fourier series articles will have much broader interest and readership, i really believe that the discrete-time transforms, DTFT, DFT, Z, have much more narrow interest and constituency. while i do not claim that electrical engineers doing signal processing are the only persons interested in or using these concepts, techniques, or algorithms, i do believe that it is principally people doing signal processing who have such an interest. in that there is already a widely accepted notational convention for these discrete-time or discrete-frequency signals (that is, x[n] and X[k]) in the textbooks (i still consider Oppenheim and Schafer Discrete-Time Signal Processing to be the standard, but i s'pose we could argue about that) and lit, i really believe that we should adhere to that convention here. there are some of us agitating to revamp a bunch of EE electronics, control systems, communications systems, filtering, and general signal processing articles and we really should have notational consistency, and i really think that all of the discrete-time linear transforms (DTFT, DFT, Z) fall mostly in the EE and signal processing discipline.
may i change that notational convention from x_n to x[n] and X_k to X[k] (spending some considerable time) without someone reverting all of my effort away? FYI, there are two reasons i know of for why the EE discipline and lit have changed the notation. the main reason is that it is not uncommon to be processing a signal vector. in continuous time it looks like x(t) and, if sampled, it would be x[n]. but, if we need to refer to the individual components of the signal vector, it would be x_m(t) and x_m[n]. do you see how it gets ugly if we have to say x_{m_n} instead? also x_{m_n} is harder to do by hand than x_m[n]. r b-j 02:59, 16 December 2005 (UTC)
- It takes a brave soul to open that can of worms. I salute you. If wikipedia could achieve a standard notation, it would probably be a first. The world would beat a path to your door. I'm all for it, not so much for myself, because I have had sufficient time (about 4 decades) to get used to translating different notations. But I still recall the difficulties of trying to deal with that while trying to grasp the actual important concepts for the first time. It was frustrating, to say the least. I'm sure everyone can agree on that. But the sticking point is always getting consensus on a standard (as you clearly know). I think you have made a good case for Oppenheim and Schafer Discrete-Time Signal Processing. I support you, thank you, and wish you the best of luck, because you will probably need all of it. --Bob K 03:28, 16 December 2005 (UTC)
- well, thanks. you clearly see almost precisely what my concern is. i'm used to opening these kinds of cans (i've had long extended debates with Cleve Moler on comp.soft-sys.matlab about MATLAB's inability to have indices below 1, like 0 for the DFT), but i seldom "win" (the best argument doesn't always win; sometimes it matters who you are rather than the strength of what you know or say). we'll see what others say, but this whole notational inconsistency is already a little ugly and will only get worse if it isn't fixed soon. r b-j 03:40, 16 December 2005 (UTC)
- Let me guess... old Cleve celebrated the end of the 2nd millennium 1-Jan-2001, and he used to be a FORTRAN programmer. Been there, done that. But I got over it!! After growing to love C (which took about 5 minutes), it was very disappointing to discover FORTRAN's influence on Matlab. However, I have pretty much gotten used to it again. Some people crusade for right vs. wrong (bless your hearts), and others (like Cleve) stubbornly cling to whatever idea they heard first, and some of us just try to be the blades of grass that bend whichever way the wind blows. --Bob K 04:02, 16 December 2005 (UTC)
- FYI (in case you didn't know), old Cleve is the primary designer of MATLAB and VP and cofounder of The Math Works. so he has other stubborn reasons (like he just can't admit that they screwed up so badly by not allowing the base indices to be adjustable). but also, for the sake of the product, you'd think he'd be open-minded enough to want to see improvements, even if there would be a sorta "paradigm shift" in it. r b-j 21:59, 16 December 2005 (UTC)
The DFT is used by a wide audience, and so it is important to use as universal a notation as possible. If you use subscripts, everyone knows what you mean, but I think brackets are a bit too overloaded as far as notation goes. The Wikipedia articles aren't using a vector signal so that argument is moot. (I admit I don't have terribly strong feelings about this. I'll point out, however, that most of the literature on FFT algorithms uses subscripts as far as I can tell.) Moreover, you'll have to change literally dozens of Wikipedia articles that use the subscript notation. —Steven G. Johnson 04:38, 16 December 2005 (UTC)
- Actually, I would like to see a lot more use of x(nT) in a learning environment, which is what this is supposed to be. And since x[n] is a step closer to it than x_n is, that's another point in its favor (IMO). The literature you refer to is not written on an undergrad level. Are we trying to emulate grad school here? Maybe we should be trying for a consensus on that, because without it, we surely won't agree on much of anything else.
- Have to say I am not swayed by the argument that brackets are more overloaded than subscripts... seen too much subscript abuse to accept that. (But it's your viewpoint, and you are entitled to it.)
- Also think it's a bit short-sighted to argue that vectors won't be needed in the future, because they aren't needed now.
- If the miracle occurs, and a consensus is reached, I don't think anyone would consider the effort required a good enough reason not to go forward. It could be done incrementally, over time. Or N volunteers could each take on 1 article. I would volunteer for that. And then there is the amazing Michael Hardy... apparently a human editing machine.
- But I'm wasting my breath (er, keystrokes), and I'm old enough to know better. --Bob K 05:40, 16 December 2005 (UTC)
- You definitely shouldn't use x(nT) in most places, because that imposes a particular interpretation of the DFT as discrete samples of a putative function over a real domain. The DFT itself "knows" nothing about such interpretations, so we shouldn't imply that they are intrinsic, nor is this interpretation used for all applications. (It's fine to mention this under applications or whatever.) —Steven G. Johnson 18:22, 16 December 2005 (UTC)
- i kinda agree with Steven about that. x(nT) should exist only in the Sampling Theorem and related/supporting articles. but we should stick to the convention that if an argument is surrounded by parentheses, it is continuous. if an argument is surrounded by brackets, it is discrete. r b-j 21:59, 16 December 2005 (UTC)
- I'm opposed to the x[n] notation. I can't imagine how anyone could say x[n] is easier to write than x_n, or how x_{m_n} is uglier than x_m[n]. Also the statement "all of the discrete-time linear transforms (DTFT, DFT, Z) fall mostly in the EE and signal processing discipline" is offensive, narrow-minded blindness. What is it about EE types that makes them think they are the center of the universe? The only point in favor is Oppenheim and Schafer, which is a standard in my mind as well, but it's not the bible. PAR 18:48, 16 December 2005 (UTC)
- oh, c'mon PAR! when you see x_m[n] or X_m[k], you know which index is "time" or "frequency" and which index is the component number of the signal vector. when you see x_{mn} or X_{mk}, it's not quite so obvious, is it? in fact, it looks like m and n are sorta the same kinda species of animal. if you had a vector of continuous-time functions, you wouldn't call the components "x_{mt}", would you? wouldn't we say x_m(t)? when sampling that function and converting the notation to discrete time, we need to have a change of notation to indicate that n is discrete while t is continuous, and the brackets, if faithfully adhered to, really do that job nicely, yet they do not change the qualitative nature of the subscripts for the times we need subscripts.
- You say "when you see or , you know which index is "time" or "frequency" and which index is the component number of the signal vector." In fact I don't. You assume (as all EE's do) that I cannot possibly be using the DFT for anything but time analysis of a signal. In fact I use it as a digital approximation to a continuous Fourier transform when solving physics problems. What you call a discrete time index, I call the component of a vector! So when you mention "signal vector" I really don't know, from my point of view, which is the time index and which is the signal component. Only by looking at what you have been saying previously am I able to guess that m is probably the component number of the signal vector, and k is the component number of something I am used to calling a vector, you are not. You really, really have to stop assuming that EE's are the only people in the world who use discrete transforms. Any discipline which uses a continuous transform (e.g. physics as in quantum mechanics, heat transfer, etc.) will likely use a discrete transform as a calculational tool to calculate the continuous transform. Take the blinders off, please.
- The bottom line is, we should follow the literature, rather than arguing about this. If there are multiple notations in the literature, then we will want the one that is least restrictive, least specific to a particular discipline. If that's the EE notation, then so be it, but can we agree on how to solve the disagreement? PAR 23:56, 16 December 2005 (UTC)
- whether the n in x[n] is "time" or "position", say on a photograph (then it would be x[n,m] for row and column, not necessarily respectively), or the independent variable of a p.d.f. (and DFTing would get you the characteristic function), or some other completely different physical quantity that you're breaking up into a spectrum, it doesn't matter what you call it. the point is, if you have one of them, it's x[n]. if you have a few (or many) of them, it's the same x[n] but you subscript it (x_m[n]); then you have something to differentiate which one of the many. i do not assume that you "cannot possibly be using the DFT for anything but time analysis of a signal". doesn't matter if it's "time" or something else, it is the independent variable that is also the argument to the sinusoidal (or complex exponential) basis functions. doesn't have to be physical "time". r b-j 03:23, 17 December 2005 (UTC)
- The point I am making is that what you call x[n] is what I call (and in my applications, correctly) the n-th component of a vector, and the Fourier transform is a vector operator. Are you proposing to change vector and tensor notation as well? How do you represent covariant versus contravariant indices of a vector or tensor with this bracket notation? If you are giving up on any such representation, then subscript notation is superior, because it is more general and yet is widely used, at least outside the EE sub-universe. PAR 18:44, 17 December 2005 (UTC)
- in terms of writing by hand, having to write tiny characters legibly is, for most people i know, harder to do than to write moderately-sized characters. the same is true for the prof at the blackboard. the x[n]'s are far easier to write than the x_n's, even if, on the blackboard (or whiteboard), the subscript is not miniature like it is on paper. r b-j 21:59, 16 December 2005 (UTC)
- I would also say that it is not so much a matter of which one is easier to write or to type, but rather, which one is easier to read or to interpret. I am not so worried about the authors and the editors, I am more concerned about the readers and making sure that they can understand what is going on. In that sense, the bracket notation is much easier to understand when describing discrete-time signals, vectors, and matrices, than is the sub-script notation. -- Metacomet 22:53, 16 December 2005 (UTC)
I don't know if you realize, but x[n] is by definition identical to x_n. It is purely an issue of notation, not of meaning. I think articles dealing with the DTFT, the DFT, and the ZT should use the bracketed notation rather than the subscript notation, for all of the reasons mentioned above. It does not need to create any confusion if you simply note, near the beginning of each article, that the article will be using the convention that x[n] ≡ x_n,
and then explain briefly the rationale (common convention in signal processing, indicative of vector/matrix notation, etc) for using this notation. It should not be a big deal for non-EE's to understand, and it would be a big help for EE's. Why are non-EE's so afraid of trying something different? There are good reasons why EE's have chosen these newer conventions, and they actually can help in other disciplines as well. It's helpful if you drop the not invented here paradigm. -- Metacomet 22:06, 16 December 2005 (UTC)
- Why are EE's so afraid of trying something different? Why don't EE's realize that this is purely a matter of notation? Seriously, Metacomet, the condescension is coming a bit thick... —Steven G. Johnson 22:53, 16 December 2005 (UTC)
- Steven -- I apologize if I came across as condescending. I will try to be more respectful of other people in the future. -- Metacomet 03:11, 17 December 2005 (UTC)
One other point: I believe that the bracketed notation is actually older than the subscript notation. Back in the old days, before WYSIWYG computers, people used to use typewriters or text-based computers. Yes, I know it is hard to believe, but these antique devices did not provide the capability of creating sub-scripts or super-scripts the way we can today in our modern word processing applications or with LaTeX. So the way that people used to indicate sub-scripts was to place the sub-scripted character inside of square brackets, as in x[n] for a one-dimensional vector or even A[m][n] for a two-dimensional matrix.
- That's an interesting theory, but you should be a bit more cautious about speculation. I have an 1866 re-printing of Gauss' 1805 notebook, in Latin, which describes what is perhaps the first realization of the full sine+cosine DFT (and an FFT), for trigonometric interpolation of real data. He uses superscripts ("si termini ultimi in X statuuntur " etc.). You're perhaps forgetting that even before typewriters came paper and pencil, and blackboards, and mathematicians are notoriously lazy — in my opinion (r-b-j notwithstanding) subscripts are quicker to write than bracketed indices (higher chalk/math ratio!). (By the way, in 1866 they apparently had no trouble typesetting equations, and my understanding is that sub/superscripts were common on technical typewriters.) —Steven G. Johnson 22:53, 16 December 2005 (UTC)
It's only half-speculative, because I am pretty sure about it. And I preceded my comments with the words "I believe that...". -- Metacomet 00:25, 17 December 2005 (UTC)
I never said that subscripts and superscripts were not also used. Obviously, in handwriting or typesetting, or with an advanced enough typewriter, you can do subscripts. What I did say was that brackets were also used, when you had a typewriter that could not handle subscripts, and people had no trouble understanding what they meant. -- Metacomet 00:23, 17 December 2005 (UTC)
- By the way, like all right-thinking folk, Gauss uses i as the "quantitas imaginaria", not this goofy j. Although he also uses capital E for the "basis logarithmorum naturalium", I have to admit. =) —Steven G. Johnson
- What's goofy is that it is called the "imaginary unit". I don't know what a better name would be, but it is anything but "imaginary". -- Metacomet 00:28, 17 December 2005 (UTC)
- Let's call it Johnson, so maybe Steve won't mind the notation. --Bob K 01:40, 17 December 2005 (UTC)
- Or since it stands for the vector [0,1], we could use this symbol: --Bob K 01:40, 17 December 2005 (UTC)
- i actually agree with the semantics of calling it the "imaginary unit" and calling (real) multiples of it "imaginary numbers". i think we can come up with real physical quantities legitimately quantified as "negative", but there are no real numbers that, when you square them, become negative. only numbers we "imagine". r b-j 03:23, 17 December 2005 (UTC)
I am not suggesting that we should abandon WYSIWYG and go back to using square brackets in all situations. What I am saying, however, is that this convention existed long before EE's adopted it as notation for describing discrete-time signals, and I believe that it is a perfectly acceptable convention for vector and matrix notation. -- Metacomet 22:20, 16 December 2005 (UTC)
- The question is not whether it is acceptable, it is whether it is preferable to the extent of a mass conversion. (If there is a mass conversion, please also change k to the frequency index, n to the input index, and N to the size of the transform. The current labels drive me crazy.) —Steven G. Johnson 22:53, 16 December 2005 (UTC)
- That would be awesome. --Bob K 23:58, 16 December 2005 (UTC)
- yeah! once i felt safe enough to attack this thing without getting my work reverted, that goes without saying. my intention, as much as possible, is to implement the notational convention as i see it in Oppenheim & Schafer as well as other texts and common in the IEEE lit (not all IEEE papers use it). anyway, i most certainly would make those changes, Steven. r b-j 03:23, 17 December 2005 (UTC)
I agree 100 percent on your notation changes. We should not use i or j as index variables, since either one can represent the imaginary unit. -- Metacomet 00:15, 17 December 2005 (UTC)
As far as a mass conversion is concerned, I would say that we should only change articles where it makes sense to do so (not to state the obvious) – mainly articles related to discrete-time signals, discrete-time systems, discrete transforms, and image processing. -- Metacomet 00:15, 17 December 2005 (UTC)
- that was my original proposal and intent. r b-j 03:23, 17 December 2005 (UTC)
Need to find common ground
Okay, I am sorry for getting in the middle of this discussion. I lost my head for a while. It is very difficult to get a consensus with such a diverse group of people coming from so many different points of view.
I just took another look at the article itself, and regardless of where you stand on the issue of brackets versus subscripts, it is clear that the current notation in the article is genuinely problematic. Again, I cannot speak for everyone, but it seems to me that no one should be happy with the current situation.
So if my assumption is correct, then maybe we should see if we can find at least some common ground.
I would make the following recommendations:
- Replace the symbol n with N to represent the number of elements in the input data vector and the transformed data vector.
- Use the letter i to represent the imaginary unit. Obviously, most EE's (myself included) would prefer to use j, but the rest of the world uses i, and I know I can live with either one, as long as it is consistent.
- Do not use the letter j as an index variable in either the base domain or the transform domain. I know that this issue does not concern non-EE's, but I think you can understand how it is a major problem for EE's.
- Use any or all of the following lower case letters as index variables: k, m, n, p, q. That is five different options, which should be plenty. If not, we can always turn to the Greek alphabet for more.
Can we all agree on at least these initial steps to help move forward? Please comment.
-- Metacomet 03:28, 17 December 2005 (UTC)
- i agree (and meant to say that - it was not my intention to change i to j here) and your suggestion not to use j as an index (for the sake of EEs) is a good idea. i never thought of p and q as integer types (my 35 year old Fortran influence) but they look good. i don't know that we'll need them. we'll see.
- it is my intention to make such a notational change, including brackets, to this article (without changing the math or concepts) and see how far it flies. if someone wants to dissuade me, please do it now before i put some time into it. Steven, if this flies, i would like to extend this convention to the FFT and other related articles (you mentioned that there are at least a dozen). we'll keep i to some depth, but the closer we get to the EE domain, at some point that i is gonna become j. after this sticks, then i'm gonna attack LTI system theory and the Sampling theorem articles. but i feel like "what's the point" if some of the basic articles have such different notational conventions that a neophyte might not connect them. r b-j 03:41, 17 December 2005 (UTC)
Even though I agree with changing from subscript to brackets, I would recommend that you hold off on that idea for now. If we can get consensus on the few items I listed above, and the one item I listed below, then maybe we can at least accomplish something in the short run.
In the meantime, we can continue to discuss the subscripts versus brackets issue and maybe try to find a solution. -- Metacomet 03:46, 17 December 2005 (UTC)
Another recommendation:
- Do not use the letters f or F to represent functions, vectors, or matrices in the context of a discussion of the Fourier transform. It is very common to use ℱ as an operator to represent the Fourier transform itself. It gets very confusing to talk about the F of f, as in F = ℱ{f}, as to which is the vector and which is the operator....
-- Metacomet 03:44, 17 December 2005 (UTC)
- meta, i'm doing that and i'm moving the i to the front (and the 2 π right after the i and nk in the top of the fraction and N in the bottom). i'm dealing with that right now. please hold off for about another hour. oh, and f becomes X. r b-j 04:31, 17 December 2005 (UTC)
I just wanted to give a few things a bit of a test drive. There is no harm done, things can always be reverted if they don't work out. I will leave it alone for now though. I am going to sleep now anyway. I will take a look at it tomorrow. Good luck.
In addition to the changes you just mentioned, I would replace j with m and leave k alone for now. Or use n and k if you want to do the extra work. -- Metacomet 04:45, 17 December 2005 (UTC)
Actually, I do not agree with putting the nk on top of the fraction. Believe it or not, I think it makes sense to put the i in front of the fraction, the 2π on top of N for the fraction, and then the nk after the fraction. The reason is that you can attach an important interpretation to the 2π/N as representing the discrete angular frequency increment (in radians per sample) that is very useful in understanding the DFT and, more importantly, in implementing the frequency domain in a computer simulation. If you don't know what I mean, I will try to explain later. It's a bit complicated. -- Metacomet 04:49, 17 December 2005 (UTC)
- Just want to clarify that 2π/N is radians per sample per "bin", and 2πk/N is radians per sample. What is often most significant to me is the ratio k/N, whose units are cycles per sample, which is quickly converted to "Hz" by a factor of the sample-rate. So when I want to emphasize frequency, I often find myself writing e^{i2π(k/N)n}. But of course, I don't always want to emphasize frequency. So I don't think there is one way to organize the symbols that is always the best way. --Bob K 03:49, 19 December 2005 (UTC)
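To make the unit bookkeeping above concrete, here is how a DFT frequency axis is typically built in practice. A minimal sketch in Python with NumPy; the sample rate fs is an assumed, illustrative parameter:

import numpy as np

N = 16       # transform length
fs = 20.0    # sample rate in Hz (assumed for illustration)
k = np.arange(N)

rad_per_sample = 2 * np.pi * k / N   # 2*pi*k/N: discrete angular frequency of bin k
cycles_per_sample = k / N            # k/N: normalized frequency of bin k
hz = (k / N) * fs                    # "Hz": normalized frequency times sample rate

# NumPy's helper produces the same cycles-per-sample grid, with the upper
# half of the bins folded to negative frequencies:
assert np.allclose(np.fft.fftfreq(N), np.where(k < N / 2, k, k - N) / N)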
- i'm just trying to make it look like the textbooks and the lit. the fraction represents the fraction of a complete turn or cycle on the unit circle. this is an encyclopedia, and the more conventionally we present concepts and notation, the better. this issue is not exactly the original research issue, but it is sorta an original notation issue, and i see no compelling reason to introduce a notation that is not used in the texts. most people who know me see me as a sorta unconventional person, but here, i want to be as orthodox as possible/reasonable. r b-j 19:30, 17 December 2005 (UTC)
- Using k as the frequency index and n as the input index seems vastly more standard to me than the current indexing. If you're going to change the indices, please change all three (j, k, n). Note that there are literally dozens of articles to change. See Category:Digital signal processing and Category:FFT algorithms and Category:Fourier analysis. However, I think it was a bad idea to start making changes before there is a clear consensus on exactly what changes to make. And please wait at least a couple of days after making a proposal in order to give time for comments. —Steven G. Johnson 05:27, 17 December 2005 (UTC)
- i fully agree with you, Steven, about k and n and N. i am already starting these changes. i'm down to Relationship to trigonometric interpolation polynomials, where i see the need for substantive changes (involving the Dirichlet kernel). you can see where i made it to at User:Rbj/DFT. we will never get full consensus, but after the above discussion, i became motivated to start the change. it is inexcusable that the present notation became so pervasive in Category:Digital signal processing, since it is not the notation of the discipline. this is where the WP weakness of anarchy is exposed. but it will be many man-hours saved if these notational inconsistencies are ironed out now rather than a decade from now. r b-j 06:13, 17 December 2005 (UTC)
Although we will likely never agree on all aspects of the change, it looks like we can probably agree on some set of changes to make. However, it is important to list them all here before making them, since if we are going to change several things it will be much easier to change them all at once than piecemeal. Second, there is no huge rush — this is an article that lots of people have contributed to, and it is reasonable to wait for comments for at least a couple of days before instituting major, wholesale changes across dozens and dozens of articles. —Steven G. Johnson 06:24, 17 December 2005 (UTC)
- Like I said above, there is no harm done by making a few changes, you can always revert back if you don't like them (which you did). I get tired of these endless and sometimes mindless discussions about the same issues over and over, with no clear consensus and a lot of subjective opinions flying back and forth. At some point, we need to stop talking and make something happen. I think WP is a great idea, but there is a severe weakness in trying to get anything done. I am not sure that writing and editing by committee is always best. It can and often does lead to chaos. Relax, I just wanted to get this process moving forward. There is always an opportunity for people to comment in four different ways:
- by adding their thoughts to the discussion page,
- by reverting changes they don't like,
- by refining changes that need improvement, or
- by making additional changes that support the original change.
- You didn't like the changes I made, so you reverted them. I didn't put a whole lot of time and effort into those changes, so there is really no harm done.
- Moreover, I agree fully with the notation changes that you have suggested. At this point, I will take a look at the draft that r-b-j is developing on his own user page, and then I will sit back and let him take the lead. I agree with most if not all of what he wants to do, so when it is finished, then I will have an opportunity to comment if I want. -- Metacomet 15:07, 17 December 2005 (UTC)
- From the Wikipedia Introduction (original emphasis):
- Don't be afraid to edit articles—anyone can edit, and we encourage you to be bold (but please don't vandalize)! Find something that can be improved, either in content, grammar or formatting, and fix it.
- You can't break Wikipedia. Anything can be fixed or improved later. So go ahead, edit an article and help make Wikipedia the best information source on the Internet!
- -- Metacomet 15:19, 17 December 2005 (UTC)
- The k,n,N standardization sounds fine, and I would like to help in the editing. Is there a list of articles somewhere, with a list of agreed-upon changes that we can consult and check off as they are finished? I went to rbj's user page but could not find anything that looked like the above mentioned draft. PAR 18:33, 17 December 2005 (UTC)
- PAR, it's at User:Rbj/DFT (as mentioned above). i didn't reference it in my user page at all. i just wanted it in a subdirectory of mine. it's not at all done (as i said, i only got down to a certain section, and then i wanted to rewrite most of that section and bring the Dirichlet kernel into it, since it is germane). below that section, it's pretty messed up in my draft because of some changes made by a global search/replace, but i hadn't changed anything else. r b-j 19:23, 17 December 2005 (UTC)
- I was thinking of a list, like Steven mentioned below. Please see new section.
- That's my whole point: there is no clear list of agreed-upon changes, so it's better to have a list before people dive in. As for a list of articles, I would suggest everything in Category:Digital signal processing and sub-categories thereof, as long as they deal with discrete variables. —Steven G. Johnson 19:07, 17 December 2005 (UTC)
- besides the k,n,N changes, i wanted to show, to everyone, what the "conventional" EE notation would look like throughout the article and also show the "conventional" exponent (except, of course, EE's commonly use "j", which i won't put in). r b-j 19:23, 17 December 2005 (UTC)
- Realize that different forms of the exponent have different strengths. Your form, for example, hides the fact that you have a root of unity to the nk power, which is highlighted by grouping the (2πi/N). This is the key property for almost all of the mathematical properties of the DFT. (It's why, for example, number theoretic transforms have most of the same algebraic properties.) —Steven G. Johnson 04:20, 19 December 2005 (UTC)
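To illustrate the point about roots of unity numerically, here is a minimal sketch in Python with NumPy in which the DFT matrix is built explicitly from powers of a primitive N-th root of unity (sizes and seed are arbitrary):

import numpy as np

N = 8
omega = np.exp(-2j * np.pi / N)   # primitive N-th root of unity
assert np.allclose(omega**N, 1.0)

n = np.arange(N)
W = omega ** np.outer(n, n)       # DFT matrix: W[k, n] = omega^(n k)

x = np.random.default_rng(3).standard_normal(N)
assert np.allclose(W @ x, np.fft.fft(x))

# Grouping the exponent as omega^(n k) rather than e^{-2i*pi*nk/N} makes
# identities like periodicity obvious: omega^((n + N) k) = omega^(n k),
# since omega^N = 1. This is also why number-theoretic transforms, which
# replace omega by a root of unity in a finite field, share most of the
# same algebraic properties.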
Just as an idea of what I am talking about, I collected all the articles in Category:Digital signal processing including sub-categories and listed them on the above page. I also put a menu in listing notational changes, and we can check them off as needed. I assume the k,n,N changes are acceptable to everyone, so I listed that first. I didn't spend a lot of time alphabetizing or elaborating in case this idea doesn't fly. PAR 20:04, 17 December 2005 (UTC)
- This looks fine to me — so far, it seems there is universal agreement on the (j,k,n) to (k,n,N) changes, but not on other changes, so it makes sense to start on the k,n,N now. —Steven G. Johnson 04:23, 19 December 2005 (UTC)
Change is a comin'
If nobody objects, I am going to start working on a very modest plan to start changing the notation. I am going to work on this article only and on only one change at a time. The plan is as follows:
1. n becomes N: Change all instances of the symbol n to the new symbol N to represent the length of the sequences in both the time and frequency domains.
2. k becomes n: Change all instances of the symbol k to the new symbol n to represent the index variable for the time domain sequence.
3. j becomes k: Change all instances of the symbol j to the new symbol k to represent the index variable for the frequency domain sequence.
I am going to make one change at a time, and I am not going to proceed with one step until I have completed the prior step.
We have been discussing these changes for many weeks, and from what I can see, nobody has objected to these three changes, and many have expressed support for them. So I believe there is a consensus at least on these changes. If there are any objections, please speak up now. I will start working very soon.
-- Metacomet 02:00, 27 December 2005 (UTC)
- Those three changes seem to be universally agreed upon. However, I strongly suggest that you make all three at once. —Steven G. Johnson 18:28, 27 December 2005 (UTC)
To make all three at once would require changing all of the notation in the whole article in one shot. I don't have time to do that, unless I can figure out a way to do it with a global search and replace algorithm. I could possibly do that in Microsoft Word, but I am not sure it will work. Otherwise, I need to make the changes as an evolution over several days, and because of the nature of the changes, they must be done in the order that I have suggested.
If you have any better ideas, I am all ears (or eyes...)
Out of curiosity, why are you suggesting to do all the changes at once? The method I have proposed should not create any confusion during the transition.
-- Metacomet 19:52, 27 December 2005 (UTC)
- If you are going through the article, inspecting every equation (a global search and replace won't work unless you can write a script to distinguish "n" in an equation from "n" in a word) not to mention every article to look for things that need to be changed, then it will be faster if you don't have to do that three times. All three notation changes are likely to occur in the same places in the text. This seemed like common sense to me. —Steven G. Johnson 01:39, 28 December 2005 (UTC)
Are you suggesting that you don't think I have a lot of common sense? -- Metacomet 02:23, 28 December 2005 (UTC)
Actually, don't answer that. I think I already know the answer.
There are good reasons for making the changes one-at-a-time, if you think about it for more than 3 seconds. The reasons are related to concepts from manufacturing like error proofing and single piece flow. Although it is not at all obvious, it turns out that it will actually take less time to make one change at a time than it would to make all three changes together, and more importantly, it will be much less likely to make an error.
- I don't think those cases are comparable, because in this case the latency of finding the places in the text(s) to change is a substantial part of the time, so doing it three times will probably take longer. (Also, as a matter of courtesy, it will be clearer to other editors what is going on if you make all three notation changes in a single edit per article.) But whatever, it's your time. —Steven G. Johnson 18:02, 28 December 2005 (UTC)
- You're right. You've convinced me. I am not going to waste my time. -- Metacomet 18:20, 28 December 2005 (UTC)
BTW, with a little bit of clever scheming, it is not all that difficult to figure out a way for a global-search-and-replace to distinguish between "n" in an equation and "n" that is part of a word. Again, it just requires a little bit of thinking and planning.
-- Metacomet 02:56, 28 December 2005 (UTC)
- Oh, I believe it is possible, although expressions like ijk require a non-local matching strategy (i.e. you can't just look at the neighboring characters, which rules out the simplest regexes). But you were suggesting using Microsoft Word's search-and-replace, and it was in that context that I was pointing out that you would really need to write a script. —Steven G. Johnson 18:02, 28 December 2005 (UTC)
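For anyone attempting this later, here is one way such a script could look. A minimal sketch in Python: it assumes wiki source where formulas sit inside <math>...</math> tags, renames only single-letter variables, applies all three renames in one simultaneous pass (so j→k is not later re-caught by k→n, which also handles runs like "jk" that defeat simple word-boundary regexes), and leaves LaTeX commands like \pi and \sum alone. Multi-letter identifiers inside math would need more care:

import re

TABLE = str.maketrans("jkn", "knN")   # j->k, k->n, n->N, all at once

def rename_math_vars(wikitext):
    """Rename index variables inside <math>...</math> spans only."""
    def fix(match):
        # Protect LaTeX commands (\pi, \sum, ...) from the translation.
        parts = re.split(r"(\\[A-Za-z]+)", match.group(1))
        body = "".join(p if p.startswith("\\") else p.translate(TABLE)
                       for p in parts)
        return "<math>" + body + "</math>"
    return re.sub(r"<math>(.*?)</math>", fix, wikitext, flags=re.DOTALL)

sample = r"the DFT <math>f_j = \sum_k x_k e^{-2\pi i jk/n}</math> has n outputs"
print(rename_math_vars(sample))
# -> the DFT <math>f_k = \sum_n x_n e^{-2\pi i kn/N}</math> has n outputs

Note that the prose outside the tags ("has n outputs") is untouched, which is exactly the equation-versus-word distinction being discussed.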
Also, I would appreciate it if you would avoid snide put-downs. Please see No personal attacks and Civility. Thanks. -- Metacomet 05:25, 28 December 2005 (UTC)
- listen, Meta, if you or Steven or anyone else wanna see incivility, check out my talk page (this is why i want to see karmafist desysopped). i don't see anything that Steven said as incivility or a personal attack.
- please make the changes you mentioned. thanks in advance. the bracket change i would like to do (which was left half done in my username space) also made the same letter change you are doing. but few else like the bracket notation. r b-j 06:48, 28 December 2005 (UTC)
Sorry, RBJ, it's just too difficult to get anything done. I have other fish to fry. It would be nice to see the notation changed to something more standard, but ultimately, it is not all that important to me. Good luck. -- Metacomet 18:42, 28 December 2005 (UTC)
Another problem, while we're at it, is the unnecessary use of two different symbols for the same thing: f and X. X is probably more conventional, given that the time sequence is x. And for what it's worth, it avoids potential confusion with the f in http://wiki.riteme.site/math/7/7/5/7757e67d47b0ba3f1f7e3628c813bd2d.png
--Bob K 20:48, 2 January 2006 (UTC)
- I have no objection to using capitals for the transformed variables. PAR 21:00, 2 January 2006 (UTC)
Conversion of other related articles
I have started converting the notation in Discrete cosine transform to match the changes that we have agreed to on this article. The four changes are:
1. n becomes N: Change all instances of the symbol n to the new symbol N to represent the length of the sequences in both the time and frequency domains.
2. k becomes n: Change all instances of the symbol k to the new symbol n to represent the index variable for the time domain sequence.
3. j becomes k: Change all instances of the symbol j to the new symbol k to represent the index variable for the frequency domain sequence.
4. f becomes X: Change all instances of the symbol f to the new symbol X to represent the transformation of the data sequence in the frequency domain.
Once I have finished the DCT article, I will start working on Discrete sine transform.
If others would like to help, please review the List of Fourier-related transforms for articles to work on (discrete versions only).
-- Metacomet 16:00, 6 January 2006 (UTC)
Exclude the case N=0
The section on Completeness states: "for any N ≥ 0, any N-dimensional complex vector has a DFT and an IDFT which are in turn N-dimensional complex vectors."
The case N=0 is unimportant, and all the formulas having N in the denominator cease to make sense. It should be changed to "for any N > 0, any N-dimensional complex vector has a DFT and an IDFT which are in turn N-dimensional complex vectors."
Bo Jacoby 07:46, 5 January 2006 (UTC)
- Done. --Bob K 02:37, 7 January 2006 (UTC)
Periodicity and aliasing revisited
Here we go again... As you know, I think this section needs a little work. My new gripe is with the last paragraph:
- "For digital signal processing on the other hand, the periodicity is usually an obstacle—not only does it alias high frequencies with low frequencies, as noted above, but it also tends to introduce artifacts because natural signals are often non-periodic (resulting in implied discontinuities between the ends of the input). See also the applications section below."
- Aliasing is inherent in the DTFT, whether you do a DFT or not.
- "Artifacts" means spectral leakage, which is also inherent in the DTFT when you analyze a finite segment of an infinite signal. It happens whether you do a DFT or not and whether the "natural signal" is periodic or not. Periodicity is just one of the necessary conditions for constructing (actually contriving) a DFT that fails to reveal the leakage present in the DTFT. Another condition is that the signal period be an exact sub-multiple of the finite window duration (and that the window be unweighted (i.e. rectangular)).
- Where in "the applications section below" is this paragraph illuminated?
- "obstacle" is too strong, IMHO. "Inconvenience", perhaps. OTOH, what DSP alternative do we have that is less of an obstacle or more convenient? None... because the DFT is not the root cause of either aliasing or leakage. But I think that point is totally inconspicuous to a DSP newbie reading this section.
--Bob K 03:11, 7 January 2006 (UTC)
- hi, this is an issue periodically revisited at comp.dsp. I am convinced that the spectral leakage is a consequence of windowing (yanking out N samples from some sample stream) before it is passed to the DFT and the aliasing is, of course, due to inherent periodic extension of the DFT. people disagree with this (I still have no idea why, because the math is all there), but I still think that is the best (and most accurate) way to teach this to newbies. r b-j 21:23, 7 January 2006 (UTC)
- > Count me among those who disagree. Aliasing is not due to "inherent periodic extension of the DFT". It is inherent in the DTFT. It pre-exists the DFT. The DFT is an invention. Aliasing is not. It does not depend on the DFT for its existence. The DFT merely provides a way of viewing some values of the DTFT, including the aliases. --Bob K 22:09, 8 January 2006 (UTC)
- > I can't tell from your statement what you think is the best way to teach aliasing to newbies. Whatever they are doing out there, it doesn't seem to be working very well. Maybe we need more straight talk and less fancy math. It isn't rocket science. --Bob K 22:26, 8 January 2006 (UTC)
- i agree that it needs more straight talk and less fancy math but the straightest talk i can think of (which is evidently debated on comp.dsp but is completely congruent with Oppenheim & Schafer) is that the DFT maps an infinite and periodic sequence of (possibly complex) numbers with period N (so you only need N adjacent elements to fully describe it) in some domain (call it the "time domain" if you please) to another infinite and periodic sequence of (possibly complex) numbers of the same period N in the reciprocal domain (call it the "frequency domain" if you please). that is more fundamental than the fact that the DFT is equal to sampling the DTFT of a finite sequence of length N and padded with zeros. when you pass N numbers to the DFT, you are fundamentally periodically extending that finite sequence and all of the theorems involving shifting and convolution are consistent with that fact and even require it. putting modulo-N arithmetic into the indices (during shifting or circular convolution) is precisely the same thing as this periodic extension.
- when you say that "Aliasing ...does not depend on the DFT for its existence", i guess i would agree, but that does not refute the fact that aliasing within the context of the DFT is caused by the inherent periodic extension done by the DFT. r b-j 04:00, 9 January 2006 (UTC)
- > Sorry, no. Aliasing is not the result of periodically extending the N numbers passed to the DFT. For instance, suppose those numbers are:
- xn = cos(2π(k + mN)n/N), for n = 0, 1, ..., N−1 and some integer m
- Whether the sequence is periodically extended or not, the value of m is ambiguous when all you have is the sequence. It's ambiguous to you, to the DFT, and to any other tool you can imagine. That's what aliasing is. That is more fundamental than anything you can say about a DFT. It was true before the DFT was invented. And that is the straightest talk I can think of. --Bob K 18:30, 9 January 2006 (UTC)
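Bob K's point is easy to check numerically. A minimal sketch in Python (the particular values of N, k and m are mine, purely illustrative):

 import numpy as np

 # Samples of frequency k and of frequency k + m*N are identical,
 # so m cannot be recovered from the N samples alone.
 N, k, m = 8, 3, 2
 n = np.arange(N)
 x1 = np.cos(2 * np.pi * k * n / N)
 x2 = np.cos(2 * np.pi * (k + m * N) * n / N)
 print(np.allclose(x1, x2))   # True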
an idea
Since this article is basically totally incomprehensible to those who do not know higher math, perhaps we could include a visual representation of a DFT so the reader may at least get a vague idea of what it means/does. How about an NMR or FTIR spectrum?--Deglr6328 08:09, 12 April 2006 (UTC)
- Good point. This is a very dry article for an encyclopedia. Unless someone has the time to spice it up, a better-than-nothing alternative is to refer the readers to the DFT pictures at Discrete-time_Fourier_transform#Finite_length_sequences. --Bob K 17:48, 12 April 2006 (UTC)
- Remember, if a reader needs a basic introduction to Fourier transforms in general, this article is not really the place to do it. (e.g. NMR and FTIR spectra are examples of continuous Fourier transforms. In any case, a more familiar example to most readers will be the coarse spectrum displayed on a typical stereo.) —Steven G. Johnson 15:36, 13 June 2006 (UTC)
My sentiments exactly. I was really surprised at the detailed math of this. Good math, maybe, dry article. So now begins the...:
Poor man's explanation of the discrete Fourier Transform
The Finite Fourier Transform is an experiment you really can do at home; your spreadsheet is all you need.
Poor man's explanation of the discrete Fourier Transform: The basics
---A WORK IN PROGRESS--
All "real-world" waveforms can be described by a summation of sines and cosines with constant coefficients (amplitudes): Why do we care?
Why do we care? Answer this: Why does a bird chirp sound different than a cricket chirp? What frequency (pitch) was that crow's caw? Why does my voice sound different than yours? How do we make this awful picture of Aunt Millie sharper? Most scientists, mathematicians, and engineers use the various processes called "the Fourier transform" to answer these and similar questions. They want to learn more about the waveforms of the signals -- the "waveform" is "the shape" of "the signal" (that comes, for example, from their microphone).
Why not just look at the waveform on an oscilloscope or a computer screen? Because it will be just a jumble of up-and-down "waves" with littler wavy wiggles in them -- like ocean waves, big waves with little waves riding on top. There is nothing we can use there.
The discrete Fourier transform freezes in time facts about a waveform's "pitch" (its frequency, its tone high or low as in A-440 cycles per second versus middle C-512 [?]) that we can look at as long as we'd like. And, given this transform, we can recover our original waveform.
When waveforms have been "transformed" to this stationary form we can do all kinds of neat things to them. Such as: use Adobe Photoshop to sharpen the digital photos of Aunt Millie.
Discrete signal recording:
Before the mid-1980's sound recording was done first (i) mechanically by phonograph, then later (ii) electro-optically on film (motion picture sound track), or tape (magnetically). Sound engineers used these "analog" (continuous as opposed to discrete) methods and looked at their signals on "analog" oscilloscopes.
But today the computer "sound card" takes the signal that comes from the scientist's microphone and "samples" it (40,000 times per second, or 44,100 per second, or 176,400 per second, etc). What the sound card produces and sends to the computer are streams of "bits and bytes" -- 16 bits, 24 bits, 32 bits -- that represent the signal but are no longer "the signal" itself; the sound is gone across the meadow into the hills, the signal from the microphone is converted to heat and lost forever. All that exists finally are "the bits and bytes" that the scientist's computer has recorded on her hard drive or her flash-card.
This type of "signal-processing" is called "analog-to-digital" (A-to-D) conversion and is done by special circuitry. In all cases the conversion involves (i) sampling the waveform, and (ii) A-to-D conversion at a fixed interval (such as 44,100 conversions per second, or 176,400 conversions per second for "high resolution" recordings). Quite often the process involves more steps: (i) smooth the signal with a "low-pass filter", (ii) "hold" the signal steady for a moment while (iii) the A-to-D converter samples the held signal, (iv) actually A-to-D convert the sampled signal. We will see below why these extra steps are required.
Sampling is easy:
To sample a waveform just multiply it by 1 to take the sample; the rest of the time, multiply by 0. (The briefer the sample, the better). In our example, the waveform we are sampling changes from -10 to +10 (usually the waveforms are much smaller, this is just an example). Whenever our "sampler" takes a sample, it -- in effect -- multiplies the waveform at that instant by 1.
- waveform....0.......3.......5.......7.......9.......10......10......10.......9.......7.......5.......3.......0......-3......-5......-7......-9......-10....
- sampler:..1..0..0..0..0..0..0..0..1..0..0..0..0..0..0..0...1..0..0..0..0..0..0..0..1..0..0..0..0..0..0..0..1..0..0..0..0..0..0..0..1...
- samples:..0..0..0..0..0..0..0..0..7..0..0..0..0..0..0..0..10.0..0..0..0..0..0..0...4..0..0..0..0..0..0..0.-5..0..0..0..0..0..0..0.-10..
Our resulting string of numbers-as-samples looks like this:
- {...0,7,10,4,-5,-10,...etc}.
Notice something interesting: our waveform is compressed into just a few samples, now stored in space on a computer disk (or wherever). Is it really possible that we can recover our original waveform from this squished stuff? YES!
An aside: Quantization:
A crude A-to-D converter might change our analog number between -15 and +15 into 31 levels of "quantization" equivalent to steps (quanta) of 1. These "levels" will come out as bits and bytes. For example, +5 might be 00100, -5 might be 10101, etc. Fancy A-to-D converters convert to 32 bits or about 4 billion levels. Common audio conversion in a computer is to 16 bits, about 65,000 levels.
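A minimal sketch of such a uniform quantizer in Python, assuming the 31-level, -15..+15 example above (the function name is mine):

 import numpy as np

 def quantize(v, lo=-15.0, hi=15.0, levels=31):
     # Round v to the nearest of `levels` evenly spaced values in [lo, hi].
     step = (hi - lo) / (levels - 1)   # step = 1.0 for the example above
     return lo + step * np.round((np.clip(v, lo, hi) - lo) / step)

 print(quantize(4.7), quantize(-5.2))   # 5.0 -5.0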
The role of the discrete Fourier transform is not to fret about issues surrounding "quantization". The transform's role is to convert our waveform from a time-based event into frequencies that we can stare at forever, if we want to. The only job quantization has is to give the computer (or us) numbers that it (we) can crunch.
Analog samples would work just fine if we had a bunch of analog multipliers and adders to do the transform.
But what about the transform? Show me the transform!
All in good time. First we need to describe how we get the waveform back:
Discrete signal playback:
When the scientist plays back the sound on her computer, the computer delivers the saved bytes to the sound-card.
> [usually beforehand the scientist selects a "slice" to look at and transform... Sound Forge example? hence the sampling window]
The card uses a reverse process called Digital-to-Analog (D-to-A) to convert the bytes and bits back into a waveform. But the output from the D-to-A converter will again have discrete steps in it. If we were to look at the Sony Sound Forge reproduction of the "signal's waveform" we find it is represented by points (the discrete samples) connected by lines (interpolations so the result may look like triangles). After D-to-A conversion the sound card smoothes out the steps (or triangles) with a low-pass filter. The sound card amplifies this smoothed now-analog signal; it is now in a form our scientist can hear on her speakers [Is this correct re sigma-delta conversion?]
A question appears: Why do we believe we can trust our tools?:
In brief: we can't until we "calibrate them" -- test them against known good results. We wonder: will a reproduction be faithful to its original? Maybe yes, maybe no. Even if our crow-caw microphone introduces "junk" (wind noise is typical), a poorly-designed sound card may introduce unexpected junk into our scientist's reproduced waveform (growls, whistles, whines). If this is the case, she (hopefully) will find it when she looks at "the spectrum" of the waveform. With the use of the discrete Fourier transform she can do just that -- look at the "frequency spectrum" of her bird and insect sounds, present them on a sonogram that plots frequency on the Y axis, time on the X axis, and colors the intensity of the sound. Perhaps from what she expects and from applying known waveforms to her sampler, she can learn to "trust her tools".
Preliminaries
sines and cosines:
Why do we need to discuss "sines" and "cosines"? It just turns out that any snippet of waveform can be approximated by a bunch of them added together. The why and the proof of this is not so simple, however. [see Why it Works].
Most people think of sines and cosines in terms of trigonometry, but the words themselves do not contain those ideas. Rather, the word "sine" comes from Latin sinus meaning "curve": we speak of Marilyn Monroe as a "sinuous woman", for example. And "cosine" comes from Latin cosinus, co- (with, together) +sinus "curve" (Websters New Collegiate). The sine and cosine are close associates.
The pendulum is commonly used as the example of a repeating "cosinusoidal" motion (cf Cannon p. 171ff). Pull it to the right R, then let it go. The pendulum comes back to center C, swings to the left L, then back to center C:
- start = R, -> C, -> L, -> C, -> R, -> C, -> L, -> C, etc.
A "full cycle" is the motion R, C, L, C. Apply numbers to the symbols R, C, L; choose R = +1, C = 0, L= -1. Now record the motion -- at one-second snap-shots -- of a really slow pendulum -- our pendulum takes 8 seconds to make its complete cycle . We see that our pendulum seems to "dwell" longer at the extremes L = -1 and R = +1, and move faster in the center C = 0. Our records, in fact, show the pendulum's positions to be as follows:
- time in seconds { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...}
- position of pendulum: { 1, 0.7, 0, -0.7, -1, -0.7, 0, 0.7, 1, 0.7, ...}
The waveform in the first picture is a cosine. In the picture of its reconstruction the reconstructed waveform is "almost a sine" -- if it were to start at zero it would be a sine (ignoring the wiggles riding on it). Its shift to the right represents a delay and is called its phase shift; the "hold" and the "filter" are responsible for this.
Observe what we have just done: we have sampled the motion of the pendulum. And we will soon use these same numbers to extract the Discrete Fourier Transform from our snippet of waveform!
We say the pendulum's period is how long in seconds the pendulum requires to make one full cycle; its frequency (and hence its cosine's frequency) is the reciprocal of the period. Our slow pendulum takes 8 seconds to move through one full cycle, so we say that its frequency is 1/8 Hz (Hz = Hertz = cycles per second).
If we plot the times on an X axis and the pendulum's position on a Y axis and draw a smooth curve through the points, the cosine is just this smooth curve. A sine curve is of identical shape, just shifted right so that it starts at 0 rather than 1. [footnote: derivative of sine is cosine and vice versa; easy to see with difference equations]. If we were to put a felt-tip pen on the bottom of our pendulum and, at a steady rate, pull a piece of paper beneath the pendulum, the motion of the pen would trace out the cosine (or sine). The shapes of the two waves -- sine and cosine -- are identical; they just start at different values.
Sampling Window:
Only rarely do we sample a signal for a long period of time and then apply a discrete Fourier transform to the whole thing. Usually we do this for just an interesting piece of the waveform. If we are recording a crow's caw we just "snip" the "caw" and discard the uninteresting road noise and lawn-mower sounds. The time-duration of the "snippet" (or length of the snippet if we are looking at the waveform on our computer screen) is called "the window".
For more about what can go wrong with windows and what we do about it, see "What can go wrong" below.
Fundamental frequency, harmonics, frequency components, amplitudes:
Anyone who plays a musical instrument knows what an "octave" is: it is just the next lower or higher register. For example: middle C is the note C just below the treble staff, the first octave above middle C is the C on the staff, and the next octave higher puts C above the staff. Each higher octave represents a doubling of frequency. A harmonic is similar, but it represents an integer multiple (usually positive, with 0 a special case: 0, 1, 2, 3, 4, ...), rather than a doubling, of a base frequency called "the fundamental:"
- "harmonic: Overtone: esp: one whose vibration frequency is an integral multiple of that of the fundamental (Webster's)
- "fundamental: the harmonic component of a complex wave that has the lowest frequency and commonly the greatest amplitude (Webster's)
- "amplitude: the extent of a vibratory movement (Webster's)
Another name for the harmonics is the frequency components. The harmonics/frequency components together with the fundamental make up the spectrum of the waveform.
As it turns out we will need to find both cosine and sine harmonics. Neither a collection of sine harmonics nor a collection of cosine harmonics alone is sufficient to reconstruct our waveform -- we need both sets. Why? Because the cosine always starts from 1; if we encounter a wave that starts from 0 we will need the sine harmonics too. More likely than not we will need both sets.
The fundamental is fundamental
So just who or what determines "the fundamental"?
Hamming (p. 508) provides us with a wonderful analogy: think about our "snippet" of waveform as if it were wrapped around a cylinder with a circumference the same length as the snippet. [The following may be O.R. [?]: a perhaps even better analogy is an early precursor to the motion-picture, the Mutoscope -- a cylinder with sampling slits in it, around the inside of which a sequence of images is wrapped. The viewer cranks a handle and watches the scene move. Problem is, the scene repeats every rotation. But this is exactly like Hamming's analogy.]
The discrete Fourier transform treats our waveform as if it were "a snippet" wrapped around a cylinder that is spinning. It is the circumference of the cylinder -- or the (time-)duration for the cylinder to make one full rotation-cycle -- that determines the so-called "wavelength of the fundamental", or in time, the longest duration and lowest frequency that the transform can deliver back to us.
All the other frequencies that the transform delivers/produces will be integer multiples of this fundamental. Why would this be? An obscure answer is: the average value of the "area under the curve" of a sine or cosine over a whole number of cycles is 0. If we were to use a non-integer number of cycles (0.5 cycles or 1.5 cycles) this would not be the case.
But rather than this obscure answer, what we are going to do is show firstly that a cosine fundamental can "pluck out" of a waveform a cosine of the same (fundamental) frequency, but not a sine. Secondly we will show that a cosine with double the frequency plucks out neither a sine nor a cosine of the fundamental frequency. This will be true for every harmonic, not just the first with double the frequency.
Our first transform: cosine x cosine = 1 if the cosines are the same frequency and "in phase"
We will "transform" our sampled pendulum's cosine waveform with itself, and see what we get. What we do is write down one cycle of our pendulum -- sampled at the 8 1-second intervals -- and multiply each sample by itself. We add up the numbers, and divide by 4 to get what is called "(double) the weighted average".
- time in seconds.....: { 0,.... 1, ...2,... 3,... 4,... 5,....6,....7 }
- position of pendulum..: { +1.0, +0.7, +0.0, -0.7, -1.0, -0.7, +0.0, +0.7 }
- cosine wave............: { +1.0, +0.7, +0.0, -0.7, -1.0, -0.7, +0.0, +0.7 }
- products.................: { +1.0, +0.5, +0.0, +0.5, +1.0, +0.5, 0.0, +0.5 }
The sum of the individual products is 4 and the average is 1.
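The same multiply-and-average can be checked on a spreadsheet or, as a sketch, in a few lines of Python (exact values rather than the rounded 0.7):

 import numpy as np

 N = 8
 n = np.arange(N)
 pendulum = np.cos(2 * np.pi * n / N)   # the sampled slow pendulum (a cosine)
 products = pendulum * pendulum         # "transform" the cosine with itself
 print(np.round(products, 2))           # [1. 0.5 0. 0.5 1. 0.5 0. 0.5]
 print(round(products.sum() / 4, 6))    # 1.0 -- the recovered amplitude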
Our second Discrete Fourier Transform: a sine and cosine of the same (integral) frequency produce 0
We will "transform" our pendulum's motion with a sine pendulum's motion -- same slow speed -- and see what we get. We add up the numbers, and divide by 4 to get "the weighted average".
- time in seconds.........: { 0,.... 1, ...2,... 3,... 4,... 5,....6,....7 }
- position of cosine pendulum: { +1.0, +0.7, +0.0, -0.7, -1.0, -0.7, +0.0, +0.7 }
- position of sine pendulum..: { +0.0, +0.7, +1.0, +0.7, +0.0, -0.7, -1.0, -0.7 }
- products...................: { +0.0, +0.5, +0.0, -0.5, +0.0, +0.5, +0.0, -0.5 }
- running sum of products....: { +0.0, +0.5, +0.5, +0.0, +0.0, +0.5, +0.5, +0.0 }
The sum of the products is 0 and the average is 0.
Our third Discrete Fourier Transform: two cosines, or a sine and cosine, of different (integral) frequencies produce 0
We will "transform" our pendulum's motion with a faster pendulum's motion -- three as fast -- and see what we get. What we do is put down three cycles of our fast pendulum -- sampled at the same 8 1-second intervals -- and multiply each slow sample by the fast sample. We add up the numbers, and divide by 4 to get "the weighted average".
- time in seconds.........: { 0,.... 1, ...2,... 3,... 4,... 5,....6,....7 }
- position of slow pendulum..: { +1.0, +0.7, +0.0, -0.7, -1.0, -0.7, +0.0, +0.7 }
- position of fast pendulum..: { +1.0, -0.7, +0.0, +0.7, -1.0, +0.7, +0.0, -0.7 }
- products....................: { +1.0, -0.5, +0.0, -0.5, +1.0, -0.5, +0.0, -0.5 }
- running sum of products.....: { +1.0, +0.5, +0.5, +0.0, +1.0, +0.5, +0.5, +0.0 }
The sum of the products is 0 and the average is 0.
This time we multiply our slow pendulum's motion by the three-times-as-fast sine waveform:
- time in seconds.......: { 0,.... 1, ...2,... 3,... 4,... 5,....6,....7 }
- position of slow pendulum..: { +1.0, +0.7, +0.0, -0.7, -1.0, -0.7, +0.0, +0.7 }
- position of fast pendulum..: { +0.0, +0.7, -1.0, +0.7, +0.0, -0.7, +1.0, -0.7 }
- products...................: { +0.0, +0.5, +0.0, -0.5, +0.0, +0.5, +0.0, -0.5 }
- running sum of products.....: { +0.0, +0.5, +0.5, +0.0, +0.0, +0.5, +0.5, +0.0 }
The sum of the products is 0 and their average is 0.
So how many integer-multiples of this sort can we get? Unfortunately, not as many as we'd like: just the number of samples in the fundamental's cycle divided by 2 -- in other words 4 for the cosine collection and 4 for the sine collection. Why is this? The answer is not so simple. See Nyquist Rate below.
Our conclusions from the above examples:
- we need a sample of waveform of a distinct length or duration; the duration or length of this snippet is called "the window"
- the transform's fundamental frequency is the reciprocal of (i.e. 1 divided by) the "window's" length or duration. If the window is 8 samples long and each sample takes 1 second then the window is 8 seconds and the fundamental frequency is 1/8 = 0.125 Hz (i.e. really slow).
- the number of samples taken in the window should be an integer; the powers of two { 2, 4, 8, 16 etc } are particularly congenial.
- the harmonics will be integer multiples of the fundamental frequency; the allowed harmonic numbers run only from 0 to one-half our sample size (e.g. { 0, 1, 2, 3, 4 } for a sample size of 8).
- If our waveform has sines in it, only the sine sample-numbers will "pluck them out"
- If our waveform has cosines in it, only the cosine sample-numbers will "pluck them out"
- Sines pluck only sines of the same frequency
- Cosines pluck only cosines of the same frequency
- The amplitudes of the plucked sines or cosines are computed by multiplying the samples by their sampled sine or cosine counterparts, adding up the products for each sine and cosine frequency, dividing by the number of samples, and then multiplying by 2 (see the sketch after this list).
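Here is a short sketch of that recipe in Python, applied to the sampled waveform { 1, 1, 1, 1, 0, 0, 0, 0 } of the next section (the function name poor_mans_dft is mine, not standard terminology; note that harmonics 0 and N/2 take the full divisor N rather than N/2):

 import numpy as np

 def poor_mans_dft(x):
     # Cosine and sine amplitudes by multiply, sum, and scale.
     N = len(x)
     t = np.arange(N)
     cos_amps, sin_amps = [], []
     for h in range(N // 2 + 1):                  # harmonics 0 .. N/2
         scale = N if h in (0, N // 2) else N / 2
         cos_amps.append(np.sum(x * np.cos(2 * np.pi * h * t / N)) / scale)
         sin_amps.append(np.sum(x * np.sin(2 * np.pi * h * t / N)) / scale)
     return np.array(cos_amps), np.array(sin_amps)

 x = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
 b, a = poor_mans_dft(x)
 print(np.round(b, 3))   # cosine amplitudes: [0.5 0.25 0. 0.25 0.]
 print(np.round(a, 3))   # sine amplitudes:   [0. 0.604 0. 0.104 0.]

The n=3 cosine amplitude comes out 0.25, matching the worked example below.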
Let the transforms begin
1. Our sampled waveform: { 1, 1, 1, 1, 0, 0, 0, 0 }
2. The fundamental frequency is 1/8 Hz (the fundamental period is 8 seconds)
3. The integer sample size is 8
4. The various integer multiples of the fundamental will be
- {0, 1, 2, 3, 4 }
5. The various integer frequencies of the sines and cosines will be
- { 0, 1/8, 2/8, 3/8, 4/8 }
6. The sampled cosine fundamental and its harmonics:
- n=0: { 1, 1, 1, 1, 1, 1, 1, 1}
- n=1: { 1, 0.7, 0, -0.7, -1, -0.7, 0, +0.7 }
- n=2: { 1, 0, -1, 0, 1, 0, -1, 0 }
- n=3: { 1, -0.7, 0, 0.7, -1, 0.7, 0, -0.7 }
- n=4: { 1, -1, 1, -1, 1, -1, 1, -1}
7. The sampled sine fundamental and its harmonics:
- n=0: { 0, 0, 0, 0, 0, 0, 0, 0}
- n=1: { 0, 0.7, 1, 0.7, 0, -0.7, -1, -0.7 }
- n=2: { 0, 1, 0, -1, 0, 1, 0, -1 }
- n=3: { 0, 0.7, -1, 0.7, 0, -0.7, 1, -0.7 }
- n=4: { 0, 0, 0, 0, 0, 0, 0, 0}
To create our transform we simply multiply the counterparts (thus we have to do this 10 times, though 2 of them -- the n=0 and n=4 sines -- yield all zeros and so are trivial). Here is an example of n=3 using the numbers of the sampled cosine:
- n=3: { 1*1, 1*-0.7, 1*0, 1*0.7, 0*-1, 0*0.7, 0*0, 0*-0.7 }=
- { 1, -0.7, 0, +0.7, 0, 0, 0, 0 }
We add these and divide by 4 to get 0.25. This is the amplitude (height, value) of this "component" n=3. If one were to write the cosine equation for this harmonic "wave" it would be:
- for n=3, 0.25*cos(2π*3*t/8) where "t" is the times of the samples { 0, 1, 2, 3, 4, 5, 6, 7 }
A 16-sample transform
We can get a better feeling of what is going on if we go to larger sample sizes. Our original waveform shown here is the following:
{ -0.5, -0.5, -0.5, 1, 1, 1, -1, -1, -1, -1, -1, 0.5, 0.5, 0.5, 0.5, 0.5 }
The graphing together of sine and cosine amplitudes is somewhat difficult to look at, so we will plot the information in a different way -- in the more common presentation of "a spectrum". The "spectrum" has the fascinating quality of being position invariant -- unlike the varying sine-cosine harmonics that change depending on where we "start" our waveform on the cylinder, the spectrum remains the same.
A spectrum does not show the amplitudes of the individual sines and cosines, but rather it combines the two into a single number for each harmonic (for more about the trigonometry behind this see Why it Works):
- Vn = sqrt(an^2 + bn^2)
The square root is usually taken as positive and represents a directionless quantity (the length of a line but not its direction). This composite amplitude V, however, contains only half the information. The rest is contained in "the phase" φ:
- φ = arc-tangent(sine-amplitude/cosine-amplitude)
An "arc-tangent" is "the angle whose tangent is ...". On an Excel spreadsheet the ATAN function "appears" in the form of radians, where a "radian" is defined as 2π of these puppies in one circle. If we want to convert "radians" to "degrees" (360 degrees in a circle), a radian is 360 degrees/(2π) = 57.29577951 degrees, a convenient-and-easy to remember number.
To do a reconstruction, then, either both the composite amplitudes Vn and the phases φn are required for each component n, or both the sines' and cosines' amplitudes an and bn are required for each component. Nothing comes for free.
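A sketch of the conversion for a single (bn, an) pair, in Python; np.arctan2 takes (sine-amplitude, cosine-amplitude) and gets the quadrant right where a bare arc-tangent of the ratio would not. The two amplitudes below are illustrative values of my own:

 import numpy as np

 b_n, a_n = 0.25, 0.10                  # cosine and sine amplitudes
 V   = np.hypot(a_n, b_n)               # sqrt(a_n**2 + b_n**2)
 phi = np.arctan2(a_n, b_n)             # the phase, in radians
 print(round(V, 4), round(np.degrees(phi), 1))   # 0.2693 21.8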
A 16-sample reconstruction
Our original waveform shown here is the following, the same as appears on the cylinder above:
- { -0.5, -0.5, -0.5, 1, 1, 1, -1, -1, -1, -1, -1, 0.5, 0.5, 0.5, 0.5, 0.5 }
The original waveform is shown in black. Just below the X-axis (labelled 0.00) is the "d.c." (abbreviation for "Direct Current") or steady-state value that comes from the 0th cosine component. Our reconstruction sums this d.c. value and all the other sine and cosine terms for each component, then sums up the resulting waves into a partial reconstruction. Not all the partial-reconstruction sums are shown because the picture gets cluttered.
The odd wiggles on the final reconstruction (bold red line) are typical. See more at Gibbs phenomenon below.
Poor man's explanation of the discrete Fourier Transform: Discussion
---A WORK IN PROGRESS--
Nyquist frequency: the maximum frequency in our "transform" = 1/2 the sampling frequency
Observe an interesting fact: when n=4, the sine is 0 all the way down the column, but the cosine alternates:
- { 1, -1, 1, -1, 1, -1, 1, -1 }
These numbers represent the cosine's maximum and minimum values. If our "waveform" contains a cosine or sine that is "faster" than this, our sampling will make an error. This is related to what is called the Nyquist frequency. The maximum frequency in our waveform should be less than or equal to (at worst) one-half of our sample rate. Aliasing occurs when we don't attend to this problem (for more see Aliasing below).
Sometimes (for sound purposes) folks plot these as a 'sonogram' where frequency is on the Y axis, time is on the X-axis, and intensity is the color or the "brilliance" of a point in the plane.
Repetitive numbers in the sine and cosine samples
Observe that only the following numbers appeared in the 8-sample transform when we sample the sines and cosines:
- -1, -0.71, 0, +0.71, +1
This lucky find means that if we are doing this with a microcontroller we don't have to compute vast numbers of sines and cosines, we can just store the necessary numbers in a table. It happens because our sines and cosines are periodic and related to one another by integers. This observation will be of use if we want to proceed to a "fast Fourier Transform" (see below).
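A sketch of the table idea in Python (the layout is my own choice): for a sample size of 8 the whole sine table is eight entries drawn from those five values, and every sin(2π·m·t/8) is a modulo-8 lookup:

 import numpy as np

 N = 8
 table = np.sin(2 * np.pi * np.arange(N) / N)   # the eight stored values
 print(np.round(table, 2))   # [ 0.  0.71  1.  0.71  0. -0.71 -1. -0.71]
 # sin(2*pi*m*t/N) == table[(m*t) % N]; the cosine is the same table
 # read a quarter-turn (N/4 entries) ahead, for N divisible by 4.
 m, t = 3, 5
 print(np.isclose(table[(m * t) % N], np.sin(2 * np.pi * m * t / N)))   # True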
But, we do have to be clever in our choice of sample size. An example (not shown) of 10 samples creates 5 harmonics for the sine and five for the cosine. But the numbers are not so fortunate and there are lots more of them. If you were to work this out on a spread-sheet you would observe the numbers below:
- -1.00, -0.95, -0.81, -0.59, -0.31, 0, +0.31, +0.59, +0.81, +0.95, +1.00
Hamming seems fond of the number "12" (cf Hamming p. 543ff).
Prelude to the 'Fast Fourier Transform'
The "FFT" is just a computing trick that depends on the fact noted above: that all the values of the sines and cosines in the table are drawn from a small repeating set (cf. Chapter 33 "The Fast Fourier Transform" in Hamming, p. 539ff). The FFT has become a highly-developed art since Hamming. See Discrete cosine transform for nice examples.
Poor man's explanation of the discrete Fourier Transform: What can go wrong
---A WORK IN PROGRESS--
Hamming describes the many issues that can arise with the discrete Fourier Transform:
- "33.5 Dangers of the Fourier Transform:
- "...the ever present problem of aliasing (Sections 31.2, 31.6)
- "...leakage due to the use of a finite length of record (see Sec. 34.8)
- "...picket fence effect" (Hamming p. 543)
The origin of aliasing
The origin of "beat frequencies" (in sound recordings: unexpected groans, whistles, whines, throbbing pulsations; in photos: weird Moire patterns) and their role in aliasing can be found in some trigonometric identities that show what happens when we multiply sines by sines, or cosines by cosines. (We can easily derive these identities if we express sine and cosine in their complex (i.e. e^ix) forms). We use the following:
- cos(x+y) = cos x*cos y - sin y*sin x
- cos(x-y) = cos x*cos y + sin y*sin x
From the above we simply derive the equations below. These show that if we multiply two sines or cosines, we get the sum frequency (x+y) and the difference frequency (x - y) and there is nothing we can do about it:
- cos x*cos y = 1/2(cos(x - y) + cos(x + y))
- sin x*sin y = 1/2(cos(x - y) - cos(x + y))
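A quick numeric spot-check of those two identities at arbitrary angles (the values of x and y are mine):

 import numpy as np

 x, y = 0.7, 2.3
 print(np.isclose(np.cos(x) * np.cos(y),
                  0.5 * (np.cos(x - y) + np.cos(x + y))))   # True
 print(np.isclose(np.sin(x) * np.sin(y),
                  0.5 * (np.cos(x - y) - np.cos(x + y))))   # True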
Because any Fourier transform multiplies cos x*cos y and sin x*sin y we will always get "beats". Whether or not the "beats" have any effect on what we are doing depends on how cautious we are with respect to our sample rate and the waveform that we are sampling.
Sampling also causes aliasing
Suppose we sample, at 4 samples per second (samples occurring at intervals of 0.25 second), a sine waveform with a frequency of 6 per second. In other words, we have not sampled at a high enough frequency. Sampling can be thought of as multiplying our 6 Hz waveform by "the sampler" occurring at 4 per second. Let's use the simplest sampler we can: a sinewave "lifted up" so that its peak occurs at 1 and its minimum at zero:
- 0.5 + 0.5*sin(2pi*4*t)
Our sampling thus multiplies our 6 Hz sinewave by the "sampler"
- sin(2pi*6*t)*[0.5+ 0.5*sin(2pi*4*t)] = 0.5*sin(2pi*6*t) + 0.5*sin(2pi*4*t)*sin(2pi*6*t)
And we have seen that this process of multiplication introduces "beats". The right-most term yields the "beats" at 2 and 10 Hz:
- sin x*sin y = 1/2(cos(x - y) - cos(x + y))
- 0.5*sin(2pi*4*t)*sin(2pi*6*t) = 0.5*{ 1/2*( cos(2pi*4*t - 2pi*6*t) - cos(2pi*4*t + 2pi*6*t) ) }
So our sampling has produced the following new waveform:
- 0.5*sin(2pi*6*t) + 0.25*{ cos(2pi*2*t) - cos(2pi*10*t) }
Observe that we have just picked up 2 Hz and 10 Hz in addition to the original 6 Hz.
Now, what happens when we do our transform? We create more beats! We can see this if we use the 4 Hz sine or cosine of the transform:
- 1*cos(2pi*4*t)*{ 0.5*sin(2pi*6*t) + 0.25*( cos(2pi*2*t) - cos(2pi*10*t) ) }
The first term creates 2 and 10, the second term creates 2 and 6, and the third term creates 6 and 14!
The process of sampling and processing with discrete Fourier transforms is like making an infinitely long paper doll: we fold our infinitely-long paper, cut one shape and unfold it to reveal the repeating pattern, ad infinitum (cf drawing on page 514 of Hamming, and his explanation p. 513). If we cut too far our shape "overlaps" into the shapes beside it. This is called "folding".
Harmonic 6 is equivalent to 2. 6 has "folded back" to become 2, and the "2" shouldn't be there. 10 also folds back to 2.
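The folding is easy to verify numerically: with 8 samples per window, harmonics 6 and 10 land on exactly the same samples as harmonic 2 (a minimal sketch of the example above):

 import numpy as np

 n = np.arange(8)
 f2 = np.cos(2 * np.pi * 2 * n / 8)
 print(np.allclose(np.cos(2 * np.pi * 6 * n / 8), f2))    # True: 6 folds to 2
 print(np.allclose(np.cos(2 * np.pi * 10 * n / 8), f2))   # True: 10 folds to 2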
Note that we could have done this in reverse order: multiplied, in the continuous domain, our waveform by the sine, then sampled it. The results would be the same.
Aperture Effect
The aperture (cf Carlson p. 285, Bracewell p. 276ff) is the time (e.g. 0.001 second for a slow sampler) during which a "snap-shot" of the waveform is taken and either held constant (while an A-to-D conversion occurs, as in the above audio example) or allowed to change while the sample or (if there is no "hold") the conversion is occurring. Thus we see two sources of error during the instant the sample is taken: (i) the waveform may be changing during "the sample", (ii) an A-to-D error may occur. But there is a third intrinsic error which is related to "windows".
Window error
As mentioned above, the fact that we have introduced a "window" of finite time-duration during which we take our samples itself introduces more "junk".
Our analogy points to a source of problems: we may "cut" our waveform at an inopportune place and make the splice to another inopportune place, and thereby create a "discontinuity." And such discontinuities create something called 'the Gibbs phenomenon' (see more below).
[need reference here e.g. Sound Forge uses various methods:] What to do about this? Engineers have devised ways of "smoothing" the signal -- multiplying the window by a "weighting factor" so that the discontinuity is not so odious. [Blackman, Hamming, etc. windows]
Gibbs phenomenon
>(cf Hamming p. 532ff, Bracewell p. 209). [Is this Hamming's "leakage"?]
Some examples produce tidy results -- the reverse DFT reproduces its waveform perfectly. But if we had picked the following waveform we would not be able to reproduce it exactly. In the little picture, the original is in blue and the reproduction is in red. This problem is caused either by (i) the Gibbs phenomenon or by (ii) too-high frequency content for the sample rate:
- 10110000
- wave....reconstructed wave
- 1........+1.13
- 0........-0.13
- 1........+1.13
- 1........+0.88
- 0........+0.13
- 0........-0.13
- 0........+0.13
- 0........-0.13
It appears to have "ripples" in the waveform that we cannot get rid of [etc: is this Gibbs or is this the effect of aliasing -- too much high-freq crap in the original waveform?]
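For what it's worth, a quick numeric check (a sketch, assuming the wave column above, { 1, 0, 1, 1, 0, 0, 0, 0 }) suggests the ripple is neither Gibbs nor aliasing: if the n=4 (Nyquist) cosine is scaled by 1/8 rather than 1/4, the 8-term sine/cosine sum reproduces the samples exactly, and the leftover 0.125·(−1)^t is precisely the ±0.13 ripple in the table:

 import numpy as np

 x = np.array([1, 0, 1, 1, 0, 0, 0, 0], dtype=float)
 N, t = 8, np.arange(8)
 recon = np.zeros(N)
 for h in range(N // 2 + 1):
     scale = N if h in (0, N // 2) else N / 2   # N (not N/2) at h = 0 and h = 4
     c = np.cos(2 * np.pi * h * t / N)
     s = np.sin(2 * np.pi * h * t / N)
     recon += (np.sum(x * c) / scale) * c + (np.sum(x * s) / scale) * s
 print(np.allclose(recon, x))   # True: exact at the sample points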
Poor man's explanation of the discrete Fourier Transform: Why it works
---A WORK IN PROGRESS--
> In the final analysis, can we convince ourselves that a waveform of a given length/duration can be expressed as "the finite sum of integral sines and cosines with various amplitudes"?
> An important observation: weighted average: If we plot one cycle of our fundamental sine and consider the area under the curve of "the upper half" (above the X-axis) to be positive area, and the "lower half" to be negative, then we see that the area below cancels out the area above. This is also true for the cosine, since they have the same shape.
This happens always when there are an integral number of cycles of the sine or cosine, but never when the number of cycles is not integral.
If we multiply our waveform by our "reference" sine or cosine, the total positive- and negative- areas under the resulting curve (the curve that represents the product of our waveform x sine, or waveform * cosine) may not be (and probably won't be) 0. The "residual" area gives us a clue about how much of that particular "component/harmonic" is "in the waveform".
>Why is "the component" that results from the transform just the simple average of the area? If we were to plot the "residual" area as a little rectangle, it will have either a positive-area or a negative-area) stretched over the extent of our "snippet window" = "length/duration of fundamental". Thus our rectangle's length is the "length" of the fundamental and its "height" is the area divided by the length. [Aren't things are off by a factor of 2? Why?]
> Then the notion of "plucking out" the amplitudes by use of the "unit harmonics" may be straight-forward (if you're willing to do some algebra). It is the assertion that "a finite waveform can be expressed as the finite sum...." that is hard to swallow.
- t = 0, 1, 2, 3, 4, 5, 6, 7
- W(t) = a0*sin(2π*0*t/8) + a1*sin(2π*1*t/8) + a2*sin(2π*2*t/8) + a3*sin(2π*3*t/8) + a4*sin(2π*4*t/8) + b0*cos(2π*0*t/8) + b1*cos(2π*1*t/8) + b2*cos(2π*2*t/8) + b3*cos(2π*3*t/8) + b4*cos(2π*4*t/8)
- W(t) = 0 + a1*sin(2π*1*t/8) + a2*sin(2π*2*t/8) + a3*sin(2π*3*t/8) + 0 + b0*1 + b1*cos(2π*1*t/8) + b2*cos(2π*2*t/8) + b3*cos(2π*3*t/8) + b4*cos(2π*4*t/8)
- W(t) = a1*sin(2π*1*t/8) + a2*sin(2π*2*t/8) + a3*sin(2π*3*t/8) + b0 + b1*cos(2π*1*t/8) + b2*cos(2π*2*t/8) + b3*cos(2π*3*t/8) + b4*cos(2π*4*t/8)
To pluck out the third cosine harmonic, multiply W(t) by the unit cosine 1*cos(2π*3*t/8). This generates a slew of harmonics, in essence doubling the number of terms on the right:
- W(t)*1*cos(2π*3*t/8) = { a1*sin(2π*1*t/8) + a2*sin(2π*2*t/8) + a3*sin(2π*3*t/8) + b0 + b1*cos(2π*1*t/8) + b2*cos(2π*2*t/8) + b3*cos(2π*3*t/8) + b4*cos(2π*4*t/8) }*1*cos(2π*3*t/8)
> concept #1: to locate a point on the plane we need two values, its "X-coordinate" and the "Y-coordinate". This is, ultimately, where the need for both the sine and the cosine waveforms comes from. If our X- and Y-axes are at right angles, we say that they are "orthogonal", a word used interchangeably with 'perpendicular':
- "orthogonal: Greek: from orth- straight, right, true +gōnia angle: intersecting or lying at right angles; of vectors: having the scalar product of zero" (Webster's)
> concept #2: the ultimate value of a*sine X + b*cos Y is "Z" in the third dimension. If we are to keep the cylinder analogy we really have to think about this in 3-D. The waveform is printed on the treads of a tire, the tire is mounted on a wheel on an axle that spins at the same rate as the fundamental sine and cosine. The 1st harmonic spins twice as fast. Etc.
> concept #3 requires a bit of trigonometry: the Pythagorean theorem. Given that our "X" and "Y-axes" are perpendicular/orthogonal, and the values for "X" and "Y" are the coordinates (location) of the point in the plane, the diagonal V-- measured from the origin to the point -- has the following length:
- V = sqrt(X^2 + Y^2)
An example: On a stairway the "run" is the length of the "tread" and the "rise" is the (perpendicular) height of the step. The diagonal is, well ... the diagonal.
- A typical stairway has a rise of 7.2 inches, a run of 10 inches, and a diagonal of 12.5 inches. The diagonal should be sqrt(7.2^2+10^2) =12.32, and indeed that is close to the measurement of 12.5 inches.
If this length-of-line V is given together with its "direction" we call it a "a vector". But how do we specify its "direction?"
> concept #4 requires a bit of trigonometry. A common way of describing a vector's "direction" (remember: a "vector" is the line drawn from the origin to a point on our waveform) is to give the angle, measured counter-clockwise, that it makes with the X-axis. If our vector-line lies flat on the X-axis and points to the right, it has an angle of 0° (always described with respect to the X-axis). Straight up it has an angle of 90°, straight left it has an angle of 180°, and straight down it has an angle of 270°.
> concept #5 requires a bit of trigonometry. The definition of "the cosine of an angle" is the "run" divided by the diagonal. The definition of "the sine of an angle" is the "rise" divided by the diagonal. The two concepts can be compressed into a third definition called "the tangent": "rise"/"run".
- Example: For our typical stairway, the tangent of the stairs is rise/run = 7.2/10 = 0.72. If we were to use the diagonal (12.5) and the run (10) the cosine of our angle is run/diagonal = 10/12.5 = 0.80, and the sine of our angle is rise/diagonal = 7.2/12.5 = 0.58.
- The angle that corresponds to a tangent of 0.72 is 36 degrees, the angle that corresponds to a cosine of 0.80 is 37 degrees, and the angle that corresponds to a sine of 0.58 is 35 degrees. These are close enough so that we can use the average of ~36 degrees as "the direction of the vector with length 12.5 inches" to describe our stairs.
Basis vectors:
What is the angle between the X-axis and Y-axis? We might start this way: we draw a line and call it "the X-axis". Somewhere on it we put a point; this we call "the origin". Then off the line somewhere we put a point, say to the upper right an inch or so. If we were to draw a line from the origin through this point it would form an angle. Now move the point directly above the origin so that the X-axis and the line are perpendicular (orthogonal). The angle of our perpendicular line is 90 degrees. What is its cosine? 0.
> This is the mathematical definition of orthogonal: cosine between two lines-as-vectors is 0.
>[So what is the point? Why does the reader want to care about this?: that for any point in an X-Y plane we can use two orthogonal vectors to locate it? these are "the basis vectors". In the "X-Y" plane we need only two: the (unit) X-leg and Y-leg of the (unit) triangle; any point in the plane can be located with the help of these two: 3*X + 4*Y locates a point at x=3, y=4. The length of "the vector" (measured from origin to the point) is sqrt(3^2 + 4^2) = 5, the angle is arctan(4/3) = 53 degrees. Given these two values L=5, angle=53 degrees, how do we locate our point? (x=5*cos(53°), Y=5*sin(53°)). But why not skip the X, Y step and go to a unit sine and unit cosine as a basis? How and why do "the harmonics" come into this?
> Here's an important observation: the product of two "vectors", i.e. a sine as "a vector" and a cosine as "a vector" times our waveform which is a bunch of vectors:
- point P =(a, b) = ax + by, point Q = (c, d) = cx + dy
What will be the sum of these two points? The simplest way would be to just add everything together in a mish-mash. But we would lose our vectors. Vectors add component ax to component cx, and component by to component dy. The resulting components (a+c)x and (b+d)y are considered "independent" and are not intermingled:
- point P + Q = (a+c)x + (b+d)y
What will be the product of these two points, a product that we need for our transform? We will just do our multiplication in the "normal way" -- term by term -- and see what it looks like:
- product P*Q = ax* cx + by*cx + ax*dy + by*dy
We would expect that, for unit basis vectors, x*x = 1, and ditto for y*y = 1.
But what about y*x and x*y? Because they are at right angles to one another, both products are 0. [and why is that?]
Given that we accept the sentence above, our product leaves us with something simple -- the cross terms drop out and only the matching components survive:
- product P*Q = a*c + b*d (a plain number -- the "dot product")
>[We have to move, in an intuitive manner, from a point to a sequence of points that trace out a curve and show without handwaving . Each point can be described by a single vector (length) and an angle with respect to the X-axis, or as two perpendicular vectors. for a circle we can see this easily, and the sines and cosines for the fundamental -- the circumference of the unit circle -- . But just how do the harmonics come into this? The Harmonics spin around the Z axis-as-axle twice as fast, three times as fast, four times as fast, etc.
>We have to show the reader how this stuff works. We have cylinders or disks spinning at the "fundamental rate" of 1 revolution per 8 seconds, twice as fast 2 revolutions per 8 seconds, three times as fast (3 revolutions per 8 seconds), 4 revolutions per 8 seconds.
Here, m and n are integers that will represent "the harmonics" as described below. They take either positive or negative values; usually they are just considered positive. For continuous sine and cosine waveforms:
- integral from -pi to +pi (sin mx*sin nx) = 0 if m<>n
- integral from -pi to +pi (sin mx*cos nx) = 0
- integral from -pi to +pi (cos mx*cos nx) = 0 if m<>n
- integral from -pi to +pi (sin mx*sin mx) = pi if m<>0
- integral from -pi to +pi (cos mx*cos mx) = pi if m>0
- integral from -pi to +pi (cos mx*cos mx) = 2pi if m=0
- Proof of this can be seen if we fiddle around with the definitions of sine and cosine written in terms of their "imaginary" equivalents.
For discrete sine and cosine waveforms the following become summations over the number of samples S (e.g. S=8):
- summation from t=0 to t=S-1 of sin(2*π*m*t/S)*sin(2*π*n*t/S) = 0 if m<>n
- summation from t=0 to t=S-1 of sin(2*π*m*t/S)*cos(2*π*n*t/S) = 0
- summation from t=0 to t=S-1 of cos(2*π*m*t/S)*cos(2*π*n*t/S) = 0 if m<>n
- summation from t=0 to t=S-1 of sin(2*π*m*t/S)*sin(2*π*m*t/S) = S/2 if m is not 0 or S/2
- summation from t=0 to t=S-1 of cos(2*π*m*t/S)*cos(2*π*m*t/S) = S/2 if m is not 0 or S/2
- summation from t=0 to t=S-1 of cos(2*π*m*t/S)*cos(2*π*m*t/S) = S if m = 0 or m = S/2
Poor man's explanation of the discrete Fourier Transform: References
---A WORK IN PROGRESS--
- Kreider, Donald L. et al., An Introduction to Linear Algebra, Addison-Wesley, Reading, Mass., 1966. After introducing the notion of orthogonal vectors and x*y = 0, Kreider just pops onto the reader the notion of "trigonometric polynomial(s) of degree 2n+1" with an inner product defined as
- f*g = integral from -pi to +pi (f(x) g(x)dx)
- This is, of course, the inner product underlying the Fourier series. He then shows how to normalize the set of "vectors" derived from the sum (just the terms), and then shows via the integrals (above) of sin mx*sin nx etc. that these are orthogonal. In a Corollary he shows how these become a basis for an n-dimensional Euclidean space. He then shows how to use "the perpendicular projections" of these trigonometric polynomials to determine the "Fourier coefficients".
- Hamming, R. W., Numerical Methods for Scientists and Engineers: Second Edition, Dover Publications Inc., New York, 1962, 1973. An excellent addition to any technical library.
- Chapter 31 Fourier Series: Periodic Functions p. 504ff,
- Chapter 32 Convergence of Fourier Series,
- Section 32.5 The Gibbs Phenomenon,
- Chapter 33 The Fast Fourier Transform
- Bracewell, Ron, The Fourier Transform and its Applications, McGraw-Hill, Inc. 1965. cf in particular Chapter 3 Convolution where he demonstrates how to slide paper tapes past one another to do discrete running means ("serial products") in the process of doing "a convolution".
- peripheral ref. LePage, Wilbur R., Complex Variables and the Laplace Transform for Engineers, Dover Publications Inc., New York, 1961.
- Cannon, Dynamics of Physical Systems ...
wvbaileyWvbailey 18:50, 16 June 2006 (UTC)
Opinions cont'd
- Readers are unlikely to find this clearer. If a reader doesn't know enough math to understand complex exponentials, burying them in tables of numbers and real trigonometry is probably not going to be something they'd care to wade through. —Steven G. Johnson 15:36, 13 June 2006 (UTC)
- "A" for effort. But if a picture is worth 1000 words, then it is probably worth 100000 numbers. (Sorry) --Bob K 17:12, 13 June 2006 (UTC)
- The quote I've heard is: "A picture may be worth a thousand words, but an equation is worth a thousand pictures." (Another good one: "If a picture ain't worth a thousand words, forget it.") —Steven G. Johnson 17:34, 13 June 2006 (UTC)
Always begin with the simplest example
The simplest non-trivial example is N = 2 .
This table shows (−1)^(j·k).
 ...      k=-1  k=0  k=1  k=2  ...
 j=-1 ...  -1   +1   -1   +1   ...
 j=0  ...  +1   +1   +1   +1   ...
 j=1  ...  -1   +1   -1   +1   ...
 j=2  ...  +1   +1   +1   +1   ...
 ...
The rows and columns of the table are periodic with period 2. So the following tiny table is sufficient.
       k=0  k=1
 j=0   +1   +1
 j=1   +1   -1
A pair of real numbers x = (x0, x1) is transformed into a pair of numbers Fx = ((Fx)0, (Fx)1) defined by
- (Fx)0 = 2^(-1/2)·(x0 + x1)
- (Fx)1 = 2^(-1/2)·(x0 − x1)
This F is the discrete fourier transformation.
These special cases follow from the definition:
- F(1, 0) = (2^(-1/2), 2^(-1/2))
- F(0, 1) = (2^(-1/2), −2^(-1/2))
- F(2^(-1/2), 2^(-1/2)) = (1, 0)
- F(2^(-1/2), −2^(-1/2)) = (0, 1)
as does the proof that it is a linear transformation:
- F(x + y) = (Fx) + (Fy)
- F(a × x) = a × (Fx)
and the fact that the transformation is an involution
- F(Fx) = x
So, any periodic sequence, of period N = 2, can be written as a linear combination of powers of (−1). If the variable j is interpreted as a point of time, and xj is the corresponding sample of a signal, then the variable k is interpreted as a frequency, and (Fx)k is the corresponding amplitude. The amplitude (Fx)0, corresponding to the frequency k = 0, is called the DC amplitude, for Direct Current, and the maximum frequency, k = 1, is called the Nyquist frequency.
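A sketch of this N = 2 transform in Python, checking one of the special cases and the involution:

 import numpy as np

 def F2(x):
     r = 2 ** -0.5
     return np.array([r * (x[0] + x[1]), r * (x[0] - x[1])])

 print(F2([1, 0]))            # [0.70710678 0.70710678], i.e. (2^(-1/2), 2^(-1/2))
 print(F2(F2([3.0, -1.0])))   # [ 3. -1.] -- F(Fx) = x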
The next case to consider is N = 4 .
(This case is simpler than the case N = 3 because the square root of minus one, i, is logically prior to the primitive cube root of unity, (−1 + i·3^(1/2))/2. In both cases it is necessary to consider complex numbers.)
One possible generalization is:
- (Fx)0 = 4^(-1/2)·(x0 + x1 + x2 + x3)
- (Fx)1 = 4^(-1/2)·(x0 + i·x1 − x2 − i·x3)
- (Fx)2 = 4^(-1/2)·(x0 − x1 + x2 − x3)
- (Fx)3 = 4^(-1/2)·(x0 − i·x1 − x2 + i·x3)
However, this definition loses the involutory property.
The following alternative generalization:
- (Fx)0 = 4^(-1/2)·(x0* + x1* + x2* + x3*)
- (Fx)1 = 4^(-1/2)·(x0* + i·x1* − x2* − i·x3*)
- (Fx)2 = 4^(-1/2)·(x0* − x1* + x2* − x3*)
- (Fx)3 = 4^(-1/2)·(x0* − i·x1* − x2* + i·x3*)
where the asterisk * signifies the complex conjugate: (a+ib)* = a−ib, makes F an involutory and antilinear map.
- F(Fx) = x
- F(x + y) = (Fx) + (Fy)
- F(a × x) = a* × (Fx)
The latter choice is logically preferable but not always chosen in the literature.
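A numeric check of the conjugated N = 4 map in Python (a sketch; the test vector is my own): it is indeed an involution, and antilinear:

 import numpy as np

 def F4(x):
     xc = np.conj(np.asarray(x, dtype=complex))
     W = np.array([[1,   1,  1,   1],
                   [1,  1j, -1, -1j],
                   [1,  -1,  1,  -1],
                   [1, -1j, -1,  1j]])
     return (W @ xc) / 2                       # 4^(-1/2) = 1/2

 x = np.array([1.0, 2.0, 0.0, -1.0 + 0.5j])
 print(np.allclose(F4(F4(x)), x))              # True: F(Fx) = x
 print(np.allclose(F4(2j * x), -2j * F4(x)))   # True: F(a*x) = a-conjugate*(Fx)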
Note that this theory of fourier transform doesn't need the geometrical and analytical concepts of cos, sin, e, and π, which scare the beginner. It is pure algebra. Bo Jacoby 12:45, 16 June 2006 (UTC)
Discussion
- Good writing, good point. This stuff is so easy once you get the hang of it: as I say you can do it on a spreadsheet. I'll peruse what you did and see if I can figure out how to use your ideas in what I did.
- Thanks a lot. Bo Jacoby 09:03, 19 June 2006 (UTC)
Can you cite a reference? I need the reference. I haven't seen any FT's done without a consideration of sines and cosines first. N.O.R. even if it's good and to the point.
- Just substitute N=2 into the general formula in discrete fourier transform. Bo Jacoby 09:03, 19 June 2006 (UTC)
The idea of a non-harmonic "basis" would be okay but I believe the true FT is always with sines and cosines...correct me if I'm wrong.
- This is not anharmonic. The sines and cosines are sampled. cos(0×π) = +1 and cos(1×π) = −1. Bo Jacoby 09:03, 19 June 2006 (UTC)
(It would be interesting to see Fourier's original work and see exactly what he was up to).
- Joseph Fourier worked with infinite trigonometric series. The finite fourier transform is historically later, even if it is simpler. Bo Jacoby 09:03, 19 June 2006 (UTC)
If so, Somewhere these ugly dudes will have to creep in.
- You may let them, but they do not have to. Bo Jacoby 09:03, 19 June 2006 (UTC)
They are not that bad if you can see them. Before i launched into this I did consider a simpler case. The problem is we don't get to see "the harmonics". And it's no fun (and pointless) unless you can see the harmonics.
- In the case N=2 the harmonics are +1,+1,+1,+1,..., (the DC component) and +1,−1,+1,−1,..., (the Nyquist component) Bo Jacoby 09:03, 19 June 2006 (UTC)
As you see I got some pretty nasty feedback when I started into my little effort. The folks sneered at the tables in general.
- A critic may or may not have a point. Bad manners are always beside the point. Bo Jacoby 09:03, 19 June 2006 (UTC)
So I cut out another table completely. Really what we need is, together with the tables, a picture or two of a simple "waveform" fitted with sines and cosines, or the point values, or whatever. Together with the harmonics. I don't know enough about wiki's esoteric importing to be able to get them out of Excel.
- 'Waveform fitted with sines' belong to fourier series, not to discrete fourier transform. Bo Jacoby 09:03, 19 June 2006 (UTC)
- The averaged sum of sines and cosines is Hamming's approach, which I have been using so far. But the notion of sampling them too (and just having to do this once to "get the numbers") has a nice simple feel to it. I think that is what you are saying too.wvbaileyWvbailey 12:43, 19 June 2006 (UTC)
Imaginary numbers = fear and loathing.
- The idea of a primitive root of unity needs complex numbers for N>2. Bo Jacoby 09:03, 19 June 2006 (UTC)
Matrices, forget it. "Orthogonality" will throw folks. So will "basis vectors" or anything like that. These concepts need to go way at the end (or just disappear as "a given", "proven in linear algebra"). Like I wrote, behind the fourier transform is 50 pages of dense linear algebra, enough to choke anybody but a determined math or engineering major. And that is just in the reals, not even including the complex domain.
- OK, let's kill the orthogonality section. Bo Jacoby 09:03, 19 June 2006 (UTC)
Here's my goal: to write (some of) this for a high-school kid good in math. Like my 9th-grade nephew -- he was exposed to fractals in 8th grade, loved them. I showed him the wiki page and he loved it. I had some books that I loaned him, got him all excited. Ditto for my son when he was that age, he loved the idea of a picture that you travel through -- I found a program (referenced on that page). As a post-doc he's had to use sonograms and FTs in Sound Forge extensively in his work. He uses them as a tool, not as an esoteric math concept. But if he wanted to know more, why the f*** should he have to slog through a bunch of equations if someone can present the ideas in a kinder, gentler way? The trick is to find the way. Wvbailey 13:45, 16 June 2006 (UTC)
- Rather than to have equations translated into pictures, he may learn to love equations. Make it short and keep it simple. Bo Jacoby 09:03, 19 June 2006 (UTC)
- He started out a double-major in math, but when he ran out of time moved to biology, his real love. What folks miss is the intuitive description. Why on this physical, real earth of ours, does it work? What would make anyone believe this would work? Einstein's little book Relativity, I believe it's called, is almost completely words, and includes his little man-on-train-sees-lightning-at-both-ends analogy. He does throw in an equation now and then and saves the harder stuff for appendices.
- The fact that Einstein could do it doesn't imply that we can do it. Einstein knew what he was talking about before he wrote the book, while you are 'fumbling', and I am not free to write what Steven considers original research. Bo Jacoby 15:54, 19 June 2006 (UTC)
- Steven? Is that you? Or is that an editor?wvbaileyWvbailey 17:32, 19 June 2006 (UTC)
- User:Stevenj is editing discrete fourier transform. He disapproved everything I wrote. So I stopped. Bo Jacoby 09:35, 20 June 2006 (UTC)
- As you may see from my above, I split it into sections and am taking an approach a bit more like yours -- to get to the notion of "sampling the sine" early on (sampling the pendulum), but without the imaginary numbers. And then just multiplying each sample by its respective weighting, and averaging the sum. You see the notion of a "running average" in Bracewell. Also Hamming's visualization of waveform around a cylinder is key. (I am trying to stick with published sources in this).
- To that point, have you seen a simple explanation of what would have caused Fourier to come to the conclusion that the integer sines and cosines make a basis?
- Sure. Cosines and sines are the real and imaginary parts of the roots of unity. 1^(k/n) = cos(2πk/n) + i·sin(2πk/n). (Note that 1^(1/n) here means a primitive n'th root of unity.) The roots of unity are fundamental, the trigonometric functions are not.
- Now that you write this, the nth roots of 1 ... coming back, just a little. I saw it in high-school I believe, for the first time. Round and round in circles. Then maybe later in college, I dunno, in a complex variable course.
- Read the WP article root of unity. Bo Jacoby 09:35, 20 June 2006 (UTC)
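Bo's identity above is easy to check numerically; a minimal numpy sketch (n = 8 is an arbitrary choice), writing the roots as e^(2πik/n):

```python
import numpy as np

n = 8
k = np.arange(n)
roots = np.exp(2j * np.pi * k / n)   # the n'th roots of unity, 1^(k/n) in Bo's notation

# their real and imaginary parts are the sampled cosine and sine
assert np.allclose(roots.real, np.cos(2 * np.pi * k / n))
assert np.allclose(roots.imag, np.sin(2 * np.pi * k / n))
assert np.allclose(roots ** n, 1.0)  # each root raised to the n'th power gives 1
```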
- As I recollect, the engineering approach (Bracewell, Hamming) is to consider "sampling" as multiplication of the "input waveform" by the "shah" function, with the waveform repeated over and over to plus and minus infinity. The transform of the product is then a convolution. One nice thing about this is it puts "the sampling window" and the "aperture" all into the pot, and we can see their effects. And this approach just pushes the theory back into "the continuous" (as opposed to discrete) domain. I see the two forms -- discrete and "continuous" -- as just the same animal. But is what you're saying that, if viewed as truly discrete, the theory moves toward the "finite-field" concepts from abstract algebra? wvbaileyWvbailey 17:32, 19 June 2006 (UTC)
- I see the infinite case as just a special case of the finite case. The simpler case is N=2. Then N=4. Then N=3. Then any finite N. Then the limiting case N=infinity, leading to fourier series and fourier integrals. Bo Jacoby 09:35, 20 June 2006 (UTC)
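The N=2 case Bo starts from is small enough to write out in full: the transform of (x0, x1) is just their sum and their difference. A quick numpy check of that claim:

```python
import numpy as np

# N = 2: the whole DFT reduces to a sum and a difference
x = np.array([3.0, 5.0])
X = np.fft.fft(x)
assert np.allclose(X, [x[0] + x[1], x[0] - x[1]])
```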
The primitive roots of unity are purely algebraic concepts. Fourier (to the best of my knowledge) didn't use complex numbers, so he had to rely on the more complicated analytical properties of the real-valued trigonometric functions. The immensely powerful concept of involution is not used for the Fourier transforms in the article, and the readers suffer. Bo Jacoby 15:54, 19 June 2006 (UTC)
- I am unfamiliar with involution. This must be a concept that appeared in college-level texts long after I learned my stuff (back then it was all analog, no discrete, hence no Discrete Fourier Transform excepting Hamming 1962, 1973). "Involution" is not in Hamming (1973 2nd ed.), nor Kreider 1966, nor Marsden 1973, nor LePage 1961, nor Bracewell 1965, nor in my abstract algebra book Saracino 1990. So I will have to go hunting... ahh, here's a clue: in my kid's books I find: "involute" p. 539 in Stewart, James, Calculus: Early Transcendentals, Brooks/Cole Publishing, Pacific Grove CA, 1995:
- "A string is wound around a circle and then unwound while being held taut. The curve traced by the point P at the end of the string is called the involute of the circle ... parametric equations of the involute are:
- "x = r(cosΘ+ ΘsinΘ), y = r(sinΘ - ΘcosΘ)"(problem 39, p. 539)
- "A string is wound around a circle and then unwound while being held taut. The curve traced by the point P at the end of the string is called the involute of the circle ... parametric equations of the involute are:
- I am unfamiliar with involution. This must be a concept that appeared in college-level texts long after I learned my stuff (back then it was all analog, no discrete, hnece no Discrete Fourier Transform excepting Hamming 1962, 1973). "Involution" is not in Hamming (1973 2nd ed.), nor Krieder 1966, nor Marsden 1973, nor LePage 1961, nor Bracewell 1965, nor in my abstract algebra book Saricino 1990. So I will have to go hunting... ahh, here's a clue: in my kid's books I find: "involute" p. 539 in Stewart, James, Calculus: Early Transcendentals, Brooks/Cole Publishing, Pacific Grove CA, 1995:
- Ahh, but more to the point in Gorenstein, Daniel, Finite Simple Groups: An Introduction to their Classification, Plenum, 2nd edition 1985, I find:
- "The pioneer in the field [sporadic groups?, classifying simple groups?] was Richard Brauer, who began to study simple groups in the late 1940's. He was the first to see the intimate and fundamental relationship between the structure of a group and the centralizers (D4) of its involutions (elements of order 2; D5)... the starting point for the classification of simple groups in terms of the structure of the centralizers of involutions [etc]" (p. 2, my boldface)
- This is all I find in Gorenstein. Holy catfish! Does this mean I will have to learn something new and really hard? Any advice on where to start? (Think "spoon-feed the engineer": really really easy)? (What I am going thru is exactly the experiences the wiki-readers go through when they encounter new stuff. Yikes!) wvbaileyWvbailey 17:14, 19 June 2006 (UTC)
- Holy crawdaddies! Wiki says that "involution" is just f(f(x))=x? Sheesh. Okay so here we go, the first example that came to mind:
- d(cos(ix))/dx = -i*sin(ix)
- and d(-i*sin(ix))/dx = -i*i*cos(ix) = +cos(ix), right?
- We've "involuted" cos(ix). Or whatever... So what is sin(ix)? It is:
- sin(u) = { e^(+iu) − e^(−iu) }/(2*i); now substitute u = ix:
- sin(ix) = { e^(+i*ix) − e^(−i*ix) }/(2*i) = { e^(−x) − e^(+x) }/(2*i)
- this must be something weird like -i*(sinh)... and this is in fact what my book, the Handbook of Chemistry and Physics 1967, says it is [more or less]:
- sin(ix) = i*sinh(x) [correct?]
- And I presume the same is true for d(d(sin(ix)))? But this isn't helping me understand, as in "what does this mean in/to 'the real world' with respect to DFT". Our nasty dudes sine and cosine and their vile cousins sinh and cosh once again make their appearances. Nobody out there on planet earth but 4 folks (okay, I exaggerate -- 4000 folks) understand what a "hyperbolic sine" is, and I wonder if any but 100 of them know what they're good for (count me out on that one). I'm not done, I'm just thinking out loud. wvbaileyWvbailey 22:58, 19 June 2006 (UTC)
- Explanation: Consider the transformation F = (f → (x → f(−x))). That is: F maps any function f to the function that maps the variable x to the value f(−x). In other words: (F(f))(x) = Ff(x) = f(−x). F is an involution, F² = 1, because FFf(x) = (F(Ff))(x) = Ff(−x) = f(−(−x)) = f(x). Define P and Q as below such that P + Q = 1 and P − Q = F, namely P = 2⁻¹(1+F) and Q = 2⁻¹(1−F). Pf is the even part of f and Qf is the odd part of f. Consider the exponential function exp = (x → e^x). The hyperbolic cosine is the even part: cosh = P(exp), and the hyperbolic sine is the odd part: sinh = Q(exp). Next consider the exponential of a purely imaginary number: exp_i = (x → e^(ix)). The trigonometric cosine is the even part: cos = P(exp_i), and i times the trigonometric sine is the odd part: i·sin = Q(exp_i). Sine and cosine are real functions of real variables, while exp_i is a complex function of a real variable. So if you fear complex numbers, you must learn complicated formulas about sine and cosine. The good news is that if you master complex numbers, then you may forget all about trigonometry. Bo Jacoby 09:35, 20 June 2006 (UTC)
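Bo's even/odd decomposition can be checked numerically; a sketch in numpy (the grid and the helper names even_part/odd_part are arbitrary choices):

```python
import numpy as np

x = np.linspace(-3, 3, 601)

def even_part(f, x):   # Pf = the even part of f
    return 0.5 * (f(x) + f(-x))

def odd_part(f, x):    # Qf = the odd part of f
    return 0.5 * (f(x) - f(-x))

# cosh and sinh are the even and odd parts of exp
assert np.allclose(even_part(np.exp, x), np.cosh(x))
assert np.allclose(odd_part(np.exp, x), np.sinh(x))

# cos and i*sin are the even and odd parts of x -> e^(ix)
exp_i = lambda t: np.exp(1j * t)
assert np.allclose(even_part(exp_i, x), np.cos(x))
assert np.allclose(odd_part(exp_i, x), 1j * np.sin(x))
```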
- What I'm up to is definitely a work-in-progress, and there will be some fumbling about. By the way, thanks for the constructive input; it has changed my thinking. As you can see, I cut the first effort with all the tables. wvbaileyWvbailey 12:43, 19 June 2006 (UTC)
- My pleasure. Bo Jacoby 15:54, 19 June 2006 (UTC)
Eigenvalues and eigenvectors
The unsatisfactory section of the article could be replaced by this.
The eigenvalues of the involutary discrete Fourier transform, F, are +1 and −1, because F satisfies the equation F² = 1 and so any eigenvalue k satisfies k² = 1.
The operators P = 2⁻¹(1+F) and Q = 2⁻¹(1−F) are idempotent:
- P² = (2⁻¹(1+F))² = 2⁻²(1+2F+F²) = 2⁻²(1+2F+1) = 2⁻¹(1+F) = P
- Q² = (2⁻¹(1−F))² = 2⁻²(1−2F+F²) = 2⁻²(1−2F+1) = 2⁻¹(1−F) = Q
If x is any vector, then Px is an eigenvector with eigenvalue +1, and Qx is an eigenvector with eigenvalue −1, because
- FP = F·2⁻¹(1+F) = 2⁻¹(F+F²) = 2⁻¹(F+1) = (+1)·2⁻¹(1+F) = (+1)P.
- FQ = F·2⁻¹(1−F) = 2⁻¹(F−F²) = 2⁻¹(F−1) = (−1)·2⁻¹(1−F) = (−1)Q.
Any vector x is a sum of an eigenvector with eigenvalue +1 and an eigenvector with eigenvalue −1:
- x = Px + Qx
because
- 1 = P + Q
and the involutary Fourier transform of this vector is
- Fx = Px − Qx
because
- F = P − Q
Bo Jacoby 15:03, 19 June 2006 (UTC)
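The algebra above uses only F² = 1, so it can be checked with any concrete involution; a numpy sketch using index reversal x[n] → x[(−n) mod N] (one concrete involution, which happens to be the square of the unitary DFT):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)

F = np.eye(N)[(-np.arange(N)) % N]    # index-reversal matrix: a concrete involution
assert np.allclose(F @ F, np.eye(N))  # F^2 = 1

P = 0.5 * (np.eye(N) + F)
Q = 0.5 * (np.eye(N) - F)
assert np.allclose(P @ P, P) and np.allclose(Q @ Q, Q)  # idempotent

x = rng.standard_normal(N)
assert np.allclose(F @ (P @ x), P @ x)     # Px has eigenvalue +1
assert np.allclose(F @ (Q @ x), -(Q @ x))  # Qx has eigenvalue -1
assert np.allclose(P @ x + Q @ x, x)       # x = Px + Qx
```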
- Hi Bo, back to your old haunts, I see? Just to remind you, as you've been reminded repeatedly, Wikipedia is not the place for you to promote your own nonstandard transform definitions and notations (e.g. the "involutary DFT", etcetera). This is not a question of whether they are "good" or "bad", merely whether they are nonstandard. I hope you won't force other editors to keep reminding you of this in perpetuity. —Steven G. Johnson 02:51, 20 June 2006 (UTC)
- Hi Steven. The article is unacceptably bad, as other readers have also remarked. Your position, to consider anything that is new to you as nonstandard, and anything that is old to you as trivial, prevents WP from excelling beyond your level. So be it. You must refer to the standardization organization that made the noninvolutary DFT definition 'standard'. Bo Jacoby 07:23, 20 June 2006 (UTC)
As the eigenvalue-multiplicities for λ=+i and λ=−i must be equal, the table in the article is not correct. Bo Jacoby 10:52, 21 June 2006 (UTC)
| size N | λ = +1 | λ = −1 | λ = +i | λ = −i |
|---|---|---|---|---|
| 4m | m + 1 | m | m | m − 1 |
| 4m + 1 | m + 1 | m | m | m |
| 4m + 2 | m + 1 | m + 1 | m | m |
| 4m + 3 | m + 1 | m + 1 | m + 1 | m |
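Whichever table is right, the multiplicities are easy to check numerically; a sketch that counts the eigenvalues of the unitary DFT matrix for small N (they cluster at ±1 and ±i):

```python
import numpy as np

# count eigenvalue multiplicities of the unitary DFT matrix
for N in range(2, 12):
    U = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
    lam = np.linalg.eigvals(U)
    counts = {z: int(np.sum(np.abs(lam - z) < 1e-6)) for z in (1, -1, 1j, -1j)}
    print(N, counts)
```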
Aliasing section removed
I took the aliasing section out after finding DFT linked from the aliasing page, since the DFT has no relationship to continuous-time signals or aliasing. Only afterward did I check here and see the long-running dispute about it between Bob and Steven. I must say I agree with Bob. Furthermore, the concept of aliasing is very well described in other articles where it is relevant. See aliasing, Nyquist–Shannon sampling theorem, Nyquist–Shannon interpolation formula, Nyquist rate, Nyquist frequency, etc. I'd be interested in knowing if any textbooks discuss aliasing with respect to the DFT. Dicklyon 06:10, 23 June 2006 (UTC)
- I would direct you to Hamming, R.W., Numerical Methods for Scientists and Engineers, Dover, NY, 1962, 1973.
- "31.5 The Finite fourier Series
- "It is a remarkable fact that the sines and cosines are orthogonal over both the continuous interval and sets of discrete, equally spaced points covering the period. ...This means, in turn, that we can find the coefficients exactly (except for roundoff) in the discrete sampled problem, instead of approximatedly in the continuous model by numerical integration "(p. 510)
- "31.5 The Finite fourier Series
- He goes on to "prove the orthogonality of the Fourier functions over the set of equally spaced points"(p 512ff) Then on page 514 he discusses aliasing in context of sampling. Here he shows folding graphically. And sampling is necessary for the DFT (...in the real world. In the Platonic world I guess Bo would say that its all in the magic of abstract algebra).
- Hamming admits in that book that his treatment is "not definitive". It's pretty casual, and doesn't even mention the DFT. Maybe it's what he called the Finite Fourier Series at the time. But he's addressing it as a numerical technique for waveform analysis, not as a discrete transform in its own right. That's a great analysis to do, as an application of discrete numerical techniques to continuous problems, but this is not the right place to complicate the DFT article with such application subtleties. His later book has a treatment of aliasing, but does NOT associate it with the transform per se: [2]
- Dicklyon 19:26, 23 June 2006 (UTC)
- Maybe I misunderstand Dicklyon's question above, but then I don't understand the opinion -- it seems reflected on the article page too -- that "the DFT has no relationship to continuous signals or aliasing". This assertion makes no sense to me at all, and it doesn't help readers understand, either. Sadly it appears to be the "modern" view. In the olden days you could move back and forth between the discrete and the continuous by use of convolution and multiplication (cf Bracewell 1965).
- And I don't understand your view. How can you not admit that aliasing happens from the sampling? Dicklyon 19:26, 23 June 2006 (UTC)
- Yes and no. This is my understanding: The potential for aliasing --or leakage if you want to give it a different name -- occurs as soon as you move out of the infinite domain by snipping a finite record length and then attempting to "transform it" (pick your method: infinite, finite, discrete). The problem begins when the waveform you've sampled isn't periodic in the window you've chosen. (It just gets uglier when you sample it with more aperiodic harmonics). Because it is a finite transform you can't "go to infinity" with higher and higher harmonics. So you always have the potential for a discontinuity or an aperiodic sample causing "aliasing" or leakage or whatever you want to call it. wvbaileyWvbailey 21:10, 23 June 2006 (UTC)
- In case you aren't aware, aliasing and leakage are different things... you have changed the subject in midstream. Aliasing is caused by sampling. And leakage (as you have correctly stated) is caused by windowing, regardless of whether the signal is sampled or continuous. But it does not matter if the waveform is "periodic in the window" or not. The DTFT always suffers the leakage. Of course if you don't look at the whole DTFT, you might not see it. The DFT is sometimes an example of that. But just because you aren't looking doesn't mean it's not there. Anyhow [back to the thread], since aliasing and leakage are not synonymous or interchangeable, your implication that aliasing is caused by windowing is not valid. --Bob K 06:02, 24 June 2006 (UTC)
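The distinction is easy to demonstrate: leakage appears as soon as the tone is not periodic in the window, with no change to the sampling at all. A numpy sketch (N = 64 and the tone frequencies are arbitrary choices):

```python
import numpy as np

N = 64
n = np.arange(N)
on_bin  = np.cos(2 * np.pi * 8.0 * n / N)   # exactly 8 cycles in the window
off_bin = np.cos(2 * np.pi * 8.5 * n / N)   # 8.5 cycles: not periodic in the window

# the on-bin tone occupies two bins; the off-bin tone leaks into all of them
print(np.sum(np.abs(np.fft.fft(on_bin))  > 1e-9))   # 2
print(np.sum(np.abs(np.fft.fft(off_bin)) > 1e-9))   # 64
```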
- What you do later with the discrete values doesn't change that. I think we are all comfortable moving back and forth between domains, and using Fourier techniques to help. But the DFT stays entirely in the discrete domain, so is not the place for this. Dicklyon 19:26, 23 June 2006 (UTC)
- The reason to move back and forth is to help folks understand, to teach, to render clear to a new person wanting to know. I thought wiki was an encyclopedia but apparently not for this sequence of articles. This isn't the place for academics to strut their stuff.wvbaileyWvbailey 21:10, 23 June 2006 (UTC)
- There is some merit to the idea that aliasing is all in the sampling. But it's interesting to see what happens when you carry out the math "all the way". In the "poor man" section I derived what happens with regard to aliasing: sampling a continuous wave can be considered similar to multiplying it by a raised cosine wave, 0.5 + 0.5*cos(2π*n*t/N). (Nobody ever said the sampling had to be done with a delta. In fact anyone who believes in such fairy tales had better go play with real filters, sample-and-holds, and A-to-D converters, and then study apertures etc.). Sampling creates beats. Then when you multiply again by the sampled sines and cosines you get more beats. These can fold back, etc. etc. wvbaileyWvbailey 14:22, 23 June 2006 (UTC)
- If you could uninvent the DFT (or just go back in time before it was invented), you would still be unable to distinguish between a 1 Hz sinusoid and an 11 Hz sinusoid, when both are sampled at 10 Hz. The samples are inherently ambiguous. Put them into a DFT and it will also reveal the inherent ambiguities at 21 Hz, 31 Hz, etc. There is nothing "modern" about this viewpoint. It hasn't changed in the 40 years since I became aware of it. --Bob K 15:38, 23 June 2006 (UTC)
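Bob K's 1 Hz / 11 Hz example can be reproduced in a few lines; a numpy sketch (2 seconds of data is an arbitrary choice):

```python
import numpy as np

fs = 10.0                        # 10 Hz sampling rate
t = np.arange(20) / fs           # 2 seconds of samples
one_hz    = np.sin(2 * np.pi *  1.0 * t)
eleven_hz = np.sin(2 * np.pi * 11.0 * t)

# the two sample sequences are identical: the ambiguity exists before any DFT
assert np.allclose(one_hz, eleven_hz)
```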
- I agree. But I would leave something about aliasing for the reader to ponder. The DFT always presents the risk of aliasing because it is derived from discrete values.
- There exists DFT, and for all DFT: DFT --> discrete values & discrete values --> danger of aliasing; therefore DFT --> danger of aliasing.
- The converse is not true: Danger of aliasing -X-> discrete values. I quote in full from Hamming:
- "33.5 Dangers of the Fourier Transform
- "The fast Fourier transform calculates (within roundoff) the same quantities as does the direct method, so that the troubles we shall discuss are not functions of the fast Fourier transform but rather of the Fourier transform itself.
- " First, there is the ever present problem of aliasing (Secs. 31.2 and 31.6)
- " Second, there is the problem of leakage due to the use of a finite length of record. The effect on a single spike is that it is replaced by a (sin x)/x ripple, and on a continuous spectrum, a large peak tends to "leak out' into the nearby parts of the spectrum (see Sec. 34.8)
- " Finally, there is a picket-fence effect. Each value of the sample spectrum acts like a spike, and the sum of the (sin x)/x terms, each shifted to its proper place, gives the spectrum the appearance of a sort of picket-fence effect of equally spaced ripples. This is noticeable if other than the properly spaced frequencies are computed.
- " These effects will not be discussed in the text at greater length because of the theoretical difficulties involved. We mention them here only as a matter of caution. (p. 543, italics in original)
- wvbaileyWvbailey 16:55, 23 June 2006 (UTC)
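Hamming's picket-fence effect is also visible in a few lines: zero-padding samples the underlying spectrum more densely and exposes the (sin x)/x ripples that the bare N-point DFT steps over. A numpy sketch (the tone frequency, 10.5 bins, is chosen to fall between two pickets):

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 10.5 * n / N)    # tone halfway between two DFT bins

coarse = np.abs(np.fft.fft(x))          # the N-point "picket fence" view
fine   = np.abs(np.fft.fft(x, 16 * N))  # zero-padded: dense samples of the spectrum

# the dense view shows a higher true peak lying between the coarse pickets
print(coarse.max(), fine.max())
```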
- And your point is what?... that the Fourier transform itself also reveals the ambiguities of sampled-data? BTW, that is so well-known and commonly encountered that it even has a name: discrete-time Fourier transform. --Bob K 18:40, 23 June 2006 (UTC)
- My point:
- There exists "DFT", and for all "DFT": "DFT" --> "discrete values" & "discrete values" --> "danger of aliasing"; therefore "DFT" --> "danger of aliasing".
- FYI, you have an interesting habit (which we've seen elsewhere) of quoting a source and then drawing a completely wrong or unrelated inference. I don't think I've ever seen anything quite like it before. --Bob K 23:24, 23 June 2006 (UTC)
- I find your tone snotty and offensive. FYI, I've observed that you have an interesting habit of asserting yourself like a card-carrying WP:Dick.wvbaileyWvbailey 03:50, 24 June 2006 (UTC)
- I am disappointed, but not surprised, that you feel that way. If you comprehended what I was actually trying to do you would thank me. But rest assured, I don't intend to try anymore. --Bob K 06:02, 24 June 2006 (UTC)
- The logical flaw is your premise that "discrete values" --> "danger of aliasing"; not all sequences are samples of continuous-time signals. Dicklyon 22:12, 23 June 2006 (UTC)
- Of course they're not. I've done 2-D transforms of object-images just the same way. Dialog is over. wvbaileyWvbailey 03:50, 24 June 2006 (UTC)
- And, other old guys agree (e.g. Hamming).wvbaileyWvbailey 21:58, 23 June 2006 (UTC)
- He's not old -- he's dead. I don't think it's fair to use your funny interpretation of his 1971 book to say that he agrees with you. See if you can find someone alive who agrees with you. Dicklyon 22:12, 23 June 2006 (UTC)
- I don't need to find somebody alive. I have the sources to back up my interpretation, just as you have the sources to back up yours. Dialog is over. wvbaileyWvbailey 03:50, 24 June 2006 (UTC)
New submission: DFT use quick reference
This is my first major Wikipedia submission; please bear with me. I have written a quick reference to how to use the DFT in MATLAB which I would like to offer for inclusion. What say you all? --Cxw 19:33, 18 July 2006 (UTC)
- I would lean against including it. Wikipedia is not a Matlab manual, and good Matlab introductions to such things are available elsewhere. —Steven G. Johnson 14:36, 25 July 2006 (UTC)
- I agree. FWIW, if I were creating such an example, I would include windowing. I would also switch the roles of tmax and delf. I.e., delf would be the independent variable (and its name would be Hz_per_bin). We often have the luxury of larger tmax than what is necessary to produce the required Hz_per_bin. In that case, the Welch method will usually produce a more satisfying power spectrum than fft(). --Bob K 23:51, 25 July 2006 (UTC)
- Sounds like you guys are addressing the topic of estimation of power spectra. That's just one minor use of DFT, and as BobK points out, there are usually better tools for that job. DFT is often used to implement circular convolution, for applications such as filtering with long FIR filters. And there are other uses, I dare say. These would be better things to add to the article. An article on spectrum estimation might also be a good idea. Dicklyon 19:19, 26 July 2006 (UTC)
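For illustration, a minimal sketch of the kind of spectrum estimate Bob K describes, using scipy.signal.welch (all parameters here are arbitrary choices):

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
t = np.arange(int(10 * fs)) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 100 * t) + rng.standard_normal(t.size)   # tone buried in noise

# Welch: average the periodograms of windowed segments; nperseg sets Hz per bin
f, Pxx = welch(x, fs=fs, nperseg=1024)
print(f[np.argmax(Pxx)])   # close to 100.0
```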
Dear Wikipedians, "I would lean against including it" is much gentler than saying "Shut up". But where I come from, the proper etiquette is to not only be gentle in telling someone that something is off-topic here, but also to tell them where that something is on-topic (or at least more on-topic than it is here).
In this particular case, I suspect http://en.wikibooks.org/wiki/MATLAB_Programming would be a good place for that quick reference.
-- DavidCary --65.70.89.241 15:36, 8 August 2006 (UTC)
- Dear Cxw. In the spirit of wikipedia you are free to contribute, and others are free to move your contribution to better places or to improve upon it. Nobody should "lean against". Welcome. Bo Jacoby 09:57, 9 August 2006 (UTC)
Relationship to trigonometric interpolation polynomials.
We are told that
- p(t) = (1/N) ∑_{k=0}^{N−1} X_k e^(2πikt/N)
and
- p(t) = (1/N) [ ∑_{k=0}^{N/2−1} X_k e^(2πikt/N) + X_{N/2} cos(πt) + ∑_{k=N/2+1}^{N−1} X_k e^(2πi(k−N)t/N) ]
- for even N (where the Nyquist component should be handled specially) or, for odd N:
- p(t) = (1/N) [ ∑_{k=0}^{(N−1)/2} X_k e^(2πikt/N) + ∑_{k=(N+1)/2}^{N−1} X_k e^(2πi(k−N)t/N) ]
are different forms of the same function. They are not. Different functions must have different names. Bo Jacoby 11:20, 9 August 2006 (UTC)
- I'm not sure what you are getting at. p(t), for instance, represents two different functions here: the even-N formula and the odd-N formula.
- But they are indeed 3 different functions, despite the author's intent. Consider the last term of the general formula: (1/N) X_{N−1} e^(2πi(N−1)t/N). For the even-N and odd-N formulas to be equivalent to the general formula, the following would have to be true for all t: e^(2πi(N−1)t/N) = e^(−2πit/N).
- However, that is only true for t = n (integers)
- Bob, there are only two different functions. The even and odd formulae are alternate parts of the second p(t) function; you have to pick one based on the parity of N. And if you read the text, it's clear that there was no intent that the second p(t) should be equal to the first p(t). Giving it a modified name would help make that clearer, though I agree that names are sometimes used this way, leaving ambiguity to the reader. Dicklyon 01:07, 11 August 2006 (UTC)
- You just wrote the comment that I thought I was going to write the first time, until I realized that:
- e^(2πi(N−1)t/N) ≠ e^(−2πit/N) for non-integer t, as I proved above, using just one term (the last one). The author blew it.
- --Bob K 01:40, 11 August 2006 (UTC)
- He may have blown it using the word "form", but he never intended such an equality. His point, maybe not so clear in the text, is that the second "form", with odd and even parts, has lower frequencies and a real result, which is DIFFERENT from the first form, which leaves you with a rapidly spinning complex waveform. Dicklyon 01:50, 11 August 2006 (UTC)
- Ahhh... I get it now. Thanks. And I agree with Bo... the word "forms" is misleading. And maybe it's just me, but I think this section needs a lot more groundwork if it is to remain here. Trigonometric_interpolation_polynomial would be a more appropriate home. --Bob K 02:22, 11 August 2006 (UTC)
I think the phrase "different forms" was intended in the sense of "different functional forms" (e.g. a*sin(x) + b*x has a different functional form than a*x^2 + b) rather than "different forms of the same function" which was obviously not meant here. I edited it slightly to emphasize this difference, although perhaps more substantial rephrasing would clarify things further. Definitely, a more complete discussion should go in trigonometric interpolation polynomial but I think there's a good argument for putting at least the minimal-slope interpolations here as a summary, since those interpolation polynomials, and their non-uniqueness, are critical for a lot of applications of the DFT (e.g. for spectral methods). —Steven G. Johnson 04:44, 11 August 2006 (UTC)
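The inequivalence of the two interpolations is easy to see numerically; a sketch evaluating both at points halfway between the samples, using an odd N so the Nyquist special case does not arise:

```python
import numpy as np

N = 7
x = np.random.default_rng(1).standard_normal(N)   # real samples
X = np.fft.fft(x)

t = 0.5 + np.arange(N)                            # halfway between the samples

k_naive    = np.arange(N)                         # frequencies 0..N-1
k_centered = np.fft.fftfreq(N, 1.0 / N)           # frequencies -(N-1)/2..(N-1)/2

naive    = X @ np.exp(2j * np.pi * np.outer(k_naive,    t) / N) / N
centered = X @ np.exp(2j * np.pi * np.outer(k_centered, t) / N) / N

print(np.max(np.abs(naive.imag)))      # large: the 0..N-1 interpolation is not real
print(np.max(np.abs(centered.imag)))   # ~1e-16: the centered one stays real
```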
The word 'sinusoid'
'Sinusoid' is a new word in the context. It might be confusing to some readers. I changed it to 'wave' in accordance with the explanation given in the article sinusoid but it was changed back. The terms in the trigonometric polynomial as written are exponentials, not explicit trigonometrics. Perhaps this is the place to show a real-function expression of trigonometric polynomials involving the sine and cosine functions explicitly? If the reader has seen such an expression in some book it is nice if he recognizes it in wikipedia so that he doesn't think that the article here is about something quite different. Bo Jacoby 06:35, 16 August 2006 (UTC)
- I changed it to complex sinusoid frequencies. I don't think anyone who is reading this section is going to be any better off by having this explained here. Anyone who doesn't know that much is way out of their element, and this is not the place to try to fix that. Dicklyon 06:47, 16 August 2006 (UTC)
The sinusoid is complex, not the frequency. You can do better! Bo Jacoby 08:06, 16 August 2006 (UTC)
- Yes, you are right and I feel like an ass, as I'm usually the one putting in the missing hyphens to avoid exactly such misreadings. I will fix it to "complex-sinusoid frequencies". Dicklyon 14:58, 16 August 2006 (UTC)
'a common mistake'
"In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to N − 1 (instead of roughly − N / 2 to + N / 2 as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real xn; its use is a common mistake."
Dear Steven, the fact that you made this mistake yourself does not make it common. You do not have to document your mistakes in WP. :-) Bo Jacoby 06:44, 16 August 2006 (UTC)
- I have no idea what you are talking about, and your snarkiness is unwarranted. I did not make this mistake myself, although I may have phrased the explanation poorly. However, it is indeed a common mistake. (You see it over and over on Usenet, when people try to differentiate a function by multiplying its DFT by the frequencies, but forgetting aliasing...then they wonder why their inverse DFT looks weird. I deal with FFT users every day due to my software, and again I see this error repeatedly. I also see it from students when dealing with spectral methods for PDEs...as a result, I actually assigned a homework problem on exactly this issue last term.) —Steven G. Johnson 23:49, 16 August 2006 (UTC)
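The differentiation mistake Steven describes fits in a few lines; a numpy sketch (N = 32 and the tone frequency 3 are arbitrary choices):

```python
import numpy as np

N = 32
t = 2 * np.pi * np.arange(N) / N
x = np.sin(3 * t)                  # derivative should be 3*cos(3*t)
X = np.fft.fft(x)

wrong = np.fft.ifft(1j * np.arange(N) * X).real                # frequencies 0..N-1
right = np.fft.ifft(1j * np.fft.fftfreq(N, 1.0 / N) * X).real  # aliased to -N/2..N/2-1

print(np.max(np.abs(wrong - 3 * np.cos(3 * t))))   # large error
print(np.max(np.abs(right - 3 * np.cos(3 * t))))   # ~1e-13
```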
I am talking about the history of this subsection: you first mentioned the frequency range from 0 to N−1 rather than from −(2·N−1+(−1)^N)/4 to (2·N−3−(−1)^N)/4. Accept readers' comments constructively, especially as you are not letting anybody but yourself edit 'your' article. The subsection does not explain what you explained here. Errors made by students may be caused by errors in teaching, so your students make a biased sample. Still, WP is not the place for documenting errors, common or not. Bo Jacoby 10:19, 17 August 2006 (UTC)
- If you want to make constructive criticism, you should avoid snide insults. Another good tip would be to avoid manifestly false statements such as that I haven't "let" anyone else edit the article. As for the order, actually the simple 0..N-1 case was listed there before I ever edited that section to add the symmetric case; I left it because there is a certain merit to describing the concept of trigonometric interpolation with the simplest example before you get to the more complicated case of the (typically) most useful interpolation, and there is also merit to showing two inequivalent interpolations explicitly in order to highlight the differences. And as for the "errors in teaching" that you so politely imply, I didn't say they were my students; with my own students I assigned this question on the very first problem set to be sure they don't get confused. Over the past ten years, I've dealt with hundreds of students who have emailed me for advice regarding FFT usage, and as I said I see this mistake over and over. —Steven G. Johnson 18:06, 17 August 2006 (UTC)
As for WP not being the place for "documenting errors," I don't think you're being reasonable. From a pedagogical perspective it can be almost as valuable to explain what not to do, if there is a common pitfall, as to explain what to do. Moreover, the non-uniqueness of trigonometric interpolation is a fundamental concept that needs to be understood in order to properly apply the DFT/DTFT for everything from PDEs to the sampling theorem. What better way to explain it than to briefly mention an example of an inequivalent interpolation? —Steven G. Johnson
- Would this be considered "documenting an error": Tobacco_smoking#Health_risks_of_smoking ? --Bob K 03:48, 18 August 2006 (UTC)
- Hi Bob. What do you think? Is tobacco smoking just a 'common error' of mathematics, or is it a deadly and costly problem of society? Bo Jacoby 09:24, 18 August 2006 (UTC)
- I see no reason that [Wiki] mathematics should be exempt from the common courtesy of helping new people avoid old mistakes. --Bob K 15:50, 18 August 2006 (UTC)
Hi Steven. I made my point. A pitfall is avoided by building a road that circumvents the pitfall, rather than by placing a warning sign at the end of the road that leads to the pitfall. You might introduce the balanced frequency range early, rather than using an unbalanced frequency range for theoretical work and thus leading the reader to use it also for interpolation, with bad results. If I make an edit, will you promise to leave it unremoved for a week? You do have a long history of removing my WP contributions, you remember. Be nice to people and they will be nice to you. Show respect, and respect will be shown to you. I am your mirror, my friend. Bo Jacoby 09:24, 18 August 2006 (UTC)
- I'm afraid that no one's edits are exempt from the actions of other editors. As for my "history of removing" your "contributions", I assume you're referring to my reversions of your edits that violate Wikipedia policy against nonstandard personal notations, a policy that has been explained to you repeatedly by numerous editors to no avail. It is unfortunate that you cannot see the difference between this and your own discourtesy. If you are requesting special privileges to keep your edits inviolate, or the special privilege to violate consensus policy, then I'm afraid I must decline your request. —Steven G. Johnson 20:28, 18 August 2006 (UTC)
I am referring to correcting your errors, such as including n=0 in the definition of the discrete Fourier transform. I stopped editing because of your vandalism, not because you are right. Can you see a constructive aspect of my comment or are you just permanently offended by my implying that you are not perfect? Bo Jacoby 14:41, 20 August 2006 (UTC)
- Correction of typos is always welcome, and I've never said that you don't make any useful edits. Nor do I claim to never make mistakes. I do object to false and meanspirited accusations, and to your pointlessly insulting manner above. Don't try to twist my words to conflate different subjects. —Steven G. Johnson 17:41, 20 August 2006 (UTC)