
Talk:Quantization (signal processing)/Archive 1

From Wikipedia, the free encyclopedia
Archive 1

Definition of floor function

To stress that the transition from continuous to discrete data is achieved by the floor function, it might be useful to require to be continuous. Additionally, I think

  • is the floor function, yielding the integer

is confusing, it may be better to use or instead of .

--134.109.80.239 14:50, 11 October 2006 (UTC)

I liked your suggestion, and just put it into the article. -SudoMonas 17:22, 13 October 2006 (UTC)
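As an aside, the floor-based definition under discussion can be sketched in a few lines. This is a minimal illustration of a mid-tread uniform quantizer, assuming a step size Δ; the function name and test values are invented for the example:

```python
import math

def quantize(x, step):
    """Mid-tread uniform quantizer: round x to the nearest multiple of step.

    The continuous-to-discrete transition is achieved by the floor
    function: floor(x/step + 0.5) yields the integer index k, and
    k*step is the reconstructed level.
    """
    k = math.floor(x / step + 0.5)  # the floor does the discretization
    return k * step

print(quantize(0.74, 0.25))  # 0.75
print(quantize(-0.6, 0.5))   # -0.5
```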

Incorrect statement about quantization in nature

This page incorrectly stated that at a fundamental level, all quantities in nature are quantized. This is not true. For example, the position of a particle or an atom is not quantized, and while the energy of an electron orbiting an atomic nucleus is quantized, an electron's energy in free space is not quantized. I have changed the word "all" to "some" in the text to correct the false statement, but a more thorough revision could be made.

71.242.70.246 18:03, 12 May 2007 (UTC)

Agree. I find that the whole section is unrelated to quantisation in signal processing, and it hasn't been edited in years. I've decided to delete the whole section. C xong (talk) 04:06, 31 March 2010 (UTC)

pi and e

" For example we can design a quantizer such that it represents a signal with a single bit (just two levels) such that, one level is "pi=3,14..." (say encoded with a 1) and the other level is "e=2.7183..." ( say encoded with a 0), as we can see, the quantized values of the signal take on infinite precision, irrational numbers. But there are only two levels. "

How will you build, test and prove that?

How will you measure the "pi" and "e" levels?


P. Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 18:59, 20 March 2010 (UTC)

The example is poorly written, but its premise is correct. Infinite precision is possible only in theory, so it cannot be tested in practice. The example could be better worded. C xong (talk) 04:09, 31 March 2010 (UTC)

OK, I admit that the example is poorly stated :) The reason for such an example is a tendency I have seen in prior editors toward the opinion that "a quantizer should have integer (like 1, 2, 3) or at least rational fractional (like 0.25, 0.50, 0.75) output values." This illusory bias is a natural result of using computers for digital signal processing and inputting analog signals with soundcards for practical applications of quantizers within the ADCs of such devices. It is true that a computer needs numbers that can be exactly represented within finite N bits of binary resolution (either in integer format or in floating-point formats). BUT a quantizer is something else. Its main function is to map a range of uncountable/countable things into a countable set, with a much smaller number of elements in the case of a countable-to-countable mapping. Whether those things have numerical values or not is a secondary issue, and even whether those numerical values are integer, real or rational is completely irrelevant from a quantizer's point of view. —Preceding unsigned comment added by 88.226.19.210 (talk) 14:55, 2 April 2011 (UTC)
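To make the point concrete, the one-bit pi/e quantizer from the quoted example can be sketched as a codebook lookup. This is only an illustration of the argument that reconstruction levels need not be rational; the nearest-level decision threshold is an assumption not stated in the original example:

```python
import math

# Hypothetical one-bit quantizer whose two reconstruction levels are
# the irrational numbers e and pi, as in the example under discussion.
LEVELS = {0: math.e, 1: math.pi}      # codebook: index -> level
THRESHOLD = (math.e + math.pi) / 2    # nearest-level decision boundary (assumed)

def encode(x):
    """Map a real input to a 1-bit index (the classification stage)."""
    return 1 if x >= THRESHOLD else 0

def decode(index):
    """Map the 1-bit index back to its reconstruction level."""
    return LEVELS[index]

print(decode(encode(3.0)))  # pi (3.0 is above the midpoint, about 2.93)
print(decode(encode(1.0)))  # e
```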

The new state of the page

The following represents my personal point of view.

OK, now after several enhancements, corrections, modifications and additions, the page seems no better? :)) Why so?

1- The whole page seems unrelated to the main topic (Quantization – "Signal Processing"?). It seems mainly about quantization in "mathematics", "communication systems" and "source coding" (data compression); however, the sole purpose of quantization in signal processing is simply the representation of analog signals by digital ones. This point is almost never discussed. That is a practical point of view (ADC/DAC, data acquisition, instrumentation), and I think this point must be stressed. There are many considerations: input signal conditioning, clipping, distortions, AGC, dynamic range modifications, loading factor calculations, independent noise assumptions, SQNRdB calculations, input types and their effects on the resulting signal fidelity...

2- Modifications don't work: the previous stage was based on my very personal style. I like personal writing :). And it doesn't fit wiki, but your (SudoMonas) modifications now are too constrained and limited by those previous things. And that creates too-frequent style mismatches which make it difficult to read. Let's consider writing this page from "scratch" :)), so that it at least becomes consistent in terminology and style.

3- It seems boring without actual examples, applications and figures.

4- And it is quite long now.

Now, a few suggestions.

1- That whole rate-distortion based mathematical stuff shall either be omitted or be moved to a proper place. The analysis and design of a quantizer shall better be treated separately from its definitions, types, usages and properties.

2- Shorter is better! At various places, too-lengthy explanations pervade (some of them mine). Even the first few sentences are unnecessarily (almost redundantly, like this one) long. What is wrong with saying => "Quantization is the process of mapping a large set of input values to a much smaller set"? Concise, compact; and if any ambiguity arises (it definitely does), it can always be expanded and clarified in what follows, instead of inside a single sentence.

3- Quantization in Signal Processing, Mathematics, Communications and Source Coding has quite different purposes/types of usage. Therefore they shall better be treated separately.

in Signal Processing => ADC/DAC characterizations, binary data representation formats, rounding, rounding in Matlab/C, rounding in IEEE floating-point formats, input signal conditioning, the independent quantization noise assumption and its effects on outputs, spectral noise shaping via noise feedback applications, input loading factors, quantizer resolution with respect to bit size, relations to sampling rate. It is very natural to consider quantization together with sampling here.

in Communications => telephone lines, PCM, DPCM, ADPCM, Delta Modulation, nonuniform Max-Lloyd and adaptive quantizers, A-law and μ-law companders, the ones employed in codecs like the ITU-T G.723, G.726 and G.722 standards.

in Source Coding => rate-distortion based encoder-decoder design, vector quantization, psychoacoustic/psychovisual facts for shaping the design; the quantizers used in JPEG, MPEG audio and H.263/4 would show some nice examples. —Preceding unsigned comment added by 88.226.198.117 (talk) 23:52, 13 April 2011 (UTC)

I think it is somewhat better. My perception is that it jumps into specialized math too quickly. My thoughts are that roughly the first half ought to be descriptive and qualitative, with simple examples and only a few simple equations, and should only be about uniform step-size quantization. Then the second half could have all that math: first anything to do with the uniform quantizer, then the others. Constant314 (talk) 18:13, 14 April 2011 (UTC)
Upon further reflection, I think this article should deal only with uniform quantization, with the other types moved to their own pages. Constant314 (talk) 18:24, 14 April 2011 (UTC)
I just noticed these comments – some further edits have been done since those comments were made. I just included the suggestion regarding the simplification of the first sentence. As you have probably seen, I have just started at the beginning and have been trying to improve what I saw from paragraph-to-paragraph as I moved forward. It's true that this is an incremental approach. I haven't yet gotten to the later sections or really attempted any significant restructuring or added substantial new topics. I had planned to get to some of that, but hadn't yet had time. The rate-distortion and Lloyd-Max material was already there – I have only refined them. I certainly think that the article has been getting substantially more correct and that there has been some improvement in the logical flow, consistency, notation, and referencing. In my opinion, quantization for source coding and communication are within the scope of signal processing. Of course, I have been the one doing the recent edits, so I may not be perfectly objective about them. –SudoMonas (talk) 19:12, 14 April 2011 (UTC)
I think you are making improvements. I don't know how this article got to where it was. It looked like two guys who knew a lot about the subject were in a contest to see who could add the most stuff. Regarding "quantization for source coding and communication are within the scope of signal processing", I agree, but that is no reason why they could not have their own pages with a link in this page. Constant314 (talk) 21:46, 14 April 2011 (UTC)

1- Well, first of all, there are certainly improvements: at the very beginning, once upon a time, quantization was described almost like rounding to integer. Now it is definitely better.

2- The fundamental problem results from the fact that while doing my edits, I thought it would be a good idea to start from the most general, rate-distortion based case and move on to the specific cases as special examples (a rather logical, axiomatic approach). Now I think that is not good. It seems better to go, as Constant314 points out, from the simpler uniform quantizer to more general cases. For me it is definitely better in its current state, from general theory to specific examples. But I guess most people visiting this page have no idea about either entropy or rate-distortion theory, and for those people (the majority) it is difficult to read in this fashion.

3- There are no different quantizers for signal processing, communication or source coding. However, the application target, and hence the constraints, may be radically different. For example, dithering has no meaning in source coding, while it is a useful tool for image/audio post-processing. For most DSP applications, for example, due to practical CPU architectures, FLC is used, that is, the natural machine arithmetic and machine word size. It would be difficult to use entropy techniques there. As all these are different application constraints on the same general problem, that is why I assume treating them separately would be better. By the way, my edits were geared towards source coding and scalar quantization in particular. SudoMonas seems to have a vector quantization (VQ) basis. That "classification of input" argument, instead of simply calling them decision intervals, has very little meaning and significance for a scalar quantizer, although it is understandable for pattern recognition or vector quantization. I strongly suggest avoiding a mixture of VQ and SQ. It would be much better to treat VQ in a separate, brand-new page.

4- Since quantization is a vast subject, there is no last word on it. Anybody who knows about it would like to add an extra paragraph of his own: expanding some vague, overly compressed definitions, giving a more unambiguous description, adding a new point of view or some application examples... And that would make this page too long. I guess only the necessary and sufficient explanations should be included.

5- Finally, I am not in a contest, as suggested by Constant314. I am not putting in anything new. Possibly I won't either. I wish good luck to the remaining editors.

—Preceding unsigned comment added by 88.224.26.202 (talk) 12:20, 15 April 2011 (UTC) 
Re your #5: Sorry, I don't mean that you were in a contest. I think it was in that condition before you started working on it.
Re your #3: Your approach of general to specific would appeal to mathematicians, which few readers are. Constant314 (talk) 13:25, 15 April 2011 (UTC)

Since these further comments, I have done various things to try to simplify the presentation. I have tried to restrict the introduction section to basic ideas and applications without getting into detailed equations. I have also moved more of the simpler uniform quantization discussion up before the discussion of rate-distortion optimization. (I agree that the axiomatic approach was a bit too tough for most readers.) I have substantially condensed and simplified much of the material after the rate-distortion and Lloyd-Max sections and removed some of the unreferenced material that seemed confusingly written, overly mathematical, and in some cases not especially noteworthy. There is already a separate article on VQ, and it is linked near the beginning of the article. I am becoming reasonably satisfied with the article, although I do still plan some further refinements. —SudoMonas (talk) 01:23, 20 April 2011 (UTC)

I'd like to see the Quantization Noise subsection reinstated. People do sometimes analyze quantization error as noise; sometimes that is OK, and sometimes it yields wrong answers. Constant314 (talk) 17:07, 21 April 2015 (UTC)
Excellent suggestion – although I think that the material that was previously in the article on that subject was not such a good presentation of the subject. If someone else doesn't do it, I'll add some discussion of that topic soon. —SudoMonas (talk) 21:58, 21 April 2011 (UTC)
You are doing fine. I would suggest that you use PDF instead of pdf and that you write it out fully at least the first time in every section. Constant314 (talk) 17:38, 22 April 2011 (UTC)
Thanks. I just inserted a section about the additive noise model. Regarding pdf, I put some changes in the article to improve that aspect, although not exactly as suggested. According to the PDF (disambiguation)#In science and probability density function pages (and my personal experience), the usual abbreviation uses lowercase letters. To me (and I think to most people), PDF refers to the file format, and that assumption is reflected in the Wikilink redirect on the PDF page. In the article modification, I defined the abbreviation in parentheses in the first place where it is used in the article and put Wikilinks in the first use in each other section. In some places, defining the term in parentheses might mix with math formulas that immediately follow the term. —SudoMonas (talk) 20:22, 22 April 2011 (UTC)
LOL, I have just the opposite reaction: I think pdf is a file type and PDF is an acronym.Constant314 (talk) 16:01, 23 April 2011 (UTC)

Figure illustrating sampling

I find the top figure of the article showing quantization quite confusing. First of all, it seems to show the entire chain of analog-to-digital and digital-to-analog conversion. The graph is interesting but conflicts with the typical illustrations in textbooks, e.g. the plot titled "Original and Quantized Signal". It is furthermore not clear what sampling scheme for analog-to-digital and what interpolation scheme for digital-to-analog conversion have been used. I would suggest removing the figure or moving it further down the page with an appropriate description. Sascha.spors (talk) 15:15, 16 January 2015 (UTC)

I am going to guess that you are more comfortable with the stair-step representations in the figures further down. The problem with those is that they do not take realistic signal reconstruction into account and so don't give an accurate representation of the error induced. One thing that is making things difficult here is that neither the caption nor the legend indicates that the black dots are the quantized signal. I have updated the caption to try and help with this. ~KvnG 14:45, 19 January 2015 (UTC)
I completely agree with Sascha on this. Focusing on the first figure, it seems to be primarily a depiction of periodic sampling, not quantization. Upon very close and careful inspection, there is quantization evident in the amplitudes illustrated in that figure, but that is not something anyone would ordinarily notice without staring at the figure for a very long time. If we want to illustrate (scalar) quantization, one axis should show the (continuous-domain) input value to a quantizer and the other axis should show the corresponding quantized output value – i.e., we should have a figure that looks roughly like a staircase with a constant rise-to-run ratio – like the first figure in the linked article by Widrow. This does not refer to figures of the sort referred to above as "stair-step representations". In all five of the figures in the current article, the horizontal axis seems to be showing time or frequency, which are basically irrelevant to explaining the concept of quantization. Quantization is something that is done to all sorts of numerical values. An illustration of two-dimensional VQ would also be nice to include in the article. —BarrelProof (talk) 18:51, 19 January 2015 (UTC)
The title is Quantization (signal processing), so I don't see anything wrong with illustrations of "signals" (i.e. amplitude vs time). The new caption is very good. And of course the full-sized picture is better than the thumbnail pic.
--Bob K (talk) 13:19, 21 January 2015 (UTC)
"Signals" are not, in general, restricted to amplitude versus time. An obvious case is a photographic image. Image processing is certainly signal processing, but the set of digital color samples that represents a photograph has no time domain. Information in a transformed domain is also a "signal". See, for example, the article Moura, J.M.F. (2009). "What is signal processing?, President's Message". IEEE Signal Processing Magazine. 26 (6). doi:10.1109/MSP.2009.934636, which explicitly says that the assumption that a signal needs to vary with time (or space) is only an assumption that applied "ages ago". —BarrelProof (talk) 20:41, 22 January 2015 (UTC)
So changing "time" to "x" would solve your problem? Is that what we're really talking about?
--Bob K (talk) 19:47, 23 January 2015 (UTC)
No. What I think we're talking about is the desirability of adding a figure illustrating the input-output function for a scalar quantizer, roughly like the five figures specifically identified as examples below. —BarrelProof (talk) 20:09, 23 January 2015 (UTC)
"...stands in conflict with the typical illustrations..." What do you mean by this exactly? You think a different type of graph is more appropriate or there is something incorrect about the figure? Radiodef (talk) 20:05, 22 January 2015 (UTC)
I believe the "typical illustration" this refers to is one roughly like the first figure in the 1961 Widrow article that is cited with a PDF link in the article. Such figures are found in many publications about quantization – I only mention that one because it is so easily accessible. Figures 1 and 2 in the cited 1998 article "Quantization" by Gray and Neuhoff are similar. Also Figures 1 and 2 in the cited 1977 article "Quantization" by Gersho. The accompanying quantization error (figure 4) in the Gersho article is also nice. —BarrelProof (talk) 20:41, 22 January 2015 (UTC)
I think I understand the concern now. An input vs. output transfer function diagram as you suggest would be interesting to try. If readers are able to get their heads around it, it is the most direct way to represent quantization. Plots like this are being used successfully in Dynamic range compression. ~KvnG 16:12, 27 January 2015 (UTC)

Rounding example for half-step values

In the basic rounding example describing simple (mid-tread) rounding as an example of quantization, someone changed it to use the tie-breaking rule of rounding upwards always (towards infinity) for half-step input values. My impression is that this is not the usual definition of rounding that most people are familiar with. I reverted the change, saying "In the usual definition of rounding, the value −0.5 should be rounded to −1, not 0." My revert was then reverted by someone saying "The formula was correct. +0.5 and -0.5 should both round in the same direction (positive for example), otherwise there is a statistical bias away from zero." While it is true that rounding away from zero creates a statistical bias away from zero, I believe it is the most appropriate example to use here, for several reasons:

  • It is the form of rounding most commonly taught and used in practice (e.g., by schoolchildren and accountants). The general public is not familiar with other rounding rules.
  • It is what is typically built into software (such as Microsoft Excel and other general-purpose software).
  • As noted in the Rounding article, "This method treats positive and negative values symmetrically, and therefore is free of overall bias if the original numbers are positive or negative with equal probability." Typical pdfs such as the Gaussian (a.k.a. Normal) and Laplacian pdfs have that property.
  • Also as noted in the Rounding article, "It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro) as it is easy to explain by just considering the first fractional digit, independently of supplementary precision digits or sign of the amount (for strict equivalence between the paying and recipient of the amount)."
  • It seems generally desirable for a mid-tread quantizer to have symmetric behavior around zero.

Having an overall upward bias toward infinity seems no better than having an outward bias away from zero – at least an outward bias will average to zero for symmetric input. Both types of rounding have some disadvantages, but there is nothing "incorrect" about rounding away from zero, and that's the type of rounding most people are familiar with. —BarrelProof (talk) 20:05, 4 December 2015 (UTC)
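For what it's worth, the two tie-breaking rules under discussion can be compared directly. This is a small sketch; the function names are just labels for the two rules:

```python
import math

def round_half_away(x):
    """Round to nearest; ties go away from zero (the familiar schoolbook rule)."""
    return math.copysign(math.floor(abs(x) + 0.5), x)

def round_half_up(x):
    """Round to nearest; ties go toward +infinity."""
    return math.floor(x + 0.5)

for v in (-1.5, -0.5, 0.5, 1.5):
    print(v, round_half_away(v), round_half_up(v))
# The rules differ only at negative half-step values:
# -0.5 rounds to -1 away from zero, but to 0 when always rounding upward.
```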

You make good points. This article is about quantization rather than rounding. Always rounding positive creates a positive bias which is often unimportant or easily removed (by capacitive coupling in electronics). Biasing away from zero creates a non-linearity near zero which is more difficult to deal with. In the quantization of analog signals, zero is not a special number and so you want to avoid biasing away from it.Constant314 (talk) 20:27, 4 December 2015 (UTC)
Upon further reflection, it might be beneficial to include both formulas and an explanation of the difference. Constant314 (talk) 20:37, 4 December 2015 (UTC)
It might, but if we're trying to provide a familiar example, the previously described form of rounding is the most common and familiar, so I think it is the most important one to include. —BarrelProof (talk) 21:33, 6 December 2015 (UTC)
The article is about quantization of signals (whether it's A/D conversion or bit depth reduction). A/D conversion has enough slop that you could never tell exactly where the "tie-breaking" point is. Some DSPs have "convergent rounding" in which they will round to the nearest even if the value is precisely midway between quantization levels. Sometimes we just lob off the bits on the right which is the floor function. In no case anywhere does any bit depth reduction round negative values in the other direction as positive. This language with sgn() and abs value is confusing and superfluous and does not belong in the article at all. 173.48.62.104 (talk) 18:23, 7 December 2015 (UTC)
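As a side note on the convergent rounding mentioned above: Python's built-in round() happens to implement exactly that rule (round half to even), so it can serve as a quick illustration:

```python
# Convergent rounding ("round half to even"), as used by some DSPs:
# values precisely midway between levels go to the nearest even integer.
values = [round(x) for x in (0.5, 1.5, 2.5, 3.5)]
print(values)  # [0, 2, 2, 4]
```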
I agree mostly that for signal processing rounding is toward positive. I'm not sure that it never happens otherwise but the formula showing always rounding positive is the most appropriate for this article. Constant314 (talk) 18:56, 7 December 2015 (UTC)
This may depend somewhat on whether you consider things like data compression (e.g., image, video, and audio compression) to be signal processing. I do. JPEG and JPEG 2000 encoders, for example, typically use symmetric rounding around 0. Some of what is in the article (e.g., Rate–distortion quantizer design and Lloyd–Max quantization) is about the principles of quantization for compression purposes. Many of the cited sources are academic papers that discuss usage for compression applications. —BarrelProof (talk) 21:49, 7 December 2015 (UTC)
I have no opposition to having both formulas. There is plenty of room. Constant314 (talk) 23:30, 7 December 2015 (UTC)
OK, as you have probably noticed, I just expanded the dead-zone discussion to cover the symmetric case, and included consideration of arbitrary dead-zone widths. —BarrelProof (talk) 05:36, 8 December 2015 (UTC)
Interesting. Is there a more important use than noise gate or squelch? Constant314 (talk) 12:55, 8 December 2015 (UTC)
Just for the record, "always rounding positive" has been my general experience. Heuristically, round[s(t)+1]=round[s(t)]+1 seems more useful than mag[round[s(t)]]=round[mag[s(t)]], since addition is linear, and abs value is not (FWIW). But I agree that if both methods are found in practice, neither should be excluded. And if we can produce a list of the types of applications where each is likely to be used, and reasons why, that would be ideal.--Bob K (talk) 13:25, 8 December 2015 (UTC)
I am in complete agreement that any method found in practice (and it would be nice to have that cited and verified eventually) should be included here. So if those dead-zone quantizers have a real application somewhere, they should be included. Perhaps also the convergent rounding where round(n+1/2) goes to the n or n+1 that is even. But within scope and topical limits. Should noise-shaped quantizers be included? Even the simple "fraction saving" where bits lobbed off of a sample are zero-extended and added into the next sample before truncation? So far this has only memoryless quantizers. Maybe it should have only memoryless quantizers. 173.48.62.104 (talk) 14:05, 8 December 2015 (UTC)
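The "fraction saving" idea mentioned above can be sketched as a first-order error-feedback loop. This is a hedged illustration of the technique as described, not any particular DSP's implementation; the function name and values are invented:

```python
def fraction_saving(samples, step):
    """First-order error-feedback requantizer ("fraction saving") sketch.

    The part of each sample discarded by truncation is carried into the
    next sample before the next truncation, which pushes the quantization
    error spectrum away from DC.
    """
    carry = 0.0
    out = []
    for x in samples:
        y = x + carry            # add back the previously lost fraction
        q = step * (y // step)   # truncate (floor) to the quantizer grid
        carry = y - q            # save the lobbed-off fraction
        out.append(q)
    return out

# For a constant input sitting between two levels, the output average
# still tracks the input average:
print(fraction_saving([0.25] * 8, 1.0))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
```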

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Quantization (signal processing). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 12:37, 21 July 2016 (UTC)


Entropy claim for mid-riser

Upon reflection, I realize that my statement "The output of the quantizer switches randomly between two adjacent levels." is not entirely accurate. If the noise were strong enough, the output could switch between more than two levels, although it switches between adjacent levels when the noise amplitude is small. I hope someone can say it better. We need to do better than "output entropy of at least 1 bit per sample", which doesn't tell the average reader anything. We need to tell the reader what that means. A digital signal randomly switching between two adjacent levels, where the samples are independent, would be an example of such a signal. Perhaps if I change the statement to "An output where the samples are independent and switch randomly between two adjacent levels is an example of a signal with 1 bit of entropy." Would that be acceptable? Constant314 (talk) 16:05, 1 January 2019 (UTC)
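For reference, the 1-bit claim can be checked numerically from the definition of Shannon entropy. This is a small sketch; entropy_bits is an invented helper, and the three-level distribution is just an illustrative assumption about stronger noise:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits per sample of a discrete memoryless source."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Independent samples switching randomly (50/50) between two adjacent
# levels carry exactly 1 bit of entropy per sample:
print(entropy_bits([0.5, 0.5]))        # 1.0
# If the noise is strong enough to reach a third level, entropy rises:
print(entropy_bits([0.25, 0.5, 0.25]))  # 1.5
```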

To know how to say it better, we need a reference. I have trimmed all of this back until one is available. For AC signals, there is no fundamental difference between mid-riser and mid-tread quantization. ~Kvng (talk) 13:52, 5 January 2019 (UTC)
Good choice. Constant314 (talk) 21:01, 5 January 2019 (UTC)