Talk:Noisy-channel coding theorem

This article should be moved to Noisy-channel coding theorem. The hyphen becomes necessary when "noisy channel" is used as an adjective. Otherwise we describe the theorem itself as noisy, which is certainly not the intent here. -- 130.94.162.61 16:15, 19 January 2006 (UTC)

The English article doesn't match the French one, and the French article is linked with http://wiki.riteme.site/wiki/Nyquist–Shannon_sampling_theorem , which is the real match. —Preceding unsigned comment added by 83.199.254.14 (talk) 20:50, 30 April 2011 (UTC)

Copyediting error in page title

I agree with the comment above. "Noisy channel" should be hyphenated when used as an adjective (as it is, inconsistently, in the body text). 38.113.17.3 19:35, 12 October 2006 (UTC)VP

Error in noisy-channel coding theorem

Before the change I just made, the statement of the theorem implied that when rate equaled capacity, error could never be made arbitrarily small. This is clearly wrong; a lossless channel can achieve its capacity rate and, in spite of its being somewhat degenerate, does fall within this framework. The 1 December 2005 "fix" was wrong (though what it attempted to correct was also wrong). I've fixed this so that it's clear that the R=C case is not addressed in the noisy-channel coding theorem, but someone might want to double-check my wording on this (which is also in Shannon–Hartley theorem) and redraw the math PNG. Calbaer 23:25, 25 April 2006 (UTC)

Attribution to 1948 Shannon

Theorem 10 of Shannon's 1948 paper corresponds to the noisy channel coding theorem, but this only has part 1 of the theorem as presented here. Could someone double-check this and re-attribute accordingly? Calbaer 22:36, 31 August 2006 (UTC)

"Theorem 11: Let a discrete channel have the capacity C and a discrete source the entropy per second H. If H ≤ C there exists a coding system such that the output of the source can be transmitted over the channel with an arbitrarily small frequency of errors (or an arbitrarily small equivocation). If H > C it is possible to encode the source so that the equivocation is less than H -C+ ε where ε is arbitrarily small. There is no method of encoding which gives an equivocation less than H - C".
Shannon's H becomes our R, Shannon's equivocation becomes our HR, and thus Shannon's statement is equivalent to our parts 1, 2 and 3. Jheald 23:36, 31 August 2006 (UTC).
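(In symbols, writing <math>H_e</math> for the equivocation, a notation introduced only for this note, and reading Shannon's H as the article's rate R, the quoted statement for the case R > C says: every code satisfies <math>H_e \ge R - C</math>, while for any <math>\varepsilon > 0</math> some code achieves <math>H_e < R - C + \varepsilon</math>.)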

I agree: the article omits the key point that the encoded error-rate can be made arbitrarily small with a cost that rises logarithmically. This means that you can get essentially perfect encoding, without paying a large price in bandwidth (as long as you are willing to let the block size increase)--RichardNeill (talk) 16:57, 30 January 2009 (UTC)

Note that Shannon did not provide a formal proof. That was finally done by Feinstein for the coding part, and Fano for the converse part. I added the references. Please check. Wullj 27 December 2006.

Merge with Shannon–Hartley theorem article?

There is a lot of overlap between this article and the Shannon–Hartley theorem. Should these pages be merged? Please contribute your thoughts on the Talk:Shannon–Hartley theorem discussion page. -- technopilgrim 20:29, 30 October 2006 (UTC)

What about non-discrete channels?

I'm a civil engineering student, and in our information coding class we saw a variation of the Shannon theorem. It states that, on a non-discrete memoryless channel, the Shannon capacity is <math>C_\text{Shannon} = B \log_2(1 + P_u / P_n)</math>. Can this be added to the article?
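(For concreteness, a small numerical sketch of that formula; the bandwidth and signal-to-noise values below are made-up illustrative numbers, not anything from the article.)

<syntaxhighlight lang="python">
from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    """Capacity in bits/s of a band-limited AWGN channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * log2(1 + snr_linear)

# Illustrative values: 1 MHz of bandwidth, signal-to-noise power ratio of 1000 (30 dB)
print(shannon_capacity(1e6, 1000))  # about 9.97e6 bits per second
</syntaxhighlight>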

Take a look at Shannon–Hartley theorem, and at the merge discussion mentioned above. --LutzL 15:53, 28 November 2006 (UTC)

How about a simple intuitive/non-mathematical explanation?

Something like: "You have a finite bandwidth, so at some point you can't switch any faster, and you have a finite signal-to-noise ratio, so at some point you can't keep adding signal levels." But better worded, of course! It would be nice for non-math majors, and for those first learning about it, to get an intuitive feel for why the limit exists. —Preceding unsigned comment added by 71.164.9.240 (talk) 06:10, 28 June 2008 (UTC)

The above, I think, misses the point; an example would make it best. Consider a hard disk. It consists of a platter with an analog read/write head. Obviously we want the highest possible density of bits/cm^2, but we must also guarantee read integrity (typically, 1 incorrect byte read per 10^15 is considered acceptable). The underlying storage layer is subject to random fluctuations, which get worse as the domain size is made smaller. It may be that 1 in 100 bits is read back incorrectly. So what should we do? The answer is that we must do error correction.

You can do simple error correction with a (7,4) code (every 4 data bits get 3 check bits): any single bit error can be detected and corrected, but a 2-bit error (~0.2% chance per block) will be miscorrected. That's not good enough for the required reliability, and it already costs 75% overhead. So we need a better error-correcting process. To get vastly improved transmission reliability, must we pay with an enormously higher error-correction overhead? The answer, surprisingly, is no.
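(For readers who want to see the mechanics, here is a minimal sketch of a (7,4) Hamming encoder and decoder; the function names are my own and this is only an illustration of the scheme described above: it corrects any single bit error in a 7-bit block but is defeated by double errors.)

<syntaxhighlight lang="python">
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single bit error (if present) and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                                   # flip one bit "on the platter"
assert hamming74_decode(codeword) == [1, 0, 1, 1]  # single error is corrected
</syntaxhighlight>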

What Shannon says is that the error-correction overhead only increases with the logarithm of the required accuracy. In fact, for the above numbers (1% raw bit errors, a target of 1 error in 10^15), only about 8% of the disk's raw capacity needs to be dedicated to error-checking overhead!

But we don't get something for nothing. The cost is that the error-correcting scheme becomes relatively complex, and we must use a rather long block of data (long enough that the number of errors in each block stays close to its expected value, and below the correctable level). --RichardNeill (talk) 17:30, 30 January 2009 (UTC)
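(As a rough check on the numbers above, here is a short sketch, my own calculation rather than anything from the article, of the Shannon limit for a binary symmetric channel with a 1% raw bit-error rate: the capacity is 1 - H2(p) data bits per raw bit, so at least H2(0.01), roughly 8% of the raw bits, must be spent on redundancy.)

<syntaxhighlight lang="python">
from math import log2

def binary_entropy(p):
    """H2(p): entropy in bits of a binary source with symbol probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity (data bits per channel bit) of a binary symmetric channel."""
    return 1.0 - binary_entropy(p)

p = 0.01                        # 1% raw bit-error rate
print(bsc_capacity(p))          # about 0.919 data bits per raw bit
print(1.0 - bsc_capacity(p))    # about 0.081: minimum fraction of raw bits used for redundancy
</syntaxhighlight>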

Title

At a university level this is generally known as 'Shannon's capacity formula', not 'Noisy-channel coding theorem'. —Preceding unsigned comment added by 220.253.153.87 (talk) 23:39, 6 October 2008 (UTC)


Loose definition

In the definition, it is amusing to read "...it is possible to communicate discrete data (digital information) nearly error-free..." without bothering to define or hint at what is meant by 'discrete data'. Admittedly, such a definition poses several difficulties, as the very word 'discrete' has different connotations in mathematics and statistics, to name just a couple of fields. Can anyone help? --SciCorrector (talk) 18:34, 2 September 2011 (UTC)

Discrete in time and amplitude. But you are right, it is a strange word combination. --LutzL (talk) 14:48, 3 September 2011 (UTC)

I(X;Y)

I can't find the definition of this function; can anyone help? — Preceding unsigned comment added by Doru001 (talk • contribs) 16:50, 12 March 2017 (UTC)
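For reference, the standard definition of the mutual information between the channel input X and output Y of a discrete channel is

<math>I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log_2\frac{p(x,y)}{p(x)\,p(y)} = H(Y) - H(Y\mid X),</math>

and the channel capacity is its supremum over input distributions, <math>C = \sup_{p_X} I(X;Y)</math>; see the mutual information article for details.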