In [[coding theory]], '''block codes''' are a large and important family of [[Channel coding|error-correcting codes]] that encode data in blocks.
There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way.
Such limitations often take the form of ''bounds'' that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
Examples of block codes are [[Reed–Solomon code]]s, [[Hamming code]]s, [[Hadamard code]]s, [[Expander code]]s, [[Golay code (disambiguation)|Golay code]]s, [[Reed–Muller code]]s and [[Polar code (coding theory)|Polar code]]s.
Algebraic block codes are typically [[Soft-decision decoder|hard-decoded]] using algebraic decoders.{{Technical statement|date=May 2015}}
The term ''block code'' may also refer to any error-correcting code that acts on a block of <math>k</math> bits of input data to produce <math>n</math> bits of output data <math>(n,k)</math>. Consequently, the block coder is a ''memoryless'' device.
This article deals with "algebraic block codes".
=== {{anchor|Minimum distance}}The distance ''d'' ===
The '''distance''' or '''minimum distance''' <math>d</math> of a block code is the minimum number of positions in which any two distinct codewords differ, and the '''relative distance''' <math>\delta</math> is the fraction <math>d/n</math>.
Formally, for received words <math>c_1,c_2\in\Sigma^n</math>, let <math>\Delta(c_1,c_2)</math> denote the [[Hamming distance]] between <math>c_1</math> and <math>c_2</math>, that is, the number of positions in which <math>c_1</math> and <math>c_2</math> differ.
Then the minimum distance <math>d</math> of the code <math>C</math> is defined as
:<math>d := \min_{m_1,m_2\in\Sigma^k \atop m_1\neq m_2} \Delta[C(m_1),C(m_2)].</math>
Since any code has to be [[injective]], any two codewords will disagree in at least one position, so the distance of any code is at least <math>1</math>. Moreover, the '''distance''' equals the '''[[Hamming weight#Minimum weight|minimum weight]]''' for linear block codes because:
:<math>\min_{m_1,m_2\in\Sigma^k \atop m_1\neq m_2} \Delta[C(m_1),C(m_2)] = \min_{m_1,m_2\in\Sigma^k \atop m_1\neq m_2} \Delta[\mathbf{0},C(m_1)+C(m_2)] = \min_{m\in\Sigma^k \atop m\neq \mathbf{0}} w[C(m)] = w_\min.</math>
A larger distance allows for more error correction and detection.
For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ.
A code with distance <math>d</math> allows the receiver to detect up to <math>d-1</math> transmission errors, since changing <math>d-1</math> positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than <math>\left\lfloor (d-1)/2 \right\rfloor</math> transmission errors occur, the receiver can uniquely decode the received word to a codeword, because every received word has at most one codeword at distance <math>\left\lfloor (d-1)/2 \right\rfloor</math>.
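These definitions can be illustrated with a short Python sketch (the toy code and function names below are chosen only for illustration and come from no library):

```python
from itertools import combinations

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(c1, c2))

# A small linear binary code: all even-weight words of length 3.
code = ["000", "011", "101", "110"]

# Minimum distance: smallest Hamming distance over all pairs of distinct codewords.
d = min(hamming_distance(a, b) for a, b in combinations(code, 2))

# For a linear code this equals the minimum weight of a nonzero codeword.
w_min = min(c.count("1") for c in code if c != "000")

print(d, w_min)  # 2 2: this code detects any single error but corrects none
```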
=== Popular notation ===
== Lower and upper bounds of block codes ==
[[File:HammingLimit.png|thumb|720px|Hamming limit{{clarify|reason='Base' from y-axis legend does not occur in this article's textual content.|date=January 2022}}]]
[[File:Linear Binary Block Codes and their needed Check Symbols.png|thumb|720px|
There are theoretical limits (such as the Hamming limit), but another question is which codes can actually be constructed. It is like [[Sphere packing|packing spheres in a box]] in many dimensions. This diagram shows the constructible codes, which are linear and binary. The ''x'' axis shows the number of protected symbols ''k'', the ''y'' axis the number of needed check symbols ''n–k''. Plotted are the limits for different Hamming distances from 1 (unprotected) to 34.
Marked with dots are perfect codes:
{{bulleted list
<math>C =\{C_i\}_{i\ge1}</math> is called a ''family of codes'', where <math>C_i</math> is an <math>(n_i,k_i,d_i)_q</math> code with monotonically increasing <math>n_i</math>.
The '''rate''' of a family of codes is defined as <math>R(C)=\lim_{i\to\infty}{k_i \over n_i}</math>.
The '''relative distance''' of a family of codes is defined as <math>\delta(C)=\lim_{i\to\infty}{d_i \over n_i}</math>.
To explore the relationship between <math>R(C)</math> and <math>\delta(C)</math>, a set of lower and upper bounds of block codes are known.
=== Hamming bound ===
{{main article|Hamming bound}}
: <math> R \le 1- {1 \over n} \cdot \log_{q} \left[\sum_{i=0}^{\left\lfloor {{\delta \cdot n-1}\over 2}\right\rfloor}\binom{n}{i}(q-1)^i\right]</math>
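The bound is easy to evaluate for concrete parameters. The following Python sketch (the function name is illustrative) shows that the [7,4] Hamming code and the [23,12] binary Golay code meet it with equality, i.e. they are perfect codes:

```python
from math import comb, log

def hamming_bound_rate(n, d, q=2):
    """Upper bound on the rate R of a q-ary code of length n and distance d."""
    t = (d - 1) // 2  # number of correctable errors
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))  # ball volume
    return 1 - log(ball, q) / n

print(hamming_bound_rate(7, 3))   # ≈ 4/7: the rate of the [7,4] Hamming code
print(hamming_bound_rate(23, 7))  # ≈ 12/23: the rate of the [23,12,7] Golay code
```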
=== Singleton bound ===
{{main article|Singleton bound}}
The Singleton bound states that the sum of the rate and the relative distance of a block code cannot be much larger than 1:
:<math> R + \delta \le 1+\frac{1}{n}</math>.
[[Reed–Solomon code]]s are non-trivial examples of codes that satisfy the Singleton bound with equality.
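This can be checked numerically. In the Python sketch below (the helper name is illustrative), the slack in the Singleton bound is zero exactly for codes with <math>d = n-k+1</math>, such as Reed–Solomon codes, while other codes leave positive slack:

```python
def singleton_slack(n, k, d):
    """(1 + 1/n) - (R + delta); nonnegative for every code, zero iff d = n - k + 1."""
    return (1 + 1 / n) - (k / n + d / n)

# A Reed–Solomon code over GF(256): n = 255, k = 223, d = n - k + 1 = 33.
print(singleton_slack(255, 223, 33))  # ≈ 0: meets the Singleton bound with equality

# The [7,4,3] binary Hamming code leaves slack of 1/7:
print(singleton_slack(7, 4, 3))       # ≈ 0.142857
```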
=== Plotkin bound ===
{{main article|Plotkin bound}}
For <math>q=2</math>, <math>R+2\delta\le1</math>. In other words, <math>k + 2d \le n</math>.
For the general case, the following Plotkin bounds hold for any <math>C \subseteq \mathbb{F}_q^{n} </math> with distance <math>d</math>:
# If <math>d = \left(1-{1 \over q}\right)n, |C| \le 2qn </math>
# If <math>d > \left(1-{1 \over q}\right)n, |C| \le {qd \over {qd -\left(q-1\right)n}} </math>
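As a numeric sanity check, a small Python sketch (the helper name is illustrative) evaluates the bound for the regime <math>d > \left(1-{1 \over q}\right)n</math>:

```python
def plotkin_max_size(n, d, q=2):
    """Upper bound on |C| when d > (1 - 1/q) * n (the Plotkin regime)."""
    assert q * d > (q - 1) * n, "bound applies only when d > (1 - 1/q) * n"
    # Integer floor of qd / (qd - (q-1)n), computed exactly.
    return (q * d) // (q * d - (q - 1) * n)

# Binary code of length 4 and distance 3: at most 2*3 / (2*3 - 4) = 3 codewords.
print(plotkin_max_size(4, 3))  # 3
```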
=== Gilbert–Varshamov bound ===
{{main article|Gilbert–Varshamov bound}}
<math>R\ge1-H_q\left(\delta\right)-\epsilon</math>, where <math>0 \le \delta \le 1-{1\over q}, 0\le \epsilon \le 1- H_q\left(\delta\right)</math>, and
<math> H_q\left(x\right) ~\overset{\underset{\mathrm{def}}{}}{=}~ -x\cdot\log_q{x \over {q-1}}-\left(1-x\right)\cdot\log_q{\left(1-x\right)} </math> is the {{mvar|q}}-ary entropy function.

=== [[Johnson bound]] ===
Define <math>J_q\left(\delta\right) ~\overset{\underset{\mathrm{def}}{}}{=}~ \left(1-{1 \over q}\right)\left(1-\sqrt{1-{q \over {q-1}}\cdot\delta}\,\right)</math>.
Let <math>J_q\left(n, d, e\right)</math> be the maximum number of codewords in a Hamming ball of radius <math>e</math> for any code <math>C \subseteq \mathbb{F}_q^n</math> of distance <math>d</math>.
Then we have the ''Johnson bound'': <math>J_q\left(n,d,e\right)\le qnd</math>, if <math>{e \over n} \le {{q-1}\over q}\left( {1-\sqrt{1-{q \over{q-1}}\cdot{d \over n}}}\, \right)=J_q\left({d \over n}\right)</math>.

=== [[Elias Bassalygo bound|Elias–Bassalygo bound]] ===
{{main article|Elias Bassalygo bound}}
: <math>R={\log_q{|C|} \over n} \le 1-H_q\left(J_q\left(\delta\right)\right)+o\left(1\right) </math>
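The entropy function and these bounds are straightforward to evaluate numerically. The following Python sketch (function names are illustrative) compares the Gilbert–Varshamov lower bound with the Elias–Bassalygo upper bound for binary codes; the best achievable rate lies between them:

```python
from math import log, sqrt

def entropy_q(x, q=2):
    """q-ary entropy function H_q(x), with H_q(0) = 0 by convention."""
    if x == 0:
        return 0.0
    return -x * log(x / (q - 1), q) - (1 - x) * log(1 - x, q)

def gv_rate(delta, q=2):
    """Rate guaranteed achievable by the Gilbert-Varshamov bound."""
    return 1 - entropy_q(delta, q)

def johnson_radius(delta, q=2):
    """J_q(delta), the radius appearing in the Elias-Bassalygo bound."""
    return (1 - 1 / q) * (1 - sqrt(1 - q * delta / (q - 1)))

def elias_bassalygo_rate(delta, q=2):
    """Upper bound on rate per the Elias-Bassalygo bound (ignoring o(1))."""
    return 1 - entropy_q(johnson_radius(delta, q), q)

delta = 0.1
print(round(gv_rate(delta), 3))               # ≈ 0.531 (achievable)
print(round(elias_bassalygo_rate(delta), 3))  # ≈ 0.702 (not exceedable)
```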
== Sphere packings and lattices ==
* [[Shannon–Hartley theorem]]
* [[Noisy channel]]
* [[List decoding]]<ref name="schlegel" />
* [[Sphere packing]]
{{Refimprove|date=September 2008}}
* {{cite book | author=J.H. van Lint | authorlink=Jack van Lint | title=Introduction to Coding Theory | edition=2nd | publisher=Springer-Verlag | series=[[Graduate Texts in Mathematics|GTM]] | volume=86 | year=1992 | isbn=3-540-54894-7 | page=[https://archive.org/details/introductiontoco0000lint/page/31 31] | url=https://archive.org/details/introductiontoco0000lint/page/31 }}
* {{cite book | author=F.J. MacWilliams | authorlink=Jessie MacWilliams |author2=N.J.A. Sloane |authorlink2=Neil Sloane | title=The Theory of Error-Correcting Codes | url=https://archive.org/details/theoryoferrorcor0000macw | url-access=registration | publisher=North-Holland | year=1977 | isbn=0-444-85193-3 | page=[https://archive.org/details/theoryoferrorcor0000macw/page/35 35]}}
* {{cite book | author=W. Huffman |author2=V.Pless | authorlink2=Vera Pless | title= Fundamentals of error-correcting codes | url=https://archive.org/details/fundamentalsofer0000huff | url-access=registration | publisher=Cambridge University Press | year=2003 | isbn=978-0-521-78280-7}}
* {{cite book | author=S. Lin |author2=D. J. Jr. Costello | title= Error Control Coding: Fundamentals and Applications | publisher=Prentice-Hall | year=1983 | isbn=0-13-283796-X}}