Block code
{{Short description|Family of error-correcting codes that encode data in blocks}}
In [[coding theory]], '''block codes''' are a large and important family of [[Channel coding|error-correcting codes]] that encode data in blocks.
There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, [[mathematics|mathematician]]s, and [[computer science|computer scientists]] to study the limitations of ''all'' block codes in a unified way.
Such limitations often take the form of ''bounds'' that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
 
Examples of block codes are [[Reed–Solomon code]]s, [[Hamming code]]s, [[Hadamard code]]s, [[Expander code]]s, [[Golay code (disambiguation)|Golay code]]s, [[Reed–Muller code]]s and [[Polar code (coding theory)|Polar code]]s. These examples also belong to the class of [[linear code]]s, and hence they are called '''linear block codes'''. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials.
 
Algebraic block codes are typically [[Soft-decision decoder|hard-decoded]] using algebraic decoders.{{Technical statement|date=May 2015}}
 
The term ''block code'' may also refer to any error-correcting code that acts on a block of <math>k</math> bits of input data to produce <math>n</math> bits of output data <math>(n,k)</math>. Consequently, the block coder is a ''memoryless'' device. Under this definition, codes such as [[turbo code]]s, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which has ''memory'' and is instead classified as a ''tree code''.
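As an illustration, the following minimal Python sketch (the function names are illustrative, not from any standard library) implements the trivial <math>(3,1)</math> binary repetition code, showing how a block coder maps each block of <math>k</math> input bits to <math>n</math> output bits independently of all previous blocks:

<syntaxhighlight lang="python">
def encode_repetition(bits, n=3):
    """(n,1) block code: each input bit becomes n identical output bits."""
    out = []
    for b in bits:                 # each k=1 block is encoded on its own: memoryless
        out.extend([b] * n)
    return out

def decode_repetition(received, n=3):
    """Majority vote per block; corrects up to (n-1)//2 flipped bits per block."""
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]

codeword = encode_repetition([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
codeword[1] ^= 1                          # flip one bit in the first block
assert decode_repetition(codeword) == [1, 0, 1]
</syntaxhighlight>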
 
This article deals with "algebraic block codes".
 
=== {{anchor|Minimum distance}}The distance ''d'' ===
The '''distance''' or '''minimum distance''' {{mvar|d}} of a block code is the minimum number of positions in which any two distinct codewords differ, and the '''relative distance''' <math>\delta</math> is the fraction <math>d/n</math>.
Formally, for received words <math>c_1,c_2\in\Sigma^n</math>, let <math>\Delta(c_1,c_2)</math> denote the [[Hamming distance]] between <math>c_1</math> and <math>c_2</math>, that is, the number of positions in which <math>c_1</math> and <math>c_2</math> differ.
Then the minimum distance <math>d</math> of the code <math>C</math> is defined as
:<math>d := \min_{m_1,m_2\in\Sigma^k;\atop m_1\neq m_2} \Delta[C(m_1),C(m_2)]</math>.
Since any code has to be [[injective]], any two distinct codewords will disagree in at least one position, so the distance of any code is at least <math>1</math>. Moreover, the '''distance''' equals the '''[[Hamming weight#Minimum weight|minimum weight]]''' for linear block codes because:
:<math>\min_{m_1,m_2\in\Sigma^k;\atop m_1\neq m_2} \Delta[C(m_1),C(m_2)] = \min_{m_1,m_2\in\Sigma^k;\atop m_1\neq m_2} \Delta[\mathbf{0},C(m_1)+C(m_2)] = \min_{m\in\Sigma^k;\atop m\neq\mathbf{0}} w[C(m)] = w_{\min}</math>.
 
A larger distance allows for more error correction and detection.
For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ.
A code with distance {{mvar|d}} allows the receiver to detect up to <math>d-1</math> transmission errors since changing <math>d-1</math> positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than <math>(d-1)/2</math> transmission errors occur, the receiver can uniquely decode the received word to a codeword. This is because every received word has at most one codeword within distance <math>(d-1)/2</math>. If more than <math>(d-1)/2</math> transmission errors occur, the receiver cannot uniquely decode the received word in general as there might be several possible codewords. One way for the receiver to cope with this situation is to use [[list decoding]], in which the decoder outputs a list of all codewords in a certain radius.
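As a brute-force sketch (adequate only for very small codes), the minimum distance can be computed directly from this definition, and the detection and correction radii read off from it:

<syntaxhighlight lang="python">
from itertools import combinations

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(code, 2))

# The (3,1) binary repetition code {000, 111} has d = 3, so it detects
# up to d - 1 = 2 errors and corrects up to (d - 1) // 2 = 1 error.
d = minimum_distance(["000", "111"])
print(d, d - 1, (d - 1) // 2)   # 3 2 1
</syntaxhighlight>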
 
=== Popular notation ===
Block codes over an alphabet <math>\Sigma</math> of size <math>q</math>, with block length <math>n</math>, message length <math>k</math>, and distance <math>d</math>, are often referred to as <math>(n,k,d)_q</math> codes; for linear block codes, the bracket notation <math>[n,k,d]_q</math> is used.
 
== Lower and upper bounds of block codes ==
[[File:HammingLimit.png|thumb|720px|Hamming limit{{clarify|reason='Base' from y-axis legend does not occur in this article's textual content.|date=January 2022}}]]
[[File:Linear Binary Block Codes and their needed Check Symbols.png|thumb|720px|
There are theoretical limits (such as the Hamming limit), but another question is which codes can actually be constructed.{{clarify|reason='Base' from y-axis legend does not occur in this article's textual content.|date=January 2022}} It is like [[Sphere packing|packing spheres in a box]] in many dimensions. This diagram shows the constructible codes, which are linear and binary. The ''x'' axis shows the number of protected symbols ''k'', the ''y'' axis the number of needed check symbols ''n–k''. Plotted are the limits for different Hamming distances from 1 (unprotected) to 34.
Marked with dots are perfect codes.]]
<math>C =\{C_i\}_{i\ge1}</math> is called a ''family of codes'', where <math>C_i</math> is an <math>(n_i,k_i,d_i)_q</math> code with monotonically increasing <math>n_i</math>.
 
The '''rate''' of a family of codes {{mvar|C}} is defined as <math>R(C)=\lim_{i\to\infty}{k_i \over n_i}</math>.
 
The '''relative distance''' of a family of codes {{mvar|C}} is defined as <math>\delta(C)=\lim_{i\to\infty}{d_i \over n_i}</math>.
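For example, the family of binary [[Hamming code]]s, with <math>C_r</math> a <math>(2^r-1,\, 2^r-1-r,\, 3)_2</math> code, has rate <math>R(C)=\lim_{r\to\infty}\left(1-\tfrac{r}{2^r-1}\right)=1</math> but relative distance <math>\delta(C)=\lim_{r\to\infty}\tfrac{3}{2^r-1}=0</math>: in this family, high rate comes at the cost of vanishing relative distance.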
 
To explore the relationship between <math>R(C)</math> and <math>\delta(C)</math>, a number of lower and upper bounds on block codes are known.
 
=== [[Hamming bound]] ===
{{main article|Hamming bound}}
: <math> R \le 1- {1 \over n} \cdot \log_{q} \left[\sum_{i=0}^{\left\lfloor {{\delta \cdot n-1}\over 2}\right\rfloor}\binom{n}{i}(q-1)^i\right]</math>
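The right-hand side can be evaluated numerically; the following Python sketch (names illustrative) computes the bound on the rate for concrete parameters:

<syntaxhighlight lang="python">
from math import comb, log

def hamming_bound_rate(n, d, q=2):
    """Upper bound on the rate of a q-ary code with length n and distance d."""
    t = (d - 1) // 2                                    # correctable errors
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return 1 - log(ball, q) / n

# The binary (7,4,3) Hamming code attains the bound (it is a perfect code):
print(hamming_bound_rate(7, 3))   # 0.5714... = 4/7
</syntaxhighlight>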
 
=== [[Singleton bound]] ===
{{main article|Singleton bound}}
The Singleton bound states that the sum of the rate and the relative distance of a block code cannot be much larger than 1:
:<math> R + \delta \le 1+\frac{1}{n}</math>.
[[Reed–Solomon code]]s are non-trivial examples of codes that satisfy the Singleton bound with equality.
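Indeed, a Reed–Solomon code with parameters <math>(n, k, n-k+1)_q</math> gives <math>R+\delta=\tfrac{k}{n}+\tfrac{n-k+1}{n}=1+\tfrac{1}{n}</math>, matching the bound exactly.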
 
=== [[Plotkin bound]] ===
{{main article|Plotkin bound}}
For <math>q=2</math>, <math>R+2\delta\le1</math>. In other words, <math>k + 2d \le n</math>.
 
For the general case, the following Plotkin bounds hold for any <math>C \subseteq \mathbb{F}_q^{n}</math> with distance {{mvar|d}}:
 
# If <math>d=\left(1-{1 \over q}\right)n, |C| \le 2qn </math>
# If <math>d > \left(1-{1 \over q}\right)n, |C| \le {qd \over {qd -\left(q-1\right)n}} </math>

For any {{mvar|q}}-ary code with relative distance <math>\delta</math>, <math>R \le 1- \left({q \over {q-1}}\right) \delta + o\left(1\right)</math>
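For example, the binary repetition code of length <math>n</math> has <math>d=n>\left(1-\tfrac{1}{2}\right)n</math>, so the second case gives <math>|C|\le\tfrac{2n}{2n-n}=2</math>, which the code attains with its two codewords.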
 
=== Gilbert–Varshamov bound ===
{{main article|Gilbert–Varshamov bound}}
<math>R\ge1-H_q\left(\delta\right)-\epsilon</math>, where <math>0 \le \delta \le 1-{1\over q}, 0\le \epsilon \le 1- H_q\left(\delta\right)</math>,
<math> H_q\left(x\right) ~\overset{\underset{\mathrm{def}}{}}{=}~ -x\cdot\log_q{x \over {q-1}}-\left(1-x\right)\cdot\log_q{\left(1-x\right)} </math> is the {{mvar|q}}-ary entropy function.
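The entropy function and the resulting guaranteed rate are easy to evaluate; the following Python sketch (names illustrative) computes the Gilbert–Varshamov rate for a given relative distance, ignoring the <math>\epsilon</math> slack:

<syntaxhighlight lang="python">
from math import log

def q_ary_entropy(x, q=2):
    """H_q(x) = -x*log_q(x/(q-1)) - (1-x)*log_q(1-x), with H_q(0) = 0."""
    if x == 0:
        return 0.0
    return -x * log(x / (q - 1), q) - (1 - x) * log(1 - x, q)

def gilbert_varshamov_rate(delta, q=2):
    """A q-ary code of relative distance delta and this rate is guaranteed to exist."""
    return 1 - q_ary_entropy(delta, q)

print(gilbert_varshamov_rate(0.1))   # ~0.531 for binary codes
</syntaxhighlight>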
 
=== [[Johnson bound]] ===
{{main article|Johnson bound}}
Define <math> J_q\left(\delta\right) ~\overset{\underset{\mathrm{def}}{}}{=}~ \left(1-{1\over q}\right)\left(1-\sqrt{1-{q \delta \over{q-1}}}\right) </math>. <br />
Let <math>J_q\left(n, d, e\right)</math> be the maximum number of codewords in a Hamming ball of radius {{mvar|e}} for any code <math>C \subseteq \mathbb{F}_q^n</math> of distance {{mvar|d}}.
 
Then we have the ''Johnson bound'': <math>J_q\left(n,d,e\right)\le qnd</math>, if <math>{e \over n} \le {{q-1}\over q}\left( {1-\sqrt{1-{q \over{q-1}}\cdot{d \over n}}}\, \right)=J_q\left({d \over n}\right)</math>
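As a sketch (names illustrative), the Johnson radius <math>J_q(\delta)</math> below which this guarantee holds can be computed numerically:

<syntaxhighlight lang="python">
from math import sqrt

def johnson_radius(delta, q=2):
    """J_q(delta) = (1 - 1/q) * (1 - sqrt(1 - q*delta/(q-1)))."""
    return (1 - 1 / q) * (1 - sqrt(1 - q * delta / (q - 1)))

# For a binary code of relative distance 0.3, every Hamming ball of relative
# radius up to about 0.184 contains at most q*n*d codewords.
print(johnson_radius(0.3))   # ~0.184
</syntaxhighlight>
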
=== [[Elias Bassalygo bound|Elias–Bassalygo bound]] ===
{{main article|Elias Bassalygo bound}}
 
: <math>R={\log_q{|C|} \over n} \le 1-H_q\left(J_q\left(\delta\right)\right)+o\left(1\right) </math>
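Combining the two sketches above (the illustrative <code>q_ary_entropy</code> and <code>johnson_radius</code> helpers) gives a numerical form of the bound, ignoring the <math>o\left(1\right)</math> term:

<syntaxhighlight lang="python">
# Assumes q_ary_entropy and johnson_radius from the sketches above are in scope.
def elias_bassalygo_rate(delta, q=2):
    """Upper bound on rate: 1 - H_q(J_q(delta)), with the o(1) term dropped."""
    return 1 - q_ary_entropy(johnson_radius(delta, q), q)

print(elias_bassalygo_rate(0.3))   # ~0.31 for binary codes
</syntaxhighlight>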
 
== Sphere packings and lattices ==
== See also ==
* [[Shannon–Hartley theorem]]
* [[Noisy channel]]
* [[List decoding]]<ref name="schlegel" />
* [[Sphere packing]]
 
== References ==
{{Refimprove|date=September 2008}}
 
* {{cite book | author=J.H. van Lint | authorlink=Jack van Lint | title=Introduction to Coding Theory | edition=2nd | publisher=Springer-Verlag | series=[[Graduate Texts in Mathematics|GTM]] | volume=86 | year=1992 | isbn=3-540-54894-7 | page=[https://archive.org/details/introductiontoco0000lint/page/31 31] | url=https://archive.org/details/introductiontoco0000lint/page/31 }}
* {{cite book | author=F.J. MacWilliams | authorlink=Jessie MacWilliams |author2=N.J.A. Sloane |authorlink2=Neil Sloane | title=The Theory of Error-Correcting Codes | url=https://archive.org/details/theoryoferrorcor0000macw | url-access=registration | publisher=North-Holland | year=1977 | isbn=0-444-85193-3 | page=[https://archive.org/details/theoryoferrorcor0000macw/page/35 35]}}
* {{cite book | author=W. Huffman |author2=V. Pless | authorlink2=Vera Pless | title= Fundamentals of error-correcting codes | url=https://archive.org/details/fundamentalsofer0000huff | url-access=registration | publisher=Cambridge University Press | year=2003 | isbn=978-0-521-78280-7}}
* {{cite book | author=S. Lin |author2=D. J. Costello, Jr. | title= Error Control Coding: Fundamentals and Applications | publisher=Prentice-Hall | year=1983 | isbn=0-13-283796-X}}