Block code






From Wikipedia, the free encyclopedia
 



In computer science, a block code is a type of channel coding. It adds redundancy to a message so that, at the receiver, one can decode with minimal (theoretically zero) errors, provided that the information rate (the amount of transported information in bits per second) does not exceed the channel capacity.

The main characterization of a block code is that it is a fixed-length channel code (unlike source coding schemes such as Huffman coding, and unlike channel coding methods like convolutional encoding). Typically, a block code takes a k-digit information word, s, and transforms it into an n-digit codeword, C(s). The block length of such a code is n.
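
For concreteness, here is a minimal sketch (in Python; not part of the original article) of the fixed-length idea, using a 3-fold repetition code as a deliberately simple illustrative choice: a k-digit information word is mapped to an n = 3k digit codeword, and the receiver decodes by majority vote.

```python
# A minimal sketch of a fixed-length block code: the 3-fold repetition code.
# Every k-bit information word s is mapped to an n = 3k bit codeword C(s);
# the decoder takes a majority vote on each group of three received bits.
# (Illustrative only; practical systems use far more efficient codes.)

def encode(bits):                     # k-digit information word -> n-digit codeword
    return [b for b in bits for _ in range(3)]

def decode(received):                 # majority vote per 3-bit group
    return [int(sum(received[i:i+3]) >= 2) for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]                    # k = 4
codeword = encode(msg)                # n = 12, the fixed block length of this code
corrupted = codeword[:]
corrupted[2] ^= 1                     # flip one bit; majority vote still recovers msg
assert decode(corrupted) == msg
```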

Block coding was the primary type of channel coding used in earlier mobile communication systems.

Formal definition

A block code is a code which encodes strings formed from an alphabet set S into code words by encoding each letter of S separately. Let (k_1, k_2, ..., k_m) be a sequence of natural numbers, each less than |S|. If S = {s_1, s_2, ..., s_n} and a particular word W is written as W = s_{k_1} s_{k_2} ... s_{k_m}, then the code word corresponding to W, namely C(W), is

C(W) = C(s_{k_1}) C(s_{k_2}) ... C(s_{k_m}).
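
As a small, hypothetical illustration of this definition (not from the original article), the sketch below assumes a four-letter alphabet S = {a, b, c, d} with a made-up two-bit codeword per letter, and encodes a word by concatenating the per-letter codewords.

```python
# Letter-by-letter block encoding: each letter of the alphabet S has its own
# codeword, and a word is encoded as C(W) = C(s_{k_1}) C(s_{k_2}) ... C(s_{k_m}).
# The alphabet and per-letter codewords here are made-up examples.

C = {"a": "00", "b": "01", "c": "10", "d": "11"}    # hypothetical letter codebook

def encode_word(word):
    # Concatenate the codeword of each letter in order.
    return "".join(C[letter] for letter in word)

print(encode_word("cab"))   # -> "100001"
```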

A[n,d]

The trade-off between efficiency (a large code rate) and correction capability can also be seen from the attempt to maximize the total number of codewords, given a fixed codeword length n and a fixed correction capability (represented by the Hamming distance d). A[n,d] is the maximum number of codewords for a given codeword length n and Hamming distance d.
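
One way to get a feel for A[n,d] is a greedy "lexicode" construction, sketched below in Python (not from the original article): scan all binary words of length n in lexicographic order and keep each word whose Hamming distance from every previously kept word is at least d. In general this only yields a lower bound on A[n,d], though it happens to match the known value for the small parameters used here.

```python
# Greedy lower bound on A[n,d]: build a binary code of length n and minimum
# distance d by keeping, in lexicographic order, every word that is at
# distance >= d from all codewords kept so far.  Feasible only for small n.

from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def lexicode_size(n, d):
    code = []
    for word in product((0, 1), repeat=n):          # lexicographic order
        if all(hamming(word, c) >= d for c in code):
            code.append(word)
    return len(code)

print(lexicode_size(5, 3))   # 4, which matches the known value A[5,3] = 4
```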

Information rate

When C is a binary block code consisting of |C| codewords of length n bits, the code rate of C is defined as

R = (log_2 |C|) / n.

If the first k bits of a codeword are independent information bits, then the code rate is

R = k / n.
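
A quick numerical check of these two formulas (a sketch, not from the original article), using the binary Hamming(7,4) code as a familiar example: it has 2^4 = 16 codewords of length n = 7, and in a systematic encoding its first 4 bits can be taken as information bits.

```python
# Both expressions give the same rate for a systematic code with k information
# bits: log2|C| / n and k / n.  Hamming(7,4) is used purely as an example.

from math import log2

n, k = 7, 4
num_codewords = 2 ** k                      # |C| = 16

rate_from_size = log2(num_codewords) / n    # (log2 |C|) / n
rate_from_k = k / n                         # k / n
print(rate_from_size, rate_from_k)          # both 0.5714...
```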

Sphere packings and lattices

Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions, it is easy to visualize: take a handful of pennies flat on the table and push them together, and the result is a hexagonal pattern like a honeycomb. But block codes rely on more dimensions, which cannot easily be visualized. The powerful Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.

The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or, in three dimensions, how many marbles can be packed into a globe? Other considerations enter the choice of a code. For example, hexagonal packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions the packing uses all the space, and these codes are the so-called perfect codes. There are very few such codes.

Another item which is often overlooked is the number of neighbors a single codeword may have. Again, using pennies as an example: if we pack the pennies in a rectangular grid, each penny has 4 near neighbors (and 4 at the corners, which are farther away); in a hexagonal packing, each penny has 6 near neighbors. In three and four dimensions, the maximum packings give 12 and 24 near neighbors, respectively. As the dimension increases, the number of near neighbors grows very rapidly; in general, the value is given by the kissing number.
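
The sketch below (not from the original article) makes the neighbor count concrete for a toy code, a length-6 repetition code chosen purely for illustration: it finds the code's minimum distance and counts, for each codeword, how many other codewords lie at exactly that distance.

```python
# Count the nearest neighbors of each codeword in a tiny example code:
# the 3-fold repetition code with k = 2, so four codewords of length 6.

from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

code = [tuple(b for b in msg for _ in range(3)) for msg in product((0, 1), repeat=2)]
d_min = min(hamming(u, v) for u in code for v in code if u != v)

for c in code:
    neighbors = sum(1 for other in code if other != c and hamming(c, other) == d_min)
    print(c, "has", neighbors, "nearest neighbors at distance", d_min)
```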

The result is that the number of ways for noise to make the receiver choose a neighbor (hence make an error) grows as well. This is a fundamental limitation of block codes, and indeed of all codes. It may be harder to cause an error to any single neighbor, but the number of neighbors can be large enough that the total error probability actually suffers.

References

van Lint, J.H. (1992). Introduction to Coding Theory (2nd ed.). Graduate Texts in Mathematics 86. Springer-Verlag. p. 31. ISBN 3-540-54894-7.
MacWilliams, F.J.; Sloane, N.J.A. (1977). The Theory of Error-Correcting Codes. North-Holland. p. 35. ISBN 0-444-85193-3.


External links

Coding Concepts and Block Coding (http://www.complextoreal.com/chapters/block.pdf)

