1. Technical Field
Aspects of this document relate generally to systems and methods for transmitting data across a telecommunication channel.
2. Background Art
In a telecommunication system, an encoded codeword of a linear block code may be sent across a noisy channel, such as a wireless communication link or other connection. Bits of the codeword may initially be assigned values of either −1 or 1 when first placed in the channel. As the bits travel across the channel, noise in the channel can strengthen or weaken the magnitude of a particular sent bit. On the receiving side, once the noisy codeword is acquired by the decoder, it may be called a received vector. The decoder's purpose is to examine the received vector and find the codeword that was originally sent. Finding the original sent codeword may involve a squared Euclidean distance calculation or a correlation between the received vector and a collection of candidate codewords. The candidate codeword that has the smallest squared Euclidean distance from the received vector, or the largest correlation with it, is generally selected as the most likely codeword that was sent.
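By non-limiting example, the following sketch illustrates this selection metric (the function name and the (3,1) repetition-code values are illustrative only, not taken from any particular implementation):

```python
import numpy as np

# Illustrative brute-force selection over a small candidate list.
def pick_codeword(received, candidates):
    """Return the candidate with the smallest squared Euclidean distance
    to the received vector (equivalently, the largest correlation, since
    all +/-1 candidates have equal energy)."""
    distances = [np.sum((received - c) ** 2) for c in candidates]
    correlations = [np.dot(received, c) for c in candidates]
    best = int(np.argmin(distances))
    assert best == int(np.argmax(correlations))  # equal-energy candidates
    return candidates[best]

# Example: (3,1) repetition code; [1, 1, 1] was sent and picked up noise.
candidates = [np.array([-1, -1, -1]), np.array([1, 1, 1])]
received = np.array([0.9, -0.2, 1.4])
print(pick_codeword(received, candidates))  # -> [1 1 1]
```

Because every codeword whose positions are −1 or 1 has the same energy, minimizing the squared Euclidean distance and maximizing the correlation select the same candidate.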
In one aspect, a method of searching for candidate codewords for a telecommunications system may comprise receiving a constellation point, comparing the received point with candidate codewords within a Dorsch decoding process using an optimal pattern, and terminating the search when a codeword is found residing within a specified distance of the received point.
The foregoing and other aspects, features, and advantages will be apparent to those artisans of ordinary skill in the art from the DESCRIPTION and DRAWINGS, and from the CLAIMS.
Implementations will hereinafter be described in conjunction with the appended drawings, where like designations denote like elements.
Implementations of a method of searching for the candidate codewords in a Dorsch decoding process using an optimal pattern are disclosed. A Dorsch decoder is unusual in that it is not necessary to know how to construct a dedicated decoder for a given code; decoding is accomplished by using an encoder multiple times to iteratively search for the closest codeword to a received vector. Non-limiting examples of implementations of methods for terminating the search when a codeword is found residing within a specified distance of the received point are disclosed. In addition, various non-limiting examples of implementations of a method for selectively mapping the received point onto a hypercube when computing the distance to a given candidate codeword are also disclosed. In implementations of encoding and decoding systems disclosed in this document and the appendix to the previously filed provisional application, the disclosure of which was previously incorporated herein by reference, the various method and system implementations may serve to minimize the average number of codewords that need to be evaluated during the decoding process, correspondingly increasing the speed (data rate) at which the decoder can be operated. Additionally, non-limiting examples of how multiple decoder instantiations can be interconnected to increase the overall throughput of the decoder are disclosed.
In implementations of a method of searching for the candidate codewords in a Dorsch decoding process using an optimal pattern, and in implementations of a method of terminating the search when a codeword is found residing within a specified distance of the received point, the collection of candidate codewords to test against the received vector can be generated in an ordered manner such that the probability of each successive codeword occurring is monotonically decreasing. This ordering enables the most probable candidate codewords to be checked first.
A codeword for an (n,k) linear block code will contain n bits, k of which can be used to uniquely represent the original data that was to be sent (prior to being encoded into a codeword). These k bits can arbitrarily be copied to the first k bits of the encoded codeword, whereas the remaining n-k bits are parity bits, generated using the first k bits and an encoding process. When an encoded n-bit codeword is sent over a noisy channel, the magnitude of each of the sent bit positions becomes either more or less confident. The received vector can be sorted by the magnitude (or confidence) of each of its bit positions, with the bits of largest magnitude appearing first and the bits of least magnitude occurring last. In the sorted vector, the k most confident bits of the received vector can now be treated as if they were the original user data that was sent, and the n-k least confident bits can be treated as the parity bits during the candidate codeword generation and distance calculation/correlation process.
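By non-limiting example, the reliability sort may be sketched as follows (the function name is illustrative; returning the permutation so the ordering can be undone after decoding is an assumed detail, not stated above):

```python
import numpy as np

def sort_by_reliability(received):
    """Permute the received vector so the most confident (largest-magnitude)
    positions come first; return the permutation so it can be undone later."""
    order = np.argsort(-np.abs(received))   # indices in descending |magnitude|
    return received[order], order

received = np.array([0.3, -2.1, 1.8, -0.4, 1.1, 0.2, -0.9])
sorted_r, perm = sort_by_reliability(received)
print(sorted_r)  # [-2.1  1.8  1.1 -0.9 -0.4  0.3  0.2]
print(perm)      # [1 2 4 6 3 0 5]
```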
The process of generating candidate codewords requires creating perturbations to the first k bits (user data) of the base codeword of the sorted received vector and then using the perturbations in the comparison process. The sorted received vector may have a base codeword, formed by mapping each of the k most likely bit positions to −1 if the bit position value is less than 0, and to 1 otherwise. The remaining n-k bit positions are generated as if a codeword were being encoded with the first k bits, but with a modified generation method. In implementations of encoding and decoding methods disclosed in this document, the methods include steps that determine how to choose a collection of the first k bit positions to use during perturbation of the base codeword to enable generation of new candidate codewords.
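A minimal sketch of base-codeword formation follows. The reencode argument stands in for the "modified generation method" referenced above, which is not specified here; the toy single-parity stand-in is hypothetical and used only to make the sketch runnable:

```python
import numpy as np

def base_codeword(sorted_received, k, reencode):
    """Hard-decide the k most reliable positions, then regenerate the
    n-k parity positions with the supplied re-encoding function."""
    hard = np.where(sorted_received[:k] < 0, -1, 1)  # hard decisions in +/-1
    parity = reencode(hard)                          # n-k parity values in +/-1
    return np.concatenate([hard, parity])

# Hypothetical stand-in: a single overall-parity position.
def toy_reencode(hard_bits):
    return np.array([np.prod(hard_bits)])

r = np.array([-2.1, 1.8, 1.1, -0.9, -0.4])
print(base_codeword(r, k=4, reencode=toy_reencode))  # [-1  1  1 -1  1]
```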
If the noise on the communication channel can be described as Additive White Gaussian Noise (though the noise may take other forms in various implementations), the magnitude of each of the received bit positions can be classified as an LLR (log likelihood ratio), describing the log of the probability that a received bit position takes on one sent value versus the probability that it takes on the opposite value. The value of the LLR function monotonically increases for increasing received magnitudes. To introduce error patterns in a simple way, each of the k most reliable points in the received vector may be quantized to a fixed number of levels with a uniform integer scalar quantizer. Perturbation points may then be chosen if they are equal to a target LLR sum, or if any combination of the quantized points would reach that sum. A perturbation point may then have its hard decision value in the base codeword flipped, and subsequently a new codeword may be generated and tested using the perturbation point. If two points are included in a candidate codeword, the probability of both occurring simultaneously is described by the sum of each point's quantized magnitude. Accordingly, if the LLR sum starts at zero and increases by one only after all possible quantized magnitudes of the k most reliable positions have been used to try to reach that sum, candidate codewords will be tried in decreasing order of probability of occurrence, maximizing the opportunity for a matching codeword to be found at the beginning of the evaluation. For the exemplary purposes of this disclosure, an example is provided illustrating a particular evaluation flow of selection of candidate codewords for a (7,4) Hamming code. In the example, the notation p1, p2, etc. represents a parity bit.
The first k magnitudes of the sorted quantized received vector that are used to form the LLR sum: [18, 11, 11, 10].
Evaluation of target LLR sum values (the target starts at 0 and increases by one; whenever a combination of the quantized magnitudes sums exactly to the target, the corresponding hard-decision bits are flipped and the parity bits p1, p2, p3 are regenerated):

Target 0: no flips (the base codeword)
Target 10: flip position 4
Target 11: flip position 2; flip position 3
Target 18: flip position 1
Target 21: flip positions 2 and 4; flip positions 3 and 4
Target 22: flip positions 2 and 3
Target 28: flip positions 1 and 4
Target 29: flip positions 1 and 2; flip positions 1 and 3
Target 32: flip positions 2, 3, and 4
Target 39: flip positions 1, 2, and 4; flip positions 1, 3, and 4
Target 40: flip positions 1, 2, and 3
Target 50: flip all four positions
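A minimal sketch of this ordered enumeration, using the example magnitudes above, follows (the generator name is illustrative, and positions are 0-indexed in the code):

```python
from itertools import combinations

def patterns_in_order(magnitudes, max_target=None):
    """Yield (target LLR sum, positions to flip), sweeping the target from
    zero upward; every subset of the k quantized magnitudes that sums
    exactly to the current target yields one perturbation pattern."""
    k = len(magnitudes)
    if max_target is None:
        max_target = sum(magnitudes)
    for target in range(max_target + 1):
        for size in range(k + 1):
            for positions in combinations(range(k), size):
                if sum(magnitudes[i] for i in positions) == target:
                    yield target, positions

# The (7,4) Hamming-code example above, magnitudes [18, 11, 11, 10]:
for target, flips in patterns_in_order([18, 11, 11, 10]):
    print(target, flips)
# 0 ()        <- the base codeword (no flips)
# 10 (3,)
# 11 (1,)
# 11 (2,)
# 18 (0,)
# 21 (1, 3)
# 21 (2, 3)
# ... continuing in decreasing order of probability of occurrence
```

Enumerating subsets this way is exponential in k, so practical implementations bound the number of candidates generated, as discussed below.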
The foregoing evaluation process may be continued until all possible candidate codewords have been generated or until a fixed number of candidate codewords has been generated. If a candidate codeword is within a fixed squared distance of the received vector, it can be deemed to be the codeword that was sent across the channel, and no further codewords need be tested or generated.
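By non-limiting example, the early-termination loop may be sketched as follows (the threshold and candidate budget are illustrative parameters; no specific values are given in this document):

```python
def decode(received, candidate_generator, squared_distance, threshold,
           max_candidates=1024):
    """Track the closest candidate seen so far; stop early once one falls
    within the fixed squared-distance threshold, or once the candidate
    budget is exhausted."""
    best, best_dist = None, float("inf")
    for i, candidate in enumerate(candidate_generator):
        d = squared_distance(received, candidate)
        if d < best_dist:
            best, best_dist = candidate, d
        if best_dist <= threshold or i + 1 >= max_candidates:
            break  # close enough to be deemed the sent codeword, or budget spent
    return best
```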
In implementations of a method for selectively mapping the received point onto a hypercube when computing the distance to a given candidate codeword, when a squared distance calculation is made between a received vector, r, and candidate codeword, c, a bit position (dimension) in the codeword, ci, may have the same sign as the corresponding position (dimension), ri, in the received vector r. If both points agree in sign for a given dimension, and the magnitude of r in that dimension is greater than 1, there is a distance contribution that may be referred to as being ‘bad’ in that dimension. This overly confident position is good for a correlation measurement between the two vectors but is undesirable for a squared distance calculation because the distance is contributed from a dimension that has a high probability of being correct.
In implementations of the method, the bad distance is not included if the sign of a received bit position matches the sign of the same bit position in the prospective codeword and the magnitude of the received bit position is greater than 1. This effectively maps bit positions made extra confident by noise back onto a hyper-cube containing codewords as vertices when computing the distance from the candidate codeword.
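A minimal sketch of this selective mapping, assuming codeword positions take values of −1 or 1 (the function name is illustrative):

```python
import numpy as np

def clipped_squared_distance(received, candidate):
    """Squared distance that drops the 'bad' contribution: when a received
    position agrees in sign with the candidate and its magnitude exceeds 1,
    it is mapped back onto the +/-1 hypercube before measuring distance."""
    agree = np.sign(received) == np.sign(candidate)
    overconfident = np.abs(received) > 1
    r = np.where(agree & overconfident, candidate, received)
    return float(np.sum((r - candidate) ** 2))

r = np.array([1.7, -0.4, 2.2])
c = np.array([1.0, 1.0, 1.0])
print(clipped_squared_distance(r, c))  # 1.96: the two overconfident,
                                       # agreeing positions contribute nothing
```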
In implementations of a method of placing decoders like those disclosed in this document and in the appendix in an interconnected network, the overall decoding speed of a stream of received vectors 300 may be increased. In an interconnected network 310, any individual decoder 320 implementing the methods described in this document may be assigned any received vector 300. Each decoder 320 will decode the assigned received vector 300 and signal that decoding is complete, releasing the best-match codeword into an output buffer 330. The output buffer 330, which can be of any size, may release best-match codewords to a downstream receiver in the order the corresponding received vectors were originally received. The array of decoders may permit one received vector to be worked on for an extended period of time while still allowing other codewords to be simultaneously decoded.
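By non-limiting example, the reordering behavior of an output buffer like output buffer 330 may be sketched as follows (the class and method names are illustrative):

```python
import heapq

class ReorderBuffer:
    """Decoders finish out of order; best-match codewords are released
    downstream in the order the received vectors originally arrived."""
    def __init__(self):
        self.heap = []      # (sequence_number, codeword), a min-heap
        self.next_seq = 0   # next sequence number eligible for release

    def push(self, seq, codeword):
        heapq.heappush(self.heap, (seq, codeword))

    def pop_ready(self):
        """Yield codewords whose turn has come, in original order."""
        while self.heap and self.heap[0][0] == self.next_seq:
            yield heapq.heappop(self.heap)[1]
            self.next_seq += 1

buf = ReorderBuffer()
buf.push(1, "cw1"); buf.push(0, "cw0")  # decoder for vector 1 finished first
print(list(buf.pop_ready()))            # ['cw0', 'cw1']
```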
Implementations of encoding and decoding systems and related methods may reduce the average number of codewords that need to be evaluated during the decoding process; may allow significantly more received vectors to be deemed the codeword that was sent across the channel without substantially increasing the probability of a false identification; and may significantly increase the speed at which a stream of received vectors can be processed by using multiple decoders to decode the stream.
The materials used for implementations of encoding and decoding systems may be made of conventional materials used to make goods similar to these in the art, such as, by non-limiting example, plastic, metals, semiconductor materials, and composites. Those of ordinary skill in the art will readily be able to select appropriate materials and manufacture these products from the disclosures provided herein.
The implementations listed here, and many others, will become readily apparent from this disclosure. From this, those of ordinary skill in the art will readily understand the versatility with which this disclosure may be applied.
This document claims the benefit of the filing date of U.S. Provisional Patent Application 61/161,843, entitled “Encoding and Decoding Systems and Related Methods” to Banister et al., which was filed on Mar. 20, 2009, the disclosure of which is hereby incorporated entirely herein by reference.
Related U.S. Application Data:

Number | Date | Country
---|---|---
61161843 | Mar 2009 | US