The invention relates to LDPC encoders, decoders, systems and methods.
It has been demonstrated by a number of research works that the performance of LDPC (low density parity check) codes exceeds that of turbo codes, and can come within 0.045 dB of the Shannon limit. The message passing algorithm used for decoding allows parallel calculation and requires less memory and lower computational cost than the turbo decoding algorithm.
The message passing algorithm is based upon the following property of the parity check matrix H of a linear block code: any valid code word multiplied by the parity check matrix results in the zero vector. Another important property of the check matrix is that it is sparse: the number of ones in the matrix is a linear function of the code word size. Hence, the decoding complexity is a linear function of the code word length.
The availability of efficient decoding algorithms does not guarantee the availability of efficient encoding algorithms. As known from the theory of linear block codes, a generator matrix G is typically used to encode a message. The generator matrix G is related to the check matrix H by HG^T=0 mod 2. The encoding complexity increases quadratically with the encoded block length. In order to encode a code word of length ~10^3, the required number of operations is ~10^6, which would be difficult for practical application. One way to resolve this issue is to take advantage of the RA-IRA (repeat-accumulate / irregular repeat-accumulate) code concept, which allows linear-time encoding.
According to one broad aspect, the invention provides an LDPC encoder with linear complexity adapted to produce a systematic output and a parity output, the encoder comprising: a repeater that implements an irregular repetition code; an interleaver that performs interleaving on the repeater output; an accumulator performing accumulations on outputs of the interleaver and outputting the parity output.
In some embodiments, the interleaver is either an S-random or congruential interleaver.
In some embodiments, the accumulator performs irregular accumulations.
In some embodiments, the accumulator performs regular accumulations.
In some embodiments, the LDPC encoder further comprises a parallel to serial function between the repeater and the interleaver.
In some embodiments, the LDPC encoder is parameterized by a repetition pattern for the repeater, and a permutation for the interleaver.
In some embodiments, the repetition pattern and the permutation are optimized.
In some embodiments, the S-random interleaver is a semi-random algorithm with rejection ensuring that any s sequential pre-interleaving bits have a post-interleaving distance of not less than s.
In some embodiments, the LDPC code can be represented by a parity check matrix with dual diagonal structure:
In some embodiments, the entire code is representable in matrix form as follows:
where H is a matrix of size m-by-n, where n is the length of the code and m is the number of parity check bits in the code, where Pi,j is one of a set of z-by-z right-shifted identity matrices or a z-by-z zero matrix; the matrix H is expanded from a binary base matrix Hb of size mb-by-nb, where n=z·nb and m=z·mb, and z is a positive integer, the base matrix is expanded by replacing each 1 in the base matrix with a z-by-z right-shifted identity matrix, and each 0 with a z-by-z zero matrix; partitioning Hb into two sections where Hb1 corresponds to the systematic bits, and Hb2 corresponds to the parity-check bits; partitioning section Hb2 into two sections, where vector hb has odd weight, and H′b2 has a dual-diagonal structure with matrix elements at row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere:
where hb(0)=1, hb(mb−1)=1, and a third value hb(j), 0<j<(mb−1), equal to 1.
In some embodiments, the LDPC encoder is further adapted to allow construction for a set of different coding rates based on a common encoder/decoder featuring rate compatible check node processor construction and puncture based rate matching with check node concatenation.
According to another broad aspect, the invention provides an LDPC encoder having a base matrix structure that avoids having multiple weight-1 columns in an expanded matrix.
According to another broad aspect, the invention provides an LDPC encoder implementing an LDPC code that can be represented by a parity check matrix with dual diagonal structure:
In some embodiments, the entire code is representable in matrix form as follows:
According to another broad aspect, the invention provides a method of performing LDPC encoding to determine a parity sequence p given an information sequence s comprising: dividing the information sequence s into kb=nb−mb groups of z bits—let this grouped s be denoted u,
u=[u(0) u(1) . . . u(kb−1)],
where each element of u is a column vector as follows
u(i)=[siz siz+1 . . . s(i+1)z−1]T
using a model matrix Hb, determining the parity sequence p in groups of z—let the grouped parity sequence p be denoted v,
v=[v(0) v(1) . . . v(mb−1)],
where each element of v is a column vector as follows
v(i)=[piz piz+1 . . . p(i+1)z−1]T
performing an initialization step to determine v(0);
performing recursion to determine v(i+1) from v(i), 0≦i≦mb−2.
In some embodiments, an expression for v(0) is derived by summing over the rows of Hb to obtain
where x, 1≦x≦mb−2, is the row index of hb where the entry is nonzero and unpaired, and Pi represents the z×z identity matrix circularly right shifted by size i. Equation (1) is solved for v(0) by multiplying by Pp(x,kb)−1.
In some embodiments, the recursion is defined according to:
where
P−1≡0z×z.
In some embodiments, an LDPC decoder comprises: a parallel node processing structure, adapted for any selected code rate and interleaver selection, for use in decoding a code implemented in accordance with any one of the methods as summarized above.
An encoder provided by an embodiment of the invention is shown in block diagram form in
In order to increase the error correcting ability, irregular repetition of the systematic symbols, as well as irregular summing after the interleaver, can preferably be used. Hence, the following three tables can be used to set the code within the structure of
1. Table of the systematic part repetition;
2. Interleaver;
3. Table of summing after interleaving.
An example of a table containing both the systematic part repetition and summing information is shown below. The repetition table column identifies how the repeater output is generated from the systematic bit input. In the illustrated example, there is an eight bit systematic input. The repetition column indicates the total number of times each bit is included in the repeater output, and the summing table column indicates how many bits of the interleaver output are summed to produce each parity bit output. The summing table would control the production of parity bits 66 at the output XOR function 60 of the example of
We have 8 systematic bits; the repetition (left) column is applied cyclically to expand them to:
1+1+2+2+2+3+3+3=17 bits
These bits are interleaved, and then the summation (right) column is used to produce the parity bits:
3→1, 4→1, 5→1, 3→1, 2→1, giving 5 parity bits in total
The coding rate for this example is 8/(8+5)=8/13
As seen from the above table, the first and second bits are included once, the third, fourth and fifth bits are included twice, and the sixth, seventh and eighth bits are included three times. As seen from the above table, in order to obtain the first check bit, the XOR operation must be performed on 3 sequential bits at the interleaver output. The next check bit is obtained by the XOR operation on the next 4 bits of the interleaver output and the first check bit, and so on, the number of such bits being specified in the table. The use of a table to indicate how many times each systematic bit is repeated is simply one example of a structure that can be used to achieve this function. The number of repetitions of each bit will of course be a function of the particular code design, as will the number of systematic bits being used, i.e., the block size. Similarly, the number of parity bits generated will be a function of the particular code design, and the number of interleaver output bits used to generate a respective parity bit is a code-specific parameter. The encoding table of Table 1 is only a very specific example.
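By way of illustration, the following Python sketch reproduces the example above; the permutation here is only a placeholder (interleaver design is discussed below), and all names are illustrative rather than part of the invention:

    import random

    def ira_encode(systematic, repetition, summing, perm):
        # Repeater: each systematic bit appears the number of times
        # given in the repetition column
        repeated = []
        for bit, count in zip(systematic, repetition):
            repeated.extend([bit] * count)
        # Interleaver: permute the repeated stream
        interleaved = [repeated[p] for p in perm]
        # Accumulator: each parity bit is the XOR of the next group of
        # interleaved bits with the previous parity bit
        parity, prev, pos = [], 0, 0
        for group in summing:
            acc = prev
            for b in interleaved[pos:pos + group]:
                acc ^= b
            parity.append(acc)
            prev = acc
            pos += group
        return systematic + parity

    systematic = [1, 0, 1, 1, 0, 0, 1, 0]          # 8 systematic bits
    repetition = [1, 1, 2, 2, 2, 3, 3, 3]          # expands to 17 bits
    summing = [3, 4, 5, 3, 2]                      # 5 parity bits, rate 8/13
    perm = random.Random(0).sample(range(17), 17)  # placeholder permutation
    codeword = ira_encode(systematic, repetition, summing, perm)
    assert len(codeword) == 8 + 5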
In an example the irregular repetition factor is shown in
In a preferred embodiment, the interleaver is an “s-random” interleaver. The selected algorithm is a semi-random algorithm with rejection ensuring that any s sequential pre-interleaving bits have a post-interleaving distance of not less than s. As demonstrated, such an interleaving process converges if s≦√(N/2), where N is the interleaver length.
Such an approach to interleaving allows the exclusion of short cycles in the obtained matrix.
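A minimal sketch of such an s-random construction, assuming a simple restart-on-failure strategy (the function and parameter names are illustrative):

    import random

    def s_random_interleaver(n, s, seed=0, max_restarts=1000):
        # Any two positions within s of each other before interleaving
        # must be mapped at least s apart after interleaving.
        rng = random.Random(seed)
        for _ in range(max_restarts):
            candidates = list(range(n))
            rng.shuffle(candidates)
            perm, failed = [], False
            while candidates:
                for k, value in enumerate(candidates):
                    # Reject values closer than s to the last s choices
                    if all(abs(value - v) >= s for v in perm[-s:]):
                        perm.append(candidates.pop(k))
                        break
                else:
                    failed = True  # no admissible value: restart
                    break
            if not failed:
                return perm
        raise RuntimeError("no s-random permutation found; reduce s")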
Direct Encoding
In general, each of the LDPC codes is a systematic linear block code. Each LDPC code in the set of LDPC codes is defined by a matrix H of size m-by-n, where n is the length of the code and m is the number of parity check bits in the code. The number of systematic bits is k=n−m.
The matrix H is defined as an expansion of a base matrix and can be represented by
where Pi,j is one of a set of z-by-z right-shifted identity matrices or a z-by-z zero matrix. The matrix H is expanded from a binary base matrix Hb of size mb-by-nb, where n=z·nb, and m=z·mb, and z is a positive integer. The base matrix is expanded by replacing each 1 in the base matrix with a z-by-z right-shifted identity matrix, and each 0 with a z-by-z zero matrix. Therefore the design accommodates various packet sizes by varying the submatrix size z.
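A sketch of this expansion, assuming the common convention that a model matrix stores the shift value of each non-zero block and −1 for a zero block:

    import numpy as np

    def expand_base_matrix(model, z):
        # Entry s >= 0 becomes the z-by-z identity circularly
        # right-shifted by s; entry -1 becomes the z-by-z zero matrix.
        mb, nb = model.shape
        H = np.zeros((mb * z, nb * z), dtype=np.uint8)
        for i in range(mb):
            for j in range(nb):
                s = model[i, j]
                if s >= 0:
                    H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(
                        np.eye(z, dtype=np.uint8), s % z, axis=1)
        return H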
It is known that such an Hb can be partitioned into two sections where Hb1 corresponds to the systematic bits, and Hb2 corresponds to the parity-check bits.
According to an aspect of the invention, section Hb2 is further partitioned into two sections, where vector hb has odd weight, and H′b2 has a dual-diagonal structure with matrix elements at row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere:
The base matrix has hb(0)=1, hb(mb−1)=1, and a third value hb(j), 0<j<(mb−1), equal to 1. The base matrix structure avoids having multiple weight-1 columns in the expanded matrix; this can be realized by optimization of the interleavers.
In particular, the non-zero submatrices are circularly right shifted by a particular circular shift value. Each 1 in H′b2 is assigned a shift size of 0, and is replaced by a z×z identity matrix when expanding to H. This allows the realization of the dual diagonal structure with simple recursive circuitry. The two 1s located at the top and the bottom of hb are assigned equal shift sizes, and the third 1 in the middle of hb is given an unpaired shift size. The unpaired shift size is 0.
Encoding is the process of determining the parity sequence p given an information sequence s. To encode, the information block s is divided into kb=nb−mb groups of z bits. Let this grouped s be denoted u,
u=[u(0) u(1) . . . u(kb−1)],
where each element of u is a column vector as follows
u(i)=[siz siz+1 . . . s(i+1)z−1]T
Using the model matrix Hb, the parity sequence p is determined in groups of z. Let the grouped parity sequence p be denoted v,
v=[v(0) v(1) . . . v(mb−1)],
where each element of v is a column vector as follows
v(i)=[piz piz+1 . . . p(i+1)z−1]T
Encoding proceeds in two steps, (a) initialization, which determines v(0), and (b) recursion, which determines v(i+1) from v(i), 0≦i≦mb−2.
An expression for v(0) can be derived by summing over the rows of Hb to obtain
where x, 1≦x≦mb−2, is the row index of hb where the entry is nonzero and unpaired, and Pi represents the z×z identity matrix circularly right shifted by size i. Equation (1) is solved for v(0) by multiplying by Pp(x,kb)−1.
Considering the structure of H′b2, the recursion can be derived as follows,
where
P−1≡0z×z.
Thus all parity bits not in v(0) are determined by evaluating Equation (2) for 0≦i≦mb−2.
Equations (1) and (2) completely describe the encoding algorithm. These equations also have a straightforward interpretation in terms of standard digital logic architectures. In particular, they are easily implemented using the encoding architecture described with reference to
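As a non-authoritative sketch of these two steps (the shift conventions are assumptions carried over from the expansion sketch above, not a statement of the exact embodiment):

    import numpy as np

    def shift_mult(s, vec, z):
        # P_s * vec, with P_s the z-by-z identity circularly
        # right-shifted by s; s == -1 denotes the zero matrix
        if s < 0:
            return np.zeros(z, dtype=np.uint8)
        return np.roll(vec, -s)

    def encode(model, u, z):
        # model: mb-by-nb array of shifts; u: list of kb length-z arrays
        mb, nb = model.shape
        kb = nb - mb
        # Initialization: summing all rows cancels the dual-diagonal
        # terms and the paired 1s of hb, leaving Equation (1)
        acc = np.zeros(z, dtype=np.uint8)
        for i in range(mb):
            for j in range(kb):
                acc ^= shift_mult(model[i, j], u[j], z)
        x = next(i for i in range(1, mb - 1) if model[i, kb] >= 0)
        v = [np.roll(acc, model[x, kb])]  # multiply by the inverse shift
        # Recursion (Equation (2)): v(i+1) from v(i) and v(0)
        for i in range(mb - 1):
            nxt = shift_mult(model[i, kb], v[0], z)
            if i > 0:
                nxt ^= v[i]
            for j in range(kb):
                nxt ^= shift_mult(model[i, j], u[j], z)
            v.append(nxt)
        return v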
Basic Decoder Structure
The following modifications of the decoding algorithm are possible:
1. Min-sum algorithm (analog of Max-Log MAP)
2. Sum-product (analog of Log-MAP)
With the well known graph-decoding concept, the decoding is performed iteratively, i.e., the purpose of each iteration is to provide the a priori information to the next iteration. Hence, the search for the nearest valid code word is performed by a method of successive approximation. It has been shown that the result is close to the maximum likelihood decision.
An example of a message-passing algorithm block diagram is shown in
For the RA codes, the internal interleaver of the decoder differs from the internal interleaver of the encoder in that the parity bits are included in the interleaving process. See
The tables of repetition and summing are changed according to the interleaver. The systematic nodes and parity nodes are not differentiated while decoding, but are both considered as variable nodes.
The operations calculating outgoing messages from the variable node to the check node include summing all messages received by the variable node from the other check nodes, except the one that is to receive the message, plus a soft decision received from the demodulator. See
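A minimal sketch of this variable node update (the extrinsic sum); the function name is illustrative:

    import numpy as np

    def variable_node_update(channel_llr, incoming):
        # Outgoing message on each edge: channel soft decision plus
        # the incoming check node messages on all other edges
        incoming = np.asarray(incoming, dtype=float)
        total = channel_llr + incoming.sum()
        return total - incoming  # remove each edge's own contribution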
During calculation of the outgoing message from the check node to the variable node the incoming message from the variable node, for which the outgoing message is calculated, is neglected.
The operation of message calculation from the check node to the variable node has two parts:
Definition of the message sign, and
Definition of the message module.
The message sign is defined by the condition that the check sum equals zero. Hence, the sum of incoming messages from the variable nodes is calculated, and the obtained sign is assigned to the outgoing message.
A Simple Way to Calculate the Outgoing Message from the Check Node (Min-Sum).
The outgoing message module is calculated by the function of
f(a,b,c,d)=f(a,f(b,f(c,f(d))))
f(d)=d
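In the min-sum case the pairwise operation reduces to the minimum of the magnitudes, with the sign handled as described above; the following is a sketch of the resulting check node update (the standard min-sum approximation, not necessarily the exact procedure of the embodiment):

    import numpy as np

    def check_node_min_sum(msgs):
        # Per edge: magnitude = minimum magnitude over the other edges
        # (the recursion above with f = min), sign = product of the
        # other signs
        msgs = np.asarray(msgs, dtype=float)
        signs = np.where(msgs < 0, -1.0, 1.0)
        parity = np.prod(signs)
        mags = np.abs(msgs)
        k = int(np.argmin(mags))           # edge with smallest magnitude
        min1 = mags[k]
        min2 = np.min(np.delete(mags, k))  # second smallest
        out = np.full(len(msgs), min1)
        out[k] = min2                      # minimum edge gets the runner-up
        return parity * signs * out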
There are several methods of setting the function f.
The function E(x)=log(1−exp(−|x|)), widely used in turbo coding, can be stored as a table.
There is a modification of method 2, where:
Taking the commutative property of the function f into consideration, let us consider the elementary “box-sum” operation, or calculation of the log-likelihood ratio.
where p is the a posteriori probability that 1 was transmitted, on the condition that the received signal is equal to x; the signaling is BPSK (+1, −1) and the noise is AWGN with variance σ^2.
Correspondingly, λ is the decision function, i.e., the log-likelihood ratio.
Statement of the problem: what is the a posteriori probability p3 (the log-likelihood ratio λ3) of
The complete set of events is constructed in order to solve the above-stated problem, as shown in
on the condition that the two inputs are independent:
λ3=ln(e^λ1+e^λ2)−ln(1+e^(λ1+λ2))=E(λ1,λ2)−E(λ1+λ2,0)
where the function E, widely used in turbo coding, is defined as
E(a,b)=ln(e^a+e^b)=max(a,b)+ln(1+e^−|a−b|) (7)
Hence, the message with a correct sign can be calculated at once. In addition to two calls of the function E, only one addition and one subtraction are performed.
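A sketch of this box-sum with the convention λ=ln(p/(1−p)) used above (math.log1p is used for numerical stability; the exact sign convention of the embodiment is an assumption):

    import math

    def E(a, b):
        # E(a,b) = ln(e^a + e^b) = max(a,b) + ln(1 + e^-|a-b|)   (7)
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def box_sum(l1, l2):
        # LLR of the XOR of two bits: two calls of E, one addition,
        # one subtraction; the sign comes out correctly by itself
        return E(l1, l2) - E(l1 + l2, 0.0)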
Another Way to Compute the Log-Likelihood Ratio (Probability Decoding—tanh( ) Rule).
As shown in (3), the probability p3 can be represented by
p3=p1(1−p2)+p2(1−p1)=p1+p2−2p1p2 (8)
Multiplying both sides by 2 and subtracting 1, we obtain:
2p3−1=2p1+2p2−4p1p2−1 (9)
Factoring the right-hand side:
(2p1−1)(2p2−1)=4p1p2−2p1−2p2+1 (10)
Substituting the factored form, we obtain:
(2p3−1)=−(2p1−1)(2p2−1) (11)
or, taking logarithms of the absolute values and excluding the case 2pi−1=0:
ln|2p3−1|=ln|2p1−1|+ln|2p2−1|, pi≠0.5 (12)
where the logarithm is expressed through λ as follows:
Let us consider the properties of the function
The function has following properties:
1. the function is even: f(λ)=f(−λ)
2. the function is plotted in
3. the function is not defined at 0
4. the function is self-inverse: f(f(x))=x
The operation sequence of computations at the check node is given below.
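The original sequence is not reproduced here; the following is a hedged sketch consistent with the properties above, assuming f(λ)=ln((e^|λ|+1)/(e^|λ|−1))=−ln tanh(|λ|/2):

    import math

    def f(x):
        # even; self-inverse for x > 0 (f(f(x)) = x); undefined at 0
        x = abs(x)
        return math.log((math.exp(x) + 1.0) / (math.exp(x) - 1.0))

    def check_node_tanh_rule(msgs):
        # Per edge: sign = product of the other signs; magnitude =
        # f(sum of f(|m|) over the other edges)
        out = []
        for k in range(len(msgs)):
            sign, total = 1.0, 0.0
            for i, m in enumerate(msgs):
                if i == k:
                    continue
                sign = -sign if m < 0 else sign
                total += f(m)
            out.append(sign * f(total))
        return out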
Two algorithms of check node operation are provided: “E-presentation” and “tanh( ) rule”. Both are lossless and give the ML decision. Consideration of these methods is continued in the H/W architecture design section. The next section provides a comparison with the turbo code in a simulator using the E-presentation.
Simulation and Turbo-Code Comparison
The first simulation aim is to perform a comparison at the fixed rate ½. Then the rate-compatible LDPC code based on the repeat-accumulate encoder structure is introduced. The next simulation aim is to compare with the punctured 3GPP2 turbo decoder specified below in Table 4.
Input Signal Quantization
If the E function table is used, the input signal must be converted to unit variance and represented as
where r1 and r2 are the distances to the respective constellation points. Then 8-bit quantization is performed with a step of
The E function table is initialized with the same step.
Code Parameters and Simulation Results
The comparison is performed at rate ½; the information block size is 1000 bits. For the LDPC IRA code an irregular repetition table is selected (
For the purpose of comparison, the 3GPP2 turbo code is selected.
The rate-compatible LDPC decoder based on the RA encoder structure is obtained by concatenating check nodes demarcated by punctured parity symbols. The parity sum at a concatenated check node necessarily equals zero if the component nodes have zero check sums. The
At the encoder side the rate-compatible code is generated by changing the switching period of the “Parity Symbol End” switch. At the decoder side the number of the permuted messages to the check node corresponds to the parity symbol generation period. The simulation results and the 3GPP2 turbo decoder comparison are given in the
The above table has been selected for the repetition factor α=7, and an optimized irregular structure of the repetition table is shown in
The rate-compatible LDPC IRA codes show no losses relative to turbo codes, except at the very low rates (⅛, ¼) and the very high rate (⅘). The repetition table is optimized in order to obtain the best performance at the main rates (½, ⅓, ⅔, ¾). In this case the rate ⅛ is worse than the rate ¼, i.e., the data should be repeated twice at the rate ¼ rather than coded at the rate ⅛. In addition, the error floor for the IRA code is much lower in comparison to the turbo code.
The average repetition factor of 7 is selected based on the lowest code rate, ⅛.
Also it should be noted that the described code is a rate compatible code, i.e., it forms a variable rate code family, allowing HARQ-II implementation.
The following table presents the rate set and the simulation results with a dual-maxima demodulator versus the SNR per modulation symbol.
IRA-7 Codes Performance Conclusion
There are BER and FER curves versus the modulation symbol energy-to-noise ratio. All MCS from Table 4 are presented. The remaining problem is that the code matrix for each rate is different. The code developed in this project, according to the aims of the project, is rate compatible (⅛-⅘). This means that the high rates are obtained by puncturing the mother code. The systematic part of the code must be the same and stable. However, the obtained results give good benchmarks for code comparison.
H/W Architecture Design and Complexity Analysis
Basic LDPC Decoder Architecture.
An example decoder structure is shown in
The main implementation problem of such an architecture is the FPGA limitation on RAM access. In particular, the Virtex 812EM implements dual-port access to the 4k memory blocks. That is, within one clock either two numbers can be read from such a block, or one number can be read and another one written. Hence, the implementation of parallel message processing requires arranging the messages from the variable nodes across memory banks so that the messages processed by the check node processor are read from different memory banks and written to different memory banks, avoiding access conflicts.
The structure of parallel check node processor is shown in
where Fd is the FPGA clock rate, Z is the parallel factor of computation, R is the coding rate, α is the average code symbol repetition factor, and I is the number of iterations.
Z is equal to the number of messages computed in parallel. The number of corresponding data bits processed in one FPGA clock is computed and divided by the number of iterations to obtain the throughput and decoder operation speed. The following formula is applied to the parallel LDPC decoder to compute its operation speed: T=(Z·Fd)/(I·α).
Interleaver Design
There are two approaches to interleaver design.
Performance losses are observed for the algebraic interleaver at rates R>½ (i.e., ⅔, ¾, ⅘). It is therefore not applicable for a rate-compatible decoder with rates in the range ⅛-⅘.
A random interleaver with the RA encoder structure is the focus of the present work.
The requirements on the interleaver can be denoted as
Considering the RA code structure one can conclude that
The random interleaver indices need to be stored. The implementation for a 1000-bit block with repetition factor 7 requires an interleaver of size 7000, which takes 7000×14 bits. On-the-fly index generation is not a good idea because it consumes FPGA clocks. The solution is to divide the interleaver into small interleavers (their number equal to the parallel factor), each uploaded into an FPGA RAM block. Then each interleaver index requires only 8 bits (see
The RAM blocks in high-end FPGAs are large (at least 18 kbit). Such a block will not be full, so it makes no difference whether 8 or 16 bits are stored per entry.
A good interleaver can be found through random search.
The interleaver memory requirement can be further reduced by using the 2 following techniques:
1. even-odd symmetry
2. expanded interleaver
The even-odd interleaver is a symmetrical interleaver: it allows swapping every odd position with an even position, and vice versa. This interleaver structure satisfies the two restrictions:
Odd-to-even conversion: i mod 2≠π(i) mod 2, ∀i
Symmetry: π(i)=j⇒π(j)=i
Assume that only the odd positions of the interleaver vector are stored. All the stored addresses are then even integers, implying that the LSB in the binary representation of each address is always zero, offering an additional memory saving. The operation of the odd-even interleaver is shown in
In order to preserve the odd-even symmetry property while expanding the interleaver length, we need to insert two undefined elements after every second entry in the original interleaver. This modification ensures that each element on an even position in the original interleaver remains on an even position after expansion, and vice versa.
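A small sketch of this storage scheme, assuming π is stored only at the odd positions (every stored value is then even and its LSB can be dropped):

    def reconstruct_even_odd(stored):
        # stored[k] holds pi(2k+1), always an even value; the symmetry
        # pi(pi(i)) = i recovers the entries at the even positions
        n = 2 * len(stored)
        perm = [0] * n
        for k, even_value in enumerate(stored):
            odd_pos = 2 * k + 1
            perm[odd_pos] = even_value   # odd position -> even value
            perm[even_value] = odd_pos   # symmetry fills even positions
        return perm

    # Example: a length-6 even-odd symmetric permutation
    perm = reconstruct_even_odd([4, 0, 2])
    assert all(perm[perm[i]] == i for i in range(6))
    assert all(i % 2 != perm[i] % 2 for i in range(6))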
Rate Compatible Check Node Processor
The architecture of the rate-compatible check node processor is now considered. The high rates are obtained by puncturing parity symbols and by concatenating elementary check nodes demarcated by these symbols. The structure of the elementary check node processors is shown in
The following variants are possible starting from the rate ½.
The considered variant has a delay of 8 “box-sum” clocks and requires 7 “box-sum” modules simultaneously at the 8th clock, and 2 modules at all other clocks. Another implementation variant of the check node processor is also possible.
Four modules are required at the first clock, 6 at the second clock, 8 at the third clock, and 7 at the final clock. The peak number of modules is increased up to 8, and the number of required clocks is reduced down to 4. In addition, this processor can be used at two rates: (close to) ⅓ and ½. The results for check node 3 are ready at the third clock.
However, in order to align the rate to ⅓, the combining in the check node is performed with a variable period of 3-4, while the considered architecture generates check nodes by 3. This leads to the following variant of the check node implementation for the rates ¼, ⅓, ½, as shown in
The cells with “1” are initialized with the greatest number for the selected quantization, which corresponds to logic one.
The clocks:
1st clock→5 “box-sum” modules
2nd clock→4+4+3=11 “box-sum” modules—peak load
3rd clock→6 “box-sum” modules
4th clock→7 “box-sum” modules
Hence, for the rate-compatible check node processor, 11 “box-sum” modules (max) and 7 “box-sum” modules (min) are necessary. The computation delay takes 4 clocks of the “box-sum” module (min) and 7 clocks (max).
Rate set construction is shown in
Hence, the rates ⅔ and ⅘ are left out of a quite easily implemented rate set
Complexity-Reduced Node Processor Based on the ln((exp(x)−1)/(exp(x)+1)) Function (tanh( ) Rule Based Architecture).
Another architecture of reduced complexity is shown in the
The LUT (Table 4) implements a direct conversion of the form −ln((exp(x)−1)/(exp(x)+1)). Due to the self-inverse property of this function, the same table is used both for forward and reverse conversion.
Increasing the bit width beyond 6 bits is not a problem. The input values in this case are quantized to 7 bits without zero: 6 bits carry the soft decision magnitude and the 7th bit is the sign bit. In this case the LUT can be extended with the entry f(0)=0. This zero bears no physical meaning and will be used for combining different node processors in the variable rate decoder.
The adders
The structure of the variable node processor, processing a greater number of messages, is similar to that shown in
Implementation Issues and Operation Speed Estimation
The choice of the repeat-accumulate encoding scheme is made for the following reason: the decoder operation speed is estimated as T=(Z·Fd)/(I·α),
where Z is the interleaver parallel factor, α is the repetition factor of the systematic bits, Fd is the FPGA board clock rate, and I is the number of iterations.
The following problem is considered in the next section: how to upload a parallel data stream to the interleaver with irregular repetition. It is possible to combine symbols with different degrees of repetition into fixed-length blocks whose length equals the parallel factor of the interleaver.
Irregular Codes
The purpose of computations at the variable node processor is reduced to calculation of the sum of all node-related messages except the message for which a response message is calculated.
For a parity variable node of degree two, this operation is trivial: the two numbers exchange places. This operation is performed directly after the check node processor and is an analog of the systematic node interleaving. In order to simplify the variable node processing, the irregular repetition table is selected so that the number of repeated symbols is a multiple of the number of memory banks used for message storage. In the present case this number equals 28, which is also the greatest number of messages that can be processed in parallel by the check node processor. Obviously, the number of messages processed in parallel cannot exceed this number.
The irregular repetition table can be organized in the following manner:
There are tolerable sequences of repeated symbols, such that the number of repeated symbols is a multiple of 7 and does not exceed 28; a short check of these groupings is sketched after the list:
3 3 3 5—4 information symbols are in all repeated 14 times
7—any number of messages can be repeated 7 times
11 3—the first bit is repeated 11 times, and the next bit is repeated 3 times
13 15—two data bits are in all repeated 28 times
28—the greatest number of repetitions of one bit.
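The admissibility rule behind these groupings can be checked with a few lines of Python (the function name is illustrative):

    def tolerable(group, banks=28):
        # a group is admissible when its total repetition count is a
        # multiple of 7 and does not exceed the 28 memory banks
        total = sum(group)
        return total % 7 == 0 and total <= banks

    for g in ([3, 3, 3, 5], [7], [11, 3], [13, 15], [28]):
        assert tolerable(g)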
In addition to calculating the outgoing messages from the variable nodes, the processor generates the output soft decisions for the decoder. Such soft decisions can be used directly for data unloading in the course of an iteration. Here the operation “+” is signed summation.
On
The operation speed estimate is T=(Z*Fd)/(I*α)=(28*Fd)/(I*7). Let the number of iterations be I=24 and Fd=100 MHz: (28*100 MHz)/(24*7)=100 MHz/6≈16.7 Mbit/s. A maximum of 32 parity symbols need to be uploaded in parallel, so a minimum of 16 blocks is needed to store them. Memory allocation: 7000/250+16=28+16=44 RAM blocks minimum.
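The same estimate in code form (the regular-code figures from the later section are included for comparison):

    def throughput_mbit_s(z, fd_mhz, iterations, alpha):
        # T = (Z * Fd) / (I * alpha)
        return z * fd_mhz / (iterations * alpha)

    print(throughput_mbit_s(28, 100, 24, 7))  # irregular: ~16.7 Mbit/s
    print(throughput_mbit_s(72, 100, 24, 3))  # regular:   100.0 Mbit/s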
Simple Puncturing or Check Node Concatenation.
Simple puncturing works, but has a 1 dB loss at FER=0.01 compared with check node concatenation. But the algorithms of check node concatenation (
Regular Codes
To increase the throughput and operation speed, regular repeat-accumulate codes are implemented. The repetition factor is decreased to 3; that means every systematic bit is repeated only 3 times. The parallel factor of 72 is achieved by reducing the interleaver diversity (S-factor). The interleaver size is 9000 and the systematic part size is 3000. The coding rate is ½.
The performance losses of regular codes are negligible. There is no problem with parallel data stream uploading. See
The operation speed estimate is T=(Z*Fd)/(I*α)=(72*Fd)/(I*3). Let the number of iterations be I=24 and Fd=100 MHz: (72*100 MHz)/(24*3)=100 Mbit/s. A maximum of 48 parity symbols need to be uploaded in parallel, so a minimum of 24 blocks is needed to store them. Memory allocation: 9000/125+48=72+48=120 RAM blocks minimum.
Impact of Reducing the Repetition Factor
The following simulation gives the answer to this question. The same interleaver is used with data repetition factors 3, 2, and 1. The first code operates at rate 3000/(3000+3000)=½, the next operates at rate 4500/(4500+3000)=⅗, and the last one operates at rate 9000/(9000+3000)=¾.
Reducing the repetition factor from 3 to 2 gives 3 dB of loss, and reducing it from 3 to 1 gives 7 dB of loss at BER=10^−6. See
Impact of Simple Puncturing
Simple puncturing works (⅔, ¾, ⅘) but has a 1 dB loss relative to the check node concatenation algorithm described in this report. See
The Impact of Reducing the Decoding Iteration Number
Examples are shown in
Numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
This application claims the benefit of PCT Application No. PCT/CA2005/000505 filed Apr. 4, 2005. This application claims the benefit of U.S. Provisional Application Nos. 60/558,566 filed Apr. 2, 2004 and 60/563,815 filed Apr. 21, 2004, hereby incorporated by reference in their entirety.