HIGH-SPEED TURBO CODES DECODER FOR 3G USING A PIPELINED SISO LOG-MAP DECODER ARCHITECTURE

Abstract
A Turbo Codes Decoder with diversity processing computes signals from two separate antennas, making full use of so-called "multipath" signals that arrive at the terminal via different routes after being reflected from buildings, trees or hills. Diversity processing can increase the signal-to-noise ratio (SNR) by more than 6 dB, which enables a 3G system to deliver data rates of up to 2 Mbit/s. The invention encompasses several improved Turbo Codes Decoder methods and apparatus that provide a more suitable, practical and simpler way of implementing a Turbo Codes Decoder in ASIC or DSP code. (1) Two parallel Turbo Codes Decoder blocks compute soft-decoded data RXDa, RXDb from two different receiver paths. (2) Two pipelined Log-MAP decoders are used for iterative decoding of received data. (3) A Sliding Window of Block N data is used on the input Memory for pipeline operations. (4) The output block N data from the first decoder A are stored in RAM memory A, and the second decoder B stores its output data in RAM memory B, such that in pipeline mode Decoder A decodes block N data from RAM memory B while Decoder B decodes block N data from RAM memory A in the same clock cycle. (5) Log-MAP decoders are simpler to implement in ASIC and DSP code, using only Adder circuits, and have low power consumption. (6) The pipelined Log-MAP decoder architecture provides high-speed data throughput, one output per clock cycle.
Description

Summary of Invention

[0011] The present invention concentrates only on the Turbo Codes Decoder with diversity processing, to implement a more efficient, practical and suitable architecture and method that achieve the requirements for 3G wireless systems: higher speed data throughput, lower power consumption, lower cost, and suitability for implementation in ASIC or DSP code. The present invention encompasses several improved and simplified Turbo Codes Decoder methods and apparatus that deliver higher speed and lower power, especially for 3G applications. Diversity processing can increase the signal-to-noise ratio (SNR) by more than 6 dB, which enables 3G systems to deliver data rates up to 2 Mbit/s. As shown in FIGURE 4., our Turbo Codes Decoder utilizes two parallel Turbo Codes Decoders for diversity processing. Each Turbo Codes Decoder has serially concatenated SISO Log-MAP Decoders. The two decoders function in a pipelined scheme with delay latency N: while the first decoder is decoding data in the second-decoder Memory, the second decoder is decoding data in the first-decoder Memory, which results in a decoded output every clock cycle. As shown in FIGURE 6., our Turbo Codes Decoder utilizes a Sliding Window of Block N on the input buffer memory to decode one block of N data at a time, to improve processing efficiency. Accordingly, several objects and advantages of our Turbo Codes Decoder are:


[0012] To implement diversity processing to increase the signal to noise ratio (SNR).


[0013] To deliver higher speed throughput and be suitable for implementation in ASIC or DSP code.


[0014] To utilize a SISO Log-MAP decoder for faster decoding and simplified implementation in ASIC and DSP code, with the use of binary adders for computation.


[0015] To perform iterative decoding of data back-and-forth between the two Log-MAP decoders in a pipelined scheme until a decision is made. In such a pipelined scheme, a decoded output is produced each clock cycle.


[0016] To utilize a Sliding Window of Block N on the input buffer memory to decode one block of N data at a time, to improve pipeline processing efficiency.


[0017] To achieve higher performance in terms of symbol error probability and low BER (10^-6) for 3G applications such as 3G W-CDMA and 3G CDMA2000, operating at very high bit rates of up to 100 Mbps in a low-power, noisy environment.


[0018] To utilize a simplified and improved architecture of the SISO Log-MAP decoder, including a branch-metric (BM) calculations module, a recursive state-metric (SM) forward/backward calculations module, an Add-Compare-Select (ACS) circuit, a Log-MAP posteriori probability calculations module, and an output decision module.


[0019] To reduce the complexity of multiplier circuits in the MAP algorithm by performing the entire MAP algorithm in the Log-Max approximation (ln(e^a + e^b) ≈ max(a, b)), with the use of binary adder circuits, which are more suitable for ASIC and DSP code implementation while still maintaining a high level of output performance.


[0020] To design an improved Log-MAP Decoder using a hardware description language (HDL) such as Verilog, SystemC or VHDL, which can be synthesized into custom ASIC and FPGA devices.


[0021] To implement an improved Log-MAP Decoder on a DSP (digital signal processor) using optimized high-level language code in C or C++, or assembly language.


[0022] Still further objects and advantages will become apparent to one skilled in the art from a consideration of the ensuing descriptions and accompanying drawings.





Brief Description of Drawings

[0023]
FIGURE 1. is a typical 3G Receiver Functional Block Diagram which uses a Turbo Codes Decoder for error correction. (Prior Art).


[0024]
FIGURE 2. is an example of a 16-state Superorthogonal Turbo Code (SOTC) Encoder with Walsh code generator. (Prior Art).


[0025]
FIGURE 3. is a block diagram of the 8-state 3GPP Parallel Concatenated Convolutional Codes. (Prior Art).


[0026]
FIGURE 4. is the Turbo Codes Decoder System Block Diagram showing Log-MAP Decoders, Interleavers, Memory Buffers, and control logics.


[0027]
FIGURE 5. is a Turbo Codes Decoder State Diagram.


[0028]
FIGURE 6. is the Block N Sliding Window Diagram.


[0029]
FIGURE 7. is a block diagram of the SISO Log-MAP Decoder showing the Branch Metric module, State Metric module, Log-MAP module, and State and Branch Memory modules.


[0030]
FIGURE 8a. is the 8-State Trellis Diagram of a SISO Log-MAP Decoder used for the 3GPP 8-state PCCC Turbo codes.


[0031]
FIGURE 8b. is the 16-State Trellis Diagram of a SISO Log-MAP Decoder used for the Superorthogonal Turbo codes (SOTC).


[0032]
FIGURE 9. is a block diagram of the BRANCH METRIC COMPUTING module.


[0033]
FIGURE 10a. is a block diagram of the Log-MAP computing for u=0.


[0034]
FIGURE 10b. is a block diagram of the Log-MAP computing for u=1.


[0035]
FIGURE 11. is a block diagram of the Log-MAP Compare & Select maximum logic for each state.


[0036]
FIGURE 12. is a block diagram of the Soft Decode module.


[0037]
FIGURE 13. is a block diagram of the Computation of Forward Recursion of State Metric module (FACS).


[0038]
FIGURE 14. is a block diagram of the Computation of Backward Recursion of State Metric module (BACS).


[0039]
FIGURE 15. shows the State Metric Forward computing of Trellis state transitions.


[0040]
FIGURE 16. shows the State Metric Backward computing of Trellis state transitions.


[0041]
FIGURE 17. is a block diagram of the State Machine operations of Log-MAP Decoder.


[0042]
FIGURE 18. is a block diagram of the BM dual-port Memory Module.


[0043]
FIGURE 19. is a block diagram of the SM dual-port Memory Module.


[0044]
FIGURE 20. is a block diagram of the De-Interleaver dual-port RAM Memory Module for interleaved input R2.


[0045]
FIGURE 21. is a block diagram of the dual RAM Memory Module for input R0,R1.


[0046]
FIGURE 24. is a block diagram of the intrinsic feedback Adder of the Turbo Codes Decoder.


[0047]
FIGURE 23. is a block diagram of the Iterative decoding feedback control.





Detailed Description



Turbo Codes Decoder



[0048] An illustration of a 3GPP 8-state Parallel Concatenated Convolutional Code (PCCC), with coding rate 1/3 and constraint length K=4, shown in FIGURE 3., using the SISO Log-MAP Decoders of FIGURE 4., is provided for simplicity in describing the invention.


[0049] In accordance with the invention, a diversity processing Turbo Codes Decoder comprises two parallel blocks 40a, 40b of Turbo Codes Decoders, one for each path of received data RXDa and RXDb. Each identical Turbo Codes Decoder block 40a, 40b has concatenated max Log-MAP SISO Decoders A 42 and B 44 connected in a feedback loop, with Interleaver Memory 43 and Interleaver Memory 45 in between. The soft output of Turbo Codes Decoder block 40a is fed back into the input of Turbo Codes Decoder block 40b. Conversely, the soft output of Turbo Codes Decoder block 40b is fed back into the input of Turbo Codes Decoder block 40a. The sum of the two outputs Z1, Z3 of the Turbo Codes Decoder blocks 40a, 40b is fed into the Hard-Decoder to generate the output data Y.


[0050] Signals Ra2, Ra1, Ra0 are the received soft decisions of data path A from the system receiver. Signals X01 and X02 are the output soft decisions of the Log-MAP Decoders A 42 and B 44 respectively, which are stored in the Interleaver Memory 43 and Memory 45 modules. Signals Z2 and Z1 are the outputs of the Interleaver Memory 43 and Interleaver Memory 45. Z2 is fed into Log-MAP decoder B 44, and Z1 is looped back into Log-MAP decoder A 42 through an Adder 231.


[0051] Signals Rb2, Rb1, Rb0 are the received soft decisions of data path B from the system receiver. Signals X01 and X02 are the output soft decisions of the Log-MAP Decoders A 42 and B 44 respectively, which are stored in the Interleaver Memory 43 and Memory 45 modules. Signals Z4 and Z3 are the outputs of the Interleaver Memory 43 and Interleaver Memory 45. Z4 is fed into Log-MAP decoder B 44, and Z3 is looped back into Log-MAP decoder A 42 through an Adder 231.


[0052] In accordance with the invention, signal Z3 is fed back into Log-MAP decoder A 42 of block 40a through an Adder 231, and signal Z1 is fed back into Log-MAP decoder A 42 of block 40b through an Adder 231, for diversity processing.


[0053] Each Interleaver Memory 43, 45, shown in detail in FIGURE 20., has one interleaver 201 and a dual-port RAM memory 202. The Input Memory blocks 41, 48, 49, shown in detail in FIGURE 21., have a dual-port RAM memory 202. A control logic module (CLSM) 47, consisting of various state machines, controls all the operations of the Turbo Codes Decoder. The hard-decoder module 46 outputs the final decoded data.


[0054] More particularly, R0 is the data bit corresponding to the transmit data bit u, R1 is the first parity bit corresponding to the output bit of the first RSC encoder, and R2 is the interleaved second parity bit corresponding to the output bit of the second RSC encoder, with reference to FIGURE 3.


[0055] In accordance with the invention, the R0 data is added to the feedback Z1 data and then fed into decoder A; R1 is also fed into decoder A, producing the first-stage decoding output X01. The Z2 and R2 are fed into decoder B, producing the second-stage decoding output X02.


[0056] In accordance with the invention, as shown in FIGURE 6., the Turbo Codes Decoder utilizes a Sliding Window of Block N 61 on the input buffers 62 to decode one block of N data at a time; the next block of N data is decoded after the previous block is done, in a circular wrap-around scheme for pipeline operations, as sketched below.
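
The following is a minimal C sketch, for illustration only, of the circular wrap-around addressing implied by the Block N Sliding Window; the constants N and TOTAL_DEPTH and the helper names are assumptions and are not part of the described hardware.

    #include <stddef.h>

    #define N            256    /* assumed sliding-window block size           */
    #define TOTAL_DEPTH  1024   /* assumed total depth of the input buffer RAM */

    /* Return the physical buffer address of symbol 'i' (0..N-1) inside the
     * sliding window that starts at 'window_start'.  When the window reaches
     * the end of the buffer it wraps around, modeling the circular
     * wrap-around scheme used for pipelined block-N decoding. */
    static size_t window_addr(size_t window_start, size_t i)
    {
        return (window_start + i) % TOTAL_DEPTH;
    }

    /* Advance the window to the next block of N samples. */
    static size_t next_window(size_t window_start)
    {
        return (window_start + N) % TOTAL_DEPTH;
    }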


[0057] In accordance with the invention, the Turbo Codes Decoder decodes an 8-state Parallel Concatenated Convolutional Code (PCCC), and also decodes a 16-state Superorthogonal Turbo Code (SOTC) with different code rates. The Turbo Codes Decoder can also decode a higher n-state Parallel Concatenated Convolutional Code (PCCC).


[0058] As shown in FIGURE 4. the Turbo Codes Decoder functions effectively as follows:


[0059] Received soft decision data (RXDa[2:0]) are stored in the three input buffer Memories 48, 49, 41 to produce the Ra0, Ra1, and Ra2 output data words. Each output data word Ra0, Ra1, Ra2 contains a number of binary bits.


[0060] Received soft decision data (RXDb[2:0]) are stored in the three input buffer Memories 48, 49, 41 to produce the Rb0, Rb1, and Rb2 output data words. Each output data word Rb0, Rb1, Rb2 contains a number of binary bits.


[0061] A Sliding Window of Block N is imposed onto each input memory to produce R0, R1, and R2 output data words.


[0062] In accordance with the method of the invention, when a block of N input data is ready, the Turbo Decoder starts the Log-MAP Decoder A, in block 40a, to decode the N input data based on the soft-values of Ra0, Z1, Z3 and Ra1, then stores the outputs in the Interleaver Memory A.


[0063] The Turbo Decoder also starts the Log-MAP Decoder B, in block 40a, to decode the N input data based on the soft-values of Ra2 and Z2, in pipelined mode with a delay latency of N, and then stores the outputs in the Interleaver Memory.


[0064] When a block of N input data is ready, the Turbo Decoder starts the Log-MAP Decoder A, in block 40b, to decode the N input data based on the soft-values of Rb0, Z1, Z3 and Rb1, then stores the outputs in the Interleaver Memory A.


[0065] The Turbo Decoder also starts the Log-MAP Decoder B, in block 40b, to decode the N input data based on the soft-values of Rb2 and Z4, in pipelined mode with a delay latency of N, and then stores the outputs in the Interleaver Memory.


[0066] The Turbo Decoder performs the iterative decoding L times (L = 1, 2, ..., M). The Log-MAP Decoder A uses the sum of Z1, R1 and R0 as inputs. The Log-MAP Decoder B uses the data Z2 and R2 as inputs.


[0067] When the iterative decoding sequences are done, the Turbo Decoder starts the hard-decision operations to compute and produce the final decoded outputs, as outlined in the sketch below.
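
The schedule of paragraphs [0062]-[0067] can be summarized by the following C skeleton for one receive path; only the control flow is shown, the per-symbol Log-MAP arithmetic is described in the SISO Log-MAP Decoder section, and the parameter names N and L are simply the block size and iteration count used above.

    /* Skeleton of the iterative, pipelined decoding schedule for one block. */
    void turbo_decode_block(int N, int L)
    {
        for (int iter = 0; iter < L; iter++) {
            for (int k = 0; k < N; k++) {
                /* Decoder A: decode symbol k from the sum of R0 and the
                 * feedback Z1 (plus Z3 for diversity) together with parity R1,
                 * and write the soft output X01 into Interleaver Memory A 43. */
            }
            /* Decoder B runs in pipeline with a delay latency of N symbols. */
            for (int k = 0; k < N; k++) {
                /* Decoder B: decode symbol k from Z2 (read from Memory A 43)
                 * and parity R2, and write the soft output X02 into
                 * Interleaver Memory B 45. */
            }
        }
        for (int k = 0; k < N; k++) {
            /* Hard decision: combine the final soft outputs (Z1 + Z3 across
             * the two diversity paths) and produce the decoded bit Y[k]. */
        }
    }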





SISO Log-MAP Decoder



[0068] As shown in FIGURE 7., a SISO Log-MAP Decoder 42, 44 comprises a Branch Metric (BM) computation module 71, a State Metric (SM) computation module 72, a Log-MAP computation module 73, a BM Memory module 74, a SM Memory module 75, and a Control Logic State Machine module 76. Soft-value inputs enter the Branch Metric (BM) computation module 71, where the Euclidean distance is calculated for each branch; the output branch metrics are stored in the BM Memory module 74. The State Metric (SM) computation module 72 reads branch metrics from the BM Memory 74 and computes the state metric for each state; the output state metrics are stored in the SM Memory module 75. The Log-MAP computation module 73 reads both branch metrics and state metrics from the BM Memory 74 and SM Memory 75 modules to compute the Log Maximum a Posteriori probability and produce the soft-decision output. The Control Logic State-machine module 76 controls the overall operations of the decoding process.


[0069] As shown in FIGURE 7., and using the primary example of the 3GPP Turbo Codes, the Log-MAP Decoder 42, 44 functions effectively as follows:


[0070] The Log-MAP Decoder 42, 44 reads each soft-value (SD) data pair input, then computes the branch-metric (BM) values for all paths in the Turbo Codes Trellis 80 as shown in FIGURE 8a. (and Trellis 85 in FIGURE 8b.), and then stores all BM data into the BM Memory 74. It repeats computing BM values for each input data until all N samples are calculated and stored in the BM Memory 74.


[0071] The Log-MAP Decoder 42, 44 reads BM values from the BM Memory 74 and SM values from the SM Memory 75, and computes the forward state-metric (SM) for all states in the Trellis 80 as shown in FIGURE 8a. (and Trellis 85 in FIGURE 8b.), then stores all forward SM data into the SM Memory 75. It repeats computing forward SM values for each input data until all N samples are calculated and stored in the SM Memory 75.


[0072] The Log-MAP Decoder 42, 44 reads BM values from the BM Memory 74 and SM values from the SM Memory 75, and computes the backward state-metric (SM) for all states in the Trellis 80 as shown in FIGURE 8a. (and Trellis 85 in FIGURE 8b.), then stores all backward SM data into the SM Memory 75. It repeats computing backward SM values for each input data until all N samples are calculated and stored in the SM Memory 75.


[0073] The Log-MAP Decoder 42, 44 then computes the Log-MAP posteriori probability for u=0 and u=1 using the BM values and SM values from the BM Memory 74 and SM Memory 75. It repeats computing the Log-MAP posteriori probability for each input data until all N samples are calculated. The Log-MAP Decoder then decodes the data by making a soft decision based on the posteriori probability for each stage and produces the soft-decision output, until all N inputs are decoded. This four-pass schedule is summarized in the sketch below.
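
As a summary of this four-pass schedule ([0070]-[0073]), the following C skeleton shows only the loop structure; the per-symbol computations are indicated by comments and are detailed in the Branch Metric, State Metric and Log-MAP sections below.

    /* Four-pass Log-MAP decoding schedule for a block of N soft-value pairs. */
    void siso_log_map_decode(int N)
    {
        for (int k = 0; k < N; k++) {
            /* Pass 1: compute branch metrics for stage k, store in BM Memory 74. */
        }
        for (int k = 0; k < N; k++) {
            /* Pass 2: forward state metrics A(k) via ACS, store in SM Memory 75. */
        }
        for (int k = N - 1; k >= 0; k--) {
            /* Pass 3: backward state metrics B(k) via ACS, store in SM Memory 75. */
        }
        for (int k = 0; k < N; k++) {
            /* Pass 4: Log-MAP posteriori probabilities for u=0 and u=1,
             * soft-decision output for stage k. */
        }
    }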





Branch Metric Computation module



[0074] The Branch Metric (BM) computation module 71 computes the Euclidean distance for each branch in the 8-state Trellis 80 as shown in FIGURE 8a., based on the following equations:


[0075] Local Euclidean distance value = SD0*G0 + SD1*G1


[0076] SD0 and SD1 are the soft-value input data, and G0 and G1 are the expected inputs for each path in the Trellis 80. G0 and G1 are coded as signed antipodal values, meaning that 0 corresponds to +1 and 1 corresponds to -1. Therefore, the local Euclidean distances for each path in the Trellis 80 are computed by the following equations:


[0077] M1 = SD0 + SD1


[0078] M2 = - M1


[0079] M3 = M2


[0080] M4 = M1


[0081] M5 = - SD0 + SD1


[0082] M6 = - M5


[0083] M7 = M6


[0084] M8 = M5


[0085] M9 = M6


[0086] M10 = M5


[0087] M11 = M5


[0088] M12 = M6


[0089] M13 = M2


[0090] M14 = M1


[0091] M15 = M1


[0092] M16 = M2


[0093] As shown in FIGURE 9., the Branch Metric Computing module comprises one L-bit Adder 91, one L-bit Subtracter 92, and a 2's complementer 93. It computes the Euclidean distances for paths M1 and M5. Path M2 is the 2's complement of path M1, and path M6 is the 2's complement of path M5. Path M3 is the same as path M2, path M4 is the same as path M1, path M7 is the same as path M6, path M8 is the same as path M5, path M9 is the same as path M6, path M10 is the same as path M5, path M11 is the same as path M5, path M12 is the same as path M6, path M13 is the same as path M2, path M14 is the same as path M1, path M15 is the same as path M1, and path M16 is the same as path M2.
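
A minimal C sketch of this computation is given below; it follows equations [0077]-[0092] directly, while the struct name, the int16_t bit width, and the function name are illustrative assumptions.

    #include <stdint.h>

    /* Branch-metric values for one trellis stage.  The sixteen path metrics
     * M1..M16 reduce to the two sums below and their 2's complements. */
    typedef struct {
        int16_t m1;   /* = SD0 + SD1     (also M4, M14, M15) */
        int16_t m2;   /* = -(SD0 + SD1)  (also M3, M13, M16) */
        int16_t m5;   /* = -SD0 + SD1    (also M8, M10, M11) */
        int16_t m6;   /* = -(-SD0 + SD1) (also M7, M9, M12)  */
    } branch_metrics_t;

    static branch_metrics_t compute_branch_metrics(int16_t sd0, int16_t sd1)
    {
        branch_metrics_t bm;
        bm.m1 = (int16_t)(sd0 + sd1);    /* L-bit Adder 91       */
        bm.m5 = (int16_t)(-sd0 + sd1);   /* L-bit Subtracter 92  */
        bm.m2 = (int16_t)(-bm.m1);       /* 2's complementer 93  */
        bm.m6 = (int16_t)(-bm.m5);       /* 2's complement of M5 */
        return bm;
    }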





State Metric Computing module



[0094] The State Metric Computing module 72 calculates the probability A(k) of each state transition in the forward recursion and the probability B(k) in the backward recursion. FIGURE 13. shows the implementation of the state-metric in the forward recursion with Add-Compare-Select (ACS) logic, and FIGURE 14. shows the implementation of the state-metric in the backward recursion with Add-Compare-Select (ACS) logic. The calculations are performed at each node in the Turbo Codes Trellis 80 (FIGURE 8a.) in both the forward and backward recursions. FIGURE 15. shows the forward state transitions in the Turbo Codes Trellis 80 (FIGURE 8a.), and FIGURE 16. shows the backward state transitions in the Turbo Codes Trellis 80 (FIGURE 8a.). Each node in the Trellis 80 as shown in FIGURE 8a. has two entering paths: a one-path 84 and a zero-path 83 from the two nodes in the previous stage.


[0095] The ACS logic comprises an Adder 132, an Adder 134, a Comparator 131, and a Multiplexer 133. In the forward recursion, the Adder 132 computes the sum of the branch metric and state metric on the one-path 84 from the state s(k-1) of the previous stage (k-1). The Adder 134 computes the sum of the branch metric and state metric on the zero-path 83 from the state s(k-1) of the previous stage (k-1). The Comparator 131 compares the two sums and the Multiplexer 133 selects the larger sum for the state s(k) of the current stage (k). In the backward recursion, the Adder 142 computes the sum of the branch metric and state metric on the one-path 84 from the state s(j+1) of the previous stage (j+1). The Adder 144 computes the sum of the branch metric and state metric on the zero-path 83 from the state s(j+1) of the previous stage (j+1). The Comparator 141 compares the two sums and the Multiplexer 143 selects the larger sum for the state s(j) of the current stage (j).


[0096] The Equations for the ACS are shown below:


[0097] A(k) = MAX[(bm0 + sm0(k-1)), (bm1 + sm1(k-1))]


[0098] B(j) = MAX[(bm0 + sm0(j+1)), (bm1 + sm1(j+1))]


[0099] Time (k-1) is the previous stage of (k) in the forward recursion as shown in FIGURE 15., and time (j+1) is the previous stage of (j) in the backward recursion as shown in FIGURE 16. A compact sketch of the ACS operation follows.
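
A one-function C sketch of the ACS operation is shown below; it implements equations [0097]-[0098] and serves both the forward (A) and backward (B) recursions, with the component labels taken from FIGUREs 13 and 14 and the int32_t width assumed.

    #include <stdint.h>

    /* Add-Compare-Select for one trellis node: two adders form the candidate
     * sums on the zero-path and one-path, a comparator picks the larger one,
     * and a multiplexer outputs it as the new state metric. */
    static int32_t acs(int32_t bm0, int32_t sm0,   /* zero-path branch / state metric */
                       int32_t bm1, int32_t sm1)   /* one-path  branch / state metric */
    {
        int32_t sum0 = bm0 + sm0;              /* Adder 134 / 144          */
        int32_t sum1 = bm1 + sm1;              /* Adder 132 / 142          */
        return (sum1 > sum0) ? sum1 : sum0;    /* Comparator + Multiplexer */
    }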





Log-MAP computing module



[0100] The Log-MAP computing module calculates the posteriori probability for u=0 and u=1 for each path entering each state in the Turbo Codes Trellis 80, corresponding to u=0 and u=1 and referred to as the zero-path 83 and one-path 84. The accumulated probabilities are compared, and the u with the larger probability is selected. The soft decision is made based on the final probability selected for each bit. FIGURE 10a. shows the implementation for calculating the posteriori probability for u=0. FIGURE 10b. shows the implementation for calculating the posteriori probability for u=1. FIGURE 11. shows the implementation of the compare-and-select of the u with the larger probability. FIGURE 12. shows the implementation of the soft-decode compare logic to produce output bits based on the posteriori probabilities of u=0 and u=1. The equations for calculating the accumulated probabilities for each state and the compare-and-select are shown below:


[0101] sum_s00 = sm0i + bm1 + sm0j


[0102] sum_s01 = sm3i + bm7 + sm1j


[0103] sum_s02 = sm4i + bm9 + sm2j


[0104] sum_s03 = sm7i + bm15 + sm3j


[0105] sum_s04 = sm1i + bm4 + sm4j


[0106] sum_s05 = sm2i + bm6 + sm5j


[0107] sum_s06 = sm5i + bm12 + sm6j


[0108] sum_s07 = sm6i + bm14 + sm7j


[0109] sum_s10 = sm1i + bm3 + sm0j


[0110] sum_s11 = sm2i + bm5 + sm1j


[0111] sum_s12 = sm5i + bm11 + sm2j


[0112] sum_s13 = sm6i + bm13 + sm3j


[0113] sum_s14 = sm0i + bm2 + sm4j


[0114] sum_s15 = sm3i + bm8 + sm5j


[0115] sum_s16 = sm4i + bm10 + sm6j


[0116] sum_s17 = sm7i + bm16 + sm7j


[0117] s00sum = MAX[sum_s00, 0]


[0118] s01sum = MAX[sum_s01 , s00sum]


[0119] s02sum = MAX[sum_s02 , s01sum]


[0120] s03sum = MAX[sum_s03 , s02sum]


[0121] s04sum = MAX[sum_s04 , s03sum]


[0122] s05sum = MAX[sum_s05 , s04sum]


[0123] s06sum = MAX[sum_s06 , s05sum]


[0124] s07sum = MAX[sum_s07 , s06sum]


[0125] s10sum = MAX[sum_s10 , 0]


[0126] s11sum = MAX[sum_s11 , s10sum]


[0127] s12sum = MAX[sum_s12 , s11sum]


[0128] s13sum = MAX[sum_s13 , s12sum]


[0129] s14sum = MAX[sum_s14 , s13sum]


[0130] s15sum = MAX[sum_s15 , s14sum]


[0131] s16sum = MAX[sum_s16 , s15sum]


[0132] s17sum = MAX[sum_s17 , s16sum]
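
The compare-and-select chains of [0117]-[0132] and the soft-decode logic of FIGURE 12. can be sketched in C as follows; the arrays sum_s0[] and sum_s1[] are assumed to already hold the eight accumulated path sums for u=0 and u=1 formed per equations [0101]-[0116], and the log-likelihood value returned through llr_out (the difference of the two maxima) is an illustrative assumption rather than a quantity named in the text.

    #include <stdint.h>

    #define NUM_PATHS 8

    static int32_t max32(int32_t a, int32_t b) { return (a > b) ? a : b; }

    /* Running MAX chains (s00sum..s07sum and s10sum..s17sum) followed by the
     * soft-decode comparison: the decoded bit is the u with the larger
     * accumulated probability. */
    static int soft_decode(const int32_t sum_s0[NUM_PATHS],
                           const int32_t sum_s1[NUM_PATHS],
                           int32_t *llr_out)
    {
        int32_t s0sum = 0;   /* s00sum = MAX[sum_s00, 0] on the first pass */
        int32_t s1sum = 0;   /* s10sum = MAX[sum_s10, 0] on the first pass */

        for (int i = 0; i < NUM_PATHS; i++) {
            s0sum = max32(sum_s0[i], s0sum);   /* chain for u = 0 */
            s1sum = max32(sum_s1[i], s1sum);   /* chain for u = 1 */
        }

        if (llr_out)
            *llr_out = s1sum - s0sum;          /* assumed soft-decision value */
        return (s1sum > s0sum) ? 1 : 0;        /* decoded bit u */
    }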





Control Logics - State Machine (CLSM) Module



[0133] As shown in FIGURE 7., the Control Logics module controls the overall operations of the Log-MAP Decoder. The control logic state machine 171, referred to as the CLSM, is shown in FIGURE 17. The CLSM module 171 (FIGURE 17.) operates effectively as follows. Initially, it stays in the IDLE state 172. When the decoder is enabled, the CLSM transitions to the CALC-BM state 173; it then starts the Branch Metric (BM) module operations and monitors them for completion. When the Branch Metric calculations are done, referred to as bm-done, the CLSM transitions to the CALC-FWD-SM state 174; it then starts the State Metric (SM) module in forward recursion operation. When the forward SM state metric calculations are done, referred to as fwd-sm-done, the CLSM transitions to the CALC-BWD-SM state 175; it then starts the State Metric (SM) module in backward recursion operation. When the backward SM state metric calculations are done, referred to as bwd-sm-done, the CLSM transitions to the CALC-Log-MAP state 176; it then starts the Log-MAP computation module to calculate the maximum a posteriori probability and produce the soft decoded output. When the Log-MAP calculations are done, referred to as log-map-done, it transitions back to the IDLE state 172. A compact model of this state sequence is sketched below.
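
The following C next-state function models the state sequence of FIGURE 17.; the enum members mirror the states of [0133], and the *_done flags stand in for the completion signals from the BM, SM and Log-MAP modules (the C names are assumptions).

    /* Next-state function of the CLSM of FIGURE 17. */
    typedef enum { IDLE, CALC_BM, CALC_FWD_SM, CALC_BWD_SM, CALC_LOG_MAP } clsm_state_t;

    static clsm_state_t clsm_next(clsm_state_t state, int enable,
                                  int bm_done, int fwd_sm_done,
                                  int bwd_sm_done, int log_map_done)
    {
        switch (state) {
        case IDLE:         return enable       ? CALC_BM      : IDLE;
        case CALC_BM:      return bm_done      ? CALC_FWD_SM  : CALC_BM;
        case CALC_FWD_SM:  return fwd_sm_done  ? CALC_BWD_SM  : CALC_FWD_SM;
        case CALC_BWD_SM:  return bwd_sm_done  ? CALC_LOG_MAP : CALC_BWD_SM;
        case CALC_LOG_MAP: return log_map_done ? IDLE         : CALC_LOG_MAP;
        }
        return IDLE;
    }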





BM Memory and SM Memory



[0134] The Branch-Metric Memory 74 and the State-Metric Memory 75 are shown in FIGURE 7. as the data storage components for the BM module 71 and the SM module 72. The Branch Metric Memory module is a dual-port RAM containing N memory locations of M bits each, as shown in FIGURE 18. The State Metric Memory module is a dual-port RAM containing N memory locations of K bits each, as shown in FIGURE 19. Data can be written into one port while reading at the other port.





Interleaver Memory



[0135] As shown in FIGURE 4., the Interleaver Memory A 43 stores data for the first decoder A 42, and the Interleaver Memory B 45 stores data for the second decoder B 44. In iterative pipelined decoding, the decoder A 42 reads data from Interleaver Memory B 45 and writes its result data into Interleaver Memory A 43, while the decoder B 44 reads data from Interleaver Memory A 43 and writes its results into Interleaver Memory B 45.


[0136] As shown in FIGURE 20., the De-Interleaver memory 41 comprises a De-Interleaver module 201 and a dual-port RAM 202 containing N memory locations of M bits each. The Interleaver is a Turbo code internal interleaver as defined by the 3GPP standard ETSI TS 125 222 V3.2.1 (2000-05), or another source. The Interleaver permutes the address input of port A for all write operations into the dual-port RAM module. Data are read from output port B with the normal address input, as illustrated below.
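
The write-address permutation can be modeled by the short C sketch below; perm[] stands in for the 3GPP internal interleaver pattern (not reproduced here), and the block length N and function names are assumptions.

    #include <stddef.h>

    #define N 256   /* assumed interleaver block length */

    /* Port A: all writes go through the interleaver permutation of the address. */
    static void interleaved_write(int ram[N], const int perm[N],
                                  size_t addr, int soft_value)
    {
        ram[perm[addr]] = soft_value;
    }

    /* Port B: reads use the normal, sequential address. */
    static int linear_read(const int ram[N], size_t addr)
    {
        return ram[addr];
    }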


[0137] As shown in FIGURE 21., the Interleaver Memory 43, 45 comprises a dual-port RAM 212 containing N memory locations of M bits each.





Turbo Codes Decoder Control Logics - State Machine (TDCLSM)



[0138] As shown in FIGURE 4., the Turbo Decoder Control Logics module 47, referred to as the TDCLSM, controls the overall operations of the Turbo Codes Decoder. Log-MAP A 42 starts operating on the data in Memory B 45. At the same time, Log-MAP B 44 starts operating on the data in Memory A 43. When Log-MAP A 42 and Log-MAP B 44 are done with a block of N data, the TDCLSM 47 starts the iterative decoding for L number of times. When the iterative decoding sequences are done, the TDCLSM 47 transitions to HARD-DEC to generate the hard-decoded outputs. The TDCLSM 47 then transitions to start decoding another block of data.





Iterative Decoding and Diversity Processing



[0139] The Turbo Codes decoder performs iterative decoding and diversity processing by feeding the outputs Z1, Z3 of the second Log-MAP decoder B back into the first Log-MAP decoder A before making the decision for the hard-decoded output. As shown in FIGURE 23., the Counter 233 counts the preset number of L iterations.


Claims
  • 1. As shown in FIGURE 4.:
  • 2. A method for diversity processing and decoding two plurality of sequences of received data path Ran from Receiver A, and Rbn from Receiver B comprising the steps of:
  • 3. An apparatus of Turbo Codes Decoder 40a 40b used as a baseband processor subsystem for iterative decoding a plurality of sequences of received data Rn representative of coded data Xn generated by a Turbo Codes Encoder from a source of original data un into decoded data Yn comprising of:
  • 4. The Decoder system of claim c3, wherein each Log-MAP decoder uses the logarithm maximum a posteriori probability algorithm.
  • 5. The Decoder system of claim c3, wherein the two serially connected SISO Log-MAP Decoders each decode input data from the other's output data in pipeline mode to produce soft decoded data each clock cycle.
  • 6. The Decoder system of claim c3, wherein the Memory modules use dual-port memory RAM.
  • 7. The Decoder system of claim c3, wherein a Sliding Window of Block N is used on the input buffer Memory so that one block of N data is decoded at a time, one block after another, in a pipeline scheme.
  • 8. A method for iterative decoding a plurality of sequences of received data Rn representative of coded data Xn generated by a Turbo Codes Encoder from a source of original data un into decoded data Yn comprising the steps of:
  • 9. An apparatus of SISO Log-MAP Decoder for decoding a plurality of sequences of soft-input data SD0 and SD1 generated by a receiver to produce decoded soft-output data Y comprising of:
  • 10. The Decoder system of claim c9, wherein the decoder uses the logarithm maximum a posteriori probability algorithm.
  • 11. The Decoder system of claim c9, wherein the decoder implements state-metric in forward recursion with Add-Compare-Select (ACS).
  • 12. The Decoder system of claim c9, wherein the decoder uses an 8-states Trellis state transition diagram for 3GPP PCCC Turbo Codes.
  • 13. The Decoder system of claim c9, wherein the branch metric module uses a binary adder, a binary subtracter, and two binary two's-complementer logic circuits.
  • 14. A method for Log-Map decoding a plurality of sequences of received data SD0 and SD1 generated by a receiver to produce decoded soft-output data Y comprising the steps of:
  • 15. An apparatus of an ACS (add-compare-select) for computing a plurality of sequences of sm0, bm0, sm1, bm1 data to select max output data A comprising of:
  • 16. An apparatus of Super Orthogonal Turbo Codes (SOTC) Decoder used as a baseband processor subsystem for iterative decoding a plurality of sequences of received Walsh code data RWi and RW-i representative of Walsh coded data Wi and W-i generated by a Super Orthogonal Turbo Codes (SOTC) Encoder from a source of original data un into decoded data Yn comprising of:
Cross Reference to Related Applications

[0001] This is a continuation of patent application Ser. No. 09/681093 filed Jan. 2, 2001 and patent application Ser. No. 10/065408 filed Oct. 15, 2002.

Continuations (2)
Number Date Country
Parent ICOMM-12 Jan 2001 US
Child 10248245 Dec 2002 US
Parent ICOMM-14 Mar 2001 US
Child 10248245 Dec 2002 US