Information
- Patent Application: 20030123563
- Publication Number: 20030123563
- Date Filed: July 11, 2001
- Date Published: July 03, 2003
- International Classifications:
  - H03M013/03
  - H04L005/12
  - H04L023/02
Abstract
A digital processing apparatus and method for executing a turbo coding routine. The apparatus and method include adapting a turbo coding algorithm for execution by one or more reconfigurable processing elements from an array of processing elements, and then mapping the adapted algorithm onto the array for execution. A method includes configuring a portion of an array of independently reconfigurable processing elements for performing a turbo coding routine, and executing the turbo coding routine on data blocks received at the configured portion of the array of processing elements. An apparatus includes an array of interconnected, reconfigurable processing elements, where each processing element is independently programmable with a context instruction. The apparatus further includes a context memory for storing and providing the context instruction to the processing elements, and a processor for controlling the loading of the context instruction to the processing elements and for configuring a portion of the processing elements to perform the turbo coding routine.
Description
BACKGROUND OF THE INVENTION
[0001] The present invention generally relates to digital signal processing, and more particularly to a method and apparatus for turbo encoding and decoding.
[0002] The field of digital signal processing (DSP) is growing dramatically. Digital signal processors are a key component in many communication and computing devices, for various consumer and professional applications, such as communication of voice, video, and audio signals.
[0003] The execution of DSP involves a trade-off of performance and flexibility. At one extreme of performance, hardware-based application-specific integrated circuits (ASICs) execute a specific type of processing most rapidly. However, hardware-based processing circuits are either hard-wired or programmed for an inflexible range of functions. At the other extreme, software running on a multi-purpose or general purpose computer is easily adaptable to any type of processing, but is limited in its performance. The parallel processing capability of a general purpose processor is limited.
[0004] Devices performing DSP are increasingly smaller, more portable, and consume less energy. However, the size and power needs of a device limit the amount of processing resources that can be built into it. Thus, there is a need for a flexible processing system, i.e. one that can perform many different functions, yet can also achieve the high performance of a dedicated circuit.
[0005] One example of DSP is encoding and decoding digital data. Any data that is transmitted, whether text, voice, audio or video, is subject to attack during its transmission and processing. A flexible, high-performance system and method can perform many different types of processing on any type of data, including processing of cryptographic algorithms.
[0006] Turbo coding has become one of the most used and researched encoding and decoding methods, as its performance is close to the theoretical Shannon limit. Turbo codes have been adopted as a Forward Error Correction (FEC) standard in so-called Third Generation (3G) wireless communication. Most of the development focus has been on Very Large Scale Integration (VLSI), or hardware, implementations of Turbo codes. However, VLSI implementation lacks flexibility in the face of multiple standards (WCDMA, CDMA2000, TD-SCDMA), different code rates (1/2, 1/3, 1/4, 1/6), and different data rates (from several kbits/s to 2 Mbits/s). Accordingly, different VLSI chips have to be designed for different standards, code rates, data rates, etc. On the other hand, general-purpose processors or DSP processors cannot meet the requirements of high data rate and low power consumption for a mobile device.
BRIEF DESCRIPTION OF THE DRAWING
[0007] FIG. 1 depicts a conventional Turbo encoder arrangement.
[0008] FIG. 2 depicts a conventional Turbo decoder arrangement.
[0009] FIG. 3 is a timing diagram of a sliding window BCJR algorithm.
[0010] FIG. 4 is a trellis diagram for a 3G Turbo Coding routine with a code rate 1/3.
[0011] FIG. 5 is a block diagram of a reconfigurable processor architecture according to the invention.
[0012] FIGS. 6A and 6B are schematic diagrams of an array of reconfigurable processing elements illustrating internal express lanes and interconnections of the array.
[0013] FIG. 7 illustrates a Single Instruction, Multiple Data (SIMD) mode for the array.
[0014] FIG. 8 illustrates a method for mapping a log-gamma calculation routine for execution by a portion of the array of processing elements.
[0015] FIG. 9 illustrates a method for mapping a log-alpha calculation routine for execution by a portion of the array.
[0016] FIG. 10 illustrates a method for mapping a log-beta calculation routine for execution by a portion of the array.
[0017] FIG. 11 illustrates a method for mapping an LLR calculation.
[0018] FIG. 12 illustrates a method for calculating the enumerator and denominator values of the LLR operation.
[0019] FIG. 13 is a flow chart illustrating a serial mapping method for executing a Turbo coding routine, according to an embodiment of the invention.
[0020] FIG. 14 illustrates the allocation of processing elements and other resources for Turbo coding parallel computational routines.
[0021] FIG. 15 is a flow chart illustrating a parallel mapping method for executing a Turbo coding routine, according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0022] A reconfigurable DSP processor provides a solution for accomplishing Turbo Coding according to different standards, code rates, data rates, etc., while still offering high performance and meeting low-power constraints.
[0023] FIG. 1 is a simplified block diagram of a standard Turbo encoder. Tail bits are added during encoding. The Turbo encoder employs first and second recursive systematic convolutional (RSC) encoders connected in parallel, with a Turbo interleaver preceding the second RSC encoder. The outputs of the constituent encoders are punctured and repeated to achieve the different code rates shown in Table 1. Each category of code rate is designed to support various data rates, from several kbits/s to 2 Mbits/s.
TABLE 1

             CDMA2000                          WCDMA & TD-SCDMA
             Forward link     Reverse link     Forward/Reverse link

Code rate    1/2, 1/3, 1/4    1/2, 1/3, 1/4    1/3
[0024] A generic Turbo decoder is shown in FIG. 2. The Turbo decoder contains two "soft" decision decoders (DEC1 and DEC2), associated with the two RSC encoders, with an interleaver and a de-interleaver between the two decoders. The decoders generate "soft" outputs, which indicate the reliability of the outputs. Basically, there are two different algorithms for generating soft outputs. The first is called symbol-by-symbol MAP (Maximum A Posteriori). The second is known as the Soft Output Viterbi Algorithm (SOVA). The details of MAP and SOVA, as well as a comparison of the two algorithms, are known to those with the requisite skill in the art, but beyond the scope of this description. The MAP algorithm has better performance than the SOVA algorithm at the expense of a more complicated implementation. In accordance with the invention, the MAP algorithm is preferably used for executing Turbo decoding on a reconfigurable SIMD processor array. However, this invention can also accomplish Turbo decoding by mapping the SOVA algorithm to the processor array. A summary of the MAP algorithm follows.
[0025] Let m be the constituent encoder memory and S the set of all 2^m constituent encoder states. Let x^s = (x_1^s, x_2^s, ..., x_N^s) = (u_1, u_2, ..., u_N) be the encoder input word, or systematic information, x^p = (x_1^p, x_2^p, ..., x_N^p) be the parity word generated by a constituent encoder, and y_k = (y_k^s, y_k^p) be a noisy (AWGN) version of (x_k^s, x_k^p) at time instant k. y = (y_1, y_2, ..., y_N) is the whole sequence of received codewords, and y^{j≤k} = (y_1, y_2, ..., y_k) is the partial sequence up to time instant k.
[0026] In the symbol-by-symbol MAP decoder, the decoder decides u_k = +1 if the conditional probability p(u_k = +1 | y) is greater than the conditional probability p(u_k = −1 | y), and it decides u_k = −1 otherwise. More succinctly, the decision is given by the sign of L(u_k), where L(u_k), or the Log Likelihood Ratio (LLR), is the log a posteriori probability (LAPP) ratio defined as:

$$L(u_k) = \ln\frac{p(u_k = +1 \mid y)}{p(u_k = -1 \mid y)}$$
[0027] Incorporating the code's trellis, this may be written as:
$$L(u_k) = \ln\frac{\sum_{S^+} p(s_{k-1} = s',\, s_k = s,\, y)}{\sum_{S^-} p(s_{k-1} = s',\, s_k = s,\, y)}$$
[0028] where s_k ∈ S is the state of the encoder at time k, S^+ is the set of ordered pairs (s′, s) corresponding to all state transitions (s_{k−1} = s′) → (s_k = s) caused by data input u_k = +1, and S^− is similarly defined for u_k = −1.
[0029] Defining αk−1(s′) = p(s′, y^{j<k}), βk(s) = p(y^{j>k} | s), and γk(s′,s) = p(s, yk | s′), it therefore follows:

$$L(u_k) = \ln\frac{\sum_{S^+} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}{\sum_{S^-} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}$$
[0030] It can then be shown that:
$$\alpha_k(s) = \sum_{s' \in S} \alpha_{k-1}(s')\,\gamma_k(s',s)$$
[0031] with initial conditions that α0(0)=1 and α0(s≠0)=0.
Similarly, for the backward recursion:

$$\beta_{k-1}(s') = \sum_{s \in S} \beta_k(s)\,\gamma_k(s',s)$$
[0032] with initial conditions that βN(0)=1 and βN(s≠0)=0.
The branch metric γk(s′,s) is given by:

$$\gamma_k(s',s) = \exp\left[\tfrac{1}{2}\,u_k\big(L^e(u_k) + L_c\,y_k^s\big) + \tfrac{1}{2}\,L_c\,y_k^p\,x_k^p\right]$$
[0033] where Le(uk) is the extrinsic information from the previous stage and
$$L_c = 4\,\frac{E_c}{N_0}$$
[0034] is the signal to noise ratio in the channel.
Substituting γk into the LLR gives:

$$L(u_k) = L_c\,y_k^s + L^e(u_k) + \ln\frac{\sum_{S^+} \alpha_{k-1}(s')\,\gamma_k^e(s',s)\,\beta_k(s)}{\sum_{S^-} \alpha_{k-1}(s')\,\gamma_k^e(s',s)\,\beta_k(s)}$$

where γk^e(s′,s) denotes the portion of γk(s′,s) contributed by the parity bits alone.
[0035] Further, L(uk) (for the case of DEC1) can be rewritten as
$$L_1(u_k) = L_c\,y_k^s + L_{21}^e(u_k) + L_{12}^e(u_k)$$
[0036] The first term, L_c y_k^s, in the above equation is the channel value; the second term represents any a priori information about u_k provided by a previous decoder; and the third term represents extrinsic information that can be passed on to a subsequent decoder.
[0037] Now, the LOG-MAP algorithm is described. If the log domain is considered, then it follows:
$$\tilde{\alpha}_k(s) = \ln(\alpha_k(s)), \qquad \tilde{\beta}_k(s) = \ln(\beta_k(s)), \qquad \tilde{\gamma}_k(s',s) = \ln(\gamma_k(s',s))$$
[0038]
$$\tilde{\alpha}_k(s) = \ln\!\sum_{s' \in S} e^{\tilde{\alpha}_{k-1}(s') + \tilde{\gamma}_k(s',s)}, \qquad \tilde{\beta}_{k-1}(s') = \ln\!\sum_{s \in S} e^{\tilde{\beta}_k(s) + \tilde{\gamma}_k(s',s)}$$
[0039] Therefore, we have
$$L(u_k) = \ln\!\sum_{S^+} e^{\tilde{\alpha}_{k-1}(s') + \tilde{\gamma}_k(s',s) + \tilde{\beta}_k(s)} \;-\; \ln\!\sum_{S^-} e^{\tilde{\alpha}_{k-1}(s') + \tilde{\gamma}_k(s',s) + \tilde{\beta}_k(s)}$$
[0040] This function can be solved by using the Jacobian logarithm:
$$\ln\big(e^{\delta_1} + e^{\delta_2}\big) = \max(\delta_1, \delta_2) + \ln\big(1 + e^{-|\delta_2 - \delta_1|}\big) = \max(\delta_1, \delta_2) + f_c(|\delta_1 - \delta_2|)$$
[0041] where fc(.) is a correction function; the combined operation is called max*. Only a few values of fc need to be stored in the lookup table.
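For illustration, a minimal C sketch of the max* operation follows, assuming the correction function fc(x) = ln(1 + e^-x) is sampled into a small table; the table size and quantization step are illustrative choices, not values from this specification:

#include <math.h>

#define FC_TABLE_SIZE 8      /* illustrative: covers |a - b| in [0, 4) */
#define FC_STEP       0.5

static double fc_table[FC_TABLE_SIZE];

/* Sample fc(x) = ln(1 + e^-x); call once before decoding. */
static void fc_init(void)
{
    for (int i = 0; i < FC_TABLE_SIZE; i++)
        fc_table[i] = log(1.0 + exp(-(double)i * FC_STEP));
}

/* max*(a, b) = ln(e^a + e^b) = max(a, b) + fc(|a - b|). */
static double max_star(double a, double b)
{
    double d = fabs(a - b);
    int idx = (int)(d / FC_STEP);
    double corr = (idx < FC_TABLE_SIZE) ? fc_table[idx] : 0.0;
    return ((a > b) ? a : b) + corr;
}

Beyond the sampled range the correction is negligible and is treated as zero, which is why only a few table entries suffice.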
[0042] The LOG-MAP algorithm can be simplified to a MAX-LOG-MAP algorithm by the following approximations:
$$\ln\big(e^{\delta_1} + e^{\delta_2}\big) \approx \max(\delta_1, \delta_2)$$

$$\ln\big(e^{\delta_1} + \cdots + e^{\delta_n}\big) \approx \max(\delta_1, \ldots, \delta_n)$$
[0043] Then:
$$\tilde{\alpha}_k(s) \approx \max_{s'}\big(\tilde{\alpha}_{k-1}(s') + \tilde{\gamma}_k(s',s)\big), \qquad \tilde{\beta}_{k-1}(s') \approx \max_{s}\big(\tilde{\beta}_k(s) + \tilde{\gamma}_k(s',s)\big)$$
[0044] A "sliding window" is a technique used to reduce the search space, in order to reduce the complexity of the problem. A search space, called the "window," is first defined. Two sliding window approaches are SW1-BCJR and SW2-BCJR, each of which is a sliding window version of the MAP algorithm. The sliding window approach adopted in the MS1 Turbo decoding mapping requires only a small amount of memory, independent of the block length, for mapping to a reconfigurable array. FIG. 3 shows a timing diagram of a general SW-BCJR algorithm, illustrating the timing of one forward process and two synchronized backward processes over the received branch symbols.
[0045] The received branch symbols are delayed by 2L branch times, where it is sufficient for L to be more than twice the number of states. The forward process then starts at the initial node at branch time 2L, computing all state metrics for each node at every branch and storing them in a memory. The first backward process starts at the same time, but processes backward from the 2Lth node, setting every initial state metric to the same value and storing nothing until branch time 3L, at which point it has built up reliable state metrics and encounters the last of the first set of L forward-computed metrics, as shown in FIG. 3. The unreliable metric branch computations are shown as dashed lines. Soft decisions for the first L branches are then output. Meanwhile, starting at time 3L, the second backward process begins processing with equal metrics at node 3L, discarding all metrics until time 4L, and so on. As shown in FIG. 3, three possible boundary cases exist for different L and block sizes.
[0046] In accordance with the invention, a new method, called MIX-LOG-MAP, is a hybrid of LOG-MAP and MAX-LOG-MAP. To compute α and β, LOG-MAP is used with a look-up table, while the LLR uses the approximation approach of MAX-LOG-MAP. This method reduces implementation complexity, and further saves power consumption and processing time.
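As a sketch of this hybrid, and assuming the max_star() helper sketched above, the α/β butterfly keeps the LOG-MAP correction while the LLR stage falls back to a plain max; the function names here are illustrative only:

/* One alpha (or beta) butterfly with LOG-MAP accuracy: two paths reach
 * the same state, one adding and one subtracting the branch metric g. */
static double butterfly_logmap(double m_top, double m_bot, double g)
{
    return max_star(m_top - g, m_bot + g);
}

/* LLR accumulation in MIX-LOG-MAP uses the MAX-LOG-MAP approximation,
 * trading a small accuracy loss for fewer lookup-table accesses. */
static double accumulate_maxlog(double acc, double candidate)
{
    return (acc > candidate) ? acc : candidate;
}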
[0047] FIG. 4 shows a trellis diagram for the 3G Turbo codes with code rate 1/3. The notation used for trellis branches in the subsequent sections is (ub, cb). The branch start state at time (k−1) is m′, and the end state at time k is m. ub is the input label, i.e. the input into the encoder at time k; cb is the output label, i.e. the corresponding output of the encoder at time k.
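The branch labels can be generated from the constituent encoder itself; the following C sketch assumes the standard 3GPP constituent RSC polynomials g0 = 1 + D^2 + D^3 (feedback) and g1 = 1 + D + D^3, with the state packed as (s1 s2 s3), s1 newest (the packing is an illustrative choice):

#include <stdio.h>

/* One step of the eight-state constituent RSC encoder; returns the
 * parity label cb for input label ub and writes the successor state. */
static int rsc_step(int state, int ub, int *next_state)
{
    int s1 = (state >> 2) & 1;
    int s2 = (state >> 1) & 1;
    int s3 = state & 1;
    int a  = ub ^ s2 ^ s3;          /* feedback taps from g0 */
    int cb = a ^ s1 ^ s3;           /* output taps from g1   */
    *next_state = (a << 2) | (s1 << 1) | s2;
    return cb;
}

int main(void)
{
    /* Enumerate the 16 trellis branches m' --(ub, cb)--> m of FIG. 4. */
    for (int m1 = 0; m1 < 8; m1++)
        for (int ub = 0; ub <= 1; ub++) {
            int m, cb = rsc_step(m1, ub, &m);
            printf("m'=%d  (ub=%d, cb=%d)  m=%d\n", m1, ub, cb, m);
        }
    return 0;
}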
[0048] FIG. 5 shows a data processing architecture 500 in accordance with the invention. The data processing architecture 500 includes a processing engine 502 having a software-programmable core processor 504 and a reconfigurable array of processing elements 506. The array of processing elements includes a multidimensional array of independently programmable processing elements, or reconfigurable cells (RCs), each of which includes functional units that can be configured to perform a specific function according to a context for the RC.
[0049] The core processor 504 is a MIPS-like RISC processor with a scalar pipeline. In one embodiment, the core processor includes sixteen 32-bit registers and three functional units: a 32-bit ALU, a 32-bit shift unit, and a memory unit. In addition to typical RISC instructions, the core processor 504 is provided with specific instructions for controlling other components of the processing engine 502, including instructions for the array of processing elements 506 and for a direct memory access (DMA) controller 508 that provides data transfer between external memory 514, 516 and the processing elements. The external memory includes a DMA external memory 514 and a core processor external memory 516.
[0050] A frame buffer 512 is provided between the DMA controller 508 and the array of processing elements 506 to facilitate the data transfer. The frame buffer 512 acts as an internal data cache for the array of processing elements 506, and includes two sets of data cache. The frame buffer 512 makes memory access transparent to the array of processing elements 506 by overlapping computation with data load and store, by alternating between the two sets of cache. Further, the input/output datapath from the frame buffer 512 allows for broadcasting of one byte of data to all of the processing elements in the array 506 simultaneously. Data transfers to and from the frame buffer 512 are also controlled by the core processor 504, and through the DMA controller 508.
[0051] The DMA controller 508 also controls the transfer of context instructions into context memory 510, 520. The context memory provides a context instruction for configuring the RC array 506 to perform a particular function, and includes a row context memory 510 and a column context memory 520 where the array of processing elements is an M-row by N-column array of RCs. Reconfiguration is done in one cycle by caching several context instructions from the external memory 514.
[0052] In a specific exemplary embodiment, the core processor is 32-bit. It communicates with the external memory 514 through a 32-bit data bus. The DMA controller 508 has a 32-bit external connection as well. The DMA controller 508 writes one 32-bit word to context memory 510, 520 each clock cycle when loading a context instruction. However, the DMA controller 508 can assemble the 32-bit data into 128-bit data when loading data to the frame buffer 512, or disassemble the 128-bit data into four 32-bit words when storing data to external memory 514. The data bus between the frame buffer 512 and the array of processing elements 506 is 128 bits wide in both directions. Therefore, each reconfigurable processing element in one column connects to one individual 16-bit segment of the 128-bit data bus. The column context memory 520 and row context memory 510 are each connected to the array 506 by a 256-bit (8×32) context bus in the column and row directions, respectively. The core processor 504 communicates with the frame buffer 512 via a 32-bit data bus. At any given time, the DMA controller 508 services either frame buffer store/load, row context loading, or column context loading. Also, the core processor 504 provides control signals to the frame buffer 512, the DMA controller 508, the row/column context memories 510, 520, and the array of processing elements 506. The DMA controller 508 provides control signals to the frame buffer 512 and the row/column context memories 510, 520.
[0053] The above specific embodiment is described for exemplary purposes only, and those having skill in the art should recognize that other configurations, datapath sizes, and layouts of the reconfigurable processing architecture are within the scope of this invention. In the case of a two-dimensional array, a single one, or a portion, of the processing elements is addressable for activation and configuration. Processing elements which are not activated are turned off to conserve power. In this manner, the array of reconfigurable processing elements 506 is scalable to any type of application, and efficiently conserves computing and power resources.
[0054] The RCs are connected in the array according to various levels of hierarchy. FIGS. 6A and 6B illustrate an exemplary hierarchical configuration for an array 506 of individual RCs 507. First, RCs within each quadrant (i.e. each group of 4×4 RCs) are fully connected in a row or column. Second, RCs in adjacent quadrants are connected via express lanes that enable an RC in one quadrant to broadcast its results to the RCs in an adjacent quadrant. The programmability of the interconnection network of the RC array is derived from the context word. Depending upon the context, an RC can access the output of any other RC in its column or row, select an input from its own register file, or get data from the frame buffer. The context word provides functional programmability by configuring a logic unit of each RC to perform specific functions.
[0055] The context word from context memory is broadcast to all RCs in the corresponding row or column. Thus, all RCs in a row, or all RCs in a column, share the same context word and perform the same operation, as illustrated by FIG. 7. The array can therefore operate in Single Instruction, Multiple Data (SIMD) form. Alternatively, different columns or rows can perform different operations based on different context instructions.
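As a toy illustration of this broadcast mode (not the actual context-word format), the following sketch applies one operation to all eight RCs of a row, each operating on its own local data:

#include <stdint.h>

typedef int16_t (*context_op)(int16_t);

/* Row-wise SIMD: the broadcast context selects one operation, and every
 * RC in the row applies it to its own register contents. */
static void broadcast_row(int16_t rc[8][8], int row, context_op op)
{
    for (int c = 0; c < 8; c++)
        rc[row][c] = op(rc[row][c]);
}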
[0056] Executing complex algorithms with the reconfigurable architecture is based on partitioning applications into both sequential and parallel tasks. The core processor 504 executes the sequential tasks, whereas the data-parallel tasks are mapped to the RC array 506. The core processor 504 initiates all data and configuration transfers within the processing engine 502. DMA instructions initiate data transfers between the external memory 514 and the frame buffer 512, and context loading from external memory 516 into the context memories 510, 520. The RC array instructions control the operation of the RC array 506, by specifying the context and the broadcast mode.
[0057] Execution setup begins with the core processor 504 requesting a configuration load from core processor external memory 516 into the respective context memory 510 and 520. Next, the core processor 504 requests the frame buffer 512 to be loaded with data from DMA external memory 514. Once the context instruction and the data are ready, the core processor 504 enables the RC array 506 execution through one of several RC array broadcast instructions. While the RC array performs computations on data in one frame buffer set, new data may be loaded in the other frame buffer set, or the context memory may be loaded with new context instructions.
[0058] The core processor 504 controls the context broadcast mode and also provides various control/address signals for the DMA controller 508, the context memory 510 and 520, and the frame buffer 512. These control and data signals represent various components of the Turbo encoder and decoder, which are mapped to the RC array for execution. According to the invention, the mapping can occur in parallel or in serial mode, as discussed below.
[0059] According to one embodiment, a serial mapping method is used, described in reference to FIG. 2. In DEC1 (the first decoder), the computation of the LLR (the step to compute L(uk)) must wait until the computations of αk−1(s′), γk(s′,s), and βk(s) are done. DEC2 cannot begin to decode until the interleaver following DEC1 is finished. In the second iteration, DEC1 cannot start until DEC2 is completely done with the first iteration. Therefore, only one column of RCs is allocated to perform the αk−1(s′), γk(s′,s), and βk(s) steps, and the rest of the RCs in the array are shut down to conserve power, i.e. placed in a low-power mode. With this mapping method, LOG-MAP, MAX-LOG-MAP or MIX-LOG-MAP can be employed. The serial mapping is thus optimal for relatively small data frames.
[0060] The second approach is parallel mapping, which is based on the sliding window approach. In this case, one column and/or row of RCs is allocated to perform γ, one for α, one for the 1st-β, one for the 2nd-β, and one for the LLR. On top of the sliding window approach, LOG-MAP, MAX-LOG-MAP or MIX-LOG-MAP can be used. The serial mapping method is described below in greater detail.
[0061] For serial mapping (i.e. time-multiplexing mapping), the procedure for executing the Log-Gamma calculation is as follows:
for (k = 0; k < FRAME_LENGTH; k++)
{
    /* Branch metrics for the four (systematic, parity) label pairs */
    g00[k] = (-Lu[k] - Le[k] - Lc[k]) / 2;
    g01[k] = (-Lu[k] - Le[k] + Lc[k]) / 2;
    g10[k] = ( Lu[k] + Le[k] - Lc[k]) / 2;
    g11[k] = ( Lu[k] + Le[k] + Lc[k]) / 2;
}
[0062] where Lu[k] is the systematic information, Lc[k] is the parity-check information, and Le[k] is the a priori information. Because g00[k] = −g11[k] and g01[k] = −g10[k], this can be further optimized as:
for (k = 0; k < FRAME_LENGTH; k++)
{
    g10[k] = (Lu[k] + Le[k] - Lc[k]) / 2;
    g11[k] = (Lu[k] + Le[k] + Lc[k]) / 2;
}
[0063] FIG. 8 illustrates a method for calculating the Log-Gamma according to an embodiment of the invention. The steps of the method are as follows:
[0064] Le(k) to Le(k+7) are loaded to one column of RC from Frame Buffer: 1 cycle
[0065] Lu(k) to Lu(k+7) are loaded to one column of RC from Frame Buffer: 1 cycle
[0066] Lc(k) to Lc(k+7) are loaded to one column of RC from Frame Buffer: 1 cycle
[0067] Perform g10(k) to g10(k+7): 1 cycle
[0068] Perform g11(k) to g11(k+7): 1 cycle
[0069] Store g10(k) to g10(k+7): 1 cycle
[0070] Store g11(k) to g11(k+7): 1 cycle
[0071] Loop index overhead: 2 cycles
[0072] The total number of cycles needed to perform the LOG-MAP/MAX-LOG-MAP Log-Gamma calculation is 9 cycles for 8 trellis stages. For the Log-Gamma operation, only the first column of RCs is enabled in serial mapping. Table 2 summarizes the cycles and trellis stages for the Log-Gamma calculation method:
TABLE 2

                 LOG-MAP                      MAX-LOG-MAP

MS1              9 cycles/8 trellis stages    9 cycles/8 trellis stages
TI TMS320C62X    -                            5 cycles/2 trellis stages
                                              (without a-priori information)
[0073] For the Log-Alpha operation, the procedures for the MAX-LOG-MAP implementation are:
/* MAX-LOG-MAP forward recursion over the eight-state trellis */
for (k = 1; k <= FRAME_LENGTH; k++)
{
    m_t = alpha[(k-1)*8+0] - g11[k-1];
    m_b = alpha[(k-1)*8+4] + g11[k-1];
    alpha[k*8+0] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+0] + g11[k-1];
    m_b = alpha[(k-1)*8+4] - g11[k-1];
    alpha[k*8+1] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+1] - g10[k-1];
    m_b = alpha[(k-1)*8+5] + g10[k-1];
    alpha[k*8+2] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+1] + g10[k-1];
    m_b = alpha[(k-1)*8+5] - g10[k-1];
    alpha[k*8+3] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+2] + g10[k-1];
    m_b = alpha[(k-1)*8+6] - g10[k-1];
    alpha[k*8+4] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+2] - g10[k-1];
    m_b = alpha[(k-1)*8+6] + g10[k-1];
    alpha[k*8+5] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+3] + g11[k-1];
    m_b = alpha[(k-1)*8+7] - g11[k-1];
    alpha[k*8+6] = (m_t > m_b) ? m_t : m_b;
    m_t = alpha[(k-1)*8+3] - g11[k-1];
    m_b = alpha[(k-1)*8+7] + g11[k-1];
    alpha[k*8+7] = (m_t > m_b) ? m_t : m_b;
}
[0074] FIG. 9 is a graphical illustration of the Log-Alpha mapping method. Assume that alpha(k,0) through alpha(k,7) are already in the RCs of one column of the RC array; these data were generated in the calculation of the previous trellis stage. The context is broadcast in the row direction, and only one column, or row, of RCs is activated. The steps for executing the Log-Alpha mapping are:
[0075] RC exchanges data in 4 pairs of RCs, as shown at t=0: 1 cycle
[0076] Read 1 pair of g11(k) and g10(k). This data pair is broadcast so that all of the RCs in one column have the same pair of g11(k) and g10(k). Perform +/− g11(k) or +/− g10(k) based on location: 2 cycles
[0077] Perform the max* or max operation, depending on the selected algorithm, in each RC, where A and B are generated in the previous step.
[0078] 1) |A−B|: 1 cycle (LOG-MAP only)
[0079] 2) Max(A, B), or lookup table of fc(|A−B|) with Max(A, B): 1 cycle
[0080] 3) max(A, B) + fc(|A−B|): 1 cycle (LOG-MAP only)
[0081] Re-shuffle (using two express lanes) the data into the correct order, as shown at t=p+1: 1 cycle
[0082] Normalization max: get the max of max: 3 cycles
[0083] Propagate the max of max to every RC and subtract it (see the sketch after this list): 1 cycle
[0084] Store alpha(k+1, 0) to alpha(k+1, 7) to the frame buffer: 1 cycle
[0085] Loop index overhead: 2 cycles
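A minimal sketch of the normalization step above, assuming eight double-valued state metrics per trellis stage:

/* Subtract the maximum ("max of max") from all eight state metrics so
 * the log-domain values stay bounded across trellis stages. */
static void normalize_metrics(double m[8])
{
    double mx = m[0];
    for (int s = 1; s < 8; s++)
        if (m[s] > mx)
            mx = m[s];
    for (int s = 0; s < 8; s++)
        m[s] -= mx;
}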
[0086] Table 3 summarizes the steps and trellis stages for the Log-Alpha operation:
TABLE 3

                 LOG-MAP                    MAX-LOG-MAP

MS1              14 cycles/trellis stage    12 cycles/trellis stage
TI TMS320C62X    -                          9 cycles/trellis stage
                                            (no normalization)
[0087] The Log-Beta procedures are shown as follows:
/* MAX-LOG-MAP backward recursion over the eight-state trellis */
for (k = FRAME_LENGTH - 1; k >= 0; k--)
{
    m_t = beta[(k+1)*8+0] - g11[k];
    m_b = beta[(k+1)*8+1] + g11[k];
    beta[k*8+0] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+2] - g10[k];
    m_b = beta[(k+1)*8+3] + g10[k];
    beta[k*8+1] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+4] + g10[k];
    m_b = beta[(k+1)*8+5] - g10[k];
    beta[k*8+2] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+6] + g11[k];
    m_b = beta[(k+1)*8+7] - g11[k];
    beta[k*8+3] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+0] + g11[k];
    m_b = beta[(k+1)*8+1] - g11[k];
    beta[k*8+4] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+2] + g10[k];
    m_b = beta[(k+1)*8+3] - g10[k];
    beta[k*8+5] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+4] - g10[k];
    m_b = beta[(k+1)*8+5] + g10[k];
    beta[k*8+6] = (m_t > m_b) ? m_t : m_b;
    m_t = beta[(k+1)*8+6] - g11[k];
    m_b = beta[(k+1)*8+7] + g11[k];
    beta[k*8+7] = (m_t > m_b) ? m_t : m_b;
}
[0088] Assume that beta(k,0) through beta(k,7) are already in the RCs of one column of the RC array; these data were generated in the calculation of the previous trellis stage. The context is broadcast in the row direction, and only one column of RCs is activated. FIG. 10 illustrates the Log-Beta mapping operations. The steps of the Log-Beta method are:
[0089] RC exchanges data with its neighbor in 4 pairs of RCs: 1 cycle
[0090] Read 1 pair of g11(k) and g10(k). This data pair is broadcast so that all of the RCs in one column have the same pair of g11(k) and g10(k). Perform +/− g11(k) or +/− g10(k) based on location: 2 cycles
[0091] Perform the max* or max operation, depending on the selected algorithm, in each RC, where A and B are generated in the previous step.
[0092] 1) |A−B|: 1 cycle (LOG-MAP only)
[0093] 2) Max(A, B), or lookup table of fc(|A−B|) with Max(A, B): 1 cycle
[0094] 3) max(A, B) + fc(|A−B|): 1 cycle (LOG-MAP only)
[0095] Re-shuffle (using two express lanes) the data into the correct order, as shown at t=p+1: 1 cycle
[0096] Normalization max: get the max of max: 3 cycles
[0097] Propagate the max of max to every RC and subtract it: 1 cycle
[0098] Store beta(k−1, 0) to beta(k−1, 7) to the frame buffer: 1 cycle
[0099] Loop index overhead: 2 cycles
[0100] Table 4 summarizes the Log-Beta operation:
TABLE 4

                 LOG-MAP                    MAX-LOG-MAP

MS1              14 cycles/trellis stage    12 cycles/trellis stage
TI TMS320C62X    -                          9 cycles/trellis stage
                                            (no normalization)
[0101] The LLR procedures are shown as follows:
/* MAX-LOG-MAP LLR: accumulate the max over S+ (enumerator) and S- (denominator) */
for (k = 1; k <= FRAME_LENGTH; k++)
{
    enumerator  = -MAX;
    denominator = -MAX;
    t_d = alpha[(k-1)*8+0] + beta[k*8+0] - g11[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+0] + beta[k*8+1] + g11[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+1] + beta[k*8+2] - g10[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+1] + beta[k*8+3] + g10[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+2] + beta[k*8+5] - g10[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+2] + beta[k*8+4] + g10[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+3] + beta[k*8+7] - g11[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+3] + beta[k*8+6] + g11[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+4] + beta[k*8+1] - g11[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+4] + beta[k*8+0] + g11[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+5] + beta[k*8+3] - g10[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+5] + beta[k*8+2] + g10[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+6] + beta[k*8+4] - g10[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+6] + beta[k*8+5] + g10[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    t_d = alpha[(k-1)*8+7] + beta[k*8+6] - g11[k-1];
    denominator = (denominator > t_d) ? denominator : t_d;
    t_e = alpha[(k-1)*8+7] + beta[k*8+7] + g11[k-1];
    enumerator = (enumerator > t_e) ? enumerator : t_e;
    ext[k-1] = enumerator - denominator - Lu[k-1] - La[k-1];
}
[0102] Assume that alpha(k,s), beta(k,s), and the g11(k)/g10(k) pairs are in the frame buffer, where s = 0, 1, ..., 7; these data were generated in the Log-Gamma, Log-Alpha, and Log-Beta stages. The context is broadcast to the rows, and all of the RCs are activated. FIG. 11 illustrates the LLR operations. FIG. 12 is a graphical depiction of the enumerator and denominator calculations of the LLR operations. The steps of the LLR method are as follows:
[0103] alpha(k,0) to alpha(k+7,7) are loaded to each column of RCs: 8×1 cycles
[0104] beta(k,0) to beta(k+7,7) are loaded to each column of RCs: 8×1 cycles
[0105] RC exchanges data in each column in 4 pairs of RCs for the beta variable at t=0; the results are shown at t=1: 1 cycle
[0106] RC exchanges data in each column in 4 pairs of RCs for the alpha variable; the results are shown at t=2: 1 cycle
[0107] Read 1 pair of g11(k) and g10(k). This data pair is broadcast so that all of the RCs have the same pair of g11(k) and g10(k). Perform alpha(k−1, m′) + beta(k, m) +/− g11(k) or +/− g10(k) for the enumerator and denominator based on location: 2 cycles
[0108] Perform the max* or max operation for the enumerator and denominator, depending on the selected algorithm, in each column. One pair of data is processed at a time, so it takes 3 iterations to get the final max* or max. However, the lookup operation cannot be performed in parallel because of the limitation of the frame buffer.
[0109] 1) Max(A, B): 3×1 cycles
[0110] 2) |A−B|: 3×1 cycles (LOG-MAP only)
[0111] 3) Lookup table of fc(|A−B|): 8×3×1 cycles (LOG-MAP only)
[0112] 4) max(A, B) + fc(|A−B|): 3×1 cycles (LOG-MAP only)
[0113] Calculate the extrinsic information: enumerator − denominator − Lu(k−1) − La(k−1): 2 cycles
[0114] Store the extrinsic information to the frame buffer: 1 cycle
[0115] Loop index overhead: 2 cycles
Table 5 summarizes the cycles per trellis stage for the LLR operation:

TABLE 5

                 LOG-MAP                       MAX-LOG-MAP

MS1              58 cycles/8 trellis stages    28 cycles/8 trellis stages
TI TMS320C62X    -                             13 cycles/trellis stage
[0116] FIG. 13 shows the serial steps of the Turbo mapping method of the present invention. It starts from the calculation of log-γ, then log-α and log-β, and finally the LLR within one decoder (e.g. DEC1). All of the intermediate data are stored in the frame buffer. Once the LLR values are available, an interleaving/deinterleaving procedure is performed to re-order the data sequence, as sketched below. The above procedure is repeated for the second decoder (DEC2) in the same iteration, or for the same decoder in the next iteration.
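A minimal sketch of this re-ordering step, assuming perm[] holds any valid 3G interleaver pattern of length n:

/* Interleave: read in permuted order. Deinterleave: write in permuted
 * order, so deinterleave(interleave(x)) recovers x. */
static void interleave(const double *in, double *out, const int *perm, int n)
{
    for (int k = 0; k < n; k++)
        out[k] = in[perm[k]];
}

static void deinterleave(const double *in, double *out, const int *perm, int n)
{
    for (int k = 0; k < n; k++)
        out[perm[k]] = in[k];
}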
[0117] Table 6 summarizes the serial execution of a Turbo decoding method with an array of independently reconfigurable processing elements:
TABLE 6

STEPS        LOG-MAP (MS1)      MAX-LOG-MAP (MS1)  MAX-LOG-MAP (TI TMS320C62X)  MIX-LOG-MAP

Log-Gamma    1.13 cycles/stage  1.13 cycles/stage  2.5 cycles/stage             1.13 cycles/stage
Log-Alpha    14 cycles/stage    12 cycles/stage    9 cycles/stage               14 cycles/stage
Log-Beta     14 cycles/stage    12 cycles/stage    9 cycles/stage               14 cycles/stage
LLR          7.3 cycles/stage   3.5 cycles/stage   13 cycles/stage              3.5 cycles/stage
Interleaver  2 cycles/stage     2 cycles/stage     ?                            2 cycles/stage
Total        38.5 cycles/stage  30.7 cycles/stage  33.5 cycles/stage            34.7 cycles/stage
[0118] Table 7 summarizes the throughput, or decoded data rate, of the Turbo decoding method using the reconfigurable array, according to the invention, using the following formula:

$$\text{Throughput (Mbits/s)} = \frac{f}{\text{total cycles per trellis stage}}$$

[0119] where f is the clock frequency (MHz) of MS1, and one trellis stage decodes one information bit.
TABLE 7

LOG-MAP (MS1)    MAX-LOG-MAP (MS1)    MIX-LOG-MAP

0.52 Mbits/s     0.65 Mbits/s         0.576 Mbits/s
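The figures in Table 7 follow from the totals in Table 6 with an assumed MS1 clock of f = 20 MHz, a value inferred here from the table entries themselves (e.g. 20/38.5 = 0.52) rather than stated in this section; a quick check:

#include <stdio.h>

int main(void)
{
    const double f_mhz    = 20.0;                     /* assumed clock */
    const double cycles[] = { 38.5, 30.7, 34.7 };     /* Table 6, MS1  */
    const char  *alg[]    = { "LOG-MAP", "MAX-LOG-MAP", "MIX-LOG-MAP" };

    for (int i = 0; i < 3; i++)
        printf("%-12s %.3f Mbits/s\n", alg[i], f_mhz / cycles[i]);
    return 0;
}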
[0120] The parallel mapping is based on the MIX-LOG-MAP algorithm. The window size is twice the number of trellis states in each stage. Basically, the sliding window approach is suitable for the large frame sizes of CDMA2000, W-CDMA and TD-SCDMA. Parallel mapping has higher performance and uses less memory, but has higher power consumption compared to serial mapping. The following tables show the steps for each kernel; they are similar to the steps in the serial mapping. The resource allocation for each computational procedure in the parallel mapping is shown in FIG. 14.
[0121] Log-Gamma, using the 6th row of RCs in an exemplary embodiment:
TABLE 8

Clock cycle    Operations

1              Load the a priori info Le
2              Load the systematic info Lu
3              Load the check-bit info Lc
4              Compute x = Lu + Le
5              Compute g11 = (x + Lc)/2
6              Compute g10 = (x − Lc)/2; meanwhile, left shift by 8 to pack into H8
7              Pack g10, g11 into 16-bit data; g10 is in H8
8              Store the data back to the frame buffer
[0122] Log-Alpha, using the 5th row of RCs according to an exemplary method:
TABLE 9

Clock cycle    Operations

1              Configure the data bus into broadcast mode
2              Load g10, g11; compute αk−1 + g11/g10 based on condition (condition is pre-loaded); fix the data in the feedback/input data register; put the data in r0
3              Compute αk−1 − g11/g10; put the data in r1
4              Full connection, column-wise, exclusive; get r0 into r2 (r0 is from another RC, based on the trellis diagram)
5              Full connection, column-wise, exclusive; get r1 into r3 (r1 is from another RC)
6              Max(r2, r3)
7              |r2 − r3|
8              Nop (something like a branch delay slot; can probably be used later)
9              LUT(fc(|A − B|)); other column RCs can still perform normal computation; max + fc(|A − B|)
10             Normalization max* (exclusive, row-context)
11             Normalization max* (exclusive, row-context)
12             Normalization max* (exclusive, row-context)
13             −max (exclusive, row-context)
[0123] Log-Beta (1st-Beta and 2nd-Beta, using the 1st and 2nd rows of RCs):
Clock cycle    Operations

1              Load g10, g11; compute βk+1 + g11/g10 based on condition (condition is pre-loaded); fix the data in the feedback/input data register; put the data in r0
2              Compute βk+1 − g11/g10; put the data in r1
3              Full connection, column-wise, exclusive; get r0 into r2 (r0 is from another RC, based on the trellis diagram)
4              Full connection, column-wise, exclusive; get r1 into r3 (r1 is from another RC)
5              Max(r2, r3)
6              |r2 − r3|
7              Nop (something like a branch delay slot; can probably be used later)
8              LUT(fc(|A − B|)); other column RCs can still perform normal computation; max + fc(|A − B|)
9              Normalization max* (exclusive, row-context)
10             Normalization max* (exclusive, row-context)
11             Normalization max* (exclusive, row-context)
12             −max (exclusive, row-context)
[0124] LLR, using the 3rd row of RCs:
TABLE 10

Clock cycle    Operations

1              Copy log-alpha αk−1 from the storage column (6th column)
2              Copy βk from 1st-β or 2nd-β; meanwhile βk + αk−1 → re
3              Full connection, column-wise, exclusive; reshuffle βk; meanwhile βk (reordered) + αk−1 → rd
4              −Lu(x) for every RC; frame buffer data bus in broadcast mode
5              −Le for every RC; frame buffer data bus in broadcast mode
6              Normalization max* (exclusive, row-context)
7              Normalization max* (exclusive, row-context)
8              Normalization max* (exclusive, row-context)
9              Put the data in the correct position in the LLR column (exclusive, row-context)
[0125] Table 11 illustrates a cycle schedule for all of the kernels summarized in Tables 8-10. The cycle schedule is exemplary only, and is based on the following criteria, which may not necessarily be met in other embodiments:
[0126] 1) No two rows of RCs access the frame buffer simultaneously.
[0127] 2) If one row of RCs performs a MIMD operation, the other rows are in idle mode. In the table, "full" means a MIMD operation; the other rows are in idle mode.
[0128] 3) The only case in which two MIMD operations can be performed in parallel is the 1st-β and 2nd-β, where the operations are the same.
TABLE 11
(Operations per clock cycle; concurrent entries are listed in the original column order: 1st-β, 2nd-β, LLR, storage & reorder, α, γ.)

clock    operations

1        Le(FB)
2        Lu(FB)
3        Lc(FB)
4        Le + Lu
5        (x + Lc)/2
6        (x − Lc)/2
7        Pack
8        Store FB
9        Copy αk−1; Copy α; FB + g10/g11
10       FB + g10/g11; FB + g10/g11; Copy LLR; −g10/g11
11       −g10/g11; FB + g10/g11; −g10/g11; Get Left
12       −g10/g11; Copy βk
13       Full, for r0
14       Full, for r1
15       Full, for r0; Full, for r0
16       Full, for r1; Full, for r1
17       |r2 − r3|; |r2 − r3|; −Le; |r2 − r3|
18       Max(r2, r3); Max(r2, r3); βk + αk−1; Max(r2, r3)
19       Full, βk(2)
20       NOP; NOP; βk(2) + αk−1; LUT + max
21       LUT + max; NOP
22       LUT + max
23       normalization 1; normalization 1; Max 1; normalization 1
24       normalization 2; normalization 2; Max 2; normalization 2
25       normalization 3; normalization 3; Max 3; normalization 3
26       −max; −max; −max
[0129] The Log-Gamma kernel is performed once every 16 groups of symbols: TLog-Gamma = (2 (data bus reconfiguration) + 8×2)/16 (stages) = 18/16 = 1.125 cycles. The cycles for the rest of the operations total 18.125. Thus, Tsub-total = 1.125 + 18.125 = 19.25 cycles. If the clock cycles for the interleaver are 2·Lframe, where Lframe is the frame size, and the overhead to update the loop index is 2 clock cycles, then the total number of cycles per stage is 23.5. FIG. 15 is a flow chart illustrating the parallel mapping method of performing Turbo coding with a reconfigurable processor array.
[0130] Other arrangements, configurations and methods for executing a Turbo coding routine should be readily apparent to a person of ordinary skill in the art. Other embodiments, combinations and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings.
Claims
- 1. A digital signal processing method, comprising:
configuring a portion of an array of independently reconfigurable processing elements for performing a turbo coding routine; and executing the turbo coding routine on data blocks received at the configured portion of the array of processing elements.
- 2. The method of claim 1, wherein configuring a portion of the array of reconfigurable processing elements includes activating the portion with an activation signal.
- 3. The method of claim 1, wherein the portion of the array of independently reconfigurable processing elements includes at least one processing element.
- 4. The method of claim 1, wherein executing the turbo coding routine on data blocks received at the configured portion of the array of processing elements includes encoding the data blocks.
- 5. The method of claim 1, wherein executing the turbo coding routine on data blocks received at the configured portion of the array of processing elements includes decoding the data blocks.
- 6. The method of claim 1, wherein configuring a portion of an array of independently reconfigurable processing elements for performing a turbo coding routine includes configuring the portion as a logarithmic maximum a posteriori (LOG-MAP)-based processor.
- 7. The method of claim 6, further comprising configuring the portion to access a look-up table.
- 8. The method of claim 1, further comprising idling all processing elements in the array other than the portion of processing elements configured for performing the turbo coding routine.
- 9. The method of claim 1, wherein each processing element includes at least one functional unit, and wherein configuring a portion of an array of independently reconfigurable processing elements for performing a turbo coding routine includes programming the functional unit to perform at least one function of the turbo coding routine.
- 10. The method of claim 9, wherein the functional unit includes programmable logic that is configurable for performing a logical function.
- 11. A digital signal processing apparatus, comprising:
an array of interconnected, reconfigurable processing elements, each processing element being independently programmable with a context instruction; a context memory connected to the array for storing and providing the context instruction to the processing elements; and a processor connected to the array and to the context memory, for controlling the loading of the context instruction to the processing elements, for configuring a portion of the processing elements to perform a turbo coding routine.
- 12. The apparatus of claim 11, wherein the processor is further configured to execute the turbo coding routine by controlling a state of the configured portion of processing elements.
- 13. The apparatus of claim 11 wherein the array, the context memory, and the processor reside on a single chip.
- 14. The apparatus of claim 11, wherein the turbo coding routine is an encoding process on data blocks received at the portion of the array.
- 15. The apparatus of claim 11, wherein the turbo coding routine is a decoding process on data blocks received at the portion of the array.
- 16. The apparatus of claim 11, wherein each processing element includes at least one functional unit that is programmable for performing at least one function of the turbo coding routine.
- 17. The apparatus of claim 16, wherein the functional unit includes programmable logic that is configurable by the context instruction.
- 18. The apparatus of claim 11, wherein the processor is further configured to idle all processing elements that are not of the portion of processing elements configured for performing the turbo coding routine.
- 19. The apparatus of claim 11, wherein the context instruction is configured to program the portion of processing elements to emulate a logarithmic maximum a posteriori (LOG-MAP) processor.
- 20. A digital signal processing apparatus, comprising:
a context memory for storing one or more context instructions for performing a turbo coding routine; and an array of independently reconfigurable processing elements, each of which is responsive to a context instruction for being configured to execute a portion of the turbo coding routine.