The present invention relates generally to electronic circuits, and more particularly relates to information coding techniques.
Turbo (i.e., iterative) parallel concatenated convolutional codes (PCCC's), commonly referred to as “turbo codes,” find widespread application, for example, in modern baseband (e.g., mobile broadband) systems including, but not limited to, Long Term Evolution (LTE) and Wideband Code Division Multiple Access (WCDMA) devices. Turbo codes are essentially PCCC's having an encoder formed by two or more constituent systematic recursive convolutional encoders joined by an interleaver. A received data stream is usually decoded using maximum likelihood decoding.
Typically, turbo codes are implemented in a straightforward manner, meaning that an encoded data stream is processed on a bit-by-bit basis. However, since the input block length is normally very large, maximum likelihood decoding would be significantly complex and thus impractical. A bit-by-bit processing approach, whereby one bit of the input data stream is processed per iteration (i.e., one bit per iteration), results in low data throughput and is therefore undesirable. Another known turbo code implementation approach is to utilize look-up tables, which slightly improves the bits-per-cycle performance. This approach, however, requires a significantly large memory allocation for implementing the look-up tables and is thus not practical, particularly for standard digital signal processor (DSP) machines and/or other processing systems in which memory is at a premium.
The present invention, in illustrative embodiments thereof, provides techniques for performing turbo PCCC encoding in a manner which enables required output data bits to be computed with a higher level of parallelism compared to conventional approaches and without the need for look-up tables or costly memory allocation for implementing the look-up tables. Furthermore, aspects of the invention reduce the dependence upon results of adjacent historic data samples, thereby allowing encoding to be performed in a distributed manner.
In accordance with an embodiment of the invention, an iterative PCCC encoder includes a first delay line operative to receive at least one input data sample and to generate a plurality of delayed samples as a function of the input data sample. The encoder further includes a second delay line including a plurality of delay elements connected in a series configuration. An input of a first one of the delay elements is adapted to receive a sum of first and second signals, the first signal generated as a sum of the input data sample and at least one of the delayed samples, and the second signal generated as an output of a single one of the delay elements. A third delay line in the encoder is operative to generate an output data sample as a function of the sum of the first and second signals and a delayed version of the sum of the first and second signals.
In accordance with another embodiment of the invention, a method for performing iterative PCCC encoding includes the steps of: generating a first plurality of data samples, each of the data samples being generated by delaying an input data sample, Xin[n], by a prescribed delay amount, where n is an integer indicative of an n-th sample in a data stream; summing the input data sample Xin[n] with at least one of the data samples in the first plurality of data samples to thereby generate a first signal; generating a second plurality of data samples, each of the data samples in the second plurality of data samples being generated by delaying a sum of the first signal and a second signal by respective delay amounts, a given one of the data samples in the second plurality of data samples forming the second signal; and generating an output data sample, Yout[n], as a function of the sum of the first and second signals and a delayed version of the sum of the first and second signals.
These and other features, objects and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
The following drawings are presented by way of example only and without limitation, wherein like reference numerals indicate corresponding elements throughout the several views.
It is to be appreciated that elements in the figures are illustrated for simplicity and clarity. Common but well-understood elements that may be useful or necessary in a commercially feasible embodiment may not be shown in order to facilitate a less hindered view of the illustrated embodiments.
The present invention, according to aspects thereof, will be described herein in the context of illustrative turbo PCCC circuit architectures and coding methodologies, at least portions of which may be implemented, for example, on a digital signal processor (DSP) machine (e.g., DSP core) or alternative processor (e.g., microprocessor, central processing unit (CPU), etc.). It is to be appreciated, however, that the invention is not limited to the circuit architectures and/or methods shown and described herein. Rather, the invention is more generally applicable to techniques for beneficially enhancing turbo PCCC coding by increasing the level of parallel computations performed. In this manner, techniques of the invention provide a transformation for turbo PCCC coding which achieves a significant improvement in data throughput compared to conventional approaches. Moreover, it will become apparent to those skilled in the art given the teachings herein that numerous modifications can be made to the embodiments shown that are within the scope of the present invention. That is, no limitations with respect to the specific embodiments described herein are intended or should be inferred.
Concatenated coding schemes were proposed as a method for achieving large coding gains by combining two or more relatively simple building-block or component codes, sometimes referred to as constituent codes (see, e.g., G. D. Forney, Jr., “Concatenated Codes,” The M.I.T. Press, 1966, which is incorporated herein by reference in its entirety). Turbo codes were first introduced in 1993 in an article by Berrou, Glavieux and Thitimajshima (see, e.g., C. Berrou et al., “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes,” Proceedings of the IEEE International Conference on Communications, pp. 1064-1070, 1993, the disclosure of which is incorporated herein by reference in its entirety). That article demonstrated that a turbo code together with an iterative decoding algorithm could provide performance, in terms of bit error rate (BER), which approaches the theoretical limit. In general, a turbo code encoder provides a parallel concatenation of multiple (i.e., two or more) recursive systematic convolutional (RSC) codes which are typically, though not necessarily, identical to one another, applied to an input bit sequence. An output of the encoder includes systematic bits (i.e., the input bit sequence itself) and parity bits which can be selected to provide a desired rate of encoding.
The encoder circuit 100 includes a first delay line 102 comprising a first adder block 104, a first delay element 106 having a first delay D1 associated therewith, a second delay element 108 having a second delay D2 associated therewith, a third delay element 110 having a third delay D3 associated therewith, and a second adder block 112. The first adder block 104 is adapted to receive an input signal, Xin[n], which may be an n-th sample in a data stream (where n is an integer), applied to the encoder circuit 100. Adder block 104 is preferably operative to generate a signal, Xo[n], which is a summation of input signal Xin[n] and a signal generated by the second adder block 112. Delay element 106 is preferably adapted to receive signal Xo[n] from adder block 104 and is operative to generate a signal, Xo[n−1], which is essentially signal Xo[n] delayed by D1. Delay element 108 is preferably adapted to receive signal Xo[n−1] from delay element 106 and is operative to generate a signal, Xo[n−2], which is essentially signal Xo[n−1] delayed by D2. Likewise, delay element 110 is preferably adapted to receive signal Xo[n−2] from delay element 108 and is operative to generate a signal, Xo[n−3], which is essentially signal Xo[n−2] delayed by D3. The signal generated by adder block 112 is preferably a summation of signals Xo[n−2] and Xo[n−3]. In this manner, signal Xo[n] presented to the first delay element 106 is equal to the input signal Xin[n] summed with previously computed (i.e., delayed) values of Xo[n] itself: Xo[n]=Xin[n]+Xo[n−2]+Xo[n−3]. Thus, delay line 102 represents an iterative structure.
The encoder circuit 100 further comprises a second delay line 114 including a first delay element 116 having a first delay D1 associated therewith, a second delay element 118 having a second delay D2 associated therewith, a third delay element 120 having a third delay D3 associated therewith, a first adder block 122 and a second adder block 124. Each of the delay values D1, D2 and D3 may be different or, alternatively, one or more of the delay values may be equal to one another. Furthermore, one or more of the delay values in the first and second delay lines 102 and 114, respectively, may be equal to one another. Again, it is to be understood that the invention is not limited to any particular delay values. Delay elements 116, 118 and 120 are preferably coupled together in series, such as, for example, in a tapped delay line arrangement (i.e., an output of one delay element is connected to an input of an adjacent delay element in the delay line 114).
Signal Xo[n] from adder block 104 is supplied to delay element 116 and concurrently to adder block 122. Delay element 116 is preferably operative to generate a signal Xo[n−1] which is essentially signal Xo[n] delayed by D1. Signal Xo[n−1] is supplied to delay element 118 and to adder block 122. Delay element 118 is preferably operative to generate a signal Xo[n−2] which is essentially signal Xo[n−1] delayed by D2. Signal Xo[n−2] is supplied to delay element 120. Delay element 120 is preferably operative to generate a signal Xo[n−3] which is essentially signal Xo[n−2] delayed by D3. An output signal generated by adder block 122, which is a summation of signals Xo[n] and Xo[n−1] (i.e., Xo[n]+Xo[n−1]), is added, in second adder block 124, with signal Xo[n−3] to generate an output signal Yout[n] of the encoder circuit 100, where:
Yout[n]=Xo[n]+Xo[n−1]+Xo[n−3] (1)
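By way of illustration only, the overall behavior of encoder circuit 100 may be sketched in MATLAB-style pseudo-code as follows. The sketch assumes modulo-2 (exclusive-OR) addition in each adder block, a one-sample delay for each delay element and an all-zero initial state; the variable names and block length are illustrative and are not taken from the circuit description.

    N    = 24;                            % illustrative block length
    xin  = randi([0 1], 1, N);            % illustrative input data samples Xin[n]
    xo   = zeros(1, N);                   % signal Xo[n] at the output of adder block 104
    yout = zeros(1, N);                   % output data samples Yout[n]
    for n = 1:N
        t = xin(n);
        if n > 2, t = t + xo(n-2); end    % Xo[n-2] from delay element 108
        if n > 3, t = t + xo(n-3); end    % Xo[n-3] from delay element 110
        xo(n) = mod(t, 2);                % delay line 102: Xo[n] = Xin[n] + Xo[n-2] + Xo[n-3]
        u = xo(n);
        if n > 1, u = u + xo(n-1); end    % Xo[n-1] from delay element 116
        if n > 3, u = u + xo(n-3); end    % Xo[n-3] from delay element 120
        yout(n) = mod(u, 2);              % equation (1): Yout[n] = Xo[n] + Xo[n-1] + Xo[n-3]
    end

Out-of-range history terms are simply treated as zero in the sketch, which corresponds to clearing the delay elements before a block is encoded.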
As apparent from the foregoing, each value of the signal Xo[n] in encoder circuit 100 depends directly on the immediately preceding values Xo[n−2] and Xo[n−3], so that the output data samples are computed serially, essentially one bit per iteration.
In accordance with an important aspect of the invention, a transformation of the encoder circuit 100 described above is provided which reduces this dependence on adjacent historic samples and thereby enables a plurality of output data bits to be computed in parallel, as will be described below in conjunction with an illustrative embodiment.
As previously stated in connection with encoder circuit 100, the signal Xo[n] supplied to the first delay element 106 may be expressed as:
Xo[n]=Xo[n−2]+Xo[n−3]+Xin[n] (2)
where n is an integer indicative of a given sample number in the input data stream. By way of example only and without loss of generality, an illustrative transformation is presented herein which beneficially achieves a higher level of parallelism, and thus provides improved bit-per-iteration performance (i.e., higher overall data throughput) compared to conventional turbo PCCC encoder methodologies. Specifically, using equation (2) above, the term Xo[n−2] can be determined by adding two delay units to each of the terms in the expression to thereby yield the following equivalent expression:
Xo[n−2]=Xo[n−4]+Xo[n−5]+Xin[n−2] (3)
In a similar manner, the term Xo[n−3] can be determined from equation (2) above by adding three delay units to each of the terms in the expression to thereby obtain the following equivalent expression:
Xo[n−3]=Xo[n−5]+Xo[n−6]+Xin[n−3] (4)
Hence, an expression for Xo[n] may be computed by substituting equation (3) for the term Xo[n−2] in equation (2) and by substituting equation (4) for the term Xo[n−3], as follows:
Xo[n]=Xo[n−4]+Xo[n−5]+Xin[n−2]+Xo[n−5]+Xo[n−6]+Xin[n−3]+Xin[n] (5)
Equation (5) above can be simplified by recognizing that, with modulo-2 addition, the two Xo[n−5] terms cancel one another, thereby yielding the following expression for Xo[n]:
Xo[n]=Xo[n−4]+Xo[n−6]+Xin[n]+Xin[n−2]+Xin[n−3] (6)
The term Xo[n−4] in equation (6) can be determined by adding four delay units to each of the terms in equation (2) above to thereby obtain the following equivalent expression:
Xo[n−4]=Xo[n−6]+Xo[n−7]+Xin[n−4] (7)
Substituting equation (7) into equation (6) for the term Xo[n−4] results in the following expression for Xo[n]:
Xo[n]=Xo[n−6]+Xo[n−7]+Xin[n−4]+Xo[n−6]+Xin[n]+Xin[n−2]+Xin[n−3] (8)
Simplifying equation (8) above by canceling the two Xo[n−6] terms yields the following expression for Xo[n]:
Xo[n]=Xo[n−7]+Xin[n]+Xin[n−2]+Xin[n−3]+Xin[n−4] (9)
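The substitutions of equations (3) through (8) can also be verified compactly by viewing the recursions in delay-operator form over GF(2): equation (2) corresponds to (1 + D^2 + D^3)Xo = Xin, and multiplying both sides by (1 + D^2 + D^3 + D^4) gives (1 + D^7)Xo = (1 + D^2 + D^3 + D^4)Xin, which is equation (9), since the polynomial product collapses to 1 + D^7 when coefficients are taken modulo 2. A minimal MATLAB-style check of this polynomial identity (variable names illustrative) is:

    p = [1 0 1 1];           % 1 + D^2 + D^3, the feedback taps of equation (2)
    q = [1 0 1 1 1];         % 1 + D^2 + D^3 + D^4, the input taps of equation (9)
    r = mod(conv(p, q), 2);  % polynomial product with coefficients taken modulo 2
    assert(isequal(r, [1 0 0 0 0 0 0 1]));   % i.e., 1 + D^7, matching the Xo[n-7] feedback of equation (9)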
As apparent from equation (9) above, the signal Xo[n] depends on only a single historic output term, Xo[n−7], together with the current and shifted input samples. From a practical implementation standpoint, this means that seven output bits can be computed in parallel using the current input Xin[n], the shifted inputs Xin[n−2], Xin[n−3] and Xin[n−4], and previously determined (i.e., historic) output values that are at least seven samples old. Of course, as will become apparent to those skilled in the art given the teachings herein, the present invention is not limited to the transformation set forth in equation (9). Rather, a greater or lesser degree of parallelism can be achieved as desired, depending on the particular coding application. The advantage of the improved data throughput afforded by additional parallelism in the encoder circuit is mitigated somewhat by an increase in the number of delay elements required in one or more of the delay lines in the PCCC encoder, although the additional delay elements can typically be accommodated without a significant increase in cost. Conversely, the benefit of using a reduced number of delay elements in one or more delay lines in the encoder is tempered by a decrease in the overall data throughput of the encoder.
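To make the parallelism concrete, the following MATLAB-style sketch (under the same illustrative assumptions of modulo-2 addition, one-sample delays and an all-zero initial state, with arbitrary variable names and block length) evaluates Xo[n] in independent groups of seven samples using equation (9) and verifies the result against the bit-by-bit recursion of equation (2).

    N   = 7 * 16;                          % illustrative block length (a multiple of seven)
    xin = randi([0 1], 1, N);              % illustrative input data samples Xin[n]

    xo_serial = zeros(1, N);               % reference: equation (2), one bit per iteration
    for n = 1:N
        t = xin(n);
        if n > 2, t = t + xo_serial(n-2); end
        if n > 3, t = t + xo_serial(n-3); end
        xo_serial(n) = mod(t, 2);
    end

    xo_par = zeros(1, N);                  % transformed form: equation (9), seven bits per group
    for base = 0:7:N-7
        for k = 1:7                        % the seven sums in this inner loop are independent,
            n = base + k;                  % since xo_par(n-7) always lies in an earlier group
            t = xin(n);
            if n > 2, t = t + xin(n-2); end
            if n > 3, t = t + xin(n-3); end
            if n > 4, t = t + xin(n-4); end
            if n > 7, t = t + xo_par(n-7); end
            xo_par(n) = mod(t, 2);
        end
    end

    assert(isequal(xo_serial, xo_par));    % the transformation preserves the encoded sequence

Because no dependence exists among the seven iterations of the inner loop, those iterations may be evaluated concurrently, which is the source of the improved bit-per-iteration performance discussed above.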
With reference now to the figures, an exemplary turbo PCCC encoder circuit 300 according to an embodiment of the invention implements the transformation of equation (9) and includes a first delay line 302, a second delay line 304 and a third delay line 306, as will be described in further detail below.
More particularly, first delay line 302 preferably includes a plurality of delay elements connected together in a series configuration, such that an output of a given delay element is coupled with an input of an adjacent delay element in the delay line. Specifically, first delay line 302 includes a first delay element 308 having a delay D1 associated therewith, a second delay element 310 having a delay D2 associated therewith, a third delay element 312 having a delay D3 associated therewith, and a fourth delay element 314 having a delay D4 associated therewith. Delay element 308 is adapted to receive an input signal, Xin[n], which may be a sample in an input data stream supplied to encoder circuit 300, and is operative to generate a signal, Xin[n−1], which is indicative of signal Xin[n] delayed by D1, where n is an integer indicative of a given sample number in the input data stream. Delay element 310 is adapted to receive signal Xin[n−1] and is operative to generate a signal, Xin[n−2], which is indicative of signal Xin[n−1] delayed by D2. Delay element 312 is adapted to receive signal Xin[n−2] and is operative to generate a signal, Xin[n−3], which is indicative of signal Xin[n−2] delayed by D3. Likewise, delay element 314 is adapted to receive signal Xin[n−3] and is operative to generate a signal, Xin[n−4], which is indicative of signal Xin[n−3] delayed by D4.
Signal Xin[n−4] generated by delay element 314 is preferably supplied to a first adder 316. First adder 316 is operative to generate a signal, Xa1, which is a summation of signal Xin[n−4] and signal Xin[n−3] generated by delay element 312; namely, Xa1=Xin[n−3]+Xin[n−4]. A second adder 318 is adapted to receive signal Xa1 generated by adder 316 and signal Xin[n−2] generated by delay element 310 and is operative to generate a signal, Xa2, which is a summation of the output signal of adder 316 and Xin[n−2]; namely, Xa2=Xin[n−2]+Xin[n−3]+Xin[n−4]. In this manner, delay line 302, in combination with adders 316 and 318, is operative to generate the sum of the shifted input sample terms appearing in equation (9) above; namely, Xa2=Xin[n−2]+Xin[n−3]+Xin[n−4].
Second delay line 304 preferably includes an adder 320, or alternative summation circuitry, and a plurality of delay elements connected together in a series configuration, such that an output of a given delay element is coupled with an input of an adjacent delay element in the delay line. As will be described in further detail below, a first one of the delay elements in delay line 304 is preferably adapted to receive a sum of a first signal and a second signal, the first signal including input signal Xin[n] and at least one signal which is a delayed version of the input signal (e.g., signals Xin[n−2], Xin[n−3] and Xin[n−4]), and the second signal being generated as an output of a single one of the delay elements in delay line 304. In this manner, delay line 304 is operative to generate the sample term Xo[n−7] in equation (9) above.
More particularly, second delay line 304 includes a first delay element 322 having a delay D1 associated therewith, a second delay element 324 having a delay D2 associated therewith, a third delay element 326 having a delay D3 associated therewith, a fourth delay element 328 having a delay D4 associated therewith, a fifth delay element 330 having a delay D5 associated therewith, a sixth delay element 332 having a delay D6 associated therewith, and a seventh delay element 334 having a delay D7 associated therewith. It is to be appreciated that the invention is not limited to any specific number of delay elements in delay line 304. Nor is the invention limited to any specific delay values used for the respective delay elements 322 through 334; rather, each of the delay values D1 through D7 may be the same or, alternatively, one or more of the delay values may be different relative to one another. It is also to be appreciated that the delay values D1 through D4 in delay line 302 are not necessarily equivalent to delay values D1 through D4 in delay line 304, despite the similar naming convention employed.
Delay element 322 is adapted to receive a signal, Xo[n], supplied thereto and is operative to generate a signal, Xo[n−1], which is indicative of signal Xo[n] delayed by D1 (i.e., shifted). Delay element 324 is adapted to receive signal Xo[n−1] and is operative to generate a signal, Xo[n−2], which is indicative of signal Xo[n−1] delayed by D2. Delay element 326 is adapted to receive signal Xo[n−2] and is operative to generate a signal, Xo[n−3], which is indicative of signal Xo[n−2] delayed by D3. Delay element 328 is adapted to receive signal Xo[n−3] and is operative to generate a signal, Xo[n−4], which is indicative of signal Xo[n−3] delayed by D4. Delay element 330 is adapted to receive signal Xo[n−4] and is operative to generate a signal, Xo[n−5], which is indicative of signal Xo[n−4] delayed by D5. Delay element 332 is adapted to receive signal Xo[n−5] and is operative to generate a signal, Xo[n−6], which is indicative of signal Xo[n−5] delayed by D6. Likewise, delay element 334 is adapted to receive signal Xo[n−6] and is operative to generate a signal, Xo[n−7], which is indicative of signal Xo[n−6] delayed by D7.
Signal Xo[n−7], generated by the last delay element 334 in delay line 304, is preferably fed back to the beginning of delay line 304 through adder 320 in an iterative arrangement. More particularly, signal Xo[n] generated by adder 320 is preferably a summation of input signal Xin[n], signal Xa2, which, as previously described, is equal to Xin[n−2]+Xin[n−3]+Xin[n−4], and signal Xo[n−7]. Thus, signal Xo[n] supplied to delay element 322 may be expressed as Xo[n]=Xin[n]+Xin[n−2]+Xin[n−3]+Xin[n−4]+Xo[n−7], which is the same as equation (9) above.
Signal Xo[n] is concurrently supplied to third delay line 306. Delay line 306 may be implemented in a manner consistent with delay line 114 of encoder circuit 100 described above. Specifically, delay line 306 preferably includes a first delay element 336 having a delay D1 associated therewith, a second delay element 338 having a delay D2 associated therewith, a third delay element 340 having a delay D3 associated therewith, a first adder block 342 and a second adder block 344.
Signal Xo[n] from adder block 320 is supplied to delay element 336 and concurrently to adder block 342. Delay element 336 is preferably operative to generate a signal Xo[n−1], which is essentially signal Xo[n] delayed by D1. Signal Xo[n−1] is concurrently supplied to delay element 338 and to adder block 342. Delay element 338 is preferably operative to generate a signal Xo[n−2] which is essentially signal Xo[n−1] delayed by D2. Signal Xo[n−2] is supplied to delay element 340. Delay element 340 is preferably operative to generate a signal Xo[n−3] which is essentially signal Xo[n−2] delayed by D3. An output signal generated by adder block 342, which is a summation of signals Xo[n] and Xo[n−1] (i.e., Xo[n]+Xo[n−1]), is fed to second adder block 344, where it is added with signal Xo[n−3] to generate an output signal Yout[n] of the encoder circuit 300, where Yout[n]=Xo[n]+Xo[n−1]+Xo[n−3], which is equivalent to equation (1) above.
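For completeness, a register-level MATLAB-style sketch of encoder circuit 300 is given below, treating each delay element as a one-sample delay, each adder block as a modulo-2 adder and all registers as initially cleared; the array names are merely illustrative stand-ins for the delay elements and are not taken from the circuit description.

    N    = 24;
    xin  = randi([0 1], 1, N);             % illustrative input data samples Xin[n]
    r302 = zeros(1, 4);                    % delay elements 308, 310, 312, 314: Xin[n-1] ... Xin[n-4]
    r304 = zeros(1, 7);                    % delay elements 322 ... 334: Xo[n-1] ... Xo[n-7]
    r306 = zeros(1, 3);                    % delay elements 336, 338, 340: Xo[n-1] ... Xo[n-3]
    yout = zeros(1, N);
    for n = 1:N
        xa1 = mod(r302(3) + r302(4), 2);               % adder 316: Xin[n-3] + Xin[n-4]
        xa2 = mod(xa1 + r302(2), 2);                   % adder 318: Xa1 + Xin[n-2]
        xo  = mod(xin(n) + xa2 + r304(7), 2);          % adder 320: equation (9)
        yout(n) = mod(mod(xo + r306(1), 2) + r306(3), 2);  % adder blocks 342 and 344: equation (1)
        r306 = [xo, r306(1:2)];                        % shift third delay line 306
        r304 = [xo, r304(1:6)];                        % shift second delay line 304
        r302 = [xin(n), r302(1:3)];                    % shift first delay line 302
    end

    % cross-check against a direct bit-by-bit evaluation of equations (2) and (1)
    xo_ref = zeros(1, N);
    y_ref  = zeros(1, N);
    for n = 1:N
        t = xin(n);
        if n > 2, t = t + xo_ref(n-2); end
        if n > 3, t = t + xo_ref(n-3); end
        xo_ref(n) = mod(t, 2);
        u = xo_ref(n);
        if n > 1, u = u + xo_ref(n-1); end
        if n > 3, u = u + xo_ref(n-3); end
        y_ref(n) = mod(u, 2);
    end
    assert(isequal(yout, y_ref));

The three arrays play the role of the registers in delay lines 302, 304 and 306, respectively, and are shifted once per input sample; the final assertion confirms that the structural arrangement reproduces the output defined by equations (2) and (1).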
In accordance with another embodiment of the invention, turbo PCCC encoder circuit 300 can be simplified somewhat by reusing one or more output results generated in delay line 304 in delay line 306. For example, it is apparent from the foregoing description that delay elements 322 and 326 in delay line 304 generate the same signals, Xo[n−1] and Xo[n−3], respectively, that are generated by delay elements 336 and 340 in delay line 306. Accordingly, the corresponding outputs of delay line 304 may be supplied directly to adder blocks 342 and 344, thereby eliminating the need for one or more separate delay elements in delay line 306.
Techniques of the invention described herein may be performed using hardware and/or software aspects. Software includes, but is not limited to, firmware, resident software, microcode, etc., which can be executed on hardware which may include, but is not limited to, a central processing unit (CPU), DSP, hardware state machine, programmable logic array (PLA), etc. By way of illustration only and without limitation, according to an embodiment of the invention, at least a portion of the turbo PCCC encoder (e.g., the encoder circuit 300 described above) may be implemented using executable MATLAB pseudo-code.
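One possible form of such pseudo-code, offered purely as a sketch under the same illustrative assumptions as above (modulo-2 arithmetic, one-sample delays, an all-zero initial state, and arbitrary variable names and block length), is:

    N    = 1024;                          % illustrative block length
    xin  = randi([0 1], 1, N);            % input data samples Xin[n]
    % shifted-input sum Xin[n-2] + Xin[n-3] + Xin[n-4] (delay line 302 with adders 316 and 318)
    xa2  = mod([0 0 xin(1:N-2)] + [0 0 0 xin(1:N-3)] + [0 0 0 0 xin(1:N-4)], 2);
    % recursion of equation (9): Xo[n] = Xo[n-7] + Xin[n] + Xin[n-2] + Xin[n-3] + Xin[n-4]
    xo   = zeros(1, N);
    for n = 1:N
        fb = 0; if n > 7, fb = xo(n-7); end
        xo(n) = mod(xin(n) + xa2(n) + fb, 2);
    end
    % output of equation (1): Yout[n] = Xo[n] + Xo[n-1] + Xo[n-3] (delay line 306)
    yout = mod(xo + [0 xo(1:N-1)] + [0 0 0 xo(1:N-3)], 2);

In this formulation the only serial dependence is the feedback term xo(n-7), so up to seven consecutive values of xo may be computed per iteration, consistent with equation (9).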
The lines of executable MATLAB pseudo-code shown above may be thought of as respective steps in a turbo PCCC encoding methodology according to an embodiment of the invention. This pseudo-code can be implemented in various hardware including, but not limited to, an LTE or any third generation (3G) acceleration chip, a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC). It is to be understood that the pseudo-code is provided as an illustration only, and that other means of implementing one or more aspects of the invention are contemplated, as will become readily apparent to those skilled in the art given the teachings herein.
One or more embodiments of the invention or elements thereof may be implemented in the form of an article of manufacture including a machine readable medium that contains one or more programs which when executed implement such method step(s); that is to say, a computer program product including a tangible computer readable recordable storage medium (or multiple such media) with computer usable program code stored thereon in a non-transitory manner for performing the method steps indicated. Furthermore, one or more embodiments of the invention or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled with the memory and operative to perform, or facilitate the performance of, exemplary method steps.
As used herein, “facilitating” an action includes performing the action, making the action easier, helping to carry the action out, or causing the action to be performed. Thus, by way of example and not limitation, instructions executing on one processor might facilitate an action carried out by instructions executing on a remote processor, by sending appropriate data or commands to cause or aid the action to be performed. For the avoidance of doubt, where an actor facilitates an action by other than performing the action, the action is nevertheless performed by some entity or combination of entities.
Yet further, in another aspect, one or more embodiments of the invention or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a tangible computer-readable recordable storage medium (or multiple such media). Appropriate interconnections via bus, network, and the like can also be included.
Aspects of the invention may be particularly well-suited for use in an electronic device or alternative system (e.g., a broadband communications system). For example, at least a portion of the encoding techniques described herein may be implemented in a processing system comprising a processor coupled with a memory, as discussed below.
It is to be appreciated that the term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU and/or other processing circuitry (e.g., DSP, network processor, microprocessor, etc.). Additionally, it is to be understood that a processor may refer to more than one processing device, and that various elements associated with a processing device may be shared by other processing devices. For example, in the case of encoder circuit 300 described above, various elements of the encoder (e.g., the delay lines and adder blocks) may be implemented by, or shared among, one or more such processing devices and associated memory.
Accordingly, an application program, or software components thereof, including instructions or code for performing the methodologies of the invention, as described herein, may be stored in a non-transitory manner in one or more of the associated storage media (e.g., ROM, fixed or removable storage) and, when ready to be utilized, loaded in whole or in part (e.g., into RAM) and executed by the processor. In any case, it is to be appreciated that at least a portion of the components shown in the previous figures may be implemented in various forms of hardware, software, or combinations thereof (e.g., one or more DSPs with associated memory, application-specific integrated circuit(s) (ASICs), functional circuitry, one or more operatively programmed general purpose digital computers with associated memory, etc). Given the teachings of the invention provided herein, one of ordinary skill in the art will be able to contemplate other implementations of the components of the invention.
At least a portion of the techniques of the present invention may be implemented in an integrated circuit. In forming integrated circuits, identical die are typically fabricated in a repeated pattern on a surface of a semiconductor wafer. Each die includes a device described herein, and may include other structures and/or circuits. The individual die are cut or diced from the wafer, then packaged as an integrated circuit. One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered part of this invention.
An integrated circuit in accordance with the present invention can be employed in essentially any application and/or electronic system in which PCCC's may be employed. Suitable systems for implementing techniques of the invention may include, but are not limited to, mobile phones, personal digital assistants (PDA's), personal computers, wireless communication networks, etc. Systems incorporating such integrated circuits are considered part of this invention. Given the teachings of the invention provided herein, one of ordinary skill in the art will be able to contemplate other implementations and applications of the techniques of the invention.
Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be made therein by one skilled in the art without departing from the scope of the appended claims.