Method of blindly detecting a transport format of an incident convolutional encoded signal, and corresponding convolutional code decoder

Abstract
A method for blindly detecting a transport format of a convolutional encoded signal is provided. The transport format is unknown and belongs to a set of MF predetermined reference transport formats. The method includes decoding the convolutional encoded signal using a Maximum-a-Posteriori algorithm. The decoding includes considering the MF possible reference transport formats and delivering MF corresponding groups of soft output information, calculating from each group of soft output information a calculated cyclic redundancy check (CRC) word, and comparing the calculated CRC word with the transmitted CRC word. The groups for which the calculated CRC word is equal to the transmitted CRC word are selected, and an actual transport format of the convolutional encoded signal is selected from at least one soft output information among the last ones of each selected group.
Description
FIELD OF THE INVENTION

The invention relates in general to channel coding and decoding techniques, especially convolutional codes, and more particularly, to blind transport format detection (BTFD).


An application of the invention is directed in general to wireless communication systems, and more particularly, to CDMA systems such as the different CDMA-based mobile radio systems like CDMA 2000, WCDMA (Wideband CDMA) and the IS-95 standard.


BACKGROUND OF THE INVENTION

The third generation mobile radio system specifies convolutional codes and turbo-codes as channel coding techniques, as disclosed in 3GPP, Technical Specification Group Radio Access Network; Multiplexing and Channel Coding (FDD) (3G TS 25.212 version 3.5.0 (2000-12)), Release 1999. The UMTS standard defines, besides the TFCI (Transport Format Combination Indicator) based transport format detection, a possibility of blindly detecting the transport format in use. More details can be found in 3GPP, Technical Specification Group Radio Access Network; Multiplexing and Channel Coding (FDD) (3G TS 25.212 version 4.3.0 (2001-12)), Release 4.


This option is used to reduce the TFCI symbol overhead in the transmission frame, and thus to increase the air interface capacity. Further, as indicated in the UMTS standard, transport channels whose format is to be blindly detected have to be coded using a convolutional code.


Generally speaking, the blind transport format detection is based on the use of the CRC (Cyclic Redundancy Check) words. An example of such detection can be found in the annex of the above mentioned 3GPP document. The main idea is to use a Viterbi decoder for decoding the different possible block sizes corresponding to the possible transport formats that can be used in the transport channel under investigation.


Then, a CRC word has to be checked for all possible block sizes. A correct CRC determines the correct block size. For a large CRC length, the CRC check is sufficient to determine the correct block. However, in the case of smaller CRCs, like 8 or 12 bits, an additional metric has to be deployed to distinguish between two or more blocks with a correct CRC. When a Viterbi decoder is used, this additional metric requires a separate calculation.


SUMMARY OF THE INVENTION

In view of the foregoing background, an object of the invention is to provide an approach to the above-described problem. This approach is a different method for blindly detecting a transport format of an incident convolutional encoded signal.


Generally speaking, the transport format is unknown and belongs to a set of MF predetermined reference transport formats. The signal comprises a data block having an unknown number of bits corresponding to the unknown transport format, and a CRC field containing a transmitted CRC word.


The method according to the invention comprises decoding the signal using a Maximum-a-Posteriori algorithm. The decoding step includes decoding the signal by considering each of the MF possible reference formats and delivering the MF corresponding groups of soft output information. The method also comprises calculating from each group of soft output information a calculated CRC word and comparing the calculated CRC word with the transmitted CRC word, selecting all the groups for which the calculated CRC word is equal to the transmitted CRC word, and selecting the actual transport format of the encoded signal from at least one soft output information among the last ones of each selected group.


Thus, the invention uses already calculated soft output information (LLR information) as an additional metric to distinguish between two or more blocks with a correct CRC. This is possible since the proposed architecture uses a MAP-based decoder for decoding the convolutional code (CC). In this case, unlike with a Viterbi decoder, no additional metric calculation is needed.


The actual transport format of the encoded signal may be selected from the last soft output information of each selected group. The actual transport format is for example the reference format having the greatest last soft output information.


However, other possibilities exist for selecting the actual transport format. For example, we can use the last but one soft output information, or the last but one and the last soft output information. Another possibility includes combining one of the last soft output information with the minimum soft output information.


The decoding step may comprise calculating state metrics, with all the data blocks being decoded in parallel window-by-window on sliding windows having a predetermined size, and with at least some of the state metrics calculated on a window being valid for all the data blocks.


The decoding step may comprise for each window calculating forward state metrics during a forward recursion, performing a backward acquisition having a predetermined acquisition length, calculating backward state metrics during a backward recursion and calculating soft output information in the reverse order. For each window only one forward recursion is performed which is valid for all the transport formats.


The first window processing may include forward recursion, backward acquisition, backward recursion and soft output calculation, and is completely valid for all data blocks having a size larger than the sum of the window size and the acquisition length. In other words, according to particular embodiments of the invention, there is a complete reuse of forward state metric calculation for all different block sizes and partial reuse (if possible) of backward state metric calculation and LLR calculation for different block sizes.


Another aspect of the invention is directed to a convolutional code decoder comprising input means, convolutional code decoding means, and blind transport format detection means. The input means is for receiving a convolutional encoded signal having an unknown transport format belonging to a set of MF predetermined reference transport formats. The signal comprises a data block having an unknown number of bits corresponding to the unknown transport format, and a Cyclic Redundancy Check (CRC) field containing a transmitted CRC word.


The convolutional code decoding means may implement a Maximum-a-Posteriori algorithm for decoding the signal by successively considering the MF possible reference formats, and may comprise a Log-Likelihood-Ratio unit for successively delivering MF corresponding groups of soft output information.


The blind transport format detection means may comprise a Cyclic Redundancy Check unit for calculating from each group of soft output information a calculated CRC word, and comparison means for comparing the calculated CRC word with the transmitted CRC word. First selection means are for selecting all the groups for which the calculated CRC word is equal to the transmitted CRC word. Second selection means are for selecting the actual transport format of the encoded signal from at least one soft output information among the last ones of each selected group. The second selection means are adapted to select the actual transport format of the encoded signal from the last soft output information of each selected group.


The convolutional code decoding means may be adapted to calculate state metrics. All the data blocks are decoded in parallel window-by-window on sliding windows having a predetermined size, and at least some of the state metrics calculated on a window are valid for all the data blocks.


The convolutional code decoding means may be adapted for each window to calculate forward state metrics during a forward recursion, to perform a backward acquisition having a predetermined acquisition length, and to calculate backward state metrics during a backward recursion. For each window, only one forward recursion is performed that is valid for all the transport formats. The first window processing may include forward recursion, backward acquisition, backward recursion and soft output calculation, and is completely valid for all data blocks having a size larger than the sum of the window size and the acquisition length.


The code decoder according to the invention may be a combined turbo-code/convolutional code decoder further comprising turbo-code decoding means for performing turbo-code decoding. The turbo-code and convolutional code decoding means comprise common processing means having a first configuration dedicated to turbo-code decoding and a second configuration dedicated to convolutional code decoding.


The decoder may further comprise metrics memory means for storing state metrics associated with the states of a first trellis and delivered by the processing means in its first configuration, and input/output memory means for storing input and output data delivered to and by the processing means in its second configuration. Adaptable memory means store input and output data delivered to and by the processing means in its first configuration, and store state metrics associated with the states of a second trellis and delivered by the processing means in its second configuration. Control means are for configuring the common processing means in its first or second configuration depending on the kind of code. Memory control means are for addressing the adaptable memory means differently depending on the configuration of the common processing means.


The common processing means may implement a Maximum-a-Posteriori (MAP) algorithm. The MAP algorithm implemented is for example a so-called LogMAP algorithm or a so-called MaxLogMAP algorithm. The convolutional code decoder according to the invention may be advantageously formed by an integrated circuit.


Another aspect of the invention is directed to a terminal of a wireless communication system including a decoder as defined above. This terminal may form a cellular phone or a base station.




BRIEF DESCRIPTION OF THE DRAWINGS

Other advantages and features of the invention will become apparent on examining the detailed description of embodiments, which are in no way limiting, and the appended drawings, in which:



FIGS. 1a and 1b show the structure of two different NSCs used in UMTS for convolutional encoding according to the invention;



FIG. 2 shows a part of a trellis which represents the possible transitions in one time step according to the invention;



FIG. 3 shows a receiving chain of a mobile phone including a decoder according to the invention;



FIG. 4 shows very diagrammatically the internal structure of a first embodiment of a decoder according to the invention;



FIG. 5 shows in greater detail a part of a decoder according to the invention;



FIG. 6 illustrates diagrammatically possible transport formats according to the invention;



FIG. 7 shows diagrammatically a part of blind detection means according to the invention;



FIG. 8 shows diagrammatically a CRC processing scheme according to the invention;



FIG. 9 shows diagrammatically a windowing scheme according to the invention;



FIG. 10 illustrates a flow chart of a blind detection according to the invention;



FIG. 11 shows a UMTS turbo-code encoder according to the invention;



FIG. 12 shows a generic turbo decoder according to the invention;



FIG. 13 shows very diagrammatically the internal structure of a decoder according to a second embodiment of the invention;



FIG. 14 shows in greater detail a portion of the decoder illustrated in FIG. 13;



FIG. 15 shows in greater detail an adaptable memory belonging to a decoder according to the invention;



FIG. 16 shows diagrammatically an ACS unit architecture according to the invention;



FIG. 17 shows an LLR unit belonging to a combined decoder according to the invention; and



FIG. 18 shows global control steps for turbo-code decoding according to the invention.




DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Encoding will initially be discussed. Convolutional encoding is performed by calculating the modulo-2 sum of the input values of the current and/or selected previous time steps. Therefore, the implementation is straightforward and mainly includes a shift register and a couple of exclusive-OR gates. Different kinds of convolutional codes can be realized depending on how these elements are connected. The codes are systematic codes, non-systematic codes, recursive codes and non-recursive codes.


In systematic codes, one of the output streams is equal to the input stream, i.e., the systematic information. In non-systematic codes (NSC), each output is parity information. Parity information is produced by taking the modulo-2 sum of shift register entries representing the history of the encoding process. In recursive codes, a special parity signal is produced and fed back in conjunction with the systematic input. In non-recursive codes, no such feedback loop exists.


A convolutional encoder is defined by a combination of these properties, the memory depth (constraint length) and the logical functions used to produce the parity information. These properties are described through generator polynomials.



FIGS. 1a and 1b represent the structure of two different NSCs used in UMTS for convolutional encoding. Furthermore, two different rates have to be considered. The rate 1/2 convolutional encoder in FIG. 1a has two outputs, whereas the rate 1/3 encoder in FIG. 1b has three outputs.


With the value of the current time-step available, $M = K-1$ flip-flops are needed to store the encoder history. This gives rise to an interpretation of the encoder as a finite-state machine (FSM). Its behavior is equivalent to that of a Mealy automaton. The likelihood of a transition between two states is the key to the decoding of convolutional codes.


A code trellis is the unrolled state chart of a finite-state machine. The number of states $N$ the encoder can be in is a function of the constraint length $K$:

$$N = 2^{K-1}.$$

For example, for the UMTS convolutional codes with $K = 9$, this gives $N = 256$ states.


Depending on the nature of the code (RSC, NSC, . . . ) only certain transitions are possible. A trellis is used to depict those transitions. In FIG. 2, part of a trellis is shown that represents the possible transitions in one time-step. Instead of the usual tree-structure used to display state-charts, the trellis combines states which are equivalent. Solid lines in FIG. 2 stand for transitions due to the input of a systematic bit of “0” whereas dashed lines represent those caused by a “1”. Conventionally, every encoding process starts in the all-zero state.


For the considered codes the initial state of the trellis is always known to be the all-zero state. Without taking any precautions, the encoder ends in an arbitrary state, leaving no hint where to start the backward recursion. This can be counteracted by driving the encoder into a defined final state. Reaching the final state (e.g., the all-zero state) can be achieved by appending a sequence which steers the encoder towards the final state as fast as possible. This sequence also depends on the state the encoder is in after the last information bit has been encoded. The length of this sequence is equal to K−1. The transmitted bits are called tailbits.


Decoding will now be discussed. Decoding convolutional codes means keeping track of the transitions that took place in the encoder. From these transitions, the input symbols which have been sent are deduced. Due to the degradations caused by the channel, only estimates of the systematic and parity bits are available, which will both be called channel values here. There are two different kinds of outputs: hard values and soft values.


Hard values merely indicate whether a symbol is supposed to be a “1” or a “0”. Soft values also deliver a measure of the reliability of the decision: the hard decision is extended by the probability that the decision is correct.


Based on the channel values, probabilities can be computed that certain combinations of systematic and parity bit occurred. From this and considering the encoder history, the probability that the encoder was in a given state at a given time-step can be computed.


Two approaches exist to deal with those state-probabilities. The maximum likelihood based Viterbi algorithm uses them to search the most likely codeword. For this it traverses the trellis from the all-zero state to the end state and looks for the most likely sequence. The states chosen for the survivor path indicate the most likely sequence of symbols that has been sent. Hence, a Viterbi decoder is a sequence estimator.


The Maximum-a-Posteriori (MAP) algorithm, on the other hand, estimates the probability that the encoder was in the given state and that the current state leads to the final state given the remainder of the channel values. This can be efficiently computed by a forward and a backward recursion over the trellis. Afterwards, for each bit, the probabilities for the states associated with a systematic “0” are added and compared to those associated with a “1”. The symbol with the higher probability is assumed to be the sent one. Since this works on a bit level rather than on a sequence level, it is called symbol estimation.


The name Maximum-a-Posteriori stems from the fact that the estimation of the bits is based on the whole received sequence. It is done after all the information has been received. Equation 2.1 shows the output of such a MAP decoder.


Bahl et al. described in [L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, IT-20:284-287, March 1974] an efficient algorithm for the MAP decoder, which is based on recursions operating on the trellis in forward and backward direction. That algorithm is commonly referred to as the MAP or BCJR algorithm.


Let $R_k$ denote the input of the MAP decoder, with $\bar{R} = (R_1, \ldots, R_k, \ldots, R_N)$, where $N$ is the length of the block. The BCJR algorithm then computes the a-posteriori probabilities (APP)

$$\Lambda(d_k) = \ln \frac{\Pr\{d_k = 1 \mid \bar{R}\}}{\Pr\{d_k = 0 \mid \bar{R}\}} \qquad (2.1)$$

for each data symbol $d_k$ after reception of the symbol sequence $\bar{R}$.


It is computed using two probabilities. One, that the encoder has reached state $S_k^m$, with $m \in \{1, \ldots, 2^M\}$, after $k$ received symbols:

$$\alpha_k(m) = \Pr\{S_k^m \mid R_0 \ldots R_{k-1}\} \qquad (2.2)$$

and another, that the remainder of the input sequence will lead the encoder to the final state given the state $S_{k+1}^{m'}$ at time $k+1$:

$$\beta_{k+1}(m') = \Pr\{R_k \ldots R_N \mid S_{k+1}^{m'}\} \qquad (2.3)$$

For this, the probability of a transition from state $S_k^m$ to $S_{k+1}^{m'}$ has to be known. It depends on the code structure, the channel model and the received symbols $R_k$:

$$\gamma(S_k^m, S_{k+1}^{m'}) = \Pr\{S_k^m, S_{k+1}^{m'} \mid R_k\} \qquad (2.4)$$

Using $\gamma$, $\alpha$ and $\beta$ can be computed recursively by:

$$\alpha_k(m) = \sum_{m'} \alpha_{k-1}(m') \cdot \gamma(S_{k-1}^{m'}, S_k^m) \qquad (2.5)$$

$$\beta_k(m) = \sum_{m'} \beta_{k+1}(m') \cdot \gamma(S_k^m, S_{k+1}^{m'}) \qquad (2.6)$$

A known start and final state are necessary for the BCJR algorithm to perform optimally. If the trellis is not terminated, all states have to be assumed to have equal probability for k=N.


The a-posteriori probability itself can be expressed as

$$\Lambda(d_k) = \ln \frac{\sum_m \sum_{m'} \gamma(S_{k-1}^{m}, S_k^{m'}, d_k=1) \cdot \alpha_{k-1}(m) \cdot \beta_k(m')}{\sum_m \sum_{m'} \gamma(S_{k-1}^{m}, S_k^{m'}, d_k=0) \cdot \alpha_{k-1}(m) \cdot \beta_k(m')} \qquad (2.7)$$

When the encoder is an NSC, this equation can be simplified because the state number can be used to identify a state reached by input $d_k=1$ or $d_k=0$:

$$\Lambda(d_k) = \ln \frac{\sum_{S_k^m \mid d_k=1} \alpha_k(S_k^m) \cdot \beta_k(S_k^m)}{\sum_{S_k^m \mid d_k=0} \alpha_k(S_k^m) \cdot \beta_k(S_k^m)} \qquad (2.7a)$$

The large number of multiplications involved in the computation of the APP makes it less attractive for implementation. Therefore, the MAP algorithm has to be transformed to the logarithmic domain, where it becomes the LogMAP algorithm, which increases numerical stability and eases implementation, while not degrading the error correction performance.


The MAP algorithm in the logarithmic domain, i.e., LogMAP, will now be discussed. The transformation of multiplications into additions is the motivation for defining the MAP algorithm in the log-domain. A problem is posed by the additions. Using the Jacobian logarithm, the additions are substituted by a new operator:

$$\ln(e^{\delta_1} + e^{\delta_2}) = \max{}^*(\delta_1, \delta_2) = \max(\delta_1, \delta_2) + \ln(1 + e^{-|\delta_1 - \delta_2|})$$

Similarly, the negative logarithm can be taken, which leads to:

$$\min{}^*(\delta_1, \delta_2) = \min(\delta_1, \delta_2) - \ln(1 + e^{-|\delta_1 - \delta_2|}).$$


For more than two operands, the max* is applied recursively. Since the operator is associative, a tree-like evaluation can be employed, which is advantageous for hardware implementation. The sub-optimal MaxLogMAP algorithm is obtained by using the approximation

$$\max{}^*(\delta_1, \delta_2) \approx \max(\delta_1, \delta_2).$$
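
For illustration, the two operators and their recursive extension can be sketched in a few lines of Python. This is a minimal sketch with our own function names, using floating-point metrics rather than the fixed-point arithmetic of a hardware implementation:

```python
import math

def max_star(d1: float, d2: float) -> float:
    """Jacobian logarithm: ln(e^d1 + e^d2), i.e. the maximum plus a
    correction term (LogMAP)."""
    return max(d1, d2) + math.log1p(math.exp(-abs(d1 - d2)))

def max_star_approx(d1: float, d2: float) -> float:
    """MaxLogMAP: the correction term is dropped."""
    return max(d1, d2)

def max_star_n(values):
    """max* over more than two operands, applied recursively; since the
    operator is associative, a tree evaluation would be equally valid."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc
```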


Using the max* operation, the recursions become:

$$\ln(\alpha_k(m')) = \max_m{}^*\big(\ln(\alpha_{k-1}(m)) + \ln(\gamma(S_{k-1}^m, S_k^{m'}))\big) \qquad (2.8)$$

$$\ln(\beta_k(m)) = \max_{m'}{}^*\big(\ln(\beta_{k+1}(m')) + \ln(\gamma(S_k^m, S_{k+1}^{m'}))\big) \qquad (2.9)$$

Let $\ln(\alpha_k(m'))$ from now on be denoted as $\bar{\alpha}_k(m')$ (and accordingly for $\beta$ and $\gamma$); then the recursions take the form:

$$\bar{\alpha}_k(m') = \max_m{}^*\big(\bar{\alpha}_{k-1}(m) + \bar{\gamma}(S_{k-1}^m, S_k^{m'})\big) \qquad (2.10)$$

$$\bar{\beta}_k(m) = \max_{m'}{}^*\big(\bar{\beta}_{k+1}(m') + \bar{\gamma}(S_k^m, S_{k+1}^{m'})\big) \qquad (2.11)$$

Similarly, we get:

$$\Lambda(d_k) = \max_{m,m'}{}^*\big(\bar{\gamma}(S_{k-1}^m, S_k^{m'}, d_k=1) + \bar{\alpha}_{k-1}(m) + \bar{\beta}_k(m')\big) - \max_{m,m'}{}^*\big(\bar{\gamma}(S_{k-1}^m, S_k^{m'}, d_k=0) + \bar{\alpha}_{k-1}(m) + \bar{\beta}_k(m')\big) \qquad (2.12)$$

and in the case of an NSC encoder:

$$\Lambda(d_k) = \max_{S_k^m \mid d_k=1}{}^*\big(\bar{\alpha}_k(S_k^m) + \bar{\beta}_k(S_k^m)\big) - \max_{S_k^m \mid d_k=0}{}^*\big(\bar{\alpha}_k(S_k^m) + \bar{\beta}_k(S_k^m)\big) \qquad (2.12a)$$
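
Using max_star_n from the sketch above, equation (2.12a) can be illustrated as follows. Again, this is a sketch only; the partition of the state indices into one-states and zero-states is given by the NSC structure:

```python
def llr_nsc(alpha_bar, beta_bar, one_states, zero_states):
    """Eq. (2.12a): for an NSC the state number itself identifies
    whether a state is reached by input '1' or '0', so no branch
    metric enters the soft output."""
    num = max_star_n([alpha_bar[s] + beta_bar[s] for s in one_states])
    den = max_star_n([alpha_bar[s] + beta_bar[s] for s in zero_states])
    return num - den
```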

Computation of $\bar{\gamma}$ includes the estimation of channel values. An optimized branch metric calculation is used. In the case of an NSC encoder, the channel values are parity information. Depending on the rate, there are only four or eight different values per $k$ in total that $\bar{\gamma}$ can take. The code structure alone determines which of them is assigned to which transition. After skipping constant factors and making additional algebraic transformations, we get (eq. 2.13):

    • Rate 1/2:

      $\bar{\gamma}(x_k^{G0}=+1, x_k^{G1}=+1) = 0$
      $\bar{\gamma}(x_k^{G0}=+1, x_k^{G1}=-1) = \frac{4E_s}{N_0} y_k^{G1}$
      $\bar{\gamma}(x_k^{G0}=-1, x_k^{G1}=+1) = \frac{4E_s}{N_0} y_k^{G0}$
      $\bar{\gamma}(x_k^{G0}=-1, x_k^{G1}=-1) = \frac{4E_s}{N_0} y_k^{G0} + \frac{4E_s}{N_0} y_k^{G1}$

    • Rate 1/3:

      $\bar{\gamma}(x_k^{G0}=+1, x_k^{G1}=+1, x_k^{G2}=+1) = 0$
      $\bar{\gamma}(x_k^{G0}=+1, x_k^{G1}=+1, x_k^{G2}=-1) = \frac{4E_s}{N_0} y_k^{G2}$
      $\bar{\gamma}(x_k^{G0}=+1, x_k^{G1}=-1, x_k^{G2}=+1) = \frac{4E_s}{N_0} y_k^{G1}$
      $\bar{\gamma}(x_k^{G0}=+1, x_k^{G1}=-1, x_k^{G2}=-1) = \frac{4E_s}{N_0} y_k^{G1} + \frac{4E_s}{N_0} y_k^{G2}$
      $\bar{\gamma}(x_k^{G0}=-1, x_k^{G1}=+1, x_k^{G2}=+1) = \frac{4E_s}{N_0} y_k^{G0}$
      $\bar{\gamma}(x_k^{G0}=-1, x_k^{G1}=+1, x_k^{G2}=-1) = \frac{4E_s}{N_0} y_k^{G0} + \frac{4E_s}{N_0} y_k^{G2}$
      $\bar{\gamma}(x_k^{G0}=-1, x_k^{G1}=-1, x_k^{G2}=+1) = \frac{4E_s}{N_0} y_k^{G0} + \frac{4E_s}{N_0} y_k^{G1}$
      $\bar{\gamma}(x_k^{G0}=-1, x_k^{G1}=-1, x_k^{G2}=-1) = \frac{4E_s}{N_0} y_k^{G0} + \frac{4E_s}{N_0} y_k^{G1} + \frac{4E_s}{N_0} y_k^{G2}$

This simplifies the implementation significantly, since only up to four terms have to be computed from the channel data: one term can be dropped completely, and the last one can be computed from the others. The scaling factor $4E_s/N_0$ is multiplied externally by usage of a working point.
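
As an illustration of eq. 2.13, the rate 1/2 case can be sketched as follows. This is a minimal Python sketch: the function name and the dictionary layout are ours, and the channel values are assumed to be pre-scaled by the working point as described above. The rate 1/3 case is analogous, with eight entries built from three pre-scaled channel terms:

```python
def branch_metrics_rate_half(y_g0: float, y_g1: float) -> dict:
    """Branch metrics of eq. 2.13 for the rate 1/2 NSC, assuming the
    scaling factor 4*Es/N0 has already been applied externally.
    Keys are the hypotheses (x_G0, x_G1)."""
    return {
        (+1, +1): 0.0,
        (+1, -1): y_g1,
        (-1, +1): y_g0,
        (-1, -1): y_g0 + y_g1,  # computed from the two terms above
    }
```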


Windowing will now be discussed. The MAP algorithm minimizes the probability of bit errors, basing the decision for each bit on the knowledge of the complete block of samples, that is, a posteriori. It has been shown, however, that a sliding window technique, where a window slides in the direction of increasing bit positions k, delivers almost the same communication performance as the original MAP decoder. The decisions are then based on a subblock beginning at the first bit position of the complete block and ending at the last position of the sliding window. The MAP algorithm can decide all bits belonging to the window, except the bits contained in the last portion of that window. Those bits are decided when the window has moved to its next position(s). If only the bits in the middle part get decoded, the window does not even have to slide (that is, move steadily in one direction).


When we look at the equations of the MAP algorithm, one can easily identify four subtasks: calculation of the branch metrics (step 1); calculation of the forward state metrics during forward recursion (step 2); calculation of the backward state metrics during backward recursion (step 3); and calculation of the soft outputs (step 4).


The data dependencies between these steps are as follows. Both recursions (step 2 and step 3) and the soft output calculation (step 4) depend on the branch metrics (step 1), and the soft output calculation step (step 4) in addition depends on the forward and backward state metrics (step 2 and step 3). All branch metrics and soft outputs of the current data block can be computed independently from each other. Only the order of computing the forward state metrics and the order of computing the backward state metrics are predefined by the direction of the respective recursion. The sequence of the recursions does not matter since there is no data dependency between the forward and backward state metrics. The backward recursion can be processed before, in parallel to, or after (as implied above) the forward recursion.


Hence, we can introduce the notion of a first and a second recursion. The metrics for the first recursion for a certain trellis step have to be stored in memory until the second recursion has produced the missing complementary metrics for computing the soft output value connected with that trellis step. Thus, the decoder needs to store the first recursion metrics of the full data block. Introducing windowing, where each window comprises a data subblock, breaks this dependency. Decoding on a window-by-window basis permits the required memory size to be reduced.


A prerequisite for decoding on windows is the concept of acquisition. Originally, the forward and the backward recursion of a MAP decoder start at one end of the trellis and stop at the opposite end. Upon an appropriately long acquisition phase, however, a recursion can start at any trellis step. This applies to both forward and backward recursions.


Consider a forward acquisition commencing at trellis step k−M, where M is the acquisition depth. The forward state metrics are initialized as:

$$\bar{\alpha}_{k-M}(S_{k-M}) = 0, \quad S_{k-M} \in \{0, \ldots, 2^m - 1\} \qquad (3.1)$$

After M recursion steps, the metrics $\bar{\alpha}_k(S_k)$ approach the values that would be obtained by starting the recursion from the beginning of the trellis. Now consider a backward recursion commencing at trellis step k+M. The backward state metrics are initialized as:

$$\bar{\beta}_{k+M}(S_{k+M}) = 0, \quad S_{k+M} \in \{0, \ldots, 2^m - 1\} \qquad (3.2)$$

In analogy to the forward recursion, after the same number of recursion steps M, the metrics $\bar{\beta}_k(S_k)$ approach the values that would be obtained by starting the recursion at the end of the trellis. If the acquisition depth M is too small, the decoding performance can be severely degraded. With M above a certain value, the decoding performance is virtually optimal. The value of M that leads to a reasonable balance between computational effort and decoding performance can be determined by simulation.


The window size itself has no influence on the communication performance, but it determines the size of the RAM for storage of the α-state metrics. Furthermore, the throughput is dependent on the ratio between acquisition length and window size. When only one state metric calculation unit is available, the individual windows are processed sequentially. It takes one clock cycle for the forward recursion and one clock cycle for the backward recursion/LLR calculation per data bit. The additional computational overhead is determined by the total number of acquisition steps. If the window size is equal to the acquisition length, the decoding needs one additional clock cycle per bit, resulting in three clock cycles per bit in total. If the window size is much larger than the acquisition length, the computational overhead approaches zero, resulting in nearly two clock cycles per bit. So the choice of the window size is a trade-off between memory size and throughput.
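
These figures follow from a small calculation. Per window of $W$ bits, one state metric unit spends $W$ cycles on the forward recursion, $A$ cycles on the backward acquisition and $W$ cycles on the backward recursion/LLR calculation, so that

$$\text{cycles per bit} = \frac{W + A + W}{W} = 2 + \frac{A}{W}.$$

With $A = W$ this gives three clock cycles per bit; with $W \gg A$ it approaches two, matching the figures above.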


A first embodiment of a convolutional decoder will now be discussed while initially referring to FIG. 3, which illustrates a decoder according to the invention incorporated in the reception chain of a cellular mobile phone TP.


The encoded signal is received by the antenna ANT and processed by the radio frequency stage REF of the receiver. At the output of the REF stage, the signal is converted into the digital domain by an A/D converter. The digital baseband signal is then processed by a rake demodulator, as used generally in the case of a CDMA system.


Then, the channel decoding stage includes a convolutional code decoder CTD according to the invention. The decoder CTD, as illustrated in FIG. 4, comprises convolutional code decoding means CCDCM for performing convolutional code decoding.


The decoding means CCDCM implements a MAP algorithm. Input/output memory means referenced as CC I/O RAMs are provided for storing input and output data delivered to and by the decoding means CCDCM. Another memory, referenced CC α-RAM, is used for storing forward state metrics (α-state metrics) associated with the states of a trellis (for example 256 states) and delivered by the decoding means CCDCM.


The decoder CTD comprises blind transport format detection means BTFD for blindly detecting the transport format used in a transport channel under investigation. The internal architecture of the blind transport format detection means BTFD will be described in more detail below.



FIG. 5 illustrates in greater detail the internal structure of the convolutional code decoding means CCDCM. These means comprise essentially three major calculation units, i.e., a branch metric (BM) unit, a state metric (SM) unit and an LLR unit. The BM unit calculates the branch metrics according to equation 2.13, depending on the rate, and controls the CC I/O RAMs. The state metric unit SM calculates the state metrics according to equations 2.10 and 2.11. The LLR unit calculates the soft output information (LLRs) in a pipelined manner, according to equation 2.12a.


The input RAMs include three RAMs referenced as G0-RAM, G1-RAM and G2-RAM according to the respective convolutional code input data. The output RAM is an LLR-RAM. The blind detection of the transport format will now be described in more detail with reference to FIGS. 6-10.


In FIG. 6, the structure of a coded block in the case of BTFD is depicted. The transport channel under consideration has MF different transport formats TF1 to TFMF and thus MF different coded block sizes. A coded block contains a CRC field (transmitted CRC word) and the data bit field. The CRC field in each of the different blocks has the same size and thus uses the same polynomial. In the presented case the coded block only contains one CRC at the end. The DTX field can be considered as noise.


The blind transport format detection means comprises a cyclic redundancy check unit CRCU (FIG. 7) for calculating, from each group of soft output information (LLRs) corresponding to each transport format, a calculated CRC word. This calculated CRC word is stored in a CRC register CRG1. Certain input parameters are therefore necessary: the length of the CRC and the CRC polynomials.


The transmitted CRC word (CRC sum), which is attached to the data block in reverse order during the encoding, is stored in the register CRG2. Comparison means are adapted to compare the content of the register CRG1 with that of the register CRG2. When they are equal, the CRC check is positive.


The LLR bits are stored in a buffer BFF having a certain window size, for example 53 bits for convolutional code. These bits are shifted out of the buffer into the register CRG1 during forward recursion and backward acquisition (FIG. 8).


This architecture can be extended to check up to MF=16 different block sizes by using a total of 16 CRC units. The buffer has to be implemented only once, because the LLRs of the different blocks are calculated serially. So only the shaded blocks in FIG. 7 are instantiated multiple times.
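
As an illustration, the serial CRC update over hard-decided LLRs can be sketched as follows. This is a minimal Python sketch, not the hardware unit itself: the LFSR form, the function names and the sign convention (positive LLR taken as bit '1') are our assumptions, while the polynomial and the CRC length correspond to the input parameters mentioned above:

```python
def crc_update(reg: int, bit: int, poly: int, crc_len: int) -> int:
    """Shift one decoded bit into a serial CRC register (Galois LFSR,
    MSB first); poly holds the generator taps without the leading term."""
    feedback = ((reg >> (crc_len - 1)) & 1) ^ bit
    reg = (reg << 1) & ((1 << crc_len) - 1)
    return reg ^ poly if feedback else reg

def crc_from_llrs(llrs, poly: int, crc_len: int) -> int:
    """Compute a calculated CRC word from one group of soft outputs by
    hard-deciding each LLR first."""
    reg = 0
    for llr in llrs:
        reg = crc_update(reg, 1 if llr > 0 else 0, poly, crc_len)
    return reg
```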


As mentioned before, a sliding window approach is used. The MAP algorithm minimizes the probability of bit errors, basing the decision for each bit on the knowledge of the complete block. Originally, the forward and backward recursions of a MAP decoder start at one end of the trellis and stop at the opposite end. The starting point is known for the forward recursion (state 0), and if trellis termination is applied as in UMTS, we also know the end point of the trellis (state 0), which is the starting point for the backward recursion.


However, upon an appropriately long acquisition phase, we can start at any trellis step. This applies to both forward and backward recursion. The starting point for the forward recursion is always known, because we start the calculation with the alpha recursion. Furthermore, the starting points for the following windows are known because we have already calculated the state metrics and stored them in the alpha RAM. For the backward recursion, we need acquisition to get the appropriate starting points for each window. So we normally do the normal (full) acquisition to initialize the backward recursion. This is done until we get to the last two windows.


These windows are somewhat special. They include a last window and a next-to-last window. In the last window, we do not do a normal acquisition because we have no data to do so. But we do not need it at all, because we have the tail bits, which give us the initial state metrics for the backward recursion. In the next-to-last window, it can happen that there are not enough data bits left to do a full acquisition. This is not a problem because we simply start with the tail bits and then process the remaining acquisition steps. Then we proceed as usual with the backward recursion/LLR calculation.


It would be possible to adapt this scheme for the blind transport format detection. However, if we consider for example two transport formats of sizes 108 and 120, then once we have determined the LLRs for block length 108, we cannot calculate the LLRs for block length 120 without restarting the forward recursion from the second window. The initialization values for the forward recursion would have to be stored in a specific memory; otherwise, we have to start the forward recursion from the beginning.


When one wants to reuse as much as possible, the calculation scheme preferably has to be changed. The individual block lengths are then not calculated serially, but in parallel. This is shown in FIG. 9. The currently stored alpha values are used for calculating all possible LLRs of this window. This way, we have only one forward recursion for all the different transport formats, and only the backward recursion/LLR calculation is individual.


Furthermore, even some backward recursion/LLR calculations are done only once for more than one block size. So we have a total reuse of all alpha values calculated so far, and the possibility of partial reuse of calculated beta values and LLRs. As there can be more than one block length with a correct CRC, the absolute values of the last LLRs can be used to distinguish between two or more blocks with a correct CRC. These values have to be stored additionally.
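
The selection rule itself can be sketched compactly. The following minimal Python sketch is illustrative only (the function name and the layout of `groups` are our assumptions): it keeps the formats whose calculated CRC word matched the transmitted one and, among those, picks the format whose last soft output has the greatest absolute value, as described above:

```python
def select_transport_format(groups):
    """groups maps a format index to (llrs, crc_ok): the group of soft
    outputs for that block size and the result of the CRC comparison.
    Among the formats that passed the CRC check, the one whose last
    soft output has the greatest magnitude is selected."""
    survivors = {f: llrs for f, (llrs, crc_ok) in groups.items() if crc_ok}
    if not survivors:
        return None  # no format passed the CRC check
    return max(survivors, key=lambda f: abs(survivors[f][-1]))
```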



FIG. 9 shows the windowing scheme for the already mentioned example of two different block sizes, 108 and 120. The processing starts with the forward recursion for the first window. After the backward acquisition, the backward recursion/LLR calculation follows. The forward recursion is only done once; the alpha state metrics are valid for both block sizes. Because the second block size is larger than the sum (window size + acquisition length), the backward recursion/LLR calculation is also valid for both blocks. So the first window processing (forward recursion, backward acquisition, backward recursion/LLR calculation) is completely reused for both block sizes.


The next window starts as usual with forward recursion. Because this is the next to last window for the first block size (108) in this example, there is no full acquisition but the tail bit processing and a partial acquisition. Afterwards, the backward recursion/LLR calculation for the second window follows. These calculated LLRs are individual for the first block size and cannot be reused. So for the second block size a separate backward acquisition and backward recursion/LLR calculation has to be performed. In our example the acquisition also includes tail bits and partial acquisition.


The next window starts with the forward recursion. The forward recursion is always done for the whole data, so it covers either a complete window or, for the last window, the remaining bits. In our example it is done for the remaining 14 bits. Then the last backward processing for block size 108 starts with the acquisition (done with tail bits), followed by the backward recursion/LLR calculation. After all LLRs for block size 108 are calculated, the result of the final CRC check is available. The last backward processing for block size 120 is done similarly.


The proposed general processing flow for BTFD is shown in FIG. 10. In step 100, shift(imin, . . . , imax) means that the calculated LLR bits are stored in all the CRC registers CRG1 respectively associated with all the possible formats (imin . . . imax). However, in steps 101 and 102, shift(i) means that the LLR bits are stored only in the CRC register associated with the current format i.


A second embodiment of a decoder according to the invention will now be described. This decoder is a combined turbo-code/convolutional code decoder, for example the one disclosed in European patent application no. 2019974.1.


The main features of such a combined decoder will now be described. However, before describing its internal architecture in detail, some general considerations on turbo-code encoding and decoding will be given.


A turbo code encoder comprises two convolutional encoders and an interleaver. The convolutional codes are fixed to be the RSC codes of rate 1/2 with the generator polynomials (13, 15) (octal notation) introduced before.


The systematic information of the second encoder is not transmitted because it can be reconstructed (by deinterleaving) from the systematic output of the first encoder. By this, a rate of R=1/3 is achieved. FIG. 11 shows the detailed UMTS turbo code encoder. The trellis termination leads each encoder into its final state separately. This dissolves the dependency between the systematic information of the first and second encoder for the tailbits, because the tailbits lead each encoder to its final state independently of the other, by activating the respective switch (see FIG. 11). Hence the last six bits per encoder (systematic and parity for each) have to be transmitted separately. This results in a total overhead of 12 bits per block.


Decoding turbo codes by searching the most likely codeword is far too complex. Therefore, iterative decoding is advised. The two convolutional codes are decoded separately. While doing this, each decoder incorporates information that has been gathered by the other. This “gathering of information” is the exchange of soft-output values, where the bit-estimates of one unit are transformed into a priori information for the next. The decoders hence have to be soft-input soft-output (SISO) units.


The confidence in the bit estimation is represented as a Log-Likelihood-Ratio (LLR):

$$\Lambda(d_k) = \ln \frac{P(d_k=1)}{P(d_k=0)}$$

The sign shows whether this bit is supposed to be one or zero whereas the confidence in the decision is represented by the magnitude.


To extract the information that has been gathered during the last decoding stage, the systematic and a priori information that led to this estimate have to be subtracted. This yields:

$$L_1(d_k) = \Lambda_1(d_k) - y_k^s - L_2^{deint}(d_k)$$

$$L_2(d_k) = \Lambda_2(d_k) - y_{int}^s - L_1^{int}(d_k)$$

This is called the extrinsic information. The confidence of one decoder in a bit to have a certain value biases the initial guess of the other.
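
A sketch of this extraction step (illustrative only; the names are ours):

```python
def extrinsic(llr_total: float, y_sys: float, l_apriori: float) -> float:
    """Extrinsic information: subtract the systematic channel value and
    the a priori information that led to the estimate, so that only
    newly gathered information is handed to the other MAP decoder."""
    return llr_total - y_sys - l_apriori
```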



FIG. 12 shows such a turbo code decoder comprising two MAP decoders, an interleaver and a deinterleaver. Feeding the output of one decoder as a priori information input to the next enables the improvement over the decoding iterations. It also gave turbo codes their name, as it resembles the feedback of exhaust used in turbo-charged combustion engines. Inputs to the decoder are the received channel values (systematic, parity1 and parity2). During the very first MAP1 operation, the a priori information is set to zero.


Concerning the MAP algorithm, equations 2.1-2.7 are used for turbo decoding, as well as equations 2.8-2.12 and equations 3.1-3.2. Computation of $\bar{\gamma}$ includes the estimation of channel values and the a priori information. Whereas the conventional method is quite complicated, an optimized branch metric calculation is used. Prior to transmission, every bit is subject to a transformation. Let $x_k \in \{0,1\}$ denote the (coded) bit; then the transmitted value is

$$y_k = -2 \cdot x_k + 1, \quad \text{hence } y_k \in \{-1, 1\}.$$

Thus the actual mapping is ‘1’ → ‘−1’ and ‘0’ → ‘+1’.


There are only four different values per $k$ in total that $\bar{\gamma}$ can take, one for every assumption $(x_k^s \in \{-1,1\}, x_k^p \in \{-1,1\})$. The code structure alone determines which of them is assigned to which transition. After skipping constant factors and making additional algebraic transformations, we get:

$$\bar{\gamma}(x_k^s=+1, x_k^p=+1) = 0$$
$$\bar{\gamma}(x_k^s=+1, x_k^p=-1) = \frac{4E_s}{N_0} y_k^p$$
$$\bar{\gamma}(x_k^s=-1, x_k^p=+1) = \frac{4E_s}{N_0} y_k^s + L(d_k)$$
$$\bar{\gamma}(x_k^s=-1, x_k^p=-1) = \frac{4E_s}{N_0} y_k^p + \frac{4E_s}{N_0} y_k^s + L(d_k) \qquad (2.17)$$


This simplifies the implementation significantly, as only two terms have to be computed from the channel and a priori data. One term can be dropped completely, and the last one can be computed from the first two. The scaling factor $4E_s/N_0$ is multiplied externally by usage of a working point.


The combined decoder CTD according to this embodiment, as illustrated in FIG. 13, comprises turbo-code decoding means TCDCM for performing turbo-code decoding, and convolutional code decoding means CCDCM for performing convolutional code decoding.


Turbo-code and convolutional code decoding means comprise common processing means CCPR implementing a MAP algorithm and having a first configuration dedicated to turbo-code decoding and a second configuration dedicated to convolutional code decoding. The common processing means CCPR, or MAP unit, form a Soft-In-Soft-Out (SISO) unit on which MAP1 and MAP2 operations are done serially for turbo decoding, until a stopping criterion is fulfilled.


Further to these common processing means, the turbo-code decoding means TCDCM comprise conventional interleaving means IL. Moreover, the combined decoder CTD comprises metrics memory means referenced as TC alpha-RAM for storing forward state metrics associated with the states of a first trellis (8 states here). The forward state metrics are delivered by the processing means CCPR in its first configuration (turbo decoding).


Input/output memory means referenced as CC I/O RAMs are provided for storing input and output data delivered to and by the processing means CCPR in its second configuration, i.e., for CC decoding.


Adaptable memory means ADMM are used for storing input and output data delivered to and by the processing means CCPR in its first configuration (turbo-code decoding), and for storing forward state metrics associated with the states of a second trellis (256 states here) and delivered by the processing means CCPR in its second configuration (convolutional code decoding).


Control means CDRLM configure the common processing means CCPR in its first or second configuration depending on the kind of code, and memory control means CTMM address differently the adaptable memory means ADMM depending on the configuration of the common processing means CCPR.



FIG. 14 illustrates in greater detail the internal structure of the common processing means CCPR. The processing means CCPR comprise essentially three major calculation units: the BM unit, the state metric unit and the LLR unit. The BM unit calculates the branch metrics BM and controls the complete I/O RAMs and alpha RAM. The state metric unit calculates 8 state metrics SM in parallel with 8 add-compare-select (ACS) units. The LLR unit calculates the LLRs in a pipelined manner.


The input RAMs comprise three RAMs referenced as G0_RAM, G1_RAM and G2_RAM according to the respective CC input data. The output RAM is CCLLR_RAM. The RAM sizes are 512*6 bits. The alpha state metric RAM for TC decoding is a dedicated 64*88-bit RAM (alpha_RAM).


The adaptable memory means ADMM is used either for storing input and output data for turbo-code decoding, or for storing forward state metrics in convolutional code decoding. As shown in FIG. 15, these adaptable memory means comprise a plurality of elementary memories as well as an additional memory Add.


More precisely, in the present application, the turbo-code decoding means are adapted to receive successive sequences of N1 symbols (N1=5120) of b1 bits (b1=6). The input and output data delivered to and by the processing means CCPR in its first configuration (turbo decoding) comprise, for each received sequence, g different blocks of N1 words of b1 bits. Here, g=4, and these g blocks are the systematic input data X, the parity input data Y1, the interleaved parity input data Y2, and the decisions of the decoding as output data.


The forward state metrics to be stored in the adaptable memory means (when convolutional code decoding) form a block of N2 words (N2=1728) of b2 bits, with b2 being greater than b1. Generally, the product N2 times b2 is equal to the product of W (the window size), the number of states of the trellis, and the number of bits per state. In the present case, W is equal to 54 for the convolutional code and to 64 for the turbo-code. The number of states is equal to 8 for the turbo-code and to 256 for the convolutional code. The number of bits per state is equal to 11.


Accordingly, N2 is equal to 1728 whereas b2 is equal to 88. Thus, as shown in FIG. 15, the main memory means of the adaptable memory means ADMM comprise 4 groups of p (p=3) elementary memories respectively dedicated to the g blocks of N1 words. Each elementary memory is adapted to store N2 words of b1 bits.


The additional memory Add is adapted to store 1728 words of 16 bits. Generally speaking, the memory control means address the adaptable memory means ADMM in the first configuration (turbo-code decoding) such that each block of 5120 words of 6 bits is written in or read from its dedicated group of 3 elementary memories.


Further, the memory control means address the adaptable memory means ADMM in the second configuration (convolutional code decoding) such that the twelve elementary words of the forward state metrics are respectively stored in the twelve elementary memories of the main memory means at the same address, whereas the additional elementary word of the forward state metrics (the 16 other bits) is stored in the additional memory means at the same address.


In other words, for CC decoding each of the I/O RAMs of the TC is split into 3 separate RAMs. These RAMs are concatenated to form the required bit width for the storage of 8 state metrics in parallel. We need 88 bits (8*11 bits); the I/O RAMs are 6 bits wide, so we get 4*3*6 bits = 72 bits. Therefore, we need an additional 16-bit RAM to form the CC alpha RAM. This RAM sharing enables us to get a sufficient window size for CC decoding.


As shown in FIG. 15, the naming conventions are as follows. The systematic input data is stored in X_RAM1, X_RAM2, X_RAM3, the parity input data is stored in Y1_RAM1, Y1_RAM2, Y1_RAM3, and the interleaved parity input data is stored in Y2_RAM1, Y2_RAM2, Y2_RAM3.


The TC output RAMs are LLR_RAM1, LLR_RAM2 and LLR_RAM3. The MSB represents the output hard decision, whereas the LSBs represent the extrinsic information/LLR soft decision (depending on the actual decoding progress). This enables a stopping of the decoding after a MAP1 CRC check.


Because the RAMs are split into three, an appropriate address transformation is done to map the address space 0-5119 to three times 0-1727, this value being rounded up to the minimum feasible RAM size. Only the actually needed RAMs are activated; all others are deselected.
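
A minimal sketch of such an address transformation is given below (Python; the names are ours, and the contiguous split is an assumption, an interleaved split would serve equally well):

```python
RAM_WORDS = 1728  # 5120/3, rounded up to the minimum feasible RAM size

def split_address(addr: int):
    """Map a logical address in 0..5119 to (RAM index, local address)
    for the three elementary RAMs; only the addressed RAM needs to be
    activated, the others can stay deselected."""
    assert 0 <= addr < 3 * RAM_WORDS
    return addr // RAM_WORDS, addr % RAM_WORDS
```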


The common processing means comprises a branch metric unit for calculating, in each of the first and second configurations, the branch metrics associated with the branches of the corresponding trellis (see eq. 2.17 for the turbo code).


The transfer of TC input data samples to the CTD decoder is done in the following order:

$X_1, Y1_1, Y2_1, X_2, Y1_2, Y2_2, \ldots, X_{B_i}, Y1_{B_i}, Y2_{B_i}$, TailBits

where $B_i$ is the number of bits in the i-th code block.


The storage of the tail bits has to be considered; the transmitted bits for the trellis termination are:

$x_{K+1}, z_{K+1}, x_{K+2}, z_{K+2}, x_{K+3}, z_{K+3}, x'_{K+1}, z'_{K+1}, x'_{K+2}, z'_{K+2}, x'_{K+3}, z'_{K+3}$, where $Y1 = z$, $Y2 = z'$, $x'$ is the systematic output of the second encoder, and $K$ is the block length.


The parity data RAMs Y1 and Y2 are filled sequentially; the respective three tailbits are just appended. The systematic data RAM X is also filled sequentially, but here there are six tailbits in total: the tailbits for the first encoder are appended first, then the tailbits for the second encoder follow.


The input sequence for CC decoding is simpler because there is only one tail sequence. The tailbits are just appended to the respective RAM.


The different steps of the windowing scheme can be divided into further sub-steps. The calculation of the branch metrics is the prerequisite for the calculation of the state metrics. The memory control means therefore address the input RAMs to calculate the respective branch metrics. We use the optimized MAP calculation scheme proposed above. The naming convention used is based on the binary representation of the pair [systematic/parity information] ∈ {0,1}, i.e., [G0/G1] or [G0/G1/G2]. X, Y, LLR, G0, G1, G2 represent the individual data stored in these RAMs (e.g., the content of the LLR-RAM is the extrinsic information).


Turbo Code:

    • branch0=0
    • branch1=Y
    • branch2=X+LLR
    • branch3=X+Y+LLR


      Convolutional Code, rate 1/2:
    • branch0=0
    • branch1=G1
    • branch2=G0
    • branch3=G1+G0

      Convolutional Code, rate 1/3:
    • branch0=0
    • branch1=G2
    • branch2=G1
    • branch3=G1+G2
    • branch4=G0
    • branch5=G0+G2
    • branch6=G0+G1
    • branch7=G0+G1+G2


The branch metric calculation is very straightforward: two additions for TC, and one or four additions for CC (the calculation of branch7 reuses a previously computed addition, e.g., G0+G1). In the case of TC, we have to use interleaved data during a MAP2 operation. Therefore, the BM unit interacts with the external interleaver to fetch the appropriate addresses. To avoid data collisions during forward and backward TC recursion, a dedicated LLR-cache is needed as well as a dedicated LLR-register.


Turning now to the calculation of the state metrics, the common processing means comprise a configurable state metrics unit SM. As shown in FIGS. 14, 15 and 16, the configurable state metrics unit SM comprises an architecture of 8 parallel ACS (Add, Compare, Select) units for calculating 8 forward state metrics in each configuration, auxiliary memory means (AXMM) for temporarily storing the calculated forward state metrics for a recursive calculation, and auxiliary control means for controlling the storage of the metrics in the auxiliary memory means depending on the configuration of the common processing means.


More precisely, the ACS architecture illustrated in FIG. 16 calculates eight state metrics in parallel out of the branch metrics and the previous state metrics according to equations 2.10 and 2.11. This is done with 8 ACS (Add, Compare, Select) units based on a modmin-procedure (the MODMIN blocks perform the min* operator).


This ACS architecture is used for the forward as well as for the backward recursion, for both the turbo code and the convolutional code. Because of the different trellis diagrams, the flexibility is achieved with multiplexers for the incoming branch metrics (bmux) and state metrics (smux). These multiplexers are controlled externally, as explained hereafter.


The eight ACS units need sixteen branch metric + state metric sums; therefore, sixteen bmux multiplexers are provided. Each of these bmux multiplexers can select among eight branch metrics (because CC rate 1/3 needs eight different branch metrics in total).


However, because of a particular state metric distribution, only twelve smux state metric multiplexers are needed instead of sixteen. Furthermore, this state metric distribution requires only 2:1 smux multiplexers, because each setting is valid for either CC forward recursion/TC backward recursion or CC backward recursion/TC forward recursion. There is no additional multiplexing of the calculated state metrics. In the case of forward recursion, the new state metrics are always in ascending order. FIG. 16 shows the general architecture of the ACS unit. sm1-sm8 denote the 8 parallel input state metrics at one time step, and bm0-bm7 denote the up to 8 different branch metrics. The smux setting for TC forward/CC backward is marked bold. The output of this unit is the new state metrics as well as the LLR metrics (β state metric + branch metric) for the LLR calculation.
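
For illustration, one ACS step in the negative-log domain can be sketched as follows. This is a minimal Python sketch with our own names; the actual unit of FIG. 16 computes eight such results in parallel with multiplexed inputs:

```python
import math

def min_star(d1: float, d2: float) -> float:
    """modmin / min* operator (negative-log domain), cf. the LogMAP
    operators introduced earlier."""
    return min(d1, d2) - math.log1p(math.exp(-abs(d1 - d2)))

def acs(sm_a: float, bm_a: float, sm_b: float, bm_b: float) -> float:
    """One Add-Compare-Select step: add the branch metric to each of the
    two competing path metrics, then combine them with min*."""
    return min_star(sm_a + bm_a, sm_b + bm_b)
```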


The multiplexers of the state metrics unit are controlled by a specific state machine, not shown for reasons of simplification. The state metric multiplexer (smux) controlling is very simple: there are only two different settings, either for TC forward recursion/CC backward recursion or for TC backward recursion/CC forward recursion. The controlling of the branch metric multiplexers (bmux) is more complex and done in the FSM. Each sequence (32 steps) depends on the chosen rate as well as on the actual operation (forward or backward recursion).


CC rate 1/2 forward recursion: there are 4 different sets of bmux settings, denoted A, B, C, D. The forward sequence is AABBCCDDAABBCCDDCCDDAABBCCDDAABB.


CC rate 1/2 backward recursion: there are 4 different sets of bmux settings, denoted A, B, C, D. The backward sequence is AABBCCDDAABBCCDDBBAADDCCBBAADDCC.


CC rate 1/3 forward recursion: there are 8 different sets of bmux settings, denoted A, B, C, D, E, F, G, H. The forward sequence is ABCDEFGHFEHGBADCHGFEDCBACDABGHEF. CC rate 1/3 backward recursion: there are 8 different sets of bmux settings, denoted A, B, C, D, E, F, G, H. The backward sequence is ABCDEFGHFEHGBADCDCBAHGFEGHEFCDAB.


The LLR unit calculates the LLR (see eq. 2.12 for the turbo code). This is done in a pipeline for the TC decoder comprising three modmin stages, with registers between stage one and stage two (see FIG. 17). Inputs to the first stage are the sums of the alpha state metric from the alpha RAM and the LLRsum (this equals branch metric + beta state metric) from the SM unit. These values are also registered, thus resulting in a total pipeline depth of four. The upper modmin-tree calculates the minimum of all states reached by input ‘1’ (LLR1), and the lower one the minimum reached by input ‘0’ (LLR0).


Once the LLR calculation starts, new values are present at the inputs every clock cycle. The control is very simple and done by shifting a data-valid flag through a 4-stage flip-flop pipeline. Parts of the LLR calculation for TC are reused for convolutional decoding; see FIG. 17 (not shown are the multiplexers for the input values and the adders for calculating the sums).


More precisely, the architecture shown in FIG. 17 is used for turbo decoding, whereas only the upper and lower parts thereof are used for convolutional code decoding. Since the encoding of the convolutional code is done with an NSC, the input ‘0’ or ‘1’ determines the state number.


Therefore, we do not need the branch metrics, which simplifies the calculation to only four modmin units in the first stage. The upper two modmin units calculate the minimum of the four state metric sums reached by input ‘1’ (one-states), the lower two the minimum of those reached by input ‘0’ (zero-states). The inputs are the alpha state metrics from the alpha-RAM and the beta metrics from the AXMM memory.


Because the 256 states are not all calculated at once, a loopback is needed to determine the appropriate minima. This is additional hardware compared to the turbo decoder part and is situated after stage two. The control is quite simple and realized with an FSM, which is best explained by looking at the feedback unit for the onestates: the first minimum is stored in the lower register, and the second one in the upper register. All following minima are stored in the lower register. The resulting minimum is always in the upper register. The LLR_valid flag is also generated by the FSM.
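
Functionally, the feedback unit folds the per-batch minima into a running minimum: with 256 states processed eight at a time, 32 partial minima per output bit must be combined. The following C sketch captures this behavior; the register-level timing of the lower/upper registers is abstracted away and the names are illustrative.

/* Fold per-batch onestate (or zerostate) minima into the block-wide
 * minimum, mirroring the lower/upper register feedback described above. */
#include <stdint.h>

static inline int16_t modmin(int16_t a, int16_t b) { return (a < b) ? a : b; }

int16_t fold_batch_minima(const int16_t batch_min[], int nbatches)
{
    int16_t upper = batch_min[0];          /* first minimum         */
    for (int i = 1; i < nbatches; ++i)     /* all following minima  */
        upper = modmin(upper, batch_min[i]);
    return upper;                          /* resulting minimum     */
}

For the 256-state convolutional code processed eight states per cycle, nbatches is 32.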


The combined decoder according to the invention comprises a global control unit which controls the decoding process at the MAP level. Because TC decoding is done in an iterative manner, the number of iterations depends on the actual decoding status. Therefore, after each MAP operation a stopping criterion is checked. This stopping criterion can either be the selected total number of half-iterations, a correctly detected CRC sum (checked only after MAP1), or an early detection of undecodable blocks based on the mean value criterion. The particular decoding steps for TC are shown in FIG. 18. In the case of CC decoding, only one MAP1 operation is needed.
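
A sketch of this MAP-level control loop in C, with the three stopping criteria named above; run_map, crc_ok and mean_value_criterion_failed are placeholder hooks standing in for the actual units, not names from the design.

#include <stdbool.h>

/* Placeholder hooks for the actual hardware units (assumed names). */
static void run_map(int half_iteration) { (void)half_iteration; }
static bool crc_ok(void) { return false; }
static bool mean_value_criterion_failed(void) { return false; }

/* Iterative TC decoding: MAP1 and MAP2 alternate; a stopping criterion
 * is checked after each MAP operation. For CC decoding, a single MAP1
 * operation replaces this loop. */
void turbo_decode(int max_half_iterations)
{
    for (int h = 0; h < max_half_iterations; ++h) {
        bool is_map1 = (h % 2 == 0);        /* MAP1 on even half-iterations */
        run_map(h);

        if (is_map1 && crc_ok())            /* CRC checked only after MAP1  */
            return;                         /* block decoded, stop early    */
        if (mean_value_criterion_failed())  /* undecodable block detected   */
            return;                         /* abort further iterations     */
    }
    /* otherwise: selected total number of half-iterations reached */
}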


Furthermore, the global control unit controls the handshake mode. This handshake mode allows step-by-step execution of the decoding steps and on-demand memory flush.

Claims
  • 1-20. (canceled)
  • 21. A method for blindly detecting a transport format of a convolutionally encoded signal, the transport format being unknown and belonging to a set of MF predetermined reference transport formats, the convolutionally encoded signal comprising a data block having an unknown number of bits corresponding to the unknown transport format and a cyclic redundancy check (CRC) field containing a transmitted CRC word, the method comprising: decoding the convolutionally encoded signal using a Maximum-a-Posteriori algorithm, the decoding comprising considering the MF possible reference transport formats and delivering MF corresponding groups of soft output information, calculating from each group of soft output information a calculated CRC word, comparing the calculated CRC word with the transmitted CRC word, selecting groups for which the calculated CRC word is equal to the transmitted CRC word, and selecting an actual transport format of the convolutionally encoded signal from at least one soft output information among the last ones of each selected group.
  • 22. A method according to claim 21, wherein the actual transport format of the convolutionally encoded signal is selected from the last soft output information of each selected group.
  • 23. A method according to claim 22, wherein the actual transport format is a reference format having the greatest last soft output information.
  • 24. A method according to claim 21, wherein the decoding further comprises calculating state metrics based upon the data blocks being decoded in parallel window-by-window on sliding windows having a predetermined size, and based upon at least one of the state metrics being calculated on a window and being valid for the data blocks.
  • 25. A method according to claim 24, wherein the decoding further comprises for each window: calculating forward state metrics during a forward recursion, wherein only one forward recursion is performed which is valid for the transport formats; performing a backward acquisition having a predetermined acquisition length; calculating backward state metrics during a backward recursion; and calculating soft output information in a reverse order.
  • 26. A method according to claim 25, wherein a first window processing comprises forward recursion, backward acquisition, backward recursion and soft output calculation, and is valid for data blocks having a size larger than a sum of a window size and an acquisition length.
  • 27. A method for blindly detecting a transport format of an encoded signal, the transport format being unknown and belonging to a set of predetermined reference transport formats, the method comprising: considering the possible reference transport formats and delivering corresponding groups of soft output information; calculating from each group of soft output information a calculated cyclic redundancy check (CRC) word; comparing the calculated CRC word with a transmitted CRC word; selecting groups for which the calculated CRC word is equal to the transmitted CRC word; and selecting an actual transport format of the encoded signal from at least one soft output information among the last ones of each selected group.
  • 28. A method according to claim 27, wherein the encoded signal comprises a convolutionally encoded signal comprising a data block having an unknown number of bits corresponding to the unknown transport format and a CRC field containing the transmitted CRC word.
  • 29. A method according to claim 27, wherein the considering, the calculating, and both of the selecting steps result in the encoded signal being decoded.
  • 30. A method according to claim 29, wherein the decoding is performed using a Maximum-a-Posteriori algorithm.
  • 31. A method according to claim 27, wherein the actual transport format of the encoded signal is selected from the last soft output information of each selected group.
  • 32. A method according to claim 31, wherein the actual transport format is a reference format having the greatest last soft output information.
  • 33. A method according to claim 27, wherein the decoding further comprises calculating state metrics based upon the data blocks being decoded in parallel window-by-window on sliding windows having a predetermined size, and based upon at least one of the state metrics being calculated on a window and being valid for the data blocks.
  • 34. A method according to claim 33, wherein the decoding further comprises for each window: calculating forward state metrics during a forward recursion, wherein only one forward recursion is performed which is valid for the transport formats; performing a backward acquisition having a predetermined acquisition length; calculating backward state metrics during a backward recursion; and calculating soft output information in a reverse order.
  • 35. A method according to claim 34, wherein a first window processing comprises forward recursion, backward acquisition, backward recursion and soft output calculation, and is valid for data blocks having a size larger than a sum of a window size and an acquisition length.
  • 36. A decoder comprising: an input for receiving a convolutionally encoded signal having an unknown transport format belonging to a set of predetermined reference transport formats, the convolutionally encoded signal comprising a data block having an unknown number of bits corresponding to the unknown transport format and a cyclic redundancy check (CRC) field containing a transmitted CRC word; a convolutional code decoder module implementing a Maximum-a-Posteriori algorithm for decoding the convolutionally encoded signal by considering the possible reference formats and delivering corresponding groups of soft output information; and a blind transport format detector module comprising a cyclic redundancy check (CRC) unit for calculating from each group of soft output information a calculated CRC word, a comparator for comparing the calculated CRC word with the transmitted CRC word, a first selector for selecting groups for which the calculated CRC word is equal to the transmitted CRC word, and a second selector for selecting an actual transport format of the convolutionally encoded signal from at least one soft output information among the last ones of each selected group.
  • 37. A decoder according to claim 36, wherein said convolutional code decoder module comprises a log-likelihood-ratio unit for delivering the corresponding groups of soft output information.
  • 38. A decoder according to claim 36, wherein said second selector selects the actual transport format of the convolutionally encoded signal from the last soft output information of each selected group.
  • 39. A decoder according to claim 38, wherein the actual transport format is the reference format having the greatest last soft output information.
  • 40. A decoder according to claim 36, wherein said convolutional code decoder module calculates state metrics based upon the data blocks being decoded in parallel window-by-window on sliding windows having a predetermined size, and based upon at least one of the state metrics being calculated on a window and being valid for the data blocks.
  • 41. A decoder according to claim 40, wherein said convolutional code decoder module performs the following for each window: calculating forward state metrics during a forward recursion, wherein only one forward recursion is performed which is valid for the transport formats; performing a backward acquisition having a predetermined acquisition length; calculating backward state metrics during a backward recursion; and calculating soft output information in a reverse order.
  • 42. A decoder according to claim 41, wherein a first window processing comprises forward recursion, backward acquisition, backward recursion and soft output calculation, and is valid for data blocks having a size larger than a sum of a window size and an acquisition length.
  • 43. A decoder according to claim 36, wherein said convolutional code decoder module comprises a combined turbo-code/convolutional code decoder module for also performing turbo-code decoding.
  • 44. A decoder according to claim 43, wherein said combined turbo-code/convolutional code decoder module comprises a common processor having a first configuration for turbo-code decoding and a second configuration for convolutional code decoding; and the decoder further comprising: a metrics memory for storing state metrics associated with states of a first trellis and delivered by said common processor in the first configuration; an input/output memory for storing input and output data delivered to and by said common processor in the second configuration; an adaptable memory for storing input and output data delivered to and by said common processor in the first configuration, and for storing state metrics associated with the states of a second trellis and delivered by said common processor in the second configuration; a controller for configuring said common processor in the first or second configuration based upon a code type; and a memory controller for addressing said adaptable memory differently based upon a configuration of said common processor.
  • 45. A decoder according to claim 44, wherein said common processor implements a Maximum-A-Posteriori (MAP) algorithm.
  • 46. A decoder according to claim 45, wherein the MAP algorithm comprises at least one of a LogMAP algorithm and a MaxLogMAP algorithm.
  • 47. A decoder according to claim 36, wherein the input, said convolutional code decoder module, and said blind transport format detector module are formed as an integrated circuit.
  • 48. A communication system comprising: a radio frequency stage for receiving a signal; a demodulator connected to said radio frequency stage for demodulating the received signal; and a decoder connected to said demodulator and comprising an input for receiving from said demodulator an encoded signal having an unknown transport format belonging to a set of predetermined reference transport formats, a decoder module for decoding the encoded signal by considering the possible reference formats, and delivering corresponding groups of soft output information, and a blind transport format detection module comprising a cyclic redundancy check (CRC) unit for calculating from each group of soft output information a calculated CRC word, a comparator for comparing the calculated CRC word with a transmitted CRC word, a first selector for selecting the groups for which the calculated CRC word is equal to the transmitted CRC word, and a second selector for selecting an actual transport format of the encoded signal from at least one soft output information among the last ones of each selected group.
  • 49. A communication system according to claim 48, wherein the encoded signal comprises a convolutionally encoded signal comprising a data block having an unknown number of bits corresponding to the unknown transport format and a CRC field containing the transmitted CRC word.
  • 50. A communication system according to claim 48, wherein said decoder module implements a Maximum-a-Posteriori (MAP) algorithm.
  • 51. A communication system according to claim 48, wherein said decoder module comprises a log-likelihood-ratio unit for delivering the corresponding groups of soft output information.
  • 52. A communication system according to claim 48, wherein said second selector selects the actual transport format of the encoded signal from the last soft output information of each selected group.
  • 53. A communication system according to claim 52, wherein the actual transport format is the reference format having the greatest last soft output information.
  • 54. A communication system according to claim 48, wherein said decoder module calculates state metrics based upon the data blocks being decoded in parallel window-by-window on sliding windows having a predetermined size, and based upon at least one of the state metrics being calculated on a window and being valid for the data blocks.
  • 55. A communication system according to claim 54, wherein said decoder module performs the following for each window: calculating forward state metrics during a forward recursion, wherein only one forward recursion is performed which is valid for the transport formats; performing a backward acquisition having a predetermined acquisition length; calculating backward state metrics during a backward recursion; and calculating soft output information in a reverse order.
  • 56. A communication system according to claim 55, wherein a first window processing comprises forward recursion, backward acquisition, backward recursion and soft output calculation, and is valid for data blocks having a size larger than a sum of a window size and an acquisition length.
  • 57. A communication system according to claim 48, wherein said decoder module comprises a combined turbo-code/convolutional code decoder module for also performing turbo-code decoding.
  • 58. A communication system according to claim 57, wherein said combined turbo-code/convolutional code decoder module comprises a common processor having a first configuration for turbo-code decoding and a second configuration for convolutional code decoding; and said decoder further comprising: a metrics memory for storing state metrics associated with states of a first trellis and delivered by said common processor in the first configuration; an input/output memory for storing input and output data delivered to and by said common processor in the second configuration; an adaptable memory for storing input and output data delivered to and by said common processor in the first configuration, and for storing state metrics associated with the states of a second trellis and delivered by said common processor in the second configuration; a controller for configuring said common processor in the first or second configuration based upon a code type; and a memory controller for addressing said adaptable memory differently based upon a configuration of said common processor.
  • 59. A communication system according to claim 48, wherein said radio frequency stage, said demodulator, and said decoder form a cellular phone.
  • 60. A communication system according to claim 48, wherein said radio frequency stage, said demodulator, and said decoder form a base station.
Priority Claims (1)
Number Date Country Kind
03290993.9 Apr 2003 EP regional