The present invention relates to midamble cancellation. More particularly, the present invention relates to a method and apparatus for performing midamble cancellation utilizing an algorithm that enables parallel cancellation of the midamble for both data field 1 and data field 2 of a received TDD burst.
As shown in
Midamble cancellation (also referred to hereinafter as MDC) can also be applied to remove midamble interference from the convolution tail of Data field 1 into the first (W−1) chips of the midamble field, also shown in
Midamble cancellation is used to remove the effect of the midamble from:
The first W−1 chips of the midamble field, allowing better modeling of the convolution tail of the first Data field protruding into the midamble field, and further allowing the A^H A matrix to be modeled as exactly block Toeplitz; and the first W−1 chips of Data field 2. A technique is provided for calculating the midamble interference which significantly reduces the necessary hardware as well as the processing time.
The invention will be understood from the accompanying figures, wherein like elements are designated by like numerals and, wherein:
The midamble shift numbers at 16b are also applied to code decision circuit 18 for determining channelization codes, provided at 18a, which are then applied to the multi-user detector (MUD) 20. Midamble cancellation circuit 14 utilizes the inputs described hereinabove for generating a midamble cancelled burst at 14a which is applied to the multi-user detector circuit 20.
As can clearly be seen, midamble cancellation is implemented before MUD processing. The midamble cancellation procedure initially constructs an estimate of the first W−1 chips of the midamble received in the midamble field and of the first W−1 chips of the midamble spread into data field 2, respectively. The received midamble estimate is derived from the channel responses provided by the channel estimator 12, which utilizes a known algorithm for obtaining the channel estimate, and from the midamble shift numbers obtained from the midamble detection block 16, which likewise uses a known algorithm to derive the midamble shift numbers; these, in turn, are utilized by code decision circuit 18, employing a known algorithm, to derive the channelization codes.
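Conceptually, the cancellation amounts to estimating the two interference sequences and subtracting them from the corresponding chip positions of the stored burst. The rough Python sketch below illustrates only this subtraction step; all function names are illustrative, and the construction of the interference estimates themselves is deferred to Equations (2) and (3) later in the text.

```python
import numpy as np

def cancel_midamble_burst(burst, mid_start, data2_start, interf_mid, interf_d2):
    """Subtract the two estimated midamble interference sequences from the burst.

    burst       : received data burst (complex samples), as stored in the buffer
    mid_start   : index of the first chip of the midamble field
    data2_start : index of the first chip of data field 2
    interf_mid  : estimated interference on the first W-1 chips of the midamble field
    interf_d2   : estimated interference on the first W-1 chips of data field 2
    """
    out = burst.copy()
    W1 = len(interf_mid)                        # W - 1
    out[mid_start:mid_start + W1] -= interf_mid
    out[data2_start:data2_start + W1] -= interf_d2
    return out
```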
The received burst is stored in a buffer 32 which cooperates with the algorithm 30 of
Midamble cancellation is applied separately to the even and odd samples of the received over-sampled sequences.
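Continuing the sketch above, the even/odd separation might look as follows (again purely illustrative; the chip indices here refer to the per-phase, chip-rate sequences):

```python
def cancel_oversampled_burst(burst_2x, mid_start, data2_start,
                             interf_mid_eo, interf_d2_eo):
    """Apply midamble cancellation separately to the even and odd samples of a
    2x over-sampled burst. interf_mid_eo / interf_d2_eo hold one interference
    estimate per phase (index 0 = even samples, index 1 = odd samples)."""
    out = burst_2x.copy()
    for phase in (0, 1):                      # even samples, then odd samples
        out[phase::2] = cancel_midamble_burst(burst_2x[phase::2],
                                              mid_start, data2_start,
                                              interf_mid_eo[phase],
                                              interf_d2_eo[phase])
    return out
```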
The data employed in the cancellation circuitry of the present invention comprises:
The data inputs include a received data burst denoted by
Km sets of complex channel coefficients:
$[\vec{h}_1, \vec{h}_2, \ldots, \vec{h}_{K_m}]$
Km is the number of different midambles detected by the midamble detection algorithm in the post processing and midamble detection block 16 (see
Km midamble shift numbers: each number is used to generate a corresponding midamble code.
A microprocessor (not shown) forming part of the cancellation circuit 14 provides the association between channel impulse response and midamble shift (equivalent to midamble codes), which indicates which channel response belongs to which midamble shift (code).
The data outputs include:
Midamble cancelled data burst:
The parameters of the algorithm are:
Maximum midamble shift K.
Length L of each midamble code.
Burst type in use.
Length W of channel responses where W=28, 32, 57, 64 or 114 depending on the burst type and maximum midamble shift K.
Table 1 sets forth the values of the above parameters.
where $m_i^{(k)}$ represents the i-th element of the k-th midamble, and
$\otimes$ denotes the convolution operator. In other words, the received midamble sequence is a superposition of the Km convolutions between the active midamble codes and the channel responses. Equation (1) can be rewritten in matrix form as follows:
where
represents the transpose of the row channel response vector,
The matrix on the LHS of the above equation, consisting of midamble elements for all of the Km midambles, is of size (W−1) × (W·Km). The i-th row of the LHS represents the sum of Km convolutions evaluated at the time instant of the i-th chip of the received midamble. The k-th partition of each row in the midamble matrix consists of that portion of
The second received midamble interference corresponds to the first W−1 chips of the received midamble tail into the data field 2 where the tail results from the delay spread of the channel, and it corrupts the first W−1 chips of the received data field 2 (see
The procedure for constructing the midamble interference is similar to that for the data field 1 set forth above. However, in this case the convolution tail of the midamble field spreads into the data field 2. The midamble interference on the first W−1 chips of the data field 2,
can then be modeled in matrix form as follows:
After modeling the two midamble interference sequences by Equations (2) and (3), respectively, Equation (2) is cancelled from the first W−1 chips of the midamble field in the received stored data burst,
The output, at 42a, is applied to MUD 20, see
The performance of the technique of the present invention is dependent on the accuracy of the channel estimation and midamble detection algorithms. With perfectly known channel responses, the implementation should result in less than 0.1 dB degradation in the resultant signal-to-noise ratio.
Since the midamble cancellation processing (circuit 14—
Processing element (PE) adders perform a “multiplication” of midambles and channel responses as shown by “multiplier” 108 in
The following is a high-level description of the system design.
The midamble server 78 supplies 16-bit midamble sequences based on the midamble number and midamble shift. Each sequence corresponds to 16 1-bit values.
Channel Estimation (CHEST) 80 supplies configuration parameters that control the functionality of midamble cancellation. Also, CHEST supplies control signals that initiate midamble cancellation processing.
The computed interference sequences are stored into 2 pairs of RAMs 82-84 and 86-88. Each pair consists of a real component 82, 86 and an imaginary component 84, 88. One pair is for the data field 1 interference results and the second pair is for the data field 2 interference results.
From Equation 2 and Equation 3, set forth above, we can see that the processing consists of a large matrix multiplication. The size of the left-hand matrix is (W−1)×(W·Km) and the size of the right-hand vector is (W·Km)×1, so the total number of multiplies is (W−1)·W·Km. Since each midamble sample is only 1 bit, the implementation of the multipliers can be simplified and implemented by a mux.
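As a minimal software model of this computation (assuming the midamble matrix M and the stacked channel-response vector h have already been assembled per Equations (2) and (3); the function name and structure are illustrative only), the per-element work reduces to a selection rather than a true multiply:

```python
import numpy as np

def midamble_interference(M, h):
    """Interference estimate per Equations (2)/(3): the (W-1) x (W*Km) midamble
    matrix M (entries 0, +/-1 or +/-j) times the stacked channel vector h."""
    rows, cols = M.shape
    y = np.zeros(rows, dtype=complex)
    for i in range(rows):
        acc = 0j
        for j in range(cols):
            m = M[i, j]
            if m == 0:
                continue              # outside the triangular band
            # m is +/-1 or +/-j, so this "multiply" is really a sign/swap
            # selection -- in hardware, a mux plus an adder/subtractor.
            acc += m * h[j]
        y[i] = acc
    return y
```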
Based on Table 1, the worst-case number of multiplies occurs when W=57 and Km=8, resulting in a total of 25,536 multiplies. Performing these multiplies sequentially is unacceptable since the total number of clock cycles equals the number of multiplies. Instead, it is necessary to perform the multiplications for multiple rows in parallel by assigning a processing element (PE) to each row. The PE for each row can be conveniently implemented using a multiply and accumulate function. The total processing time then will be (W−1)*W*Km/NPE, where NPE is the number of PE's.
The greatest savings in processing time are achieved when NPE=the number of rows=(W−1). The worst case processing time, in this case, is W*Km. This occurs when W=29 and Km=16 and results in 464 cycles. If the processing time requirement permits it, the number of PE's could be made less than the total number of rows. The PE's could be allocated to a set of rows for part of the processing time and then reallocated to a different set of rows for the next part of the overall processing.
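A small sketch of the resulting cycle budget, assuming (as the text does) one multiply-accumulate per PE per clock and that the PE's are reassigned to successive sets of rows:

```python
import math

def mdc_cycle_estimate(W, Km, n_pe):
    """Estimated MDC processing time in clock cycles when n_pe processing
    elements each handle one row of the (W-1)-row matrix multiply in parallel."""
    rows = W - 1
    chunks = math.ceil(rows / n_pe)   # PE's are reassigned one set of rows at a time
    cycles_per_chunk = W * Km         # one multiply-accumulate per column of a row
    return chunks * cycles_per_chunk

# Fully parallel cases (n_pe = W - 1):
print(mdc_cycle_estimate(W=29, Km=16, n_pe=28))   # 464 cycles (worst case in the text)
print(mdc_cycle_estimate(W=57, Km=8,  n_pe=56))   # 456 cycles
```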
The approach set forth above assumes each of the equations (2) and (3) are processed separately and that the hardware will need to be duplicated for each of the equations. From Equation 2 and Equation 3 we see that the first multiplicand matrix is upper-triangular while the second matrix is lower triangular. We can combine the two matrices into a single matrix since there is no overlap between the two of them. This allows the processing of the two equations to be combined into one hardware process.
The additional hardware consists of two (2) accumulators in each PE instead of one, along with the associated control logic. Note that each PE performs a multiply and accumulate sequentially across a given row. Therefore, during any given clock cycle, only one of the two accumulators is active, accumulating the result of either the upper-triangular or the lower-triangular matrix multiply. By the end of a row, the two accumulators hold the results of the two matrix multiplies.
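A software model of one such PE is sketched below; the boundary index that decides which accumulator a given column feeds is hypothetical here and would in practice follow the combined-matrix layout of Table 2:

```python
def pe_process_row(row_elements, channel_taps, boundary):
    """One PE: multiply-accumulate across one row of the combined matrix,
    routing each product into one of two accumulators (one per original matrix).

    row_elements : midamble elements (+/-1, +/-j, or 0) of this row
    channel_taps : corresponding stacked channel-response samples
    boundary     : hypothetical column index marking where the row crosses from
                   one triangular matrix into the other (see Table 2)
    """
    acc_L = 0j   # accumulates the lower-triangular (matrix L) contribution
    acc_U = 0j   # accumulates the upper-triangular (matrix U) contribution
    for col, (m, h) in enumerate(zip(row_elements, channel_taps)):
        # In any given clock cycle only one accumulator is active.
        if col < boundary:
            acc_L += m * h
        else:
            acc_U += m * h
    return acc_U, acc_L
```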
The amount of hardware required to implement this function is directly related to the amount of time available for processing and to the bit widths used for the computations. Since the processing time and bit width requirements need not be firm, the design herein was chosen to be parameterized.
The parameterization occurs in two different aspects. First, the bit widths are parameterized allowing easy scaling of the design. Second, the amount of hardware used in parallel is also a parameter. The design is based on a basic processing element referred to as a PE. The number of required PE's depends on how parallel the design needs to be. Therefore, the number of PE's in the design is parameterized.
Note from Equation 2 and Equation 3 that column i+1 in the matrices is equal to column i shifted down by one row. This allows a simple architecture that uses a shift register 94 (see
In
At the start of processing, the lower register 94 contains all of the data needed for the data field 1 calculation (lower triangular matrix—see
The size of the upper shift register 92 is fixed at 16 bits. The size of the lower shift register 94 is equal to the number of PE's and is therefore parameterized; the parameter can take on multiples of 16 bits. Each stage of the shift register contains one binary bit (0 or 1), which controls whether a subtraction or an addition operation, respectively, is performed.
Each shift register has a set of queue registers R that allow processing to be pipelined. The queue registers R are loaded from RAM 96 with data for the next active midamble shift while the PEs process the data for the current midamble shift stored in the working shift register 94.
Note that data retrieved from the midamble RAM 96 is packed into 16-bit words before being stored into the shift registers 92,94.
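A simple model of that packing step, assuming the midamble bits are held as 0/1 values and noting that the exact bit ordering of the hardware packer is not specified here:

```python
def pack_midamble_bits(bits):
    """Pack a list of 1-bit midamble values (0 or 1) into 16-bit words,
    as done before loading the shift registers 92 and 94."""
    words = []
    for i in range(0, len(bits), 16):
        word = 0
        for j, bit in enumerate(bits[i:i + 16]):
            word |= (bit & 1) << j    # bit ordering is illustrative only
        words.append(word)
    return words
```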
As set forth above,
Since both the channel estimates and the midamble bits are complex-valued samples, the PEs need to perform complex arithmetic. However, a full multiplier is not necessary since the midamble value consists of a single bit.
According to 3GPP TS 25.221: $\underline{m}_i = (j)^i \cdot m_i$ for all $i = 1, \ldots, P$
Therefore, the midamble sample represents 1 of 4 possible values: +1, −1, +j or −j.
The channel estimate consists of a multi-bit complex value A+Bj.
Therefore, multiplying the channel responses by the midamble samples results in 1 of 4 possible outputs: (A+Bj), −(A+Bj), (−B+Aj) or (B−Aj).
From this we see that multiplication can be implemented with a pair of muxes (multiplexers) 120, 122 and a pair of adders/subtractors 124, 126, as shown in
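A software model of that mux/adder structure (names illustrative) follows; each case corresponds to one of the four midamble values listed above:

```python
def midamble_multiply(A, B, value):
    """Model of the mux/adder "multiplier": multiply the channel estimate A + Bj
    by a midamble value in {+1, -1, +j, -j} without a true multiplier.

    A mux selects A or B for each output; the adder/subtractor applies the sign.
    """
    if value == 1:        # (A + Bj) *  1 =  A + Bj
        re, im = A, B
    elif value == -1:     # (A + Bj) * -1 = -A - Bj
        re, im = -A, -B
    elif value == 1j:     # (A + Bj) *  j = -B + Aj
        re, im = -B, A
    elif value == -1j:    # (A + Bj) * -j =  B - Aj
        re, im = B, -A
    else:
        raise ValueError("midamble value must be one of +1, -1, +j, -j")
    return re, im
```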
From a consideration of
The pattern repeats for each subsequent row: with each row, one more column position yields an output for matrix U and one fewer column position yields an output for matrix L, until at the last row there are no outputs for matrix L and all columns yield outputs for matrix U.
For a given implementation of the MDC, the number of PE's may be less than the number of required calculations. In this case, the total number of rows is subdivided into sections whose size is the number of PE's. This is illustrated in
Table 2 shows the combined midamble matrix derived from combining Equation 2 and Equation 3 for a given midamble shift.
Note that the total number of midamble elements required for a given midamble shift consists of elements 0 to W−2 and L−(W−1) to L−1. Note also that, since the midamble is repetitive, L−1 and 0 are contiguous. Therefore, the total elements required consist of a contiguous list from L−(W−1) to W−2. When a subset of the total rows is processed due to a limited number of PE's, the list of required elements remains contiguous since only the start and end points are altered. Therefore, retrieving midamble samples can be simplified by establishing a start point and sequentially retrieving data until all the required data has been retrieved. This simplifies the midamble packer control logic.
In reality, midamble cancellation establishes the end point and retrieves samples in reverse order. This is because the lower triangular matrix is processed first.
Note that the indices listed above are all relative to the basic midamble offsets for a particular midamble shift. The absolute midamble indices are discussed below.
MDC creates a shifted midamble sequence by addressing the midamble RAM in a circular fashion. The starting point is based on the midamble shift number.
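A rough model of this circular addressing, with the per-shift starting offsets (defined by the Table 3 equations, below) represented as a simple lookup:

```python
def read_shifted_midamble(midamble_ram, start_offset, count):
    """Read `count` consecutive basic-midamble elements starting at `start_offset`,
    wrapping around the end of the RAM (circular addressing)."""
    L = len(midamble_ram)
    return [midamble_ram[(start_offset + i) % L] for i in range(count)]

# The starting point for a given midamble shift comes from the initial-offset
# equations (Table 3, below); here it is represented by a lookup table.
def shifted_midamble(midamble_ram, initial_offsets, shift, count):
    return read_shifted_midamble(midamble_ram, initial_offsets[shift], count)
```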
Table 3 lists the equations from two (2) different versions of the third generation (3G) specifications that define how to generate the initial midamble offsets based on the basic midamble. Both versions are shown for reference, depending on which version is used for Spin 1 of the design. Table 4 and Table 5 list the initial offset values calculated from the corresponding equations for the long and short midambles, respectively.
Step 1: At the beginning of Steiner processing, CHEST kicks off the midamble cancellation preload process. During this process, midamble cancellation requests the entire basic midamble sequence from the midamble server and stores it into a local RAM.
Step 2: After post-processing is complete, CHEST kicks off midamble cancellation main processing. During this process, midamble cancellation retrieves midamble samples and channel responses for each active midamble shift.
Step 3: At the end of processing, each PE contains 2 accumulators full of data. The first accumulator from each PE (corresponding to data field 1 results) is sequentially muxed out and stored into RAMs (See RAM 82 and 84—
Steps 4, 5: If the number of processing elements is less than W−1, steps 2 and 3 are repeated until all of the required processing is complete.
The following is a description of the processing flow and the finite state machines that control various processes within the midamble cancellation function.
There are two (2) control signals that start MDC processing. The first signal starts the MDC preload process (S1). The second control signal kicks off the MDC main processing (S2).
The available processing elements (PEs) are each assigned to process one row of the matrix multiplication (S3). If the total number of PE's is less than the total number of rows (W−1), then the PE's will be assigned to a first set of rows. Once processing is complete for this set of rows, the PE's will be reassigned to the next set of rows. This is repeated until all of the rows have been processed.
The next step is to loop through each midamble shift in order to look for an active midamble (S4). When an active shift is found, the matrix multiplication continues (S5).
The multiplication continues for the entire midamble sequence for the current shift. This continues until all midamble shifts have been processed. Once all of the active midamble shifts have been processed (S6), data is available for both data field 1 and data field 2 (S7). The data is sequentially output and written into the output RAMs.
The entire process is repeated until all W−1 rows are processed (S8).
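Putting these steps together, the overall row/shift processing can be summarized by the following simplified Python model. Signal-level and state-machine details are omitted; get_row_data() is a hypothetical helper, and pe_process_row() is the two-accumulator PE sketched earlier:

```python
def mdc_main_processing(n_pe, W, K, active_shifts, get_row_data, pe_process_row):
    """Simplified model of the main MDC processing flow (S3-S8).

    K              : maximum midamble shift (number of possible shifts)
    active_shifts  : set of midamble shifts detected as active
    get_row_data   : hypothetical helper returning (midamble elements, channel
                     taps, boundary) for one (row, shift) pair
    pe_process_row : the two-accumulator PE model sketched earlier
    """
    # Per Equations (2) and (3), the upper-triangular result corresponds to
    # data field 1 and the lower-triangular result to data field 2.
    results_field1 = [0j] * (W - 1)
    results_field2 = [0j] * (W - 1)
    rows = list(range(W - 1))
    # S3: assign the available PE's to successive sets of rows.
    for start in range(0, len(rows), n_pe):
        for row in rows[start:start + n_pe]:      # each row handled by one PE
            acc_u, acc_l = 0j, 0j
            # S4/S5: loop through the midamble shifts, processing active ones.
            for shift in range(K):
                if shift not in active_shifts:
                    continue
                elements, taps, boundary = get_row_data(row, shift)
                u, l = pe_process_row(elements, taps, boundary)
                acc_u += u
                acc_l += l
            # S6/S7: after all active shifts, both results for this row are
            # ready and are written out to the output RAMs.
            results_field1[row] = acc_u
            results_field2[row] = acc_l
    # S8: all W-1 rows processed.
    return results_field1, results_field2
```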
The state machines, shown in
The preload state machine,
The preprocessor,
The processing element state machine,
The midamble shift state machine,
The midamble data packer state machine,
The data output state machine,
The internal bit widths were chosen to accommodate the following maximum parameters:
Table 6 lists the number of clock cycles required to perform midamble cancellation for the given parameters. The measurements were taken from the start of processing, excluding the midamble preload from the midamble server.
This application claims priority from U.S. provisional application No. 60/379,196 filed on May 9, 2002, which is incorporated by reference as if fully set forth.