This invention generally relates to solving linear systems. In particular, the invention relates to using array processing to solve linear systems.
Linear system solutions are used to solve many engineering problems. One such problem is joint detection of multiple user signals in a time division duplex (TDD) communication system using code division multiple access (CDMA). In such a system, multiple users send multiple communication bursts simultaneously in a same fixed duration time interval (timeslot). The multiple bursts are transmitted using different spreading codes. During transmission, each burst experiences a channel response. One approach to recover data from the transmitted bursts is joint detection, where all users' data is received simultaneously. Such a system is shown in FIG. 1.
The multiple bursts 90, after experiencing their channel response, are received as a combined received signal at an antenna 92 or antenna array. The received signal is reduced to baseband, such as by a demodulator 94, and sampled at a chip rate of the codes or a multiple of a chip rate of the codes, such as by an analog to digital converter (ADC) 96 or multiple ADCs, to produce a received vector, r. A channel estimation device 98 uses a training sequence portion of the communication bursts to estimate the channel response of the bursts 90. A joint detection device 100 uses the estimated or known spreading codes of the users' bursts and the estimated or known channel responses to estimate the originally transmitted data for all the users as a data vector, d.
The joint detection problem is typically modeled by Equation 1.
$A d + n = r$   Equation 1
d is the transmitted data vector; r is the received vector; n is the additive white Gaussian noise (AWGN); and A is an M×N matrix constructed by convolving the channel responses with the known spreading codes.
Two approaches to solving Equation 1 are the zero forcing (ZF) and the minimum mean square error (MMSE) approaches. A ZF solution, where n is approximated as zero, is per Equation 2.
$d = (A^H A)^{-1} A^H r$   Equation 2
An MMSE approach is per Equations 3 and 4.
$d = R^{-1} A^H r$   Equation 3

$R = A^H A + \sigma^2 I$   Equation 4

$\sigma^2$ is the variance of the noise, n, and I is the identity matrix.
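As an illustrative aside, the ZF and MMSE solutions of Equations 2 through 4 can be sketched in a few lines of Python with numpy. The matrix A, received vector r, and noise variance sigma2 below are placeholder inputs for illustration, not part of the original disclosure; in the system above, A would be built by convolving the channel responses with the spreading codes.

    import numpy as np

    def zero_forcing(A, r):
        # d = (A^H A)^-1 A^H r   (Equation 2)
        Ah = A.conj().T
        return np.linalg.solve(Ah @ A, Ah @ r)

    def mmse(A, r, sigma2):
        # d = R^-1 A^H r, with R = A^H A + sigma^2 I   (Equations 3 and 4)
        Ah = A.conj().T
        R = Ah @ A + sigma2 * np.eye(A.shape[1])
        return np.linalg.solve(R, Ah @ r)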
Since the spreading codes, channel responses and average noise variance are estimated or known, and the received vector is known, the only unknown variable is the data vector, d. A brute force solution, such as a direct matrix inversion, to either approach is extremely complex. One technique to reduce the complexity is Cholesky decomposition. The Cholesky algorithm factors a symmetric positive definite matrix, such as $\tilde{A}$ or R, into a lower triangular matrix G and an upper triangular matrix $G^H$ per Equation 5.
$\tilde{A}$ or $R = G G^H$   Equation 5
A symmetric positive definite matrix, $\tilde{A}$, can be created from A by multiplying A by its conjugate transpose (Hermitian), $A^H$, per Equation 6.
$\tilde{A} = A^H A$   Equation 6
For shorthand, $\tilde{r}$ is defined per Equation 7.
$\tilde{r} = A^H r$   Equation 7
As a result, Equation 1 is rewritten as Equation 8 for ZF or Equation 9 for MMSE.
$\tilde{A} d = \tilde{r}$   Equation 8

$R d = \tilde{r}$   Equation 9
To solve either Equation 8 or 9, the Cholesky factor is used per Equation 10.
$G G^H d = \tilde{r}$   Equation 10
A variable y is defined as per Equation 11.
$G^H d = y$   Equation 11
Using variable y, Equation 10 is rewritten as Equation 12.
$G y = \tilde{r}$   Equation 12
The bulk of the computation for obtaining the data vector is performed in three steps. In the first step, G is created from the derived symmetric positive definite matrix, such as $\tilde{A}$ or R, as illustrated by Equation 13.
G = CHOLESKY($\tilde{A}$ or R)   Equation 13
Using G, y is solved by forward substitution of G in Equation 12, as illustrated by Equation 14.
y = FORWARD SUB(G, $\tilde{r}$)   Equation 14
Using the conjugate transpose of G, $G^H$, d is solved using backward substitution in Equation 11, as illustrated by Equation 15.
d = BACKWARD SUB($G^H$, y)   Equation 15
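A minimal sketch of this three-step solve (Equations 13 through 15) using standard library routines, assuming scipy is available; this is a reference model of the computation, not the array implementation described below.

    import numpy as np
    from scipy.linalg import solve_triangular

    def solve_spd(A_tilde, r_tilde):
        G = np.linalg.cholesky(A_tilde)                   # Equation 13
        y = solve_triangular(G, r_tilde, lower=True)      # Equation 14 (forward)
        d = solve_triangular(G.conj().T, y, lower=False)  # Equation 15 (backward)
        return d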
An approach to determine the Cholesky factor, G, per Equation 13 is the following algorithm, shown for $\tilde{A}$, although an analogous approach is used for R.
$a_{d,e}$ denotes the element of matrix $\tilde{A}$ or R at row d, column e. ":" indicates a "to" operator, such as "from j to N," and $(\cdot)^H$ indicates a conjugate transpose (Hermitian) operator.
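For reference, a hedged reconstruction of such a serial Cholesky algorithm in Python, using the a[d, e] and "j to N" notation above with 0-based indices; this is the textbook right-looking form, which may differ in detail from the listing in the original specification.

    import numpy as np

    def cholesky_factor(A):
        a = np.array(A, dtype=complex)        # working copy of A~ (or R)
        N = a.shape[0]
        for j in range(N):
            a[j, j] = np.sqrt(a[j, j].real)   # diagonal element of G
            a[j+1:, j] /= a[j, j]             # scale column j, rows j+1 "to N"
            for k in range(j + 1, N):         # update the trailing columns
                a[k:, k] -= a[k:, j] * a[k, j].conj()
            a[j, j+1:] = 0                    # keep only the lower triangle
        return a                              # a now holds G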
Another approach to solve for the Cholesky factor uses N parallel vector-based processors. Each processor is mapped to a column of the $\tilde{A}$ or R matrix, the column being identified by a variable μ, where μ = 1:N. The parallel approach can be viewed as the following subroutine running on each processor for μ = 1:N.
recv(·, left) is a receive-from-the-left-processor operator; send(·, right) is a send-to-the-right-processor operator; and $g_{K,L}$ is a value from a neighboring processor.
This subroutine is illustrated using FIGS. 2a-2h.
FIGS. 2b and 2c show two possible functions performed by the processors on the cells below them. In FIG. 2b, the function performs Equations 16 and 17.
$v \leftarrow a_{\mu:N,\mu} / \sqrt{a_{\mu\mu}}$   Equation 16

$a_{\mu:N,\mu} := v$   Equation 17
“←” indicates a concurrent assignment; “:=” indicates a sequential assignment; and v is a value sent to the right processor.
In FIG. 2c, the function performs Equations 18 and 19.
$v \leftarrow \mu$   Equation 18

$a_{\mu:N,\mu} := a_{\mu:N,\mu} - v_\mu \cdot v_{\mu:N}$   Equation 19
$v_k$ indicates a value associated with a right value of the kth processor 50.
FIGS. 2d-2g illustrate the data flow and functions performed for a 4×4 G matrix. As shown in the
These elements are extendable to an N×N matrix and N processors 50 by adding processors 50 (N−4 in number) to the right of the fourth processor 50_4 and by adding cells of the bottom matrix diagonal (N−4 in number) to each of the processors 50 as shown in
The implementation of such a Cholesky decomposition using either vector processors or a direct decomposition into scalar processors is inefficient, because large amounts of processing resources go idle after each stage of processing.
Accordingly, it is desirable to have alternate approaches to solve linear systems.
A user equipment or base station, generically referred to as a wireless transmit receive unit (WTRU), recovers data from a plurality of data signals received as a received vector. The WTRU determines data of the received vector by determining a Cholesky factor of an N by N matrix and using the determined Cholesky factor in forward and backward substitution to determine data of the received data signals. The WTRU comprises an array of at most N scalar processing elements. The array has inputs for receiving elements of the N by N matrix and the received vector. Each scalar processing element is used in determining the Cholesky factor and performs forward and backward substitution. The array outputs data of the received vector.
FIGS. 2a-2h are diagrams illustrating determining a Cholesky factor using vector processors.
FIGS. 3a and 3b are preferred embodiments of N scalar processors performing Cholesky decomposition.
FIGS. 4a-4e are diagrams illustrating an example of using a three dimensional graph for Cholesky decomposition.
FIGS. 5a-5e are diagrams illustrating an example of mapping vector processors performing Cholesky decomposition onto scalar processors.
FIGS. 6a-6j for a non-banded and
FIGS. 8a-8d are diagrams illustrating the processing flow using delays between the scalar processors in the 2D scalar array.
FIG. 8e is a diagram of a delay element and its associated equation.
FIG. 9a illustrates projecting the scalar processor array of
FIG. 9b illustrates projecting a scalar processing array having delays between every other processor onto a 1D array of four scalar processors.
FIGS. 9c-9n are diagrams illustrating the processing flow for Cholesky decomposition of a banded matrix having delays between every other processor.
FIGS. 9o-9z illustrate the memory access for a linear array processing a banded matrix.
FIGS. 10a and 10b are the projected arrays of
FIGS. 11a and 11b illustrate separating a divide/square root function from the arrays of
FIG. 12a is an illustration of projecting a forward substitution array having delays between each processor onto four scalar processors.
FIG. 12b is an illustration of projecting a forward substitution array having delays between every other processor onto four scalar processors.
FIGS. 12c and 12d are diagrams showing the equations performed by a star and diamond function for forward substitution.
FIG. 12e is a diagram illustrating the processing flow for a forward substitution of a banded matrix having concurrent assignments between every other processor.
FIGS. 12f-12j are diagrams illustrating the processing flow for forward substitution of a banded matrix having delays between every other processor.
FIGS. 12k-12p are diagrams illustrating the memory access for a forward substitution linear array processing a banded matrix.
FIGS. 13a and 13b are the projected arrays of
FIGS. 14a-14d are diagrams illustrating the processing flow of the projected array of
FIG. 15a is an illustration of projecting a backward substitution array having delays between each processor onto four scalar processors.
FIG. 15b is an illustration of projecting a backward substitution array having delays between every other processor onto four scalar processors.
FIGS. 15c and 15d are diagrams showing the equations performed by a star and diamond function for backward substitution.
FIG. 15e is a diagram illustrating the processing flow for backward substitution of a banded matrix having concurrent assignments between every other processor.
FIGS. 15f-15j are diagrams illustrating the processing flow for backward substitution of a banded matrix having delays between every other processor.
FIGS. 15k-15p are diagrams illustrating the memory access for a backward substitution linear array processing a banded matrix.
FIGS. 16a and 16b are the projected arrays of
FIGS. 17a-17d are diagrams illustrating the processing flow of the projected array of
FIGS. 18a and 18b are the arrays of
FIGS. 19a and 19b are diagrams of a reconfigurable array for determining G, forward and backward substitution.
FIGS. 20a and 20b are illustrations of breaking out the divide and square root function from the reconfigurable array.
FIG. 21a illustrates bi-directional folding.
FIG. 21b illustrates one directional folding.
FIG. 22a is an implementation of bi-directional folding using N processors.
FIG. 22b is an implementation of one directional folding using N processors.
FIGS. 3a and 3b are preferred embodiments of N scalar processors 54_1 to 54_N (54) performing Cholesky decomposition to obtain G. For simplicity, the approach is explained and described for a 4×4 G matrix, although it is extendable to any N×N G matrix as shown in FIGS. 3a and 3b.
FIG. 4a illustrates a three-dimensional computational dependency graph for performing the previous algorithms. For simplicity,
$y \leftarrow \sqrt{a_{in}}$   Equation 20

$a_{out} \leftarrow y$   Equation 21

"←" indicates a concurrent assignment. $a_{in}$ is the input to the node from a lower level and $a_{out}$ is the output to a higher level.
$y \leftarrow z^*$   Equation 22

$a_{out} \leftarrow a_{in} - |z|^2$   Equation 23

$y \leftarrow w$   Equation 24

$x \leftarrow a_{in} / w$   Equation 25

$a_{out} \leftarrow x$   Equation 26

$y \leftarrow w$   Equation 27

$x \leftarrow z$   Equation 28

$a_{out} \leftarrow a_{in} - w \cdot z$   Equation 29
FIG. 5a is a diagram showing the mapping of the first stage of a vector based Cholesky decomposition for a 4×4 G matrix to the first stage of a two dimensional scalar based approach. Each vector processor 52, 54 is mapped onto at least one scalar processor 56, 58, 60, 62 as shown in FIG. 5a.
$y \leftarrow \sqrt{a_{ij}}$   Equation 30

$a_{ij} := y$   Equation 31

":=" indicates a sequential assignment. y indicates a value sent to a lower processor.
$y \leftarrow w$   Equation 32

$x \leftarrow a_{ij} / w$   Equation 33

$a_{ij} := x$   Equation 34
w indicates a value sent from an upper processor.
$y \leftarrow z^*$   Equation 35

$a_{ij} := a_{ij} - |z|^2$   Equation 36
x indicates a value sent to a right processor.
$y \leftarrow w$   Equation 37

$x \leftarrow z$   Equation 38

$a_{ij} := a_{ij} - w \cdot z$   Equation 39
FIGS. 6e-6j illustrate the processing flow for a banded 5 by 5 matrix. Active processors are unhatched. The banded matrix has the lower left three entries ($a_{41}$, $a_{51}$, $a_{52}$, not shown in
In stage 2, six processors ($a_{22}$, $a_{32}$, $a_{33}$, $a_{42}$, $a_{43}$, $a_{44}$) are operating. As shown in
The simplified illustrations of
If the bandwidth of the matrix has a limited width, such as P, the number of processing elements can be reduced. To illustrate, if P equals N−1, the lower left processor for $a_{N,1}$ drops off. If P equals N−2, two more processors ($a_{N-1,1}$ and $a_{N,2}$) drop off.
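To make the count concrete, a small hypothetical Python helper tallies the lower-triangular processing elements that survive for a bandwidth P, i.e., the elements $a_{i,j}$ with i − j < P:

    def pe_count(N, P):
        # one PE per lower-triangular element a[i, j] with i - j < P
        return sum(N - d for d in range(min(P, N)))

    # For N = 5: the full triangle needs 15 PEs; bandwidth P = 3 needs 12,
    # dropping the three lower-left elements a41, a51 and a52.
    print(pe_count(5, 5), pe_count(5, 3))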
Reducing the number of scalar processing elements further is explained in conjunction with
$y := x$   Equation 41
For each processing cycle starting at $t_1$, the processors sequentially process as shown by the diagonal lines marking the planes of execution. To illustrate, at time $t_1$, only processor 56 of $a_{11}$ operates. At $t_2$, only processor 58 of $a_{21}$ operates; at $t_3$, processors 58, 60 of $a_{31}$ and $a_{22}$ operate, and so on until stage 4, $t_{16}$, where only processor 56 of $a_{44}$ operates. As a result, the overall processing requires $N^2$ clock cycles across N stages.
Multiple matrices can be pipelined through the two dimensional scalar processing array. As shown in
After a group of matrices has been pipelined through stage 1, the group is pipelined through stage 2 and so forth until stage N. Using pipelining, the throughput of the array, as well as processor utilization, can be dramatically increased.
Since not all of the processors 56, 58, 60, 62 are used during each clock cycle when processing only one matrix, the number of processing elements 56, 58, 60, 62 can be reduced by sharing them across the planes of execution.
FIG. 8a is an expansion of
In the implementation of
To reduce the processing time of a single array, the implementation of
Another advantage to the implementations of
FIGS. 9c-9n illustrate the timing diagrams for each processing cycle of a banded 5 by 5 matrix having a bandwidth of 3 with delays between every other connection. At each time period, the value associated with each processor is shown. Active processors are unhatched. As shown in the figures, the processing propagates through the array from the upper left processor in
FIGS. 9o-9z illustrate the timing diagrams and memory access for each processing cycle of a linear array, such as per
To reduce the complexity of the processing elements, the divide and square root functions are not performed by those elements (they are pulled out). Divides and square roots are more complex to implement on an ASIC than adders, subtractors and multipliers.
The only two functions which perform a divide or a square root are the pentagon and octagon functions 56, 58. For a given stage, as shown in
Using Equations 34 and 30, each octagon's x output is that octagon's $a_{ij}$ divided by the square root of the pentagon's $a_{ij}$. Using a multiplier instead of a divider within each octagon processor, for a given stage, only the reciprocal of the square root of the pentagon's $a_{ij}$ needs to be determined instead of the square root, isolating the divide function to just the pentagon processor and simplifying the overall complexity of the array. The reciprocal of the square root would then be stored as the $a_{ij}$ of the matrix element associated with the pentagon instead of the square root. This is also convenient later during forward and backward substitution, because the divide functions in those algorithms become multiplications by this reciprocal value, further eliminating the need for dividers in other processing elements, i.e., the x outputs of
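A sketch of this reciprocal-square-root arrangement, reusing the factorization loop above under the same assumptions: the only divide/square root per stage happens once in the pentagon position, its result is stored on the diagonal, and every other element needs only multipliers.

    import numpy as np

    def cholesky_with_stored_reciprocal(A):
        a = np.array(A, dtype=complex)
        N = a.shape[0]
        for j in range(N):
            inv_sqrt = 1.0 / np.sqrt(a[j, j].real)  # pentagon: lone divide/sqrt
            a[j, j] = inv_sqrt                      # store the reciprocal itself
            a[j+1:, j] *= inv_sqrt                  # octagons: multiply, no divide
            for k in range(j + 1, N):
                a[k:, k] -= a[k:, j] * a[k, j].conj()
            a[j, j+1:] = 0
        return a    # diagonal holds 1/g_jj; below-diagonal entries hold g_ij

Forward and backward substitution can then multiply by the stored diagonal values rather than divide, which is the property exploited below.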
After the Cholesky factor, G, is determined, y is determined using forward substitution as shown in
For a banded matrix, the algorithm is as follows.
$g_{L,K}$ is the corresponding element at row L, column K of the Cholesky factor, G.
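A minimal Python reference for forward substitution of Equation 12 ($G y = \tilde{r}$), using the $g_{L,K}$ notation with 0-based indices; this models the computation, not the array timing.

    import numpy as np

    def forward_substitution(G, r_tilde):
        N = G.shape[0]
        y = np.zeros(N, dtype=complex)
        for L in range(N):
            # star functions: subtract g[L, K] * y[K] for K < L
            # diamond function: divide by the diagonal g[L, L]
            y[L] = (r_tilde[L] - G[L, :L] @ y[:L]) / G[L, L]
        return y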
FIGS. 12a and 12b are two implementations of forward substitution for a 4×4 G matrix using scalar processors. Two functions are performed by the processors 72, 74: the star function 72 of FIG. 12c and the diamond function 74 of FIG. 12d. The star function 72 performs Equations 42 and 43.
$y \leftarrow w$   Equation 42

$x \leftarrow z - w \cdot g_{ij}$   Equation 43
The diamond function 74 performs Equations 44 and 45.
$x \leftarrow z / g_{ij}$   Equation 44

$y \leftarrow x$   Equation 45
Inserting delay elements between the concurrent connections of the processing elements as in
Since each processing element is used in only every other processing cycle, half of the delay elements can be removed as shown in
The operation per cycle of the processing elements of the projected array of
FIGS. 12e-12j illustrate the timing diagrams for each processing cycle of a banded 5 by 5 matrix.
To show that the same processing elements can be utilized for forward substitution as well as Cholesky decomposition,
Similarly,
After the y variable is determined by forward substitution, the data vector can be determined by backward substitution. Backward substitution is performed by the following subroutine.
For a banded matrix, the following subroutine is used.
$(\cdot)^*$ indicates a complex conjugate function. $g^*_{L,K}$ is the complex conjugate of the corresponding element determined for the Cholesky factor G. $y_L$ is the corresponding element of y.
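The matching Python reference for backward substitution of Equation 11 ($G^H d = y$); row L of $G^H$ is the conjugate of column L of G, hence the conjugates below. Again a sketch of the computation, not of the array.

    import numpy as np

    def backward_substitution(G, y):
        N = G.shape[0]
        d = np.zeros(N, dtype=complex)
        for L in range(N - 1, -1, -1):
            # star functions: subtract g*[K, L] * d[K] for K > L
            # diamond function: divide by the conjugated diagonal g*[L, L]
            d[L] = (y[L] - G[L+1:, L].conj() @ d[L+1:]) / G[L, L].conj()
        return d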
Backward substitution is also implemented using scalar processors with the star and diamond functions 76, 78 as shown in FIGS. 15c and 15d. The star function 76 performs Equations 46 and 47.
$y \leftarrow w$   Equation 46

$x \leftarrow z - w \cdot g^*_{ij}$   Equation 47

The diamond function 78 performs Equations 48 and 49.

$x \leftarrow z / g^*_{jj}$   Equation 48

$y \leftarrow x$   Equation 49
With the delays 68 inserted at the concurrent assignments between processors 76, 78, the array of
Since each processor 76, 78 in FIG. 16a operates in every other clock cycle, every other delay can be removed as shown in FIG. 16b.
The operations per cycle of the processing elements 76, 78 of the projected array of
FIGS. 15e-15j illustrate the extension of the processors of
The timing diagrams begin in stage 7, which is after stage 6 of forward substitution. The processing begins in stage 7, time 0 (
Similarly,
To simplify the complexity of the individual processing elements 72, 74, 76, 78 for both forward and backward substitution, the divide function 80 can be separated from the elements 72, 74, 76, 78, as shown in
Since the computational data flow for all three processes (determining G, forward substitution and backward substitution) is the same, over N processing elements or, for a banded matrix, the bandwidth P, all three functions can be performed on the same reconfigurable array. Each processing element 82, 84 of the reconfigurable array is capable of operating the functions to determine G and to perform forward and backward substitution, as shown in FIGS. 19a and 19b.
To simplify the individual processing elements 82, 84 in the reconfigurable array, the divide and square root functionality is preferably broken out from the array into a reciprocal and square root device 86. The reciprocal and square root device 86 preferably determines the reciprocal to be used in a multiplication, as shown in FIGS. 20a and 20b.
To reduce the number of processors 82, 84 further, folding is used.
FIG. 21a illustrates bi-directional folding for four processing elements 76_1, 76_2, 76_3, 76_4/78 performing the function of twelve elements over three folds of the array of FIG. 11b. Instead of delay elements being between the processing elements 76_1, 76_2, 76_3, 76_4/78, dual port memories 86_1, 86_2, 86_3, 86_4 (86) are used to store the data of each fold. Although delay elements (dual port memories 86) may be present for each processing element connection, such as for the implementation of
During the first fold, each processor's data is stored in its associated dual port memory 86 in an address for fold 1. Data from the matrix is also input to the processors 76_1-76_3, 76_4/78 from memory cells 88_1-88_4 (88). Since there is no wrap-around of data between fold 1 processor 76_4/78 and fold 3 processor 76_1, a dual port memory 86 is not used between these processors. However, since a single address is required between the fold 1 and fold 2 processor 76_1 and between the fold 2 and fold 3 processor 76_4/78, a dual port memory 86 is shown as a dashed line. During the second fold, each processor's data is stored in a memory address for fold 2. Data from the matrix is also input to the processors 76_1-76_3, 76_4/78 for fold 2. Data for fold 2 processor 76_1 comes from fold 1 processor 76_1, which is the same physical processor 76_1, so (although shown) this connection is not necessary. During the third fold, each processor's data is stored in its fold 3 memory address. Data from the matrix is also input to the processors 76_1-76_3, 76_4/78 for fold 3. Data for fold 3 processor 76_4/78 comes from fold 2 processor 76_4/78, so this connection is not necessary. For the next processing stage, the procedure is repeated for fold 1.
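As a rough behavioral sketch of this folding schedule (not the hardware), the loop below time-multiplexes F folds of virtual elements onto N physical elements, with a per-element, per-fold memory slot standing in for the dual port memories 86; the step callback is hypothetical and would carry the star/diamond/pentagon/octagon work for that cycle.

    def run_folded(N, F, num_stages, step):
        # memory[pe][fold] plays the role of one dual-port-memory address
        memory = [[None] * F for _ in range(N)]
        for stage in range(num_stages):
            for fold in range(F):          # fold 1, fold 2, fold 3, then wrap
                for pe in range(N):
                    memory[pe][fold] = step(stage, fold, pe, memory[pe][fold])
        return memory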
FIG. 22a is an implementation of the bi-directional folding of FIG. 21a.
FIG. 21b illustrates a one directional folding version of the array of FIG. 11b. During the first fold, each processor's data is stored in its associated dual port memory address for fold 1. Although fold 1 processor 76_4/78 and fold 3 processor 76_1 are physically connected, in operation no data is transferred directly between these processors. Accordingly, the memory port 86_4 between them has storage for one less address. Fold 2 processor 76_4/78 is effectively coupled to fold 1 processor 76_1 by the ring-like connection between the processors. Similarly, fold 3 processor 76_4/78 is effectively coupled to fold 2 processor 76_1.
FIG. 22b is an implementation of the one directional folding of FIG. 21b.
To implement Cholesky decomposition, forward and backward substitution on folded processors, each processor in the array, such as the 76_4/78 processor, must be capable of performing not only the functions of the processors for Cholesky decomposition, forward and backward substitution, but also those functions for each fold. As shown in
The signals w, x, y, and z are the same as those previously defined in the PE function definitions. The signals $a_q$ and $a_d$ represent the current state and next state, respectively, of a PE's memory location being read and/or written in a particular cycle of the processing. The names in parentheses indicate the signals to be used for the second slice.
This preferred processing element can be used for any of the PEs, though it is desirable to optimize PE1, which performs the divide function, independently from the other PEs. Each input to the multiplexers 94_1 to 94_8 is labeled with a '0' to indicate that it is used for PE1 only, a '−' to indicate that it is used for every PE except PE1, or a '+' to indicate that it is used for all of the PEs. The isqr input is connected to zero except for the real slice of PE1, where it is connected to the output of a function that generates the reciprocal of the square root of the aqr input. Such a function could be implemented as a LUT with a ROM for a reasonable fixed-point word size.
As shown in
This application is a continuation of U.S. patent application Ser. No. 10/172,113, filed on Jun. 14, 2002, which is a continuation-in-part of patent application Ser. No. 10/083,189, filed on Feb. 26, 2002, which claims priority from U.S. Provisional Patent Application No. 60/332,950, filed on Nov. 14, 2001.