The present invention is directed to radar systems, and more particularly to the processing of received data.
In digital signal processing for radio and radar applications, the need to form the product of a matrix with a vector is often encountered. Often the values are complex numbers having a real and imaginary part, but the operation is implemented using real multiplies and adds. For example, a number N of sequentially received time samples may be subjected to a Fourier Analysis to obtain N spectral components. The well-known Fast Fourier Transform or FFT is conventionally used for this process as it requires a number of complex multiply/accumulate operations only of the order of N log₂(N), as opposed to the Discrete Fourier Transform (DFT), which needs N² such operations.
Digital beamforming is another operation that may be required in communications or radar applications.
For reception, a first number of antenna elements of an antenna array receive signals which are then digitized and submitted to digital beamforming to determine the signals received from each of a second number of directions. Such a receive beamforming operation may be expressed as multiplication of a vector of signal samples received at the same instant by the antenna elements by a fixed matrix of beamforming coefficients, the signal sample vector changing from sampling instant to sampling instant while the “fixed” matrix of beamforming coefficients may change only slowly if at all.
For transmission, digital beamforming takes a first plurality of digitized signal streams for transmission and creates therefrom a second plurality of signals to be transmitted from a second plurality of transmitter-antenna elements such that each signal stream is transmitted in a different desired direction. This also may be expressed as a matrix-times-vector operation similar to receive beamforming.
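For illustration, the following short Python sketch forms a fixed beamforming matrix and applies it to one vector of received element samples per sampling instant. The half-wavelength uniform linear array, the steering-vector model, and all names are illustrative assumptions, not details taken from this disclosure:

    import numpy as np

    # Hypothetical sizes: M = 4 beam directions, N = 8 antenna elements.
    M, N = 4, 8
    angles = np.deg2rad([-30.0, -10.0, 10.0, 30.0])               # desired beam directions
    elements = np.arange(N)                                        # half-wavelength spaced elements
    B = np.exp(-1j * np.pi * np.outer(np.sin(angles), elements))   # "fixed" M x N beamforming matrix

    rng = np.random.default_rng(0)
    samples = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # one snapshot of element samples
    beams = B @ samples    # one matrix-times-vector product per sampling instant

The matrix B stays fixed from snapshot to snapshot while the sample vector changes, which is the situation addressed by the structures described herein.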
A logic structure suitable for chip integration is described that performs multiplication of an M×N matrix of multi-bit values by a vector of N multi-bit values in parallel, yielding all outputs at the same time. The structure treats a single bit of each element of one row of the matrix at a time, bits of like significance forming a row of single digits that may be regarded as having values of (1 or 0) or (1 or −1) in the binary case, or (1, 0, or −1). The latter ternary states arise if the matrix row values are in sign-magnitude form.
The row of single digits is then multiplied by the multi-bit vector. Since the digits are only +/−1 or 0, no multiplication is involved, and the result is simply sums and differences of the multi-bit vector elements. The inventive structure forms all possible sums and differences of groups of the multi-bit vector elements, where a group size L can be smaller than the vector length N to keep the number of sums and differences, which is either 2^L or 3^L, within a reasonable number. For example, a group of size L=8 would produce 256 combinations for binary digits or 6,561 in the ternary case. To avoid the much greater number in the ternary case, the following procedure is used:
The applications of interest (such as Fourier transforms and beamforming) have complex matrix elements of the form Exp(jθ)={cos(θ)+j sin(θ)}. That is, every value has a magnitude less than or equal to 1. Adding 1 to all elements makes their values lie between 0 and +2, and dividing by 2 then makes the values lie between 0 and 1. The addition of 1 to the matrix followed by division by 2 is compensated by multiplying the resulting matrix-vector product by 2 and subtracting the sum of the vector elements from each result. The ternary values caused by negative matrix values are thus eliminated, and the bit-rows of the matrix are then binary, 1 or 0.
All combinations of a group of L of the N vector values with a weight of 0 or 1 are efficiently computed using just one addition or subtraction per combination, for example by forming the combinations in Gray-code order, in which the selection of bit weights differs in only one position from one value to the next. These combinations will be used repeatedly for different rows of bits from the same and from different rows of matrix values.
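A minimal Python sketch of this step (the function name and data layout are illustrative assumptions): all 2^L weight-0/1 combinations of a group are formed in Gray-code order, each new combination obtained from the previous one with a single addition or subtraction of one vector value:

    def gray_subset_sums(group):
        """All 2**L sums of 0/1-weighted combinations of `group`, formed in
        Gray-code order so that each new sum needs one add or subtract."""
        L = len(group)
        sums = [0] * (1 << L)          # indexed by the L-bit selection pattern
        acc, prev = 0, 0
        for i in range(1, 1 << L):
            code = i ^ (i >> 1)        # reflected Gray code: one bit changes per step
            changed = code ^ prev
            bit = changed.bit_length() - 1
            if code & changed:         # bit turned on: include that vector value
                acc += group[bit]
            else:                      # bit turned off: remove it again
                acc -= group[bit]
            sums[code] = acc
            prev = code
        return sums                    # sums[pattern] = sum of the selected values

For example, gray_subset_sums([S0, S1, S2]) returns the eight combinations of S0, S1 and S2 indexed by the 3-bit selection pattern, and the same table can be reused for every bit row of every matrix row.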
If a group size L does not divide into N, different group sizes L1, L2 . . . , etc. can be used which sum to N.
The preformed combinations are then selected according to the specific pattern of bits of like significance taken from successive groups of matrix row elements, and the results for the successive groups are added to obtain a partial product of the N-element vector with a whole row of N digits selected from the same matrix row. This is repeated by selecting bits of other, but again like, significance from the same matrix row to obtain partial products with the other matrix digit-rows, the partial products being combined with a shift to account for the place significance of the different matrix element digits with which the vector was multiplied. The result is the product of one matrix row with the vector. This is then repeated for all matrix rows to obtain the desired M-value matrix-vector product.
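The following Python sketch ties these steps together as a behavioral model. It assumes non-negative integer matrix coefficients (for example, after the offset-and-halve step described below); the function and variable names are illustrative, not taken from the disclosure:

    def bitplane_matvec(matrix, x, L, coeff_bits):
        """Matrix-vector product formed one digit-row (bit plane) at a time,
        reusing preformed 0/1-weighted combinations of groups of vector values."""
        N = len(x)
        groups = [range(g, min(g + L, N)) for g in range(0, N, L)]

        # Precompute, once, every 0/1-weighted combination of each group of vector values.
        precomb = []
        for grp in groups:
            table = [0] * (1 << len(grp))
            for pattern in range(1, 1 << len(grp)):
                lowest = (pattern & -pattern).bit_length() - 1
                table[pattern] = table[pattern & (pattern - 1)] + x[grp[lowest]]
            precomb.append(table)

        results = []
        for row in matrix:                       # one output value per matrix row
            total = 0
            for b in range(coeff_bits):          # one digit-row (bit plane) at a time
                partial = 0
                for gi, grp in enumerate(groups):
                    # Bits of like significance b from this group of row coefficients.
                    pattern = 0
                    for k, j in enumerate(grp):
                        pattern |= ((row[j] >> b) & 1) << k
                    partial += precomb[gi][pattern]   # select the preformed combination
                total += partial * (1 << b)      # shift-and-add across bit planes
            results.append(total)
        return results

For example, bitplane_matvec([[5, 3, 7, 2]], [1, 2, 3, 4], L=2, coeff_bits=3) returns [40], matching 5·1+3·2+7·3+2·4 computed directly.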
However large M may be, the same preformed combinations of the N-element vector values can be used for each row of digits and for each matrix row, whereby a gain in computational efficiency is obtained.
In a preferred implementation, the precomputed combinations are computed on the fly using serial adders, and not stored in memory. The serial output streams of the serial adders are made available on a number of horizontal lines corresponding to the number of combinations, and a number of vertical lines, corresponding to the number of matrix rows M times the number of bits in each multi-bit matrix value, pick up selected precombinations for further addition by placing a serial adder at the crossing of the vertical line with the horizontal line carrying the bit stream of the selected combination. The vertical lines corresponding to bits of different significance of the same matrix row are finally combined with bit shifts corresponding to the bit place significance to yield the final results. This latter operation is the only structure that resembles a multiplier, and so it is claimed that only one multiplier is needed for each of the M output values.
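A behavioral Python sketch of one such bit-serial adder follows; the clocking, the handling of word-length growth, and the function names are simplified assumptions for illustration:

    def int_to_lsb_first_bits(value, width):
        """Unsigned value as an LSB-first bit stream of the given width."""
        return [(value >> i) & 1 for i in range(width)]

    def serial_add(a_bits, b_bits):
        """Add two LSB-first bit streams; the carry is fed back to the next
        bit time, so there is no carry-propagation chain to wait for."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s = a + b + carry
            out.append(s & 1)
            carry = s >> 1
        out.append(carry)              # flush the final carry
        return out

For instance, serial_add(int_to_lsb_first_bits(5, 4), int_to_lsb_first_bits(9, 4)) yields the LSB-first bits of 14. Cascading such adders along a vertical line accumulates the selected precombinations bit-serially.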
Using the invention to perform multiplication of an N×N DFT matrix with an N-element vector to be transformed, a fully parallel DFT thus needs only N multiplies, which is faster than an FFT.
The method also accelerates the dot product of two vectors. It can be regarded as achieving this by avoiding accumulation of partial products of different place significance for each multiplication and instead accumulating partial products of the same significance across all multiplications before applying one shift-and-add operation to accumulated partial products of different place significance at the end.
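The same rearrangement is sketched below in Python for a single dot product, assuming non-negative integer coefficients for brevity (names are illustrative):

    def dot_by_bitplane(coeffs, x, coeff_bits):
        """Accumulate partial products of the same place significance across all
        multiplications first; apply one shift-and-add at the end."""
        planes = [0] * coeff_bits
        for c, xi in zip(coeffs, x):
            for b in range(coeff_bits):
                if (c >> b) & 1:
                    planes[b] += xi            # partial product of significance b
        return sum(p * (1 << b) for b, p in enumerate(planes))

For example, dot_by_bitplane([5, 3], [10, 20], 3) gives 110, matching 5·10 + 3·20.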
For matrix and vector values that are complex, further additions of partial products such as Real×Real−Imaginary×Imaginary and Real×Imaginary+Imaginary×Real are performed. It may be arranged that the real and imaginary parts of a result appear adjacent to one another on a chip to minimize the routing required to perform a square-root-of-sum-of-squares operation on all result values.
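For completeness, a short NumPy sketch (names assumed) of how four real matrix-vector products combine into the complex result described above:

    import numpy as np

    def complex_matvec_from_real_parts(Ar, Ai, xr, xi):
        """Combine real-valued products into the complex matrix-vector result."""
        real = Ar @ xr - Ai @ xi   # Real x Real - Imaginary x Imaginary
        imag = Ar @ xi + Ai @ xr   # Real x Imaginary + Imaginary x Real
        return real, imag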
The present invention will now be described with reference to the accompanying figures, wherein numbered elements in the following written description correspond to like-numbered elements in the figures. Methods and systems of the present invention may include a logic structure suitable for chip integration that performs multiplication of an M×N matrix of multi-bit values by a vector of N multi-bit values in parallel, yielding all outputs at the same time. The exemplary structure treats a single bit of each element of one row of the matrix at a time, bits of like significance forming a row of single digits that may be regarded as having values of (1 or 0) or (1 or −1) in the binary case, or (1, 0, or −1). The latter ternary states arise if the matrix row values are in sign-magnitude form.
U.S. Pat. No. 6,219,365 to current inventor Paul W. Dent, filed 19 Jan. 1999 and entitled “Apparatus for Performing Multiplication of a Vector of Multi-Bit Values by a Matrix of Multi-Bit Coefficients,” describes a “fast” matrix-times-vector method and apparatus for the case where the matrix is fixed and the vector is variable, in which the matrix multiplies one column of single bits of the vector elements at a time, the bits being of like place significance. Since the bits take on only one of two values, for example (1 or 0) or (1 or −1), this multiplication generates only one of a limited number of all possible sums and differences of the matrix coefficients, which combinations can be precomputed and stored in look-up tables. In case the look-up tables become too large, the bit vector can be divided into smaller bit vectors that are multiplied by a correspondingly smaller number of matrix coefficients, leading to smaller look-up tables. In the transmit beamforming case, it was disclosed that modulation of digital data onto a radio frequency carrier using linear modulation can be exchanged in order with the linear operation of transmit beamforming, such that the transmit beamforming need only operate on a single column of data bits at the data bit rate. The outputs of the beamformer were then subjected to the linear modulation operation, up-sampling and filtering after beamforming to produce spectrally-shaped I,Q samples at a sample rate of multiple samples per data bit. Thus, switching the order of modulation and beamforming results in the beamformer input vector having only a single column of binary values, and the matrix multiplication with it takes place only at the data bit rate instead of the higher I/Q sample rate. Moreover, there are no multiplies to be performed. The U.S. Pat. No. 6,219,365 patent is hereby incorporated by reference herein in its entirety.
Multiplication is a more complex operation than addition, and thus, there is a strong motivation to reduce multiplication operations. Multiplier hardware structures take more chip area and power than adder structures so there is also strong motivation to reduce multiplier structures needed for a given speed of computation.
The above-incorporated '365 patent also discloses how to perform fully parallel multiplication of an N×N matrix of multi-bit values by a vector of N multi-bit values using only N multiplier structures, compared to the N² that would be needed with a conventional approach.
The matrix-times-multi-bit-vector method of the '365 patent comprises performing matrix multiplication with a single column of the vector's bits of like significance at a time using look-up tables precomputed as a function of the matrix coefficients, and then combining the results for bit columns of different place significance by shifting the results according to place significance and adding. The latter operation is analogous to a multiplier structure that adds partial products; however, there is only one such structure needed per output value computed.
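A behavioral Python sketch of that prior method follows; it is an interpretation of the description above, not code from the '365 patent, and the single-table form shown here is only practical for very small N (which is why the patent divides the bit vector into smaller groups):

    def matvec_365_style(matrix, x, x_bits):
        """Look-up tables are functions of the fixed matrix coefficients,
        indexed by columns of like-significance bits of the variable vector."""
        M, N = len(matrix), len(x)
        luts = []
        for row in matrix:                       # one table per output value
            table = [0] * (1 << N)
            for pattern in range(1, 1 << N):
                lowest = (pattern & -pattern).bit_length() - 1
                table[pattern] = table[pattern & (pattern - 1)] + row[lowest]
            luts.append(table)
        out = [0] * M
        for b in range(x_bits):                  # one column of vector bits at a time
            col = 0
            for j in range(N):
                col |= ((x[j] >> b) & 1) << j
            for m in range(M):
                out[m] += luts[m][col] * (1 << b)   # shift-and-add per output
        return out

For example, matvec_365_style([[2, 3], [4, 5]], [6, 7], x_bits=3) returns [33, 59], matching the direct products. Note that the tables depend on the matrix, so a very large M leads to the excessive table storage discussed below.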
In a current application, a matrix-times-vector operation is required in which the matrix is M×N and the vector is of length N, and the number of rows M of the matrix is much greater than the number of columns, N; for example, N=256 and M=8192. In that case, the method of the '365 patent results in an excessive number of look-up tables to be precomputed and stored in memory. Therefore, an alternative method is sought which is described herein.
Referring to
It may be deduced from
R1=1*S0+0.9*S1+0.9*S2−0.75*S3+0.8*S4+0.9*S5,
where * stands for fixed point multiplication.
The selection of fewer than all N row values (N is only six in this case) results in a reduction of the number of combinations of the vector values that have to be formed. Using an exemplary L=3 and looking at the 3rd bit down, the first group 100 in
Whatever the matrix coefficients, only 3×3×3=27 possible combinations of three vector values can be needed. The same 27 combinations suffice for all bit rows of all matrix rows, and thus need be formed only once.
For a larger N, it is desirable for L to be as large as possible, but if the single digits in the rows can take on ternary values, the number of combinations to be formed is 3^L, which is 243 for L=5. For binary symbols, the value of L can go up to 8, so N can be divided into a smaller number of groups of 8. For example, if N=256, 32 groups of 8 can be used, and each group of 8 requires 256 precombinations of 8 vector values to be formed. There are 32 groups of 8 for N=256, so 32 times 256 combinations have to be formed. Alternatively, if L=4, 64 groups of 16 combinations would be needed. It is also possible to use different values of L for different groups if no one L divides into N. For example, if N=255, 31 groups of 8, and one group of 7, could be used, or 51 groups of 5. The greater the number of combinations that are precomputed at this stage, the fewer additions of partial products have to be combined later, so there is a tradeoff between the silicon area and power needed to form the precombinations and the later complexity. This tradeoff depends on how many output values are needed, and a larger number M of output values favors forming more precombinations early on.
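A small helper (illustrative only, assuming binary digit-rows) reproduces the group counts quoted above:

    def group_plan(N, L):
        """Split N vector positions into groups of at most L and count the
        0/1-weighted precombinations those groups require."""
        sizes = [L] * (N // L)
        if N % L:
            sizes.append(N % L)
        return sizes, sum(2 ** s for s in sizes)

For example, group_plan(256, 8) gives 32 groups of 8 and 32×256 precombinations, while group_plan(255, 8) gives 31 groups of 8 plus one group of 7.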
Ternary values may be avoided by noting that in Fourier transform-like operations, such as DFTs or antenna beamforming, all the matrix values have real and imaginary parts that lie between −1 and +1. The complex case is considered later herein; for now, consider a real matrix comprising only cosines or sines with values between +1 and −1. These are all rendered non-negative by adding 1 to every matrix element, such that the values then lie between 0 and 2. The next step is dividing by 2 so that they all lie between 0 and 1. Adding 1 to all matrix values is equivalent to adding the sum of all the vector values to each result and is therefore compensated by subtracting the sum of all vector values from each of the final results. The division by 2 may be compensated, if desired, by first multiplying each result by 2 before subtracting the sum of the vector values; alternatively, half the sum of the vector values may be subtracted.
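A short NumPy check of this compensation (the random test values are illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.cos(rng.uniform(0.0, 2.0 * np.pi, size=(4, 6)))   # real matrix values in [-1, +1]
    x = rng.integers(-100, 100, size=6)

    A_shifted = (A + 1.0) / 2.0                   # all matrix values now lie in [0, 1]
    y = 2.0 * (A_shifted @ x) - x.sum()           # multiply result by 2, subtract vector sum
    assert np.allclose(y, A @ x)                  # identical to the original product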
In
“B7” represents the row bits just to the right of the binary point, that is, 011 011. The first 011 group signifies the addition of S1 and S2; therefore, a dot (●) (serial adder) 602 is placed on the crossing of the “B7” vertical line with the horizontal line carrying the S1+S2 combination. The second group 011 corresponds to the addition of S4 and S5. Therefore, the “B7” vertical line also has a serial adder (●) 602 on the horizontal line corresponding to the combination S4+S5. The vertical lines thus join the output of one adder to the input of the next to form a string of adders. Thus, having passed through all adders in the string, the result at the end of the string is the product of the vector with one digit-row of one matrix row, the digits in the row being of the same place significance. Vertical line “B8” carries the serial product of greatest place significance, “B7” is a factor of 2 less significant, and so on, down to the least significant partial product on line B0. These shall all now be added with shifts corresponding to their place significance, which is achieved by delaying bits of high significance in delay elements (D) 600 so that they match up with bits of equal significance in the next less significant partial product. The bit streams are LSB first, so later bits are of higher significance. After adding the partial products with place-significant shifts, the final output R0 is the dot product of the first matrix row with the input vector.
The structure of
The physical size of the chip structure can be estimated. For example, each group of L bits creates 2^L combinations of the input vector values. There are N/L such groups; therefore, the number of horizontal lines is N·2^L/L − 1.
The number of vertical lines is equal to the product of the number of matrix rows with the number of bits of precision of each matrix coefficient. For example, if N=256 and L=8, there are 8,191 horizontal lines, and if the matrix coefficient precision is 9 bits and there are 8,192 matrix rows, there will be 73,728 vertical lines.
Modern semiconductor chips allow 50 nm line spacing and have, for example, up to ten metal layers. Using only one layer of metallization each for the horizontal and vertical lines, the 73,728 vertical lines occupy 73,728×50 nm=3.7 mm, and the 8,192 horizontal lines occupy 0.4 mm. Thus, the main part of the structure, fabricated as described, occupies only about 3.7 mm×0.4 mm, or roughly 1.5 mm², of chip area.
In an exemplary 5 nm silicon process, it is conceivable that a feasible serial bit rate through the serial adders is 16 Gb/s. The benefit of serial adders is that there is no carry propagation to wait for, the carry being handled by the carry feedback. Assuming a final word-length growth to 32 bits, the circuit can perform one such matrix×vector operation every 2 ns. This is equivalent to over 10¹⁵ fixed-point multiply-accumulates per second.
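The figures quoted in the preceding paragraphs can be reproduced with a few lines of Python; the 50 nm pitch, the 16 Gb/s serial rate, and the 32-bit word length are the assumptions stated above, not measured values:

    N, L, M, coeff_bits = 256, 8, 8192, 9
    horizontal_lines = N * 2**L // L - 1              # 8,191 horizontal lines
    vertical_lines = M * coeff_bits                   # 73,728 vertical lines
    pitch_m = 50e-9                                   # 50 nm line spacing
    width_mm = vertical_lines * pitch_m * 1e3         # about 3.7 mm
    height_mm = (horizontal_lines + 1) * pitch_m * 1e3   # about 0.4 mm

    bit_rate = 16e9                                   # 16 Gb/s serial rate
    word_bits = 32
    op_time = word_bits / bit_rate                    # 2 ns per matrix-vector operation
    macs_per_second = M * N / op_time                 # about 1e15 multiply-accumulates per second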
A structure for the case where all values are complex will now be developed, using
As before, the N bit row corresponding to like-significant bits of the binary expanded real parts (101) is divided into groups of L bits, for example, where N=6 in
In
Real×Real−Imag×Imag,
but only adders are required to form the imaginary part RI0, as the imaginary part of a complex product is:
Real×Imag+Imag×Real.
Also, to simplify
Likewise, the final imaginary result is compensated by subtracting the sum of all imaginary vector values, formed by a second vertical line having an adder to combine SI0+SI1+SI2 with SI3+SI4+SI5.
It may be mentioned that a “string” of adders in series may beneficially be replaced by a binary tree of adders, in which pairs of values are added in a first rank of adders, then pairs of first-rank adder outputs are added in a second rank of adders, and so forth, the number of adders being the same but leading to simpler carry-flushing in the serial-adder case due to the tree depth being only log₂ of the number of adders. Apart from the latter characteristic, these two structures shall be regarded herein as functionally interchangeable.
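A tiny Python sketch contrasting the two functionally interchangeable arrangements (function names are assumed; a non-empty input is assumed):

    def chain_sum(values):
        """String of adders: each output feeds the next adder's input."""
        total = 0
        for v in values:
            total += v
        return total

    def tree_sum(values):
        """Binary tree of adders: same number of adders, depth about log2(count)."""
        vals = list(values)
        while len(vals) > 1:
            paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
            if len(vals) % 2:
                paired.append(vals[-1])
            vals = paired
        return vals[0]

Both return the same sum; only the depth of the adder network differs.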
In
Exemplary embodiments can be used to efficiently implement common algorithms that can be expressed as Matrix×Vector. For example, the Discrete N-point Fourier Transform algorithm (also referred to as a complex Fourier Transform) can be expressed as the multiplication of an N×N complex matrix by an N-element complex vector. As the Fourier Transform matrix is fixed but the vector to be transformed is variable, the inventive algorithm described herein is appropriate. The DFT would be computed with the equivalent of only 2N real multiplies instead of the 4N² needed for a direct DFT or the 4N log₂(N) real multiplies that are needed with the Fast Fourier Transform. For N=256, this is a factor of 512 times more efficient than the direct DFT and 16 times more efficient than an FFT. The efficiency gain may translate into lower power consumption when computing a large number of transforms continuously. The chip area of 1.5 mm² estimated previously for a 256-in, 8,192-out real matrix multiply becomes 6 mm² for the complex case. A 256-point Fourier transform engine with 256 in and 256 out is 1/32nd of that size, which is about 0.2 mm², and performs a transform perhaps every 2 ns.
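A brief NumPy check that the fixed DFT matrix applied as a matrix-times-vector product reproduces the FFT result, together with the operation counts quoted above (the counts restate the text's figures, not measurements):

    import numpy as np

    N = 256
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)          # fixed N x N DFT matrix
    rng = np.random.default_rng(0)
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    assert np.allclose(F @ x, np.fft.fft(x))              # matrix x vector equals the FFT

    direct_dft = 4 * N**2                                 # 262,144 real multiplies
    fft_mults = 4 * N * int(np.log2(N))                   # 8,192 real multiplies
    invention = 2 * N                                     # 512 multiply-equivalents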
Although the number base envisioned herein is principally binary, and in some cases ternary, the principle discussed herein is valid for any number base, such as decimal or hexadecimal, although not obviously as efficient for full custom chip implementation.
Accordingly, an exemplary logic structure suitable for chip integration performs multiplication of an M×N matrix of multi-bit values by a vector of N multi-bit values in parallel, yielding all outputs at the same time. The exemplary structure treats a single bit of each element of one row of the matrix at a time, with bits of like significance forming a row of single digits that may be regarded as having values of (1 or 0) or (1 or −1) in the binary case, or (1, 0, or −1). The latter ternary states arise if the matrix row values are in sign-magnitude form. The row of single digits is then multiplied by the multi-bit vector. Since the digits are only +/−1 or 0, no multiplication is involved, and the result is simply sums and differences of the multi-bit vector elements. The exemplary structure forms all possible sums and differences of groups of the multi-bit vector elements, where a group size L can be smaller than the vector length N to keep the number of sums and differences, which is either 2^L or 3^L, within a reasonable number.
Changes and modifications in the specifically-described embodiments may be carried out without departing from the principles of the present invention, which is intended to be limited only by the scope of the appended claims as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application, Ser. No. 63/140,567, filed Jan. 22, 2021, which is hereby incorporated by reference herein in its entirety.