Adaptive digital beamformer coefficient processor for satellite signal interference reduction

Information

  • Patent Grant
  • Patent Number
    6,885,338
  • Date Filed
    Friday, December 28, 2001
  • Date Issued
    Tuesday, April 26, 2005
Abstract
Filter coefficients of a beamformer are computed based on a segment of input samples. The segment of input samples is divided into a plurality of blocks of input samples wherein the plurality of blocks of input samples are received by a shared memory at a first rate. The first block of the plurality of blocks is received in the shared memory at a first time. The plurality of blocks of input samples are read out of the shared memory at a second rate wherein the first block of the plurality of blocks is read from the shared memory at a second time. A plurality of partial covariance matrices for the plurality of blocks read from the shared memory are computed and added together to determine a covariance matrix used to compute the filter coefficients.
Description
FIELD OF THE INVENTION

The present invention relates to a method and apparatus for reducing interference in a received satellite signal using real-time signal processing.


BACKGROUND OF THE INVENTION

Navigational aids, such as those devices used in automobiles to assist drivers in locating destinations, have become very popular in recent years. These navigational aids work by receiving satellite signals from systems such as the Global Positioning System (GPS). GPS consists of 24 satellites that orbit the earth and transmit signals to these navigational aids. A navigational aid processes these signals to determine, for example, the location of a driver and, based on the driver's location, the navigational aid may provide directions to the driver's destination. In addition to navigational devices, GPS provides means for automatic vehicle location systems, aircraft landing systems, and precision timing systems. These devices have both commercial and military applications.


However, the satellite signals on which these devices rely are transmitted at a very low power level and are therefore susceptible to unintentional and intentional interference. Sources of unintentional interference include cellular phones and television station transmitting antennas. Intentional interference (jamming) is accomplished by deliberately producing signals that interfere with the transmitted satellite signals.


When interference occurs, the performance of devices that rely on the satellite signals degrades. To maintain or improve the performance of these devices in the presence of interference, GPS receivers must be designed to cancel or minimize the interference.


For a significant reduction of the effects of interference to a desired satellite signal, a hardware implementation of digital filters operating on analog-to-digital sampled data from the satellite receiver's intermediate frequency may be required. The digital filters require numerical coefficients that are derived from the incoming sampled data and are applied to the filters in real-time. However, in conventional systems the hardware needed to store the sampled data is very costly and computationally inefficient. Therefore, there is a need for a GPS receiver that may cancel or minimize interference in a cost-effective and computationally simplified manner.


SUMMARY OF THE INVENTION

A method is provided for computing filter coefficients of a beamformer based on a segment of input samples. The method comprises the steps of dividing the segment of input samples into a plurality of blocks of input samples and receiving the plurality of blocks of input samples in a shared memory at a first rate, wherein a first block of the plurality of blocks is received in the shared memory at a first time. The method further comprises reading the plurality of blocks of input samples from the shared memory at a second rate, wherein the first block of the plurality of blocks is read from the shared memory at a second time. Still further, the method comprises computing a plurality of partial covariance matrices for the plurality of blocks read from the shared memory and adding the plurality of partial covariance matrices.


Additional objects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.


It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and, together with the description, serve to explain the principles of the invention.



FIG. 1 illustrates an exemplary embodiment of a spatial temporal adaptive processing (STAP) beamformer;



FIG. 2 illustrates an exemplary block diagram of a coefficient processor;



FIG. 3 illustrates an exemplary method for computing filter coefficients for the beamformer of FIG. 1 using the coefficient processor of FIG. 2;



FIG. 4 illustrates another exemplary block diagram of a coefficient processor; and



FIG. 5 illustrates an exemplary method for computing filter coefficients for the beamformer of FIG. 1 using the coefficient processor of FIG. 4.





DESCRIPTION OF THE EMBODIMENTS

Reference will now be made in detail to the present exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.



FIG. 1 depicts an exemplary embodiment of a spatial temporal adaptive processing (STAP) beamformer 100. The beamformer 100 may comprise N antenna elements 160 to receive one or more satellite signals 100A arriving at the antenna elements 160 from one or more directions. In addition to satellite signals 100A, the N antenna elements may also receive one or more interference signals 100B arriving at the antenna elements 160 from one or more directions. Each antenna element 160 is connected to a multiple tapped delay line structure 150 comprising M taps. Each of the multiple tapped delay line structures 150 may comprise M-1 delay elements 110, M multipliers 120, and an adder 130. Generally, each of the multiple tapped delay line structures 150 may have a finite impulse response (FIR) structure.


Although not shown in FIG. 1, the signals 100A, 100B received by each antenna element 160 may undergo preprocessing prior to being received by the multiple tapped delay line structures 150 and a coefficient processor 170. For example, signals received by each antenna element 160 may be filtered by a preselection filter and down-converted to baseband or another IF frequency. Further, the signals may be filtered by a bandlimited filter and sampled by an analog-to-digital converter. Still further, the baseband-filtered sampled signals may be further converted to complex baseband signals by digital demodulation or Hilbert transform type processing, for example, prior to being input to the multiple tapped delay line structures 150 and the coefficient processor 170.
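A minimal illustrative sketch of such a front end is shown below (Python with SciPy); the filter order, cutoff, and sampling parameters are placeholders chosen for illustration and are not prescribed by the patent.

    import numpy as np
    from scipy.signal import butter, lfilter, hilbert

    def to_complex_baseband(real_if, fs, f_if, cutoff):
        # Band-limiting filter (placeholder 4th-order Butterworth low-pass).
        b, a = butter(4, cutoff / (fs / 2))
        # Hilbert-transform-type processing yields the analytic (complex) signal.
        analytic = hilbert(real_if)
        # Digital down-conversion from the IF carrier to complex baseband.
        k = np.arange(len(real_if))
        baseband = analytic * np.exp(-2j * np.pi * f_if * k / fs)
        # Band-limit the complex baseband samples x_n(k).
        return lfilter(b, a, baseband)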


The beamformer 100 receives input signals 100A, 100B and computes filter coefficients, wnm, which are applied to the multiple tapped delay line structures 150 for processing the input signals. The signals at the taps in the multiple tapped delay line structures 150 are weighted by corresponding filter coefficients, wnm, and the resulting products are summed by adders 130 producing an output signal for each multiple tapped delay line structure 150. The filter coefficients, wnm, are computed by the coefficient processor 170, which will be discussed in greater detail below. The outputs from each of the multiple tapped delay line structures 150 are then summed together by an adder 140 to generate output samples, y(k).


The output samples, y(k), of the beamformer 100 may be expressed by the following equation:
y(k) = \sum_{n=1}^{N} \sum_{m=1}^{M} w_{nm}^{*} \, x_n(k-(m-1))   (1)


where xn(k) denotes a complex input sample from the n-th antenna element at time k. It is assumed that at the n-th antenna element, the satellite signal is multiplied by a factor e^(−jΔn). The exponent factor, Δn, depends on the angle of arrival of the satellite signal, the carrier frequency of the satellite signal, and the position of the n-th antenna element. More specifically, Δn = ωcτn, where ωc is the carrier frequency of the satellite signal and τn is the inter-element time delay at antenna element n. If the steering vector for a given satellite direction is denoted by [e^(−jΔ1) e^(−jΔ2) . . . e^(−jΔN)], then the input samples, xn(k), may be expressed in the z-space by the following equation:

X_n(z) = e^{-j\Delta_n}\, e^{-j\tilde{\omega}\tau_n}\, V(z)   (2)

where ω̃ is the baseband frequency of the processed satellite signal received by the multiple tapped delay line structures 150 and the coefficient processor 170, and V(z) is the z-space representation of the satellite signal at a first antenna element.
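As an illustration of equation (1), the following sketch (Python with NumPy, assuming the complex samples xn(k) are already available as an N×K array) slides the M-tap delay line over each element's sample stream and forms the weighted sum. It is only a numerical model of the output computation, not the hardware implementation.

    import numpy as np

    def beamformer_output(samples, w):
        """samples: complex array of shape (N, K); w: coefficients of shape (N, M).
        Returns y(k) for k = M-1 .. K-1 per equation (1)."""
        N, M = w.shape
        K = samples.shape[1]
        y = np.zeros(K - M + 1, dtype=complex)
        for out, k in enumerate(range(M - 1, K)):
            # taps holds x_n(k-(m-1)) for m = 1..M in row n
            taps = samples[:, k - M + 1:k + 1][:, ::-1]
            y[out] = np.sum(np.conj(w) * taps)
        return y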


To minimize the effects of interference signals 100B on the GPS receiver, the expected power, P, of the complex output samples, y(k), of the beamformer 100 may be minimized according to the following equation:
P = E\{|y(k)|^2\}
  = E\left\{\left(\sum_{n=1}^{N}\sum_{m=1}^{M} w_{nm}^{*}\, x_n(k-(m-1))\right)\left(\sum_{i=1}^{N}\sum_{j=1}^{M} w_{ij}\, x_i^{*}(k-(j-1))\right)\right\}
  = E\left\{\sum_{n=1}^{N}\sum_{m=1}^{M}\sum_{j=1}^{M}\sum_{i=1}^{N} w_{nm}^{*}\, x_n(k-(m-1))\, x_i^{*}(k-(j-1))\, w_{ij}\right\}   (3)


The expected power, P, may be simplified by rearranging the input samples and weights into the following (N×M)×1 vectors:
\tilde{x}(k) = \left[\, x_1(k)\;\; x_1(k-1)\;\cdots\; x_1(k-(M-1))\;\; x_2(k)\;\; x_2(k-1)\;\cdots\; x_2(k-(M-1))\;\cdots\; x_N(k)\;\; x_N(k-1)\;\cdots\; x_N(k-(M-1)) \,\right]^{T}   (4)

\tilde{w} = \left[\, w_{11}\;\; w_{12}\;\cdots\; w_{1M}\;\; w_{21}\;\; w_{22}\;\cdots\; w_{2M}\;\cdots\; w_{N1}\;\; w_{N2}\;\cdots\; w_{NM} \,\right]^{T}   (5)


In matrix notation, the output samples, y(k), of the beamformer 100 may be expressed as follows:

y(k) = \tilde{w}^{H}\tilde{x}(k)   (6)


The resulting expected output power, P, is given by:

P = E\{|y(k)|^2\} = E\{(\tilde{w}^{H}\tilde{x}(k))(\tilde{x}^{H}(k)\tilde{w})\}
  = \tilde{w}^{H} E\{\tilde{x}(k)\tilde{x}^{H}(k)\}\,\tilde{w} = \tilde{w}^{H} R_{\tilde{x}\tilde{x}}\,\tilde{w}   (7)


where Rx̃x̃ is a covariance matrix. Before determining the minimum expected output power, P, at least one constraint may be imposed to avoid the trivial solution of zeros for the filter coefficients, w̃. Accordingly, equation (7) may be minimized subject to the following constraint:

C^{H}\tilde{w} = F   (8)


where C is a constraint weighting matrix and F is a constraint solution vector. The constraint weighting matrix, C, may be an (N×M)×L matrix and the constraint solution vector, F, may be an L×1 vector. Accordingly, the interference minimization problem may be characterized as follows:

minimize P = \tilde{w}^{H} R_{\tilde{x}\tilde{x}} \tilde{w}
subject to C^{H}\tilde{w} = F


The filter coefficients, w̃, that may solve the interference minimization problem may be determined by the following equation:

\tilde{w} = R_{\tilde{x}\tilde{x}}^{-1} C \left[ C^{H} R_{\tilde{x}\tilde{x}}^{-1} C \right]^{-1} F   (9)
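A compact numerical sketch of equation (9), assuming the covariance matrix Rx̃x̃, the constraint matrix C, and the constraint vector F are already available as NumPy arrays, is:

    import numpy as np

    def optimal_weights(R, C, F):
        """w = R^{-1} C [C^H R^{-1} C]^{-1} F, per equation (9)."""
        Rinv_C = np.linalg.solve(R, C)              # R^{-1} C, (N*M) x L
        middle = np.conj(C.T) @ Rinv_C              # C^H R^{-1} C, L x L
        return Rinv_C @ np.linalg.solve(middle, F)  # filter coefficients w

Solving linear systems rather than explicitly inverting Rx̃x̃ is a standard numerical choice; the LU-based approach described later in this specification accomplishes the same thing with reusable triangular factors.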


Conventional coefficient processors receive and store a predetermined number of input samples prior to computing the covariance matrix, Rx̃x̃. For example, for a beamformer 100 having four antenna elements 160 (i.e., N=4) and five taps (i.e., M=5), the conventional coefficient processor may receive and store four thousand complex input samples prior to computing the covariance matrix, Rx̃x̃. Because the input samples are complex, the coefficient processor may require a memory device having eight thousand memory locations. Further, conventional coefficient processors are computationally intensive. In the above example, the conventional coefficient processor computes the inversion of a 20×20 covariance matrix, Rx̃x̃, and solves a 20×20 system of linear equations to compute the optimum filter coefficients, w̃.



FIG. 2 illustrates a first embodiment of the coefficient processor 170 according to the present invention. The coefficient processor 170 is a computationally efficient processor for computing the covariance matrix, Rx̃x̃. Further, the coefficient processor 170 is less costly than conventional coefficient processors because coefficient processor 170 requires less memory storage to compute the covariance matrix, Rx̃x̃. Coefficient processor 170 may comprise a data input device 210 to receive sampled input signals from the antenna elements 160, a shared memory device 220 to store the input samples, a CPU 230 to compute the filter coefficients, w̃, a position input device 240 to receive satellite position data, and an output device 250 to output the filter coefficients, w̃, to the multiple tapped delay line structures 150 of the digital beamformer 100. The CPU 230 may compute the covariance matrix, Rx̃x̃, by computing a plurality of partial covariance matrices, Rx̃x̃i, and adding the partial covariance matrices, Rx̃x̃i, together to compute the covariance matrix, Rx̃x̃.


More specifically, consider an N×S matrix of input samples in which the sample aij lies in the i-th row and j-th column. The elements, rij, of the covariance matrix, Rx̃x̃, of this matrix may be determined based on the following equation:
r_{ij} = \sum_{k=1}^{N} a_{ki}^{*}\, a_{kj} \quad \text{for } 1 \le i \le S \text{ and } 1 \le j \le S   (10)


where the term a*ki is the complex conjugate of aki. The computation of the elements, rij, of the covariance matrix, Rx̃x̃, may be broken up into a sum of parts by dividing the N×S matrix into L sub-matrices, each having M rows and S columns, where M=N/L. Accordingly, the elements, rij, of the covariance matrix, Rx̃x̃, may be determined based on the following equation having L individual summation terms:
r_{ij} = \sum_{k=1}^{M} a_{ki}^{*}\, a_{kj} + \sum_{k=M+1}^{2M} a_{ki}^{*}\, a_{kj} + \cdots + \sum_{k=(L-1)M+1}^{LM} a_{ki}^{*}\, a_{kj}   (10)


Each summation represents a partial covariance computation. Accordingly, the CPU 230 may compute the covariance matrix, Rx̃x̃, by computing a plurality of partial covariance matrices, Rx̃x̃i, and adding the partial covariance matrices, Rx̃x̃i, together to compute the covariance matrix, Rx̃x̃. The CPU 230 may then compute the filter coefficients, w̃, according to equation (9) above.
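The block-wise accumulation of equation (10) can be sketched as follows (Python/NumPy, assuming each block is an M×S sub-matrix of the N×S sample matrix):

    import numpy as np

    def partial_covariance(block):
        # One summation term of equation (10): (A_l)^H (A_l) for sub-matrix A_l.
        return np.conj(block.T) @ block

    def covariance_from_blocks(blocks):
        # Sum of the L partial covariance matrices gives the full S x S matrix R.
        S = blocks[0].shape[1]
        R = np.zeros((S, S), dtype=complex)
        for block in blocks:
            R += partial_covariance(block)
        return R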



FIG. 3 illustrates a method used by the coefficient processor 170 of FIG. 2 to compute the covariance matrix, Rx̃x̃, for segments of input samples. Continuing with the example of a beamformer 100 having four antenna elements 160 (i.e., N=4) and five taps (i.e., M=5), the segments may consist of four thousand complex input samples grouped into eight blocks of 25×20 matrices. A block of 25×20 input samples may comprise twenty-five vectors of twenty input samples each. The twenty input samples may consist of five input samples from each antenna element 160. More specifically, a block of 25×20 input samples may consist of the following input samples:
\begin{bmatrix}
x_1(k-120) & x_1(k-121) & \cdots & x_1(k-124) & x_2(k-120) & \cdots & x_4(k-120) & \cdots & x_4(k-124) \\
\vdots & \vdots & & \vdots & \vdots & & \vdots & & \vdots \\
x_1(k-5) & x_1(k-6) & \cdots & x_1(k-9) & x_2(k-5) & \cdots & x_4(k-5) & \cdots & x_4(k-9) \\
x_1(k) & x_1(k-1) & \cdots & x_1(k-4) & x_2(k) & \cdots & x_4(k) & \cdots & x_4(k-4)
\end{bmatrix}
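One way to assemble such a block, sketched in Python under the assumption that the complex streams x1..x4 are held in a 4×K array and that k ≥ 124, is:

    import numpy as np

    def build_block(samples, k, n_vectors=25, n_elements=4, n_taps=5):
        """Return the 25x20 block ending at time index k; row r is the stacked
        tap vector [x_1(t) .. x_1(t-4), x_2(t), .., x_4(t-4)] at t = k - 5*(24 - r)."""
        rows = []
        for r in range(n_vectors):
            t = k - n_taps * (n_vectors - 1 - r)
            row = []
            for n in range(n_elements):
                # taps x_n(t), x_n(t-1), ..., x_n(t-4)
                row.extend(samples[n, t - n_taps + 1:t + 1][::-1])
            rows.append(row)
        return np.array(rows)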


As shown in FIG. 3, the shared memory device 220 and the CPU 230 operate in parallel. That is, as the shared memory device 220 receives blocks of input samples for an i-th segment of input samples, the CPU 230 reads and processes those blocks to compute a covariance matrix, Rx̃x̃, for the i-th segment of input samples. As shown in FIG. 3, the shared memory device 220 may continuously receive input samples at a rate equal to 1125 microseconds per block of input samples, for example.


The CPU 230 may not read a first block of input samples of the i-th segment to begin computing the covariance matrix, Rx̃x̃, for the i-th segment until a time after the first block of input samples has been received by the shared memory device 220. The purpose of the time delay is to allow incoming input samples to fill the shared memory device 220 even when they arrive at a rate less than or equal to the rate at which the input samples are read by the CPU 230. In this way, blocks of input samples may always be available for processing by the CPU 230. To determine the number of blocks that should be stored in the shared memory device 220 before the CPU 230 begins reading blocks of input data, so that blocks of input samples are always available for processing by the CPU 230, let the following variables be defined:


Ns=the number of blocks of samples in a segment of input samples;


Nc=the number of blocks of samples in the shared memory device 220 for a segment before a first block of input samples is read by the CPU 230;


Tw=the time to write a block of input samples to the shared memory device 220;


Tr=the time to read a block of input samples from the shared memory device 220 by the CPU 230; and


Ts=the time of one segment of input samples.


At a time t=NsTw, the number of blocks of input samples remaining in the shared memory device 220, Nm, may be given by the following equation:
N_m = N_s - \frac{1}{T_r}\left(N_s T_w - N_c T_w\right)   (11)


For blocks of input samples to be available for processing by the CPU 230, the number of blocks of input samples remaining in the shared memory device 220, Nm, at a time t=NsTw should be greater than or equal to zero. Accordingly, based on equation (11), the number of blocks, Nc, that should be stored in the shared memory device 220 before the CPU 230 begins reading blocks of input data, so that blocks of input samples are always available for processing by the CPU 230, may be determined from the following equation:
N_c \ge N_s\left(1 - \frac{T_r}{T_w}\right)   (12)


For the embodiment of FIG. 3, where Ns=8, Tw=1125 microseconds, Tr=1100 microseconds, and Ts=10000 microseconds, Nc may be at least 0.178 blocks. Accordingly, the CPU 230 may begin to read blocks of input data from the shared memory device 220 after 0.178 blocks of input samples have been received by the shared memory device 220 to ensure that blocks of input samples may always be available for processing by the CPU 230. If Tw=1250 microseconds and Tr=1100 microseconds, Nc may be at least 0.960 blocks. If Tw=Tr, Nc may be at least zero blocks.
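The worked values above follow directly from equation (12); a small check (in Python, for illustration only):

    def min_startup_blocks(n_s, t_w, t_r):
        # Equation (12): minimum blocks to buffer before the CPU starts reading.
        return max(0.0, n_s * (1.0 - t_r / t_w))

    print(min_startup_blocks(8, 1125, 1100))  # ~0.178 blocks
    print(min_startup_blocks(8, 1250, 1100))  # ~0.960 blocks
    print(min_startup_blocks(8, 1100, 1100))  # 0.0 blocks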


Each time the CPU 230 reads a block of input samples from the shared memory device 220, it frees a memory cell in the shared memory device 220 to receive additional input samples from the data input device 210. In this way, the shared memory device 220 need not be capable of storing all four thousand complex samples needed to compute the covariance matrix, Rx̃x̃. Accordingly, less memory storage is required to compute the covariance matrix, Rx̃x̃. As illustrated in the examples above, the shared memory device 220 may only need to store a fraction of the total number of blocks of input samples for a segment.


Even if Tw<Tr, the shared memory device 220 may only need to store a fraction of the total number of blocks of input samples in a segment. For example, at time t=NsTw and for Nc=0, the number of blocks of input samples remaining in the shared memory device 220, Nm, may be given by the following equation:
N_m = N_s\left(1 - \frac{T_w}{T_r}\right)   (13)


If Tw=Tr/2, the shared memory device 220 may only need to store one half of the total number of blocks of input samples in a segment.
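Again as a quick check of equation (13), assuming Nc=0 (illustrative Python only):

    def blocks_resident_at_end(n_s, t_w, t_r):
        # Equation (13): blocks still held in shared memory at t = Ns*Tw when Nc = 0.
        return n_s * (1.0 - t_w / t_r)

    print(blocks_resident_at_end(8, 550, 1100))  # Tw = Tr/2 -> 4 blocks, half the segment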


The CPU 230 computes the covariance matrix, Rx̃x̃, for an i-th segment of input samples by computing a partial covariance matrix for each block of input samples for the i-th segment. The partial covariance matrices may be computed for only the upper half of the covariance matrix, Rx̃x̃, because the covariance matrix, Rx̃x̃, is conjugate symmetric. Computing the partial covariance matrices in this way may save approximately half the computation time. When the CPU 230 has computed the partial covariance matrices for each block of input samples for the i-th segment of input samples, the CPU 230 adds the partial covariance matrices together to compute the upper half of the covariance matrix, Rx̃x̃. The remaining half of the covariance matrix, Rx̃x̃, is filled in by taking the complex conjugate of the element in the i-th row and j-th column and placing that value in the j-th row and i-th column of the covariance matrix, Rx̃x̃. Once the covariance matrix, Rx̃x̃, is computed, the CPU 230 may compute the filter coefficients, w̃, by first executing an LU decomposition algorithm, which triangularizes the covariance matrix, Rx̃x̃, by decomposing it into the product of a lower triangular matrix and an upper triangular matrix. Triangularization decomposes the linear system to be solved into two triangular systems of equations, which are solved recursively. Following LU decomposition, alternative algorithms may be used to compute the filter coefficients, w̃, for different operational conditions. For example, U.S. application Ser. No. 10/035,676 (now U.S. Pat. No. 6,480,151), filed on even date herewith in the name of Khalil John Maalouf, Jeffrey Michael Ashe, and Naofal Al-Dhahir and entitled "A GPS Receiver Interference Nuller With No Satellite Signal Distortion," assigned to the assignee of the present application, which is hereby incorporated by reference, discloses algorithms that may be used to compute the filter coefficients, w̃.
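A brief sketch of these two steps, mirroring the computed upper triangle into the lower one and then reusing a single LU factorization of Rx̃x̃, is shown below (Python with SciPy; illustrative only, and not the algorithms of the incorporated application):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def fill_hermitian(R_upper):
        # Copy the conjugate of each upper-triangle element into the lower triangle.
        return np.triu(R_upper) + np.conj(np.triu(R_upper, 1)).T

    def weights_via_lu(R, C, F):
        lu, piv = lu_factor(R)              # triangularize R into lower/upper factors
        Rinv_C = lu_solve((lu, piv), C)     # two triangular solves per column of C
        middle = np.conj(C.T) @ Rinv_C      # C^H R^{-1} C
        return Rinv_C @ np.linalg.solve(middle, F)   # equation (9)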


Referring to FIG. 4, the processing time for computing the covariance matrix, Rx̃x̃, may be further reduced by adding partial correlation processors 225 to the coefficient processor 170. FIG. 5 illustrates the process of the shared memory device 220 operating in parallel with the partial correlation processors 225 and the CPU 230. Continuing with the example of a beamformer 100 having four antenna elements 160 (i.e., N=4) and five taps (i.e., M=5), the shared memory device 220 organizes the input samples into blocks of 25×20. The shared memory device 220 delivers at time ti an i-th block of 25×20 input samples to one of the partial correlation processors 225. The shared memory device 220 may continuously deliver blocks of 25×20 input samples to the partial correlation processors 225 evenly spaced in time, every 375 microseconds, for example. Each time the shared memory device 220 delivers a block of input samples to a partial correlation processor 225, it frees a memory cell in the shared memory device 220 to receive additional input samples from the data input device 210. In this way, the shared memory device 220 need not be capable of storing all four thousand complex samples needed to compute the covariance matrix, Rx̃x̃.


Each partial correlation processor 225 receives a block of 25×20 input samples from the shared memory device 220 and computes a partial covariance matrix. The number of partial correlation processors 225 may be chosen so that a partial correlation processor 225 is always available to begin processing a block of data received from the shared memory device 220. As shown in FIG. 5, four partial correlation processors 225 are provided. The first partial correlation processor 225 receives the first block of input samples. While the first partial correlation processor 225 processes the first block of input samples, the second partial correlation processor 225 receives and processes the second block of input samples. While the first and second partial correlation processors 225 process the first and second blocks of input samples, the third processor 225 receives and processes the third block of input samples. Finally, while the first, second, and third processors 225 process the first, second, and third blocks, the fourth processor 225 receives and processes the fourth block of input samples. When the fifth block of input samples is ready for processing, the first processor 225 is available to receive and process the fifth block. The remaining processors 225 are available to receive and process the remaining blocks of the segment of input samples.


When each partial correlation processor 225 finishes its computation, it delivers the result to the CPU 230. When the CPU 230 has received the partial covariance matrices for a segment of input samples from the partial correlation processors 225, the CPU 230 adds the eight partial covariance matrices together to compute the covariance matrix, Rx̃x̃. The coefficient processor 170 of FIG. 4 may compute the covariance matrix, Rx̃x̃, in a faster time than the coefficient processor 170 of FIG. 2 because the multiple partial correlation processors 225 can read out and process the blocks of input samples more quickly than the single CPU 230 of FIG. 2. For example, the coefficient processor 170 of FIG. 4 may reduce the processing time for computing the covariance matrix, Rx̃x̃, to 3 milliseconds from 10 milliseconds using the coefficient processor of FIG. 2.
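The pipelining of FIG. 5 can be imitated in software with a pool of workers standing in for the partial correlation processors 225. The sketch below (Python; a process pool is an assumption for illustration, not the patent's hardware) lets the pool compute the eight partial covariance matrices while the main routine sums them:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def partial_covariance(block):
        # Work performed by one partial correlation processor 225.
        return np.conj(block.T) @ block

    def covariance_parallel(blocks, n_workers=4):
        # Four workers mirror the four processors 225 of FIG. 5; the results are
        # summed by the main routine, which plays the role of the CPU 230.
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            partials = list(pool.map(partial_covariance, blocks))
        return sum(partials)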


The coefficient processors 170 described above assumed a beamformer 100 having four antenna elements 160 (i.e., N=4) and five taps (i.e., M=5). However, the coefficient processors 170 may be adapted for beamformers 100 having a variety of numbers of antenna elements 160 and taps. Adapting the coefficient processors 170 for a variety of beamformers 100 will be obvious to those of ordinary skill in the art.


Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims
  • 1. A method for computing filter coefficients of a beamformer based on a segment of input samples comprising the steps of: dividing the segment of input samples into a plurality of blocks of input samples; receiving the plurality of blocks of input samples in a shared memory at a first rate wherein a first block of the plurality of blocks is received in a shared memory at a first time; reading the plurality of blocks of input samples from the shared memory at a second rate wherein the first block of the plurality of blocks is read from the shared memory at a second time, computing a plurality of partial covariance matrices for the plurality of blocks read from the shared memory; adding the plurality of partial covariance matrices.
  • 2. The method of claim 1, wherein the segment of input samples corresponds to an N×S matrix of input samples and wherein the plurality of blocks of input samples correspond to L sub-matrices of the N×S matrix wherein the L sub-matrices are M×S matrices where M=N/L.
  • 3. The method of claim 1, wherein the second time is delayed from the first time and the second rate is greater than or equal to the first rate.
  • 4. The method of claim 1, wherein the second rate is less than the first rate.
  • 5. A method for computing filter coefficients of a beamformer based on a segment of input samples comprising the steps of: dividing the segment of input samples into a plurality of blocks of input samples; receiving the plurality of blocks of input samples in a shared memory; reading the plurality of blocks of input samples by a plurality of partial covariance processors from the shared memory wherein each of the plurality of partial covariance processors compute a partial covariance matrix for each block of input samples read by the partial covariance processor; adding the plurality of partial covariance matrices.
  • 6. The method of claim 5, wherein the segment of input samples corresponds to an N×S matrix of input samples and wherein the plurality of blocks of input samples correspond to L sub-matrices of the N×S matrix wherein the L sub-matrices are M×S matrices where M=N/L.
  • 7. An apparatus for computing filter coefficients of a beamformer based on a segment of input samples wherein the segment of input samples are divided into a plurality of blocks of input samples, the apparatus comprising: a shared memory for receiving the plurality of blocks of input samples at a first rate wherein a first block of the plurality of blocks is received in a shared memory at a first time; and a processor for reading the plurality of blocks of input samples from the shared memory at a second rate, computing a plurality of partial covariance matrices for the plurality of blocks read from the shared memory, adding the plurality of partial covariance matrices, wherein the first block of the plurality of blocks is read from the shared memory at a second time, wherein the second time is delayed from the first time and the second rate is greater than the first rate.
  • 8. An apparatus for computing filter coefficients of a beamformer based on a segment of input samples wherein the segment of input samples are divided into a plurality of blocks of input samples, the apparatus comprising: a shared memory for receiving the plurality of blocks of input samples; and a plurality of partial covariance processors for reading the plurality of blocks of input samples from the shared memory wherein each of the plurality of partial covariance processors computes a partial covariance matrix for each block of input samples read by that partial covariance processor; a processor for adding the partial covariance matrices computed by the plurality of partial covariance processors.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the priority benefit of U.S. Provisional Application No. 60/259,121, filed on Dec. 29, 2000, which is incorporated herein by reference.

US Referenced Citations (15)
Number Name Date Kind
3763490 Hadley et al. Oct 1973 A
H374 Abo-Zena et al. Nov 1987 H
5268927 Dimos et al. Dec 1993 A
5274386 Pellon Dec 1993 A
5420593 Niles May 1995 A
5592173 Lau et al. Jan 1997 A
5781156 Krasner Jul 1998 A
5874914 Krasner Feb 1999 A
5923287 Lennen Jul 1999 A
5955987 Murphy et al. Sep 1999 A
6317501 Matsuo Nov 2001 B1
6446008 Ozbek Sep 2002 B1
6498581 Yu Dec 2002 B1
6594367 Marash et al. Jul 2003 B1
6651007 Ozbek Nov 2003 B1
Related Publications (1)
Number Date Country
20020171580 A1 Nov 2002 US
Provisional Applications (1)
Number Date Country
60259121 Dec 2000 US