1. Field of the Invention
The present invention relates generally to a method and apparatus for computing the projection of a signal onto a certain subspace, the component of the signal that lies outside that subspace, and an orthogonal basis for a given matrix. More particularly, the present invention relates to the use of such a method or apparatus in real-time hardware applications, since the method and apparatus may be utilized without matrix inversions or square root computations.
2. Description of the Prior Art
In spread spectrum systems, whether it is a communication system, a Global Positioning System (GPS) or a radar system, each transmitter may be assigned a unique code and in many instances each transmission from a transmitter is assigned a unique code. The code is nothing more than a sequence (often pseudorandom) of bits. Examples of codes include the Gold codes (used in GPS—see Kaplan, Elliot D., Editor, Understanding GPS: Principles and Applications, Artech House, 1996), Barker codes (used in radar—see Stimson, G. W., “An Introduction to Airborne Radar”, SciTech Publishing Inc., 1998), and Walsh codes (used in communications systems like CDMAOne and CDMA2000—see the IS-95 and IS-2000 standards). These codes may be used to spread the signal so that the resulting signal occupies some specified range of frequencies in the electromagnetic spectrum or the codes may be superimposed on another signal which might also be a coded signal.
Assigning a unique code to each transmitter allows the receiver to distinguish between different transmitters. An example of a spread spectrum system that uses unique codes to distinguish between transmitters is a GPS system.
If a single transmitter has to broadcast different messages to different receivers, such as a base-station in a wireless communication system broadcasting to different mobiles, one may use codes to distinguish between the messages for each mobile. In this scenario, each bit for a particular user is encoded using the code assigned to that user. By coding in this manner, the receiver, by knowing its own code, may decipher the message intended for it from the composite signal transmitted by the transmitter.
In some communication systems, a symbol is assigned to a sequence of bits that make up a message. For example, a long digital message may be grouped into sets of M bits and each of these sets of M bits is assigned to a symbol. For example, if M=6, then each set of 6 bits may assume one of 2^6=64 possibilities. One such possibility is 101101. Such a system would broadcast a unique waveform, called a symbol, to indicate to the receiver the sequence of bits. For example, the symbol α might denote the sequence 101101 and the symbol β might denote the sequence 110010. In the spread spectrum version of such a system, the symbols are codes. An example of such a communication system is the mobile to base-station link of CDMAOne or IS-95.
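The grouping of bits into symbol indices described above can be sketched in Python; the function name and the MSB-first convention are illustrative assumptions, not part of the original system description.

```python
def bits_to_symbols(bits, m=6):
    """Group a bit stream into m-bit sets and map each set to a symbol index.

    Each m-bit group can take one of 2**m values; for m=6 there are 64
    possible symbols. In a spread spectrum system, the index would then
    select one of 64 symbol waveforms (codes).
    """
    assert len(bits) % m == 0, "message length must be a multiple of m"
    symbols = []
    for i in range(0, len(bits), m):
        group = bits[i:i + m]
        # interpret the m bits as a binary number, most significant bit first
        index = int("".join(str(b) for b in group), 2)
        symbols.append(index)
    return symbols

# the sequence 101101 from the text maps to symbol index 45
print(bits_to_symbols([1, 0, 1, 1, 0, 1]))  # [45]
```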
In some instances, such as in a coded radar system, each pulse is assigned a unique code so that the receiver is able to distinguish between the different pulses based on the codes.
Of course, all of these techniques may be combined to distinguish between transmitters, messages, pulses and symbols all in one single system. The key idea in all of these coded systems is that the receiver knows the codes of the message intended for it and by applying the codes correctly, the receiver may extract the message intended for it. However, such receivers are more complex than receivers that distinguish between messages by time and/or frequency alone. The complexity arises because the signal received by the receiver is a linear combination of all the coded signals present in the spectrum of interest at any given time. The receiver has to be able to extract the message intended for it from this linear combination of coded signals.
The following section presents the problem of interference in linear algebraic terms followed by a discussion of the current, generic (baseline) receivers.
Let H be a vector containing the spread signal from source no. 1 and let θ1 be the amplitude of the signal from this source. Let si be the spread signals for the remaining sources and let φi be the corresponding amplitudes. Since the receiver is interested in source number 1, the signals from the other sources may be considered to be interference. Then, the received signal is:
y = Hθ1 + s2φ2 + s3φ3 + . . . + spφp + n   (1)
where n is the additive noise term, and p is the number of sources in the CDMA system. Let the length of the vector y be N, where N is the number of points in the integration window. The number N is selected during the design process as part of the trade-off between processing gain and complexity. A window of N points of y will be referred to as a segment.
In a wireless communication system, the columns of the matrix H represent the various coded signals and the elements of the vector θ are the powers of the coded signals. For example, in the base-station to mobile link of a CDMAOne system, the coded signals might be the various channels (pilot, paging, synchronization and traffic) and all their various multi-path copies from different base-stations. In the mobile to base-station link, the columns of the matrix H might be the coded signals from the mobiles and their various multi-path copies.
In a GPS system, the columns of the matrix H are the coded signals being broadcast by the GPS satellites at the appropriate code, phase and frequency offsets.
In an array application, the columns of the matrix are the steering vectors or equivalently the array pattern vectors. These vectors characterize the relative phase recorded by each antenna in the array as a function of the location and motion dynamics of the source as well as the arrangement of the antennas in the array. In the model presented above, each column of the matrix H signifies the steering vector to a particular source.
Equation (1) may now be written in the following matrix form:
y = Hθ + Sφ + n   (2)
where S = [s2 s3 . . . sp] and φ = [φ2 φ3 . . . φp]^T.
Receivers that are currently in use correlate the measurement, y, with a replica of H to determine if H is present in the measurement. If H is detected, then the receiver knows the bit-stream transmitted by source number 1. Mathematically, this correlation operation is:
correlation function = (H^TH)^(-1)H^Ty   (3)
where T is the transpose operation.
Substituting for y from equation (2) illustrates the source of the power control requirement:
(H^TH)^(-1)H^Ty = θ + (H^TH)^(-1)H^TSφ + (H^TH)^(-1)H^Tn
It is the middle term, (H^TH)^(-1)H^TSφ, in the above equation that results in the near-far problem. If the codes are orthogonal, then this term reduces to zero, which implies that the receiver has to detect θ in the presence of noise (which is (H^TH)^(-1)H^Tn) only. It is easy to see that as the amplitude of the other sources increases, the term (H^TH)^(-1)H^TSφ contributes a significant amount to the correlation function, which makes the detection of θ more difficult.
The normalized correlation function, (H^TH)^(-1)H^T, defined above, is in fact the matched filter and is based on an orthogonal projection of y onto the space spanned by H. When H and S are not orthogonal to each other, there is leakage of the components of S into the orthogonal projection of y onto H. This leakage is illustrated geometrically in FIG. 1.
Signal projection may be computed by performing the projection operation directly: computing Ps = S(S^TS)^(-1)S^T and then computing the other desired quantities. This direct matrix inversion method requires computing the inverse, which may be prohibitive in hardware. In addition, the direct matrix inversion method cannot handle a subspace matrix S that is singular.
Signal projection may also be computed using Householder, Givens and Gram-Schmidt methods (QR methods). These methods may be used to decompose a given matrix into an orthonormal basis. In these QR methods, the subspace matrix is first decomposed into its orthonormal representation and then the orthonormal representation is used to compute the projection of the signal. No matrix inverse computations are required, but square root computations are needed in the computation of the orthonormal representation.
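As a point of reference for this prior-art approach, the QR route can be sketched with NumPy; the matrix sizes and data below are illustrative. The orthonormal factor Q of S yields the projection directly, but computing Q internally requires square roots (the column norms).

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((8, 3))   # subspace matrix: m=8 points, p=3 signals
y = rng.standard_normal(8)        # measurement segment

# QR decomposition: the columns of Q are an orthonormal basis for span(S).
# Internally this normalizes each basis vector, which requires square roots.
Q, R = np.linalg.qr(S)

# projection onto span(S) and onto its orthogonal complement
y_in = Q @ (Q.T @ y)              # component of y inside the subspace
y_perp = y - y_in                 # component of y outside the subspace

# y_perp is orthogonal to every column of S
print(np.allclose(S.T @ y_perp, 0))  # True
```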
Thus, there is a need in the art for a method and apparatus that provide for signal projection computations in signal processing applications without the need for any matrix inversions or square root computations, as well as to provide for the handling of a subspace matrix S which is singular.
It is therefore an object of the present invention to provide a method and apparatus that provide for signal projection computations in signal processing applications without the need for any matrix inversions or square root computations.
It is a further object to provide a method and apparatus that provide for signal projection computations that can handle a subspace matrix S that is singular.
According to a first broad aspect of the present invention, there is provided a method for generating a projection from a received signal (y), the signal comprising H, a signal of the source of interest; S, the signals of all other sources and composed of vectors s1, s2, s3, . . . , sp; and noise (n); the method comprising the steps of: determining a basis matrix U for either H or S; storing elements of the basis matrix U; and determining yperp, where yperp = y − U(U^TU)^(-1)U^Ty.
According to another broad aspect of the present invention, there is provided a method for generating a projection from a received signal (y), the signal comprising H, a spread signal matrix of the source of interest; S, the spread signal matrix of all other sources and composed of vectors s1, s2, s3, . . . , sp; and noise (n); the method comprising the steps of: A. assigning s1 as a first basis vector u1; B. determining σi, where ui^Tui = σi; C. storing ui; D. computing inner products of s_{i+1} with the u1 through ui vectors by utilizing a multiply-add-accumulator (MAC) i times; E. multiplying each inner product with the respective scalar 1/σi, thereby creating a first intermediate product; F. scaling each respective basis vector ui by multiplying each respective first intermediate product with each respective basis vector ui; G. obtaining a vector sum from step F; H. subtracting the vector sum from s_{i+1} to obtain the next basis vector u_{i+1}; I. comparing u_{i+1} to a predetermined value and, if it is equal to or less than the value, discarding u_{i+1} and going to step N; J. storing u_{i+1}; K. determining the inner product u_{i+1}^Tu_{i+1}; L. determining the reciprocal of step K, which is 1/σ_{i+1}; M. storing 1/σ_{i+1}; N. incrementing i; O. conducting steps D through N until all the s vectors have been processed, which happens at i=p, where p is the total number of spread signal s vectors of interest; and determining yperp, where yperp = y − U(U^TU)^(-1)U^Ty.
According to another broad aspect of the present invention, there is provided a method for generating a projection from a received signal (y), the signal comprising H, a spread signal matrix of the source of interest; S, the spread signal matrix of all other sources and composed of vectors s1, s2, s3, . . . , sp; and noise (n); the method comprising the steps of: A. assigning s1 as a first basis vector u1; B. determining σi, where ui^Tui = σi; C. storing ui; D. computing inner products of s_{i+1} with the u1 through ui vectors by utilizing a multiply-add-accumulator (MAC) i times; E. multiplying each inner product with the respective scalar 1/σi, thereby creating a first intermediate product; F. scaling each respective basis vector ui by multiplying each respective first intermediate product with each respective basis vector ui; G. serially subtracting the intermediate product from s_{i+1}; H. utilizing the result from step G and subtracting the next incoming value until all the values are processed; I. obtaining the next basis vector u_{i+1} from step H; J. comparing u_{i+1} to a predetermined value and, if it is equal to or less than the value, discarding u_{i+1} and going to step O; K. storing u_{i+1}; L. determining the inner product u_{i+1}^Tu_{i+1}; M. determining the reciprocal of step L, which is 1/σ_{i+1}; N. storing 1/σ_{i+1}; O. incrementing i; P. conducting steps D through O until all the s vectors have been processed, which happens when i=p, where p is the total number of spread signal s vectors of interest; and Q. determining yperp, where yperp = y − U(U^TU)^(-1)U^Ty.
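The recursion in the steps above can be sketched as follows. This is a minimal illustrative sketch, not the claimed hardware realization; the function names and the tolerance value (standing in for the "predetermined value" of the comparison step) are assumptions.

```python
import numpy as np

def orthogonal_basis(S, tol=1e-12):
    """Orthogonal (not orthonormal) basis U for the columns of S.

    u1 = s1, and each u_{i+1} is s_{i+1} minus its components along the
    previously stored u vectors. Only the reciprocals 1/sigma_i, where
    sigma_i = u_i^T u_i, are stored and used, so the computation involves
    no matrix inverses and no square roots. Columns that are (numerically)
    linearly dependent are discarded, which is how a singular S is handled.
    """
    basis, recip = [], []
    for s in S.T:                        # process s1, s2, ..., sp in turn
        u = s.astype(float)
        for ui, ri in zip(basis, recip):
            u = u - (ui @ s) * ri * ui   # subtract component along ui
        sigma = u @ u
        if sigma > tol:                  # keep only independent directions
            basis.append(u)
            recip.append(1.0 / sigma)    # store 1/sigma_i (a scalar divide)
    return np.column_stack(basis), np.array(recip)

def project_out(S, y):
    """yperp = y - U (U^T U)^(-1) U^T y; U^T U is diagonal by construction."""
    U, recip = orthogonal_basis(S)
    return y - U @ (recip * (U.T @ y))
```

With a duplicated (linearly dependent) column in S, orthogonal_basis simply skips that direction rather than failing, mirroring the discard step above.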
According to another broad aspect of the present invention, there is provided an apparatus for generating a projection from a received signal (y), the signal comprising H, a signal of the source of interest; S, the signals of all other sources and composed of vectors s1, s2, s3, . . . , sp; and noise (n); the apparatus comprising: means for determining a basis matrix U for H or S; means for storing elements of the basis matrix U; and means for determining yperp, where yperp = y − U(U^TU)^(-1)U^Ty.
According to another broad aspect of the present invention, there is provided an apparatus for generating a projection from a received signal (y), the signal comprising H, a spread signal matrix of the source of interest; S, the spread signal matrix of all other sources and composed of vectors s1, s2, s3 . . . , sp; and noise (n); the apparatus comprising:
According to another broad aspect of the present invention, there is provided an apparatus for generating a projection from a received signal (y), the signal comprising H, a spread signal matrix of the source of interest; S, the spread signal matrix of all other sources and composed of vectors s1, s2, s3 . . . , sp; and noise (n); the apparatus comprising:
Other objects and features of the present invention will be apparent from the following detailed description of the preferred embodiment.
The invention will be described in conjunction with the accompanying drawings, in which:
It is advantageous to define several terms before describing the invention. It should be appreciated that the following definitions are used throughout this application.
Where the definition of terms departs from the commonly used meaning of the term, applicant intends to utilize the definitions provided below, unless specifically indicated.
For the purposes of the present invention, the term “analog” refers to any measurable quantity that is continuous in nature.
For the purposes of the present invention, the term “base station” refers to a transmitter and/or receiver that communicate(s) with multiple mobile or stationary units in a cellular environment.
For the purposes of the present invention, the term “baseline receiver” refers to a receiver against which a receiver of the present invention is compared.
For the purposes of the present invention, the terms “basis” and “basis vector” refer to a set of vectors that completely spans the space under consideration. In 3-D space, any three linearly independent vectors comprise a basis, and in 2-D space, any two linearly independent vectors comprise a basis.
For the purposes of the present invention, the term “bit” refers to the conventional meaning of “bit,” i.e. a fundamental unit of information having one of two possible values, a binary 1 or 0, or in bipolar binary terms, a −1 or a +1.
For the purposes of the present invention the term “Code-Division Multiple Access (CDMA)” refers to a method for multiple access in which all users share the same spectrum but are distinguishable from each other by a unique code.
For the purposes of the present invention, the term “chip” refers to a non-information bearing unit that is smaller than a bit, the fundamental information bearing unit. For example, one bit is composed of multiple chips in an application that employs spreading. Depending on the spreading factor, a fixed-length sequence of chips constitutes a bit.
For the purposes of the present invention, the term “code offset” refers to a location within a code. For example, base stations in certain cellular environments distinguish between each other by their location within a particular pseudorandom code.
For the purposes of the present invention, the term “correlation” refers to the inner product between two signals scaled by the length of the signals. Correlation provides a measure of how alike two signals are.
For the purposes of the present invention, the terms “decomposition” and “factorization” refer to any method used in simplifying a given matrix to an equivalent representation.
For the purposes of the present invention, the term “digital” refers to the conventional meaning of the term digital, i.e. relating to a measurable quantity that is discrete in nature.
For the purposes of the present invention, the term “doppler” refers to the conventional meaning of the term doppler, i.e. a shift in frequency that occurs due to movement in a receiver or transmitter and/or the background.
For the purposes of the present invention, the term “Global Positioning System (GPS)” refers to the conventional meaning of these terms, i.e. a satellite-based system for position location.
For the purposes of the present invention, the product S^TS, where S is a matrix, is called the “Grammian” of S.
For the purposes of the present invention, the term “in-phase” refers to the component of a signal that is aligned in phase with a particular signal, such as a reference signal.
For the purposes of the present invention, the term “quadrature” refers to the component of a signal that is 90 degrees out of phase with a particular signal, such as a reference signal.
For the purpose of the present invention, the term “interference” refers to the conventional meaning of the term interference, i.e. a signal that is not of interest, but which interferes with the ability to acquire, identify, detect, track or perform any other operation on the signal of interest. Interference is typically structured noise that is created by other processes that are trying to do the same thing.
For the purposes of the present invention, the term “linear combination” refers to the combining of multiple signals or mathematical quantities in an additive way, where each signal is multiplied by some non-zero scalar and all the resultant quantities so obtained summed together.
For the purposes of the present invention, a vector is “linearly dependent” with respect to a set of vectors if it can be expressed as a linear combination of vectors from that set.
For the purposes of the present invention, the term “matched filter” refers to a filter that is designed to facilitate the detection of a known signal by effectively correlating the received signal with an uncorrupted replica of the known signal.
For the purposes of the present invention, the term “noise” refers to the conventional meaning of noise with respect to the transmission and reception of signals, i.e. a random disturbance that interferes with the ability to detect a signal of interest, say, for example, the operation of a nearby electrical device. Additive “noise” adds linearly with the power of the signal of interest. Examples of noise can include automobile ignitions, power lines and microwave links.
For the purpose of the present invention, the term “matrix inverse” refers to the inverse of a square matrix S, denoted by S−1, that is defined as that matrix which when multiplied by the original matrix equals the identity matrix, I, i.e. SS−1=S−1S=I, a matrix which is all zero save for a diagonal of all ones.
For the purposes of the present invention, the term “mobile” refers to a mobile phone that functions as a transmitter/receiver pair that communicates with a base station.
For the purposes of the present invention, the term “modulation” refers to imparting information on another signal, such as a sinusoidal signal or a pseudorandom coded signal, typically accomplished by manipulating signal parameters, such as phase, amplitude, frequency or some combination of these quantities.
For the purposes of the present invention, the term “multipath” refers to copies of a signal that travel a different path to the receiver.
For the purposes of the present invention, the term “norm” refers to a measure of the magnitude of a vector. The “2-norm” of a vector refers to its distance from the origin.
For the purposes of the present invention, the term “normalization” refers to a scaling relative to another quantity.
For the purposes of the present invention, two nonzero vectors, e1 and e2, are said to be “orthogonal” if their inner product (defined as e1^Te2, where T refers to the transpose operator) is identically zero. Geometrically, this refers to vectors that are perpendicular to each other.
For the purposes of the present invention, any two vectors are said to be “orthonormal” if, in addition to being orthogonal, each of their norms is unity. Geometrically, this refers to two vectors that, in addition to lying perpendicular to each other, are each of unit length.
For the purposes of the present invention, the term “processing gain” refers to the ratio of signal to noise ratio (SNR) of the processed signal to the SNR of the unprocessed signal.
For the purposes of the present invention, the term “projection” with respect to two vectors x and y refers to the component of x that lies in the direction of y: a vector pointing in the direction of y whose length equals that of the component of x lying along y.
For the purposes of the present invention, the term “pseudorandom number (PN)” refers to sequences that are typically used in spread spectrum applications to distinguish between users while spreading the signal in the frequency domain.
For the purposes of the present invention, the term “rake receiver” refers to a method for combining multipath signals in order to increase the processing gain.
For the purposes of the present invention the term “signal to noise ratio (SNR)” refers to the conventional meaning of signal to noise ratio, i.e. the ratio of the signal to noise (and interference).
For the purposes of the present invention, the term “singular matrix” refers to a matrix for which the inverse does not exist. In a “singular matrix,” one of its rows or columns is not linearly independent of the rest, and the matrix has a zero determinant.
For the purposes of the present invention, the term “spread spectrum” refers to techniques that use spreading codes to increase the bandwidth of a signal to more effectively use bandwidth while being resistant to frequency selective fading.
For the purposes of the present invention, the term “spreading code” refers to a code used in communication systems to modify the bit being transmitted in a spread spectrum system, e.g. the CDMA Pseudorandom (PN) codes used in the short and long codes. Examples of spreading codes include Gold, Barker and Walsh codes.
For the purposes of the present invention, the term “steering vector” refers to a vector that contains the phase history of a signal that is used in order to focus the signal of interest.
For the purposes of the present invention, the term “symbol” refers to the fundamental information-bearing unit transmitted over a channel in a modulation scheme. A symbol may be composed of one or more bits, which can be recovered through demodulation.
For the purposes of the present invention, the term “transpose” refers to a mathematical operation in which a matrix is formed by interchanging rows and columns of another matrix. For example, the first row becomes the first column; the second row becomes the second column, and so on.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific illustrative embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, and electrical changes may be made without departing from the spirit and scope of the invention. The following detailed description is, therefore, not to be taken in a limiting sense.
The present invention provides a method and apparatus for computing the orthogonal basis for a matrix that is free of matrix inversions and square root computations. The present invention was developed in the context of signal processing applications and the removal of interference from coded signals. However, the application of the present invention is not limited to signal processing applications.
Linear combinations of structured signals are frequently encountered in a number of diverse signal environments including wireless communications, Global Positioning Systems (GPS) and radar. In each of these application areas, the receiver observes a linear combination of structured signals in noise. Mathematically,
y=Hθ+n
where y is the received signal, the columns of the matrix H are the structured signal, θ is the relative weight of each component and n is the additive background noise.
In a wireless communication system, the columns of the matrix H represent the various coded signals and the elements of the vector θ are the powers of the coded signals. For example, in the base-station to mobile link of a CDMAOne system, the coded signals may be the various channels (pilot, paging, synchronization and traffic) and all their various multi-path copies from different base-stations, at the appropriate code, phase and frequency offsets, together with the information carried on them.
In the mobile to base-station link, the columns of the matrix H may be the coded signals from the mobiles and their various multi-path copies.
In a GPS system, the columns of the matrix H may be the coded signals being broadcast by the GPS satellites at the appropriate code, phase and frequency offsets.
In an array application, the columns of the matrix may be the steering vectors or equivalently the array pattern vectors. These vectors characterize the relative phase recorded by each antenna in the array as a function of the location and motion dynamics of the source as well as the arrangement of the antennas in the array. In the model presented above, each column of the matrix H signifies the steering vector to a particular source.
The goal of the receiver in each case is to extract one or more of the structured signals, i.e., the columns of the matrix H, from the measured signal y. In some instances, the goal of the receiver is also to estimate the elements of the vector θ corresponding to the columns of interest. However, the remaining columns of the matrix H, though not of interest to the receiver, will be a source of interference. This interference may be significant enough to impede the ability of the receiver to detect and extract the signal of interest, i.e., the column of H and its relative weight. This problem is illustrated below using a CDMA example.
Let H be a vector containing the spread signal from source no. 1 and let θ1 be the amplitude of the signal from this source. Let si be the spread signals for the remaining sources and let φi be the corresponding amplitudes. Supposing that the receiver is interested in source number 1, the signals from the other sources may be considered to be interference. Then, the received signal is:
y = θ1H + φ2s2 + . . . + φpsp + n   (1)
where n is the additive noise term, and p is the number of sources in the CDMA system. Let the length of the vector y be m, where m is the number of points in the integration window. The number m is selected during the design process as part of the trade-off between processing gain and complexity. A window of m points of y is referred to herein as a segment.
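The measurement model of equation (1) can be made concrete with a small NumPy sketch; the segment length, codes, amplitudes and noise level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 64, 4                       # segment length and number of sources

# spreading codes as +/-1 chip sequences; column 0 plays the role of H
codes = rng.choice([-1.0, 1.0], size=(m, p))
H = codes[:, :1]                   # spread signal of the source of interest
S = codes[:, 1:]                   # interference signals s2 ... sp

theta = np.array([1.0])            # amplitude of source 1
phi = np.array([5.0, 3.0, 4.0])    # (stronger) interferer amplitudes
n = 0.1 * rng.standard_normal(m)   # additive noise

# equation (1): y = H*theta1 + s2*phi2 + ... + sp*phip + n
y = H @ theta + S @ phi + n
print(y.shape)  # (64,)
```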
The above equation may now be written in the following matrix form:
y = Hθ + Sφ + n   (2)
where S = [s2 s3 . . . sp] and φ = [φ2 φ3 . . . φp]^T.
Receivers that are currently in use correlate the measurement, y, with a replica of H to determine if H is present in the measurement. If H is detected, then the receiver knows the bit-stream transmitted by source number 1. Mathematically, this correlation operation is:
correlation function = (H^TH)^(-1)H^Ty   (3)
where T is the transpose operation.
Substituting for y from equation (2) illustrates the source of the power control requirement:
(H^TH)^(-1)H^Ty = θ + (H^TH)^(-1)H^TSφ + (H^TH)^(-1)H^Tn
It is the middle term, (H^TH)^(-1)H^TSφ, in the above equation that results in the near-far problem. If the codes are orthogonal, then this term reduces to zero, which implies that the receiver has to detect θ in the presence of noise (which is (H^TH)^(-1)H^Tn) only. It is easy to see that as the amplitude of the other sources increases, the term (H^TH)^(-1)H^TSφ contributes a significant amount to the correlation function, which makes the detection of θ more difficult.
The normalized correlation function, (H^TH)^(-1)H^T, defined above, is in fact the matched filter and is based on an orthogonal projection of y onto the space spanned by H. When H and S are not orthogonal to each other, there is leakage of the components of S into the orthogonal projection of y onto H. This leakage is illustrated geometrically in FIG. 1.
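The leakage term can be demonstrated numerically; the codes and amplitudes below are illustrative, and the interference is left noise-free to isolate the effect. The matched-filter error grows linearly with the interferer amplitude, which is precisely the near-far problem.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 64
H = rng.choice([-1.0, 1.0], size=(m, 1))   # code of the source of interest
S = rng.choice([-1.0, 1.0], size=(m, 2))   # two interfering codes
theta = np.array([1.0])                    # true amplitude of source 1

def matched_filter(y):
    # normalized correlation (H^T H)^(-1) H^T y; since H has one column,
    # H^T H is a scalar and the "inverse" is an ordinary division
    return ((H.T @ y) / (H.T @ H)).item()

outputs = {}
for amp in (1.0, 10.0, 100.0):
    phi = amp * np.array([1.0, 1.0])       # interferer amplitudes
    y = H @ theta + S @ phi                # noise-free, to isolate leakage
    outputs[amp] = matched_filter(y)

# the error outputs[amp] - 1 is the leakage (H^T H)^(-1) H^T S phi,
# which scales linearly with the interferer amplitude amp
print(outputs)
```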
One way to mitigate this interference is to remove the interference from y by means of a projection operation. Mathematically, a projection onto the space spanned by the columns of the matrix S is given by:
Ps = S(S^TS)^(-1)S^T
A projection onto the space perpendicular to the space spanned by the columns of S is obtained by subtracting the above projection Ps from the identity matrix (a matrix with ones on the diagonal and zeros everywhere else). Mathematically, this projection is represented by:
Ps⊥ = I − Ps = I − S(S^TS)^(-1)S^T
The projection matrix Ps⊥ has the property that when it is applied to a signal of the type Sφ, i.e., this is a signal that lies in the space spanned by the columns of S, it completely removes Sφ no matter what the value of φ, i.e., it is magnitude independent. This cancellation is illustrated below:
Ps⊥(Sφ) = (I − S(S^TS)^(-1)S^T)Sφ = Sφ − S(S^TS)^(-1)S^TSφ = Sφ − Sφ = 0
When applied to our measurement vector y, it cancels the interference terms:
Ps⊥y = Ps⊥(Hθ + Sφ + n) = Ps⊥Hθ + Ps⊥Sφ + Ps⊥n = Ps⊥Hθ + Ps⊥n
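This magnitude-independent cancellation is easy to verify numerically. The sketch below deliberately forms the projection with an explicit Grammian inverse, for illustration only; avoiding exactly this inverse is the point of the invention.

```python
import numpy as np

rng = np.random.default_rng(3)
m, p = 16, 3
S = rng.standard_normal((m, p))            # interference subspace matrix
phi = rng.standard_normal(p)               # arbitrary interferer amplitudes

# Ps and its orthogonal complement, formed with an explicit inverse
Ps = S @ np.linalg.inv(S.T @ S) @ S.T
Ps_perp = np.eye(m) - Ps

# Ps_perp annihilates anything in span(S), regardless of magnitude
print(np.allclose(Ps_perp @ (S @ (1000.0 * phi)), 0, atol=1e-6))  # True
```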
The hardware realization of this projection operation and interference cancellation presents certain complexities and hurdles; overcoming them is the main objective of this invention.
In general, using Ps⊥ to compute yperp requires computing the Grammian of S (where S is an m×p matrix), which requires mp^2 floating point operations (flops), and computing its inverse, which requires an additional p^3 flops.
Clearly, the computation of the inverse of the Grammian is difficult, time-consuming and expensive, and progressively more so as p increases. It is also potentially unstable when there are singularities in S. Singularities in S occur if any of its columns is linearly dependent on a combination of the other columns; the Grammian is then rank-deficient and has no inverse. This results in an inability to compute the inverse of the Grammian and, consequently, hampers any computations downstream from that step.
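The instability is easy to reproduce: if one column of S duplicates another, the Grammian S^TS is rank-deficient and direct inversion fails. A NumPy illustration, with arbitrary matrix values:

```python
import numpy as np

S = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])    # column 3 duplicates column 1

gram = S.T @ S                      # the Grammian S^T S
print(np.linalg.matrix_rank(gram))  # 2, not 3: the Grammian is singular

try:
    np.linalg.inv(gram)             # direct inversion breaks down
except np.linalg.LinAlgError as e:
    print("inverse failed:", e)
```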
Even in the absence of any singularities, performing matrix inverses in hardware implementation, especially in the fixed-point implementations that are likely to be used in practical implementations, can present complications. For a detailed discussion on this issue, see Rick A. Cameron, ‘Fixed-Point Implementation of a Multistage Receiver’, PhD Dissertation, January 1997, Virginia Polytechnic Institute and State University, the entire contents and disclosure of which is hereby incorporated by reference in its entirety.
One alternative to computing the inverse of the Grammian directly is to decompose S using QR factorization methods into Q and R matrices, and then utilizing those in further computations. QR factorization may be performed using any one of the Householder, Givens, Fast Givens, Gram-Schmidt, or the modified Gram-Schmidt methods. These methods are discussed in detail in Golub G. H and C. F. Van Loan, Matrix Computations, Baltimore, Md., Johns Hopkins Univ. Press, 1983, the entire contents and disclosure of which is hereby incorporated by reference.
Householder methods involve computations on the order of 4mp² flops and provide more information than is needed for the projection operation, at the added cost of increased computation. Givens methods are potentially prone to overflow. The Gram-Schmidt and modified Gram-Schmidt methods are computationally more efficient, but involve square root computations. Square roots are particularly difficult and expensive to implement at the chip level because of the multiple clock cycles needed to compute a single square root.
The present invention describes an apparatus for computing Ps⊥y, the projection of a signal onto the subspace orthogonal to that spanned by S, in a manner that is free of both square root and inverse computations, and hence is eminently suitable for real-time implementation on digital signal processors, FPGAs, ASICs and other realizations.
For the purposes of the remaining description, the following nomenclature applies:
S=m×p matrix containing the spread signal interference structure, composed of vectors s1, s2, s3 . . . , sp;
y=m×1 measurement vector;
yperp=m×1 vector whose components that lie in the space spanned by the columns of the matrix S have been projected out; and
U=m×p orthogonal (but not orthonormal) basis for S composed of vectors u1, u2, u3, . . . , up.
In accordance with an embodiment of the present invention, let u1 = s1. Then s2 may be resolved into a component that is parallel to s1 and a component that is not, and u2 may be defined to be the component of s2 that does not lie along s1.
Then, s2 is given by the equation:
s2 = s1a1 + u2,
where a1 is the scalar weight of the component of s2 that lies along s1, and s2 is expressed as a linear combination of s1 and u2, where u2 is the new desired basis vector.
Solving for a1, the following is obtained:
a1 = (s1^T s1)^(−1) s1^T s2,
or alternately, since u1 = s1,
a1 = (u1^T u1)^(−1) u1^T s2.
Therefore, u2 = s2 − s1a1
= s2 − u1 (u1^T u1)^(−1) u1^T s2.
Thus, the second basis vector u2 is the component of s2 that is not in u1, illustrated geometrically in FIG. 2. Moreover, the basis vectors u1 and u2 together span the same space that is spanned by s1 and s2. Furthermore, u1 and u2 are orthogonal to each other.
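The two-vector step above can be sketched in software as follows (a minimal pure-Python illustration of the mathematics, not the hardware apparatus itself; the vector values are hypothetical):

```python
def dot(a, b):
    """Inner product a^T b."""
    return sum(x * y for x, y in zip(a, b))

s1 = [1.0, 1.0, 0.0]
s2 = [1.0, 0.0, 1.0]

u1 = s1                                       # first basis vector: u1 = s1
a1 = dot(u1, s2) / dot(u1, u1)                # scalar a1 = (u1^T u1)^(-1) u1^T s2
u2 = [s - a1 * u for s, u in zip(s2, u1)]     # u2 = s2 - s1 a1

# u1 and u2 are orthogonal (u1^T u2 == 0), but u2 is not normalized:
# no square root is ever taken.
```

Note that the only division involved is the reciprocal of the scalar u1^T u1, which is the property the remainder of the derivation exploits.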
Now, let the two basis vectors be represented by U2 = [u1 u2], and proceed to find the next basis vector, u3.
Next, decompose the vector s3 into a component that lies in the space spanned by the already computed basis vectors, U2 and a residual component that lies outside the space spanned by U2, which then becomes the next basis vector. This step is geometrically illustrated in FIG. 3.
Setting s3 = U2a2 + u3, and solving for a2 and u3, the following is obtained:
u3 = s3 − u1 (u1^T u1)^(−1) u1^T s3 − u2 (u2^T u2)^(−1) u2^T s3.
Mathematically, the third basis vector u3 is the third vector in the S matrix s3 with those components that lie in the space spanned by the previous basis vectors, u1 and u2, projected out.
In terms of inputs, stored variables, and outputs, the implementation as the procedure unfolds can be visualized in
The process of orthogonalization continues in the same manner: at each step, the next basis vector is computed from the corresponding s vector by projecting out all of its components that lie in the space spanned by the previously computed basis vectors. If the incoming vector is linearly dependent on the previously computed basis vectors, the result of subtracting its projection onto the previously computed basis from itself is approximately zero, i.e., on the order of machine precision, or below some other predetermined threshold level. Such a vector does not contribute significantly to the basis and should therefore be excluded. The choice of threshold is a tradeoff between accuracy and computational complexity; this discussion assumes that the desire is a system that is as accurate as possible. Proceeding along these lines, the ith step becomes the calculation of the ith basis vector ui and can be expressed as
ui = si − u1 (u1^T u1)^(−1) u1^T si − u2 (u2^T u2)^(−1) u2^T si − … − ui−1 (ui−1^T ui−1)^(−1) ui−1^T si.
The process of computing the basis vectors terminates at i = p with the calculation of the pth basis vector up. Exploiting the fact that ui^T ui is a scalar, whose inverse is therefore a simple reciprocal, the ith step of the iteration process for computing the basis vectors can be rewritten as
ui = si − (1/σ1) u1 u1^T si − (1/σ2) u2 u2^T si − … − (1/σi−1) ui−1 ui−1^T si,
where σi−1 = ui−1^T ui−1 is the square of the 2-norm of the ui−1 vector.
The (i+1)th step would be
ui+1 = si+1 − (1/σ1) u1 u1^T si+1 − (1/σ2) u2 u2^T si+1 − … − (1/σi) ui ui^T si+1.
If the last two equations are examined closely, it is found that the σ terms may be reused, and their recomputation at every step thereby avoided. The (i+1)th step then essentially consists of multiplying the pre-computed values of the reciprocal terms 1/σj with the newly computed uj uj^T si+1 values (which can be computed most efficiently by first performing the uj^T si+1 inner product, scaling the scalar obtained by 1/σj, and then scaling the vector uj by this scalar), and then subtracting the sum of these products from the si+1 vector.
If the result of the subtraction is zero (to the order of the chip precision), that vector is excluded from the basis and not used in further computations. It should be appreciated that any other precision threshold may be utilized without departing from the teachings of the present invention.
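The full iteration, with the reciprocals 1/σj stored and reused and (near-)dependent columns excluded, might be sketched as follows. This is a pure-Python model under assumed names; the threshold `tol` stands in for the chip-precision test described above:

```python
def dot(a, b):
    """Inner product a^T b."""
    return sum(x * y for x, y in zip(a, b))

def orthogonal_basis(S_cols, tol=1e-12):
    """Compute an orthogonal (not orthonormal) basis for the columns of S.
    Each incoming column s_i has its components along the previously
    computed basis vectors projected out; the stored reciprocals 1/sigma_j
    replace any matrix inverse, and no square roots are taken."""
    basis, inv_sigma = [], []
    for s in S_cols:
        u = list(s)
        for uj, rj in zip(basis, inv_sigma):
            c = dot(uj, u) * rj          # scalar (u_j^T s_i) / sigma_j
            u = [ui - c * uji for ui, uji in zip(u, uj)]
        sigma = dot(u, u)                # squared 2-norm; no square root
        if sigma > tol:                  # drop (near-)dependent columns
            basis.append(u)
            inv_sigma.append(1.0 / sigma)
    return basis, inv_sigma
```

Because the stored basis vectors are mutually orthogonal, projecting against the running residual `u` is algebraically identical to projecting against the original column (the modified Gram-Schmidt ordering), and tends to be better behaved numerically.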
In a computationally constrained system where memory is freely available, the (i+1)th step could be sped up by storing and reusing the values of the uj uj^T outer products.
At this point, the matrix factorization for S has been completed and U = [u1 u2 u3 … up−1 up] has been computed. The vectors comprising U are all orthogonal to each other: ui^T uj = 0 for all i ≠ j, and ui^T ui = σi for all i, where σi is a scalar inner product. Note that this property differs slightly from typical orthogonal factorizations, which are also orthonormal in that the 2-norms of all the basis vectors are unity, i.e., ui^T ui = 1 for all i.
Recalling that the objective of the factorization was to arrive at a method of computing yperp without the need to compute square roots or matrix inverses, the factorization is used to substitute for S in the original equation:
yperp = y − S (S^T S)^(−1) S^T y;
and the following is obtained:
yperp = y − U (U^T U)^(−1) U^T y.
The orthogonal factorization is useful due to the simplicity of computing the inverse of the Grammian: U^T U is a diagonal matrix with diagonal entries σ1, σ2, …, σp, because ui^T uj = 0 for all i ≠ j. Its inverse is another diagonal matrix with the diagonal elements replaced by their reciprocals, 1/σ1, 1/σ2, …, 1/σp.
Thus, the computation
yperp = y − U (U^T U)^(−1) U^T y
reduces to
yperp = y − (1/σ1) u1 u1^T y − (1/σ2) u2 u2^T y − … − (1/σp) up up^T y,
a sum of p scalar-scaled projections requiring neither a matrix inverse nor a square root.
Thus, the process of computing the interference free signal vector has been simplified to a computation that is numerically stable in the presence of singularities in S, and one that is free of both matrix inverses and square root computations.
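Assuming an orthogonal basis U has already been computed, this final computation can be sketched as below (a pure-Python illustration; the basis and measurement values are hypothetical):

```python
def dot(a, b):
    """Inner product a^T b."""
    return sum(x * y for x, y in zip(a, b))

def project_out(U, y):
    """yperp = y - U (U^T U)^(-1) U^T y. With mutually orthogonal u_i,
    U^T U is diagonal, so its 'inverse' is just the reciprocals 1/sigma_i:
    no matrix inversion and no square roots are required."""
    yperp = list(y)
    for u in U:
        sigma = dot(u, u)                  # sigma_i = u_i^T u_i (a scalar)
        c = dot(u, y) * (1.0 / sigma)      # reciprocal in place of an inverse
        yperp = [yi - c * ui for yi, ui in zip(yperp, u)]
    return yperp

U = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]]     # orthogonal, not orthonormal
y = [1.0, 2.0, 5.0]
# project_out(U, y) -> [0.0, 0.0, 5.0]: only the component of y outside
# span(U) survives.
```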
The projection of the signal vector onto the space spanned by the columns of S, ys, is given by ys = y − yperp.
According to a preferred embodiment of the present invention, the implementation of the algorithm involves building an apparatus that takes as inputs the matrix S (whose columns are the vectors s) and the measurement signal vector y, and produces as output the yperp vector, after projecting out the portion of the signal that is represented by S.
In this implementation, the input may be visualized as a stream of s vectors being input into the apparatus one at a time (of length m) followed at the end by the y vector (also of length m), with the yperp vector being the desired output at the end of the computational process. Each step in real-time would begin with the input of the first s vector, and terminate with the output of the yperp vector.
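A software model of this streaming behavior might look like the following (a hypothetical sketch, not the claimed hardware; the class and method names are invented for illustration):

```python
class PerpProjector:
    """Consumes the columns of S one at a time, then the measurement y,
    and emits yperp, mirroring the streaming input described above."""

    def __init__(self, tol=1e-12):
        self.basis = []        # stored orthogonal basis vectors u_j
        self.inv_sigma = []    # stored reciprocals 1/sigma_j
        self.tol = tol

    @staticmethod
    def _dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def _residual(self, v):
        """Subtract from v its components along the stored basis."""
        r = list(v)
        for u, inv_s in zip(self.basis, self.inv_sigma):
            c = self._dot(u, r) * inv_s
            r = [ri - c * ui for ri, ui in zip(r, u)]
        return r

    def push_s(self, s):
        """Input one column of S; store its residual as a new basis vector
        unless it is (near-)dependent on the columns seen so far."""
        u = self._residual(s)
        sigma = self._dot(u, u)
        if sigma > self.tol:
            self.basis.append(u)
            self.inv_sigma.append(1.0 / sigma)

    def push_y(self, y):
        """Input the measurement vector; the output is yperp."""
        return self._residual(y)
```

In hardware, `push_s` corresponds to one iteration of the basis computation and `push_y` to the final projection; only multiplies, adds, and one reciprocal per accepted column are required.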
An apparatus according to an embodiment of the present invention may be built using the basic operations detailed below.
Each step involves p iterations (one for each column in the S matrix), beginning with the input of the first column, s1, and ending with sp. It should be appreciated that the mathematical complexity of the system may be reduced by choosing p to be a number smaller than the number of columns in the S matrix. This sacrifices accuracy for simplicity but is still considered within the teachings of the present invention. The following discussion will assume that we are not making any accuracy compromises. The flow of variables and the interconnection between the different basic elements of the apparatus are shown in
The first step is the computation of the inner product of the si+1 vector 500 and each of the previously computed and stored basis vectors, u1 through ui 502. This step is shown in
The i inner-products obtained 504 are each next multiplied by a scalar multiplier 507 (shown in
506 to produce the
values 508 which are then used to scale the basis vectors from storage 510 (shown in
vectors 512, which represent the components of the si+1 vector that lie in the space spanned by each of the previously computed basis vectors. Scalar vector multiplier 509 performs the scaling. The stored reciprocal values are preferably held in memory 521.
The steps shown in FIG. 7 and
The vector sum of these components 514 is then obtained by vector adder 511 (shown in
If ui+1 is zero, then that vector is excluded from the basis and not used in further computations. Even if ui+1 is not zero but falls below a pre-determined threshold, it is excluded from the basis, because cancellation in the subspace spanned by that particular interference vector will not produce sufficient gain in performance to warrant its use in the basis and, subsequently, for cancellation. Otherwise, ui+1 is stored for use in future computations 520. In addition, the inner product of the new basis vector ui+1 with itself, ui+1^T ui+1 522, is computed using a MAC 521 (shown in FIG. 12), and then its reciprocal is computed 524 (shown in
All the above iteration steps are repeated p times, through the input of the last sp vector and the computation of its basis vector up, at which point the computation of the orthogonal basis for S is complete.
According to an alternative embodiment of the present invention, illustrated in
1501 is serially subtracted from the si+1 vector 1500, the result obtained is temporarily stored, and the next incoming value of
is then subtracted in turn, until all the values are processed and the next basis vector ui+1 1520 is computed. As may be seen, many elements from
An apparatus of the present invention may be used in a variety of ways to achieve different signal processing objectives. Such an apparatus may be used to calculate the orthogonal (but not orthonormal) decomposition of a matrix S in the mode shown in FIG. 16. In this mode of operation, the embodiment shown in
For implementing projections and canceling interference in a signal y where the interference lies in the subspace spanned by S, an apparatus of the present invention may be used in the mode shown in FIG. 17. Here, an apparatus of the present invention may take as inputs the signal vector y, and the subspace matrix S, and produce as output the component that lies outside, yperp. In this mode of operation; first, the embodiment shown in
In
In addition, the same apparatus could be used to compute the projection of a reference signal vector onto the space spanned by a matrix formed from a set of interference vectors, as well as the projection of the reference signal vector perpendicular to that space. This would be useful in signal processing applications where, rather than calculating the orthogonal projection of the measurement signal with respect to the space of the interference and then correlating it with the desired reference signal, the orthogonal projection of the desired reference signal with respect to the space of the interference vectors is computed using the present invention and then correlated with the original measurement signal. This teaching is also considered within the scope of the present invention.
As an illustration of the use of this invention,
The operation of the structure is illustrated in FIG. 19. In
In the architecture presented, the single data processing channel consists of multiple fingers 800, 800′ and 800″, where each finger consists of a code generation module 802, 802′ and 802″ (for building the S matrix); PS⊥ modules 804, 804′ and 804″; an acquisition module 810, 810′ and 810″; and a tracking module 812, 812′ and 812″. Each tracking module consists of FLLs 822, 822′ and 822″; PLLs 820, 820′ and 820″; as well as DLLs 818, 818′ and 818″. Each processing finger 800, 800′ and 800″ within a channel has the function of acquiring and tracking a distinct multipath signal from the same source.
In order to understand how the architecture depicted in
The input data to this channel arrives in the form of a digital IF data stream. Since there are other sources being tracked, the replicate code generator modules 802, 802′ and 802″ generate the appropriate S matrix, and this matrix is used to create the PS⊥ modules 804, 804′ and 804″. In this case, the digital IF data stream y is provided as input into the PS⊥ module. The output of this module 804 is fed into the acquisition module 810 in the same finger.
If the system were not tracking any other sources, there would be no S matrix generated and therefore no PS⊥ function. In this case, the input digital IF data stream is passed directly into the acquisition stage.
The acquisition stage acquires the signal and all its multipath copies from the source of interest. If the acquisition stage identifies more than one multipath, then a separate tracking section is used for each multipath signal. The outputs of the tracking stages 812, 812′ and/or 812″ are the code, phase, and Doppler offsets that are used to build the S matrices in the other channels. Furthermore, if all the available processing fingers are consumed, there is no need to mitigate any co-channel interference.
Now suppose that, due to co-channel interference, the acquisition stage 810, 810′ or 810″ is able to acquire fewer multipaths than there are available processing fingers, i.e., the other multipath signals are buried in the co-channel interference. In that case, the information from the acquisition stage is used to track the first signals identified. The code, phase and Doppler offsets of the first signals being tracked are obtained from the tracking system 812, 812′ and/or 812″ and are provided as input into the replicate code generator modules 802′ and 802″ in the same channel.
The S matrix built in finger 800′ now includes the code of the lone signal being processed in finger 800. As a result, finger 800′ will eliminate interference from all the other sources as well as from the dominant signal from the source of interest. The acquisition module 810′ in this finger then acquires the multipath signal, which is now visible because the interference from the dominant signal has been eliminated. That multipath is then tracked in 812′, and the tracking information is provided both to finger 800 (to improve its ability to track the dominant signal) and to the other fingers, e.g., 800″, to aid in finding additional weak multipath signals. The tracking information from all these modules is used to perform the Rake operation 830 for data demodulation.
Although the present invention has been fully described in conjunction with the preferred embodiment thereof with reference to the accompanying drawings, it is to be understood that various changes and modifications may be apparent to those skilled in the art. Such changes and modifications are to be understood as included within the scope of the present invention as defined by the appended claims, unless they depart therefrom.
This application makes reference to U.S. Provisional Patent Application No. 60/326,199 entitled “Interference Cancellation in a Signal,” filed Oct. 2, 2001; U.S. Provisional Patent Application No. 60/251,432, entitled “Architecture for Acquiring, Tracking and Demodulating Pseudorandom Coded Signals in the Presence of Interference,” filed Dec. 4, 2000; U.S. patent application Ser. No. 09/612,602, filed Jul. 7, 2000; U.S. patent application Ser. No. 09/137,183, filed Aug. 20, 1998; U.S. Provisional Patent Application No. 60/325,215, entitled “An Apparatus for Implementing Projections in Signal Processing Applications,” filed Sep. 28, 2001; U.S. Provisional Patent Application No. 60/331,480, entitled “Construction of an Interference Matrix for a Coded Signal Processing Engine,” filed Nov. 16, 2001; and to U.S. patent application Ser. No. 09/988,218, entitled “Interference Cancellation in a Signal,” filed Nov. 19, 2001. The entire disclosure and contents of these applications are hereby incorporated by reference.
Number | Date | Country | |
---|---|---|---|
20040030534 A1 | Feb 2004 | US |
Number | Date | Country | |
---|---|---|---|
60326199 | Oct 2001 | US | |
60325215 | Sep 2001 | US | |
60251432 | Dec 2000 | US |