Technologies for personal and vehicular navigation are rapidly developing due to the availability of Global Navigation Satellite Systems (GNSS) such as the United States' Global Positioning System (GPS), the Russian GLONASS system, and the European Galileo system. These systems, however, are designed for environments where a clear line of sight (LOS) exists between the user receiver and the GNSS satellites. Using trilateration methods, a user can convert the ranging measurements obtained from the LOS signals into an estimate of the user's position.
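As a toy illustration of the trilateration step, the following is a minimal 2-D sketch with made-up beacon positions and noise-free ranges, solved by Gauss-Newton least squares (real GNSS positioning is 3-D and also solves for the receiver clock bias):

```python
import numpy as np

# Toy 2-D trilateration by Gauss-Newton least squares.
# Beacon positions and the true position are made-up illustrative numbers.
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
truth = np.array([30.0, 40.0])
ranges = np.linalg.norm(beacons - truth, axis=1)   # noise-free LOS ranges

x = np.array([50.0, 50.0])                         # initial position guess
for _ in range(10):
    rho = np.linalg.norm(beacons - x, axis=1)      # predicted ranges
    H = (x - beacons) / rho[:, None]               # Jacobian d(range)/d(position)
    x += np.linalg.lstsq(H, ranges - rho, rcond=None)[0]
# x now matches truth to numerical precision
```

With multipath, one or more entries of `ranges` would be biased long, which is exactly the error source the methods below address.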
Indoor and urban canyon navigation presents two main challenges for GNSS: low signal strength and severe multipath interference. Assisted GPS (AGPS) and Ultra-Tight Coupling (UTC) are current methods for tracking weak signals in the GNSS receiver operating environment. Although several techniques exist for mitigating multipath interference, they do not adequately mitigate multipath in indoor and urban environments. Multipath interference exists when a receiver receives signals reflected from nearby terrain or structures, such as buildings or walls. The received multipath signal is commonly referred to as a non-line of sight (NLOS) signal. Multipath signals always arrive "late" compared to the LOS signal, thus creating an error in the measured range and corrupting the user position estimate. This problem is especially acute indoors and in urban canyons, where multiple reflection-generating objects (building walls, furniture, cars, etc.) surround the user.
There are several known methods to mitigate multipath, but these methods help only in relatively benign multipath environments. For example, if the strength of multipath signals is not large compared to the strength of the LOS signal, and multipath delays are not too small, narrow correlators, strobe correlators and similar methods may be used to effectively isolate or remove the multipath signals. However, these techniques are effective only if the multipath delays are on the order of or larger than the inverse signal bandwidth, e.g., at least 0.1 chip length, which is about 30 meters for the GPS civilian signal. In the indoor and urban environment, there are many multipath-generating surfaces at distances smaller than 30 meters, which makes these techniques ineffective for those environments.
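The ~30 meter figure follows directly from the C/A-code chipping rate; a quick check:

```python
# Why 0.1 chip is about 30 m for the GPS civilian (C/A) signal:
c = 299_792_458.0                # speed of light, m/s
chip_rate = 1.023e6              # C/A-code chipping rate, chips per second
chip_length = c / chip_rate      # ~293 m per chip
tenth_chip = 0.1 * chip_length   # ~29.3 m
```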
Another known technique for multipath mitigation is called "Multipath Estimation Delay Lock Loop" (MEDLL) from NovAtel, Inc. of Calgary, Alberta, Canada. A MEDLL receiver has many correlators which integrate the satellite signal at different delays (compared to typically three correlators for a traditional receiver) against the known code of the transmitted signal. The result is a profile of a GNSS satellite signal's correlation with the code replica, sampled at an array of points. LOS and all multipath components contribute to this profile, forming a complex signature. The MEDLL can discriminate the individual signal components, and thus can be used to isolate LOS from multipath. This method is effective if there are few dominant signal components, and if one of the dominant components is the LOS signal. If there are many multipath components and/or if the LOS signal is not present or weak, the MEDLL receiver does not yield a reliable LOS measurement.
The present disclosure presents novel and advantageous systems and methods for discriminating between LOS and NLOS signal paths in a radio frequency (RF) receiver such as a GNSS navigation receiver. In one aspect, the present disclosure provides for a method referred to as Synthetic Aperture Line of Sight Assessment, or SALSA. The SALSA method uses the direction of arrival for a signal to discriminate between LOS and NLOS and is especially beneficial in environments where the amplitude of the multipath signals exceeds that of the LOS signal, and where the multipath delays are within the inverse of the signal bandwidth.
In another aspect, the present disclosure provides for a method referred to as Genetic Algorithm for Multipath Elimination, or GAME. The GAME method identifies and isolates signals that have been incorrectly identified as LOS signals. For a user navigating in an urban or indoor environment, some LOS signals may be completely blocked by building walls, etc. Thus, there is no guarantee that a pseudorange measurement is not corrupted by multipath, even if the most sophisticated algorithms and hardware are used for processing each signal. Therefore, one needs a method to identify LOS paths among received signals. If multiple signals are received, then it is possible to check the consistency of several measurements to find whether any of them are corrupted. For example, RAIM (Receiver Autonomous Integrity Monitoring) is often used to find whether there is one faulty measurement in a set, e.g., due to a GPS satellite failure. In the case of urban and indoor navigation, there may be multiple faulty (biased) measurements at each epoch, and the partition between biased and non-biased measurements may change from one epoch to another due to user motion. Thus, the algorithm to identify biased measurements must be able to react to changes in the measurements at a rapid pace, and must be able to identify multiple faults at each epoch. GAME is designed to serve this purpose.
The SALSA and GAME methods are complementary and are designed to extract a LOS measurement in a multipath environment if the LOS signal is present and if the user is moving, or if a signal is received over time. SALSA and GAME may also be used to support Direction Finding algorithms for emitter geolocation.
In another aspect, the present disclosure provides for a method referred to as Weighted Average Functionality For Limiting Error Sources ("WAFFLES"). WAFFLES uses linear combinations of TDOA measurements in such a way that it largely cancels the effects of timing, calibration, and geolocation errors. An alternative and complementary navigation method to GPS is the use of various terrestrial signals rather than GPS signals. For example, digital TV signals, cell phone base station signals, and Wi-Fi signals can be used to navigate in areas where such signals are available. Since these signals are not designed for navigation, they lack several important navigation features, most importantly accurate timing. This makes it necessary to use reference stations to timestamp signal features. Information about the timestamps can be transmitted to the user via a communication channel. By comparing the timing of the same features received by the user equipment with that at the reference station, the user can form a TDOA measurement, which can be used for navigation. One problem facing this concept is that the locations of signal sources are not always known with high accuracy. For example, the location of a cell base tower may be obtained from original engineering drawings, but there is always a possibility that the location data contains inaccuracies. Errors in locating signal sources directly impact user navigation accuracy. Another problem deals with calibration and timing errors at the reference stations. TDOA at the user assumes that timing measurements at the reference stations are accurate. This may not be the case if the calibration and synchronization of the reference stations is inaccurate. These two problems are linked. Indeed, if data on the location of the signal sources is not reliable, it may be possible to geolocate these signal sources using the same reference stations as those providing timing information to the user.
However, any timing and calibration errors at the reference station will introduce errors in the source geolocation. Thus, TDOA measurements at the user will be impacted by timing and calibration errors in two ways: (1) directly, and (2) through errors in source geolocation. WAFFLES addresses this problem by using linear combinations of TDOA measurements in such a way that it largely cancels the effects of timing, calibration, and geolocation errors.
In another aspect, the present disclosure provides for a method of navigation in an environment where the navigation signals may be obstructed, referred to as LEAF. RF navigation under foliage presents its own set of challenges. The same may be true for some other complex environments, where signal propagation to the user is subject to diffused scattering. The earliest arrival of the signal is due to the direct signal component, and its measurement is the goal of the pseudorange estimate by the receiver. However, this component arrives at the receiver along with all scattered signal components, which cannot be separated from the direct component and which corrupt the pseudorange measurement.
The LEAF method is designed to estimate pseudorange using statistical assumptions about the propagation medium. Individual scatterers (e.g., leaves and twigs in the foliage) produce random phases and amplitudes of the signal at different delays; however, the statistical properties of the received signal are predictable and can be used. The method finds the most likely signal delay from correlation measurements obtained by an array of correlators.
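A minimal sketch of this idea, with a made-up Gaussian delay template standing in for the (unspecified) statistical foliage model: the most likely delay is the one whose statistical template best matches the measured correlator profile.

```python
import numpy as np

# Hypothetical sketch: choose the most likely direct-path delay by matching
# measured correlator magnitudes against per-delay statistical templates.
# A Gaussian template stands in for the actual foliage scattering model.
delays = np.arange(10)                      # candidate delay bins
templates = np.array([np.exp(-0.5 * (delays - d) ** 2) for d in range(10)])
measured = templates[3] + 0.01              # profile whose true delay bin is 3
log_lik = -np.sum((measured - templates) ** 2, axis=1)  # Gaussian log-likelihood
best_delay = int(np.argmax(log_lik))        # most likely delay bin
```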
SALSA
Determining the Direction of Arrival (DOA) of a signal is a useful method to discriminate LOS from multipath. The LOS signal generally comes from the direction of the transmitting satellite, while the multipath signals come from the direction of the reflector. The DOA discriminator is not constrained by assumptions of large delay and relatively weak multipath. Traditional methods for separating signals by DOA require an antenna array. Of course, it is not always practical for a user to carry an antenna array. In the mobile receiver environment, it is desirable to use a small, one-element omni-directional antenna. The present disclosure describes a method for determining DOA with an omni-directional antenna by utilizing an antenna array processing effect created by exploiting user motion to synthesize the array aperture.
With reference to
In SALSA operation, a GNSS receiver can correlate an incoming satellite signal with a known code replica to determine complex (I and Q) correlations. Each correlation requires integrating over some period of time, Δt. This time interval is referred to in this disclosure as a sub epoch. A bank of N correlators, each having an equally spaced code delay, produces N correlation measurements for each sub epoch. The receiver accumulates correlation measurements from multiple sub epochs over each epoch T. Thus, by the end of the epoch, there is a two-dimensional array of correlations, where one dimension is the code delay, and the other dimension is time (sub epoch).
For example, consider a simple case in which a user with a receiver moves with constant velocity during an epoch (i.e., at a constant speed along a straight line). Since LOS and multipath signals transmitted by the transmitter arrive at the user from different directions, they will have different rates of phase change at the receiver as illustrated in
If a Fourier transform is performed in the time domain (across sub epochs) for each replica delay, different paths will show in different phase rate bins. In this example, signal paths can be separated by their Doppler rates. For each phase rate bin, there may still be some correlation profile in the replica delay domain that may be due to relatively few paths which have the corresponding phase rate (e.g., arrive from approximately the same corresponding direction). However, only a few paths will contribute to a particular phase rate bin, and therefore these paths can be resolved by some conventional techniques, such as MEDLL. The result of this procedure will be multiple, separately identified paths, each with its own phase rate and delay. The earliest path (smallest delay) is a candidate for the LOS.
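A minimal simulation of this separation at a single code delay, with made-up amplitudes and phase-rate bins (two pure tones standing in for the LOS and one multipath component):

```python
import numpy as np

# Two pure tones stand in for the LOS and one multipath component at a
# single code delay; amplitudes and phase-rate bins are made up.
n_sub = 64                                        # sub epochs per epoch
t = np.arange(n_sub)
los = 1.0 * np.exp(2j * np.pi * 5 * t / n_sub)    # LOS phase rate: bin 5
mp = 1.5 * np.exp(2j * np.pi * 20 * t / n_sub)    # stronger multipath: bin 20
corr = los + mp                                   # correlations across sub epochs

spec = np.abs(np.fft.fft(corr))                   # phase rate (Doppler) spectrum
peak_bins = np.argsort(spec)[-2:]                 # two dominant phase-rate bins
```

Even though the multipath component is 1.5 times stronger, the two paths land in separate phase-rate bins and can be examined independently.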
In real world applications, a user may not move with a constant velocity along a straight line. The disclosed methods can be applied to any trajectory, provided the LOS components are added coherently and the multipath components are added destructively. This can be done if the relative trajectory is approximately known during the epoch, which generally requires using an inertial measuring unit (IMU). In this case, the correlation phases for each correlation measurement may be rotated in such a way that the LOS components have the same phase. This step will be referred to as motion compensation.
After the correlation phases have been motion compensated, a simple way to isolate the LOS is to sum correlations over all sub epochs for each replica delay. In a motion-compensated frame, LOS corresponds to the zero phase rate (user “standing still” in the framework tied to the user) and the Fourier transform at the zero phase rate is the same as integration over time. This example, though not an optimal implementation, shows that SALSA has some similarities with integration.
There are several primary benefits of SALSA as compared to plain long signal integration. First, SALSA accounts for imperfections in motion compensation. In an ideal case, having an accurate estimate of the relative user trajectory during the epoch from an inertial measuring unit (IMU) and an accurate internal clock, perfect motion compensation would be possible, and the LOS component would be guaranteed to be in the zero phase rate bin. In this idealized case, integration would achieve the same result as the Fourier transform. However, the estimate of the user trajectory is imperfect, and the user clock has a drift. In this realistic case, the true signature of the LOS may appear not in the zero phase rate bin, but somewhere in its vicinity, and a Fourier transform may be used to find it.
Second, the position of the LOS-induced maximum in the phase rate domain is an indication of the errors in the clock drift and trajectory estimate. In essence, it is the phase rate residual. When multiplied by the epoch duration, it becomes the residual in ADR (accumulated delta range), which is a valuable measurement in itself, commonly used by many navigation systems.
Third, SALSA assists in isolating the LOS component from multipath. Some multipath components may be located in phase rate bins which are poorly separated from the location of the LOS bin. Each component will show up as a maximum in the phase rate (Fourier) domain. If maxima are not well separated, a tail of one maximum may contaminate a measurement for another maximum. A plain time-integration ignores this effect, and suffers from multipath correlations “leaking into” the integration result. SALSA uses windowing (e.g., Kaiser-Bessel window) to improve path separation.
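To illustrate the windowing point with made-up numbers: a strong multipath tone whose phase rate falls between FFT bins leaks energy into the LOS bin, and a Kaiser window (available as `numpy.kaiser`, closely related to the Kaiser-Bessel family) suppresses that leakage:

```python
import numpy as np

# A strong multipath tone at an off-bin phase rate (bin 20.5) leaks energy
# into the LOS bin (bin 5); a Kaiser window suppresses the leakage.
n = 64
t = np.arange(n)
mp = 2.0 * np.exp(2j * np.pi * 20.5 * t / n)   # off-bin multipath tone
los_bin = 5

leak_rect = abs(np.fft.fft(mp)[los_bin])                        # no window
leak_kaiser = abs(np.fft.fft(mp * np.kaiser(n, 8.0))[los_bin])  # windowed
# leak_kaiser comes out much smaller than leak_rect
```

The window widens each spectral peak slightly but drives down its tails, which is the trade-off that improves path separation here.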
Fourth, SALSA may be used with beamforming and direction finding to improve isolation of the LOS signal. Each path contributes a spectral peak in the phase rate (Fourier) domain. To find and isolate a particular peak, such as that of the LOS, many sophisticated methods exist in the beamforming and direction-finding (DF) literature; examples include MUSIC, maximum entropy methods, and weighted subspace fitting. By re-casting a navigation problem in synthetic aperture terms, the entire arsenal of beamforming and DF techniques can be used with the SALSA method.
SALSA MUSIC
In another aspect of the present disclosure, SALSA uses a modified version of the MUSIC algorithm to analyze correlations across RF channels and delays, and thus does not require user motion.
In mathematical terms, a matrix C* can be formed as the (m−n)×m Toeplitz matrix whose l-th row contains the coefficients (1, c1, . . . , cn) starting in column l, with zeros elsewhere [1], where m is the number of elements in the array, and n is the rank of the signal subspace (i.e., the number of spectral components being sought). Coefficients cq are unknowns and will be solved for.
Let S=[s1, . . . sn], m×n be a matrix formed by eigenvectors of the signal correlation matrix. Then it can be shown that
C*S=0. [2]
Terms in this matrix equation can be re-arranged to rewrite it in the form:
Φc=μ [3]
where the (m−n)n×n matrix Φ and the (m−n)n×1 vector μ are entirely determined from the elements of S, and where c=[c1, . . . cn]. If the sample version of S is used, then c can be treated as unknowns in a linear (over-determined) system of equations and can be solved for.
Next, a polynomial can be formed
A(z) = 1 + c1·z^−1 + . . . + cn·z^−n [4]
Roots of that polynomial with respect to z^−1 happen to correspond to components of the signal:

z_k^−1 = e^{iΩk}, k = 1, . . . , n [5]

where Ωk are the spatial frequencies of the signal components.
Although it may appear convoluted, this method works for correlated and for coherent signals, i.e., the assumption of signal independence is not necessary.
The above algorithms are known and are discussed in Spectral Analysis of Signals by P. Stoica and R. Moses (2005).
The present disclosure adapts these algorithms for use in the specific application. In the present disclosure, all signals (LOS and multipaths) come ultimately from the same source. Thus, all signals are fully coherent. This is a somewhat extreme case, and the mathematical treatment for this case follows.
First, with respect to a signal correlation matrix, for any pair of array elements j and l, the corresponding element of the correlation matrix is defined as the time average:

Mjl = ⟨yj·yl*⟩ [6]

Signal samples yj are a sum of multiple signal components. In the case of fully coherent signal components, all components (except noise) vary by the same phase with time, and therefore

Mjl = ȳj·ȳl* [7]

In other words, averaging over time does not do any good for fully coherent signals. This has big implications. It is easy to check by direct substitution that for any vector x,

M·x = g·ȳ [8]

where g is a constant (the scalar product of ȳ* and x). Thus, matrix M projects any vector on the vector ȳ.
This totally invalidates the traditional MUSIC method. However, with the modified MUSIC method presented above, the l-th equation in the system of equations [2] becomes greatly simplified by the fact that there is only one non-zero eigenvector and therefore only one column in matrix S:

Σq=0..n cq·ȳl+q = 0, l = 1, . . . , m−n [9]

where notation c0=1 is used for brevity. This assumes that samples yj comprise n signal components arriving from different directions. If there is a uniform linear array (ULA), then

ȳj = Σk=1..n ak·e^{iΩk·j} [10]

Then

Σq=0..n cq Σk=1..n ak·e^{iΩk·(l+q)} = 0 [11]

Changing the order of summation gives:

Σk=1..n ak·e^{iΩk·l} · Σq=0..n cq·e^{iΩk·q} = 0 [12]

Equations [12] for different l form a linear system with linearly independent coefficients ak·e^{iΩk·l}, which can be satisfied only if

Σq=0..n cq·e^{iΩk·q} = 0 [13]

for any value of k.

Denoting e^{iΩk} = ζk and writing polynomial [4] in the factored form A(z) = Πq=1..n (1 − bq·z^−1), equation [13] becomes

A(ζk^−1) = Πq=1..n (1 − bq·ζk) = 0 [14]

where bq are the roots of polynomial [13]. Equation [14] must hold for any k, which means that the bk are simply ζk^−1 (not necessarily in the same order).
Thus, the method can be summarized as follows: form the linear equations for the coefficients cq and solve them, assuming c0=1; then find the roots of the corresponding polynomial, whose phases yield the spatial frequencies of the individual signal components.
It is not necessary to find eigenvectors, which is good in terms of simplicity and computational efficiency.
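A compact numerical sketch of the summarized method, for a noise-free uniform linear array with two fully coherent components (the frequencies, 0.5 and 1.3 rad per element, and the amplitudes are made up):

```python
import numpy as np

# Noise-free ULA snapshot with n = 2 fully coherent components.
m, n = 10, 2
j = np.arange(m)
y = 1.0 * np.exp(1j * 0.5 * j) + 0.7 * np.exp(1j * 1.3 * j)

# Linear system: y[l] + c1*y[l+1] + ... + cn*y[l+n] = 0  (c0 = 1)
A = np.array([y[l + 1 : l + n + 1] for l in range(m - n)])
b = -y[: m - n]
c = np.linalg.lstsq(A, b, rcond=None)[0]

# Roots of c0 + c1*w + ... + cn*w**n are w = exp(i * Omega_k)
roots = np.roots(np.concatenate((c[::-1], [1.0])))
omegas = np.sort(np.angle(roots))            # recovered spatial frequencies
```

Note that no eigendecomposition is performed; the coefficients come straight from a least-squares solve on the raw samples.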
With reference to
The PR and Doppler measurements are used to estimate user position using standard navigation techniques such as Kalman filtering 370. High-rate IMU data 380 can be used to augment the measurements in the navigation filter 370.
Modified MUSIC Implementation for a Multi-Channel Signal
The SALSA MUSIC method is based on two observations. First, isolating LOS and multipaths in a multi-channel signal (such as OFDM) by a moving user is mathematically equivalent to the problem of spectral estimation in two dimensions. The first dimension is the time domain, and the second dimension is the channel domain. Second, the conventional MUSIC algorithm does not work well for the problem at hand, because LOS and multipath signals are highly correlated (basically, fully correlated). However, the modified MUSIC algorithm works.
In one embodiment, the SALSA MUSIC method can be summarized as follows:
Thus, in one aspect, inertial measuring units (IMU) and UTC can be used to get multiple measurements for a single epoch. Multiple correlation measurements can be collected over one epoch, e.g., 100 measurements separated by 10 ms are collected over one second. Rather than integrating or averaging these measurements (as is done by techniques for long integration of GPS signals), the measurements are processed as a set.
In another aspect, using data from the IMU and UTC, measurements are corrected for user motion in such a way that, for the LOS path, the phase of the signal remains constant or changes linearly.
In another aspect, array processing techniques are applied to the set of measurements to isolate individual signal paths. These techniques may include windowing and Fourier transform, principal component analysis, MUSIC, and modified MUSIC.
In yet another aspect, the paths can be cycled through and delays can be determined. With the knowledge of DOA for all paths reaching the receiver, the algorithm can apply beamforming to all or some of the paths and determine each path's delay. The beamforming can be performed using the following steps:
In another aspect, the LOS path is chosen. Delay estimates and other characteristics can be used for each of the paths to find the LOS path. Any combination of the following criteria can be used:
In another aspect, the LOS delay can be output to navigation software.
GAME
SALSA alone may not be sufficient to produce good measurements in certain environments. SALSA outputs measurements for first-arrival signals. These measurements may be processed by a Kalman filter; however, Kalman filters are vulnerable to any biases in the input measurements. For example, if a signal from a particular GNSS satellite has no LOS component (or the LOS component is too weak to be detected), then the first arrival will correspond to a multipath component, which is necessarily delayed. This creates a bias in some measurements and will ruin the Kalman filter's performance. GAME is designed to identify and eliminate faulty (i.e., non-LOS) measurements.
One well-known prior art method used to identify faulty measurements is called Receiver Autonomous Integrity Monitoring (RAIM). RAIM is a popular algorithm for airplane navigation, where data integrity is important. Current RAIM methods check the consistency of measurements to identify and eliminate one faulty measurement (there has also been some work to extend RAIM to multiple faults). GAME is also directed to eliminating faulty measurements, but it is a substantial improvement upon RAIM in the following ways:
GAME builds on two known algorithms: Interacting Multiple Models (IMM) and genetic algorithms. In one embodiment, GAME simultaneously tracks multiple models, each characterized by a particular allocation of LOS/non-LOS flags. For each model, GAME computes a Bayesian likelihood of that model being true. GAME assumes that the multipath environment is dynamic, and accounts for the possibility of flips in the LOS/non-LOS flags. Each epoch, GAME performs two major steps, computing a priori and posteriori likelihoods for a set of models. The model with the highest posteriori likelihood is used to select LOS-only measurements, which are then passed to the Kalman filter for processing.
By way of example, at epoch n the algorithm computes likelihoods for a set of models Ln(m), where m identifies a model. At the next epoch, there is a new set of models, which are not necessarily the same as the models in the old set. There is a probability that particular LOS signals become non-LOS and vice versa. Thus, a probability of a flip in each flag can be defined, and the probability transition matrix from the old set to the new one can be computed. If this probability transition matrix is applied to the posteriori likelihoods in the old set, the a priori likelihoods in the new set are obtained, denoted L̃n+1(m).
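A minimal sketch of this a priori step, assuming (for illustration only) that models are indexed by a bitmask of per-signal LOS/non-LOS flags and that each flag flips independently with a single probability `p_flip`:

```python
import numpy as np

# Models are indexed by a bitmask of per-signal LOS/non-LOS flags;
# p_flip is an assumed per-flag flip probability (illustrative only).
def a_priori(posteriori, n_signals, p_flip):
    """Propagate model likelihoods one epoch forward through the
    flag-flip transition probabilities."""
    n_models = 2 ** n_signals
    prior = np.zeros(n_models)
    for new in range(n_models):
        for old in range(n_models):
            flips = bin(new ^ old).count("1")
            trans = p_flip ** flips * (1.0 - p_flip) ** (n_signals - flips)
            prior[new] += trans * posteriori[old]
    return prior
```

If the posteriori likelihoods are normalized, the a priori likelihoods stay normalized, since each column of the transition matrix sums to one.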
Computation of posteriori likelihoods requires measurement redundancy. A covariance matrix for all measurements can be assigned. A basic assumption is that the error variance for a non-LOS measurement is large compared to that for a LOS measurement (due to multipath-induced bias). The Bayesian probability density for having a particular set of measurements and some value of the user state (i.e., position, velocity and clock) can be computed as follows:
where Q̂ is the covariance matrix,
Conventional IMM implementation is not feasible for the applications discussed above due to the large number of models to track. To illustrate, if there are 10 signals, there are 1024 models (different allocations of LOS/non-LOS flags). It is clear that a workable solution must resort to tracking only a small fraction of all possible models. In GAME, a genetic algorithm (GA) is used to track only a subset of the possible models. There is some reasonably small set of models tracked by the algorithm concurrently. In each epoch, a model may remain in the set, may be eliminated, or may spawn more models to track. The fate of a model depends on its likelihood, which serves as the goal function, as in a typical genetic algorithm. Thus, the most unlikely models face higher chances of elimination, and the most likely ones produce offspring. A key aspect of integrating IMM and GA is the use of the IMM-computed likelihood as the goal function of GAME.
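A sketch of one such generation, with hypothetical parameters (a fixed keep count, one-flag mutations spawned by the fittest half); the actual GAME selection and spawning rules may differ:

```python
import numpy as np

# One generation of model-set evolution: keep the most likely models and
# let the fittest half spawn one-flag mutations. The keep count and the
# mutation rule are hypothetical parameters, not the disclosed ones.
def evolve(models, likelihoods, n_signals, keep=4, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    order = np.argsort(likelihoods)[::-1]      # most likely first
    survivors = [models[i] for i in order[:keep]]
    next_set = set(survivors)
    for m in survivors[: keep // 2]:           # fittest models spawn offspring
        bit = int(rng.integers(n_signals))
        next_set.add(m ^ (1 << bit))           # flip one LOS/non-LOS flag
    return sorted(next_set)
```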
As part of computing GAME likelihoods, a starting assumption is that measurements
The likelihood to have vector
In addition, prior information can be used in the likelihood computation, which comes from the output of the navigation filter. If the filter navigation solution is denoted by
However, to use it in updating model weights, just the likelihood of a vector of flags
where
The right hand side of the last equation is a quadratic form for
The quadratic term is now as follows:
This term is diagonalized (i.e., converted to the canonical form) if the variable transformation matrix is given by
M̂ = (Ĵ^T)^−1 [21]
where Ĵ is a factor in the LU-decomposition

Ĥ^T·Q̂^−1·Ĥ + P̂^−1 = Ĵ·Ĵ^T. [22]
In the new variables, the argument of the exponent takes the following form:
E=
(
where the following notations are used:
F=
Substituting the expression for E into that for Λ(
The constant multiplier and ∥{circumflex over (M)}∥ will be canceled when likelihood values for all models are normalized, and therefore do not have to be computed. This is the final (albeit still not normalized) equation for the likelihood of each model. It accounts for both the new measurements and for the prior information in the form of a navigation solution.
With reference to
Next, the probabilities of transition from a model in List_0 to a model in List_1 are computed. The next step is to multiply the likelihood of the model in List_0 by the transition probability, tally it into the likelihood of the model in List_1, and proceed to the next time epoch.
One aspect of the present disclosure is directed to a system and method which processes pseudorange, Doppler and ADR measurements from several navigation signal sources (such as GPS satellites), where some measurements may be substantially corrupted by the effects of multipath or by other biases.
In another aspect, the present disclosure processes multiple models, where each model is characterized by assuming that some signals are non-biased (e.g., not affected by multipath, or affected insignificantly), and other signals are biased (e.g., corrupted by multipath delays). The number of models processed can be less than the total number of possible models. For example, if there are S sources, the total possible number of models, which assume all possible allocations between biased and non-biased measurements, is equal to 2^S. The number of processed models can be limited by the amount of processing power available to the user.
In another aspect, each processed model has a priori and posteriori likelihood values associated with it. The a priori likelihood computation assumes that each processed model at the current epoch has originated from a model processed at the previous time epoch. The current model may be identical to one of the previous models, in which case its a priori likelihood depends on the probability of the signals not changing from the biased to the non-biased category and vice versa. Alternatively, the current model may differ from any model at the previous time epoch, in which case its a priori likelihood depends on the probability of a signal switching between the biased and non-biased categories.
In yet another aspect, for each processed model, the algorithm computes the posteriori Bayesian likelihood of measurements obtained by the receiver. This likelihood is computed by combining the a priori likelihood with the probability of obtaining the last epoch's set of measurements. It will be different for different models, since a priori probability distributions for measurement residuals are different for biased and non-biased measurements. Typically, biased measurements would have larger variances, and the residuals of biased measurements may have a non-zero mean.
For some or all of the processed models, the algorithm may also compute posteriori likelihoods of some derivatives of these models. For example, the algorithm may compute the likelihoods of models, where one signal source is moved from the biased to the non-biased category, or vice versa. From all candidate models (processed models and their derivatives), the algorithm selects models with highest likelihood values, which are retained. Models with lower likelihood values are destroyed. This selection process forms a set of models for computing a priori likelihoods for the next epoch. In other words, a priori likelihoods for the epoch are computed from posteriori likelihoods for the previous epoch. The model with the highest likelihood is selected. Measurements, which are flagged as non-biased for this model, are passed to the navigation filter for processing. Measurements which are deemed biased are not processed by the navigation filter.
WAFFLES
Another problem in prior art navigation systems is the errors introduced by Signals of Opportunity (SoOP) geolocation errors, and reference station calibration and timing errors. A priori locations of SoOPs may not be known accurately and therefore SoOPs must be geolocated using navigation infrastructure. Assuming that SoOPs are geolocated using the very same reference stations, which provide data to the user (possibly, even at the same time), SoOP geolocation errors are largely due to errors at the reference stations, and the latter are at the root of the problem. Thus, both types of errors must be mitigated jointly. We refer to these errors as CTG errors (for Calibration, Timing, and Geolocation).
Normally, any error in the source location, reference station calibration, or reference station timing translates into an error in user TDOA ranging. For example, if a source is geolocated with 50 m accuracy, an error of the same order of magnitude is observed in the user navigation solution, which could severely degrade navigation accuracy.
However, there is a small parameter to exploit. The errors in question are normally much smaller than other distances in the problem, e.g., those from the source to the user and from the source to the reference stations. The ratio of the source geolocation error to the spatial scale of the problem is small, e.g., 50 m / 5000 m ≈ 10^−2. The basic idea is to compute a weighted average of TDOA measurements in such a way that the effects of CTG errors cancel. If this goal is achieved, then the ranging error will be non-zero in the second-order approximation only. This may reduce the ranging error from ~50 m to about 50 m × 10^−2 = 0.5 m.
The method described in this disclosure is referred to as WAFFLES, which is designed to cancel the effects of CTG errors in the first-order approximation (with respect to the small parameter defined above). Even though this method may be an important piece of the puzzle to meet performance requirements, it should not be viewed as an excuse to relax efforts on calibrating reference stations and geolocating SoOPs accurately. Indeed, the performance of WAFFLES will depend on the geometry of the problem. Even though the first-order error is canceled, the second-order error may still be a concern for some geometries. This is when calibration and synchronization of reference stations becomes important. Moreover, the WAFFLES method provides an excellent return on any improvement in calibration and timing. For example, reducing the calibration error by a factor of 2 may reduce the second-order user ranging error by a factor of 4.
As compared to the well-known locations of GPS satellites, SoOPs introduce a number of additional error sources. One of the most important is the timing and calibration error at the reference station, and the associated error in SoOP geolocation.
The geometry of the problem is shown in
This problem turns out to be somewhat complicated. The reason is the limited number of degrees of freedom available. If there are N reference stations, then there are N weights to play with. There are also N calibration errors to cancel, which is N constraints to satisfy. However, it is required that the sum of the weights equal one, which adds one more constraint, for a total of N+1 constraints. Thus, not all constraints can be satisfied with N weights, and something has to be sacrificed.
As described below, it can be shown that the user ranging error due to reference station calibration errors is given by (in the linear approximation):
δ
This is the un-modeled error for the user for TDOA measurements using different reference stations. The error is due to reference station calibration errors Δ
This disclosure is directed to canceling these TDOA errors by computing a weighted sum of measurements at the cost of sacrificing two SoOP measurements. The approach is similar to GPS double differencing, but regular double differencing will not work here. In the case of GPS, double differencing cancels satellite clock error (somewhat analogous to canceling reference station calibration errors in the present case) and satellite ephemeris errors (analogous to SoOP geolocation errors). However, GPS has the great advantage that the satellites are very far away, so all LOS vectors are essentially parallel. If reference stations and the user are scattered across the area, as in the present case, plain double differencing does not work. Hence the need for an approach like WAFFLES.
Additionally, weights can be computed for any epoch by the receiver, but they depend on the estimated user position. If the user moves around significantly, the weights will be time dependent. Since they are used as coefficients for the terms that define the partial derivatives of the measurement with respect to the user position, this is equivalent to changing the “virtual location” of the equivalent TOA SoOP measurement. This is not a problem for navigation code that is designed with this dependence in mind.
By way of illustration, consider a 3D case with 4 reference stations (a 2D case with 3 reference stations is completely analogous). The pseudorange equation for the j-th reference station:
where
Assuming some initial approximation
The unknowns are δ
where components of 4-vector ρ−|
The solution for δ
δ
where
Vector
Proceeding to estimate the user position, the user pseudorange equation is as follows:
where
To process this measurement, it is compared with the estimate for TDOA, i.e., the residual is computed. This residual comprises two parts: (a) the “legitimate” part, which is due to the user clock and user position errors; these are the unknowns we solve for, so this error must appear in the residual to give us something to process; and (b) the “error” part, which we do not model and do not solve for. This part is due to the term Δtj and due to SoOP geolocation error
The present disclosure is directed to the second part. To make the math less cumbersome, only the second error source is retained and the residual due to the user position and clock errors is not computed:
δ
where 4×4 matrix M is defined by:
Substituting
δ
Next, we introduce the notation
K = MN⁻¹ − 1 [38]
to get:
δ
Unfortunately, computation shows that matrix K does not have full rank. Computing it explicitly for a 2D case is somewhat involved, but instructive. Assume there are three reference stations.
δ
where
is the unit vector in the SoOP to reference station direction.
Differencing equations 0 and 1, and 1 and 2 in [40] to cancel δθ and solve for δ
where subscript k is for x,y components and
and
User TDOA error is given by (see (8)):
δρj=ξjxSx+ξjySy+Δtj [45]
where
Thus, matrix K is given by:
Kjm=ξjxαxm+ξjyαym+δjm. [47]
Next we compute Kjm explicitly for some specific m, for example for m=0 and for different values of j.
Case m=0; j=1
Case m=0; j=2
Case m=0; j=0
Remarkably, K00 = K10 = K20. The same holds true for other values of m.
This means that errors in a particular reference station contribute the same value to the user TDOA error regardless of whether the same or different reference station is used for making that TDOA. If it is the same station, then this contribution is via SoOP geolocation error and the reference station calibration error. If a different station is used, the TDOA user error is due to SoOP geolocation error only. Yet, in both cases, the net result is the same.
This complicates the algorithm, forcing a more costly route. First, note that there is no absolute concept of time (as there is in GPS), and therefore the timing and calibration errors at one station can be set to zero. This station is then treated as the absolute time reference.
In this case, there are 2 independent non-zero calibration errors for 3 stations, e.g.
Δt0=0; Δt1,Δt2≠0.
Consider 3 SoOPs, and introduce another subscript into the equations to denote a particular SoOP. In addition, only TDOA measurements from one reference station (recall that errors from all stations are equal, so any station will suffice) are considered; this station is denoted as m=0. The user TDOA error (again, with any reference station, e.g., with station 0) is given by:
where subscript s denotes different SoOPs, and not different reference stations as in (13). Setting Δt0=0, we combine measurements for three SoOPs with weights αs such that the coefficients of Δt1 and Δt2 cancel and the sum of the weights equals 1 (so far). This produces the following system of equations for αs:
K001α0 + K101α1 + K201α2 = 0
K002α0 + K102α1 + K202α2 = 0 [52]
α0 + α1 + α2 = 1
If TDOA measurements from 3 SoOPs are linearly combined with weights αs, the first order error cancels.
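As a sketch of this weight computation (with hypothetical sensitivity coefficients, since the actual K values depend on the geometry via equation [47]), the system [52] can be solved directly:

```python
import numpy as np

# Hypothetical first-order sensitivity coefficients K[s, j]: the
# contribution of calibration error Delta_t_j (j = 1, 2) to the TDOA
# measurement from SoOP s. Real values depend on the geometry.
K = np.array([[0.8, 0.3],    # SoOP 0
              [0.5, 0.7],    # SoOP 1
              [0.2, 0.9]])   # SoOP 2

# System [52]: sum_s alpha_s * K[s, j] = 0 for j = 1, 2, plus the
# normalization sum_s alpha_s = 1.
A = np.vstack([K.T, np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
alpha = np.linalg.solve(A, b)

# The weighted combination now cancels both calibration errors to
# first order, whatever their (unknown) values are:
dt = np.array([0.04, -0.02])   # hypothetical Delta_t_1, Delta_t_2
first_order_error = alpha @ K @ dt
```

Note that the cancellation holds for any values of Δt1 and Δt2: the weights zero out the coefficients of the errors, not the errors themselves, which is what makes the method insensitive to the unknown calibration errors.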
To test the WAFFLES method, assume some typical reference station position errors and some geometry (i.e., user, SoOP, and reference station positions), compute the true and the estimated values of the TDOAs for 3 SoOPs, compute the coefficients αs, and then combine the measurements. If the algorithm works, the difference between the two weighted averages must be relatively small.
Implementation of the algorithm has two subtleties, which are described below:
where δŪ is the user position error, δ
One further modification to the equations is needed. The weighted sum of TDOA measurements can be used in the Kalman filter for navigating the user. Normally, a TOA or TDOA measurement equation contains the position of the user in a form like |Ū−
Now the desired weighted TDOA measurement is given by:
where ρs are TDOA measurements for SoOP s using reference station 0.
Estimating Location of a Static User Under Foliage
Another aspect of the present disclosure provides a method of estimating user location under foliage, or in other environments dominated by scattering of GPS signals. In one embodiment, the statistical properties of the foliage are assumed to be known with some reasonable accuracy. It is not necessary to know the positions of individual twigs and leaves; rather, correlations of the channel impulse response can be used.
In one embodiment, the user receiver has a bank of correlators. The receiver can compute correlations between the replica waveform and the signal waveform at different delays. Thus, for each SoOP there will be a large number of correlation measurements, taken at different time epochs and at different delays. Measurements that are sufficiently separated in time and/or delay will not be correlated; however, measurements taken at adjacent time epochs or by adjacent correlators will be correlated. This correlation is due to two separate effects. First, the scattering medium (foliage) is the same for different measurements; thus there will be signal components coming from the same branches, twigs, and leaves. Second, the signal bandwidth is limited; this creates correlation between measurements whose delays are separated by less than the inverse bandwidth of the signal.
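A bank of correlators of this kind can be sketched as follows; the replica, the delay, and the noise level are hypothetical stand-ins for an actual SoOP waveform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical replica waveform (a PRN-like chip sequence) and a
# received signal containing it at an unknown integer-sample delay.
replica = rng.choice([-1.0, 1.0], size=1023)
true_delay = 7
signal = np.zeros(1023 + 32)
signal[true_delay:true_delay + 1023] = replica
signal += 0.1 * rng.standard_normal(signal.size)

# Bank of correlators: one correlation per candidate delay.
delays = np.arange(32)
corr = np.array([signal[d:d + 1023] @ replica for d in delays])

# The correlation peaks at the true delay; outputs of adjacent
# correlators are themselves correlated, as discussed in the text.
best = delays[np.argmax(corr)]
```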
The channel impulse response (CIR) for a particular SoOP is a function of delay τ and time t. The time dependence is due to the satellite motion and the change in the environment (e.g., wind moves the leaves):
R=R(τ,t) [56]
The CIR has the LOS peak at τ=0, which corresponds to the true user position.
The correlation of this signal with the replica waveform is measured. The correlation as a function of delay and time is given by convolution of CIR with the autocorrelation function of the waveform:
C(τ,t)=R(τ,t)*A(τ)=∫R(θ,t)·A(τ−θ)dθ. [57]
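Equation [57] can be illustrated on a discrete delay grid; the triangular autocorrelation and the two-component CIR below are hypothetical shapes chosen only to show the convolution:

```python
import numpy as np

# Discrete sketch of equation [57]: the measured correlation C is the
# channel impulse response R convolved with the waveform
# autocorrelation A. All shapes below are illustrative.
tau = np.arange(-10, 11)                        # delay grid (samples)
A = np.maximum(0.0, 1.0 - np.abs(tau) / 3.0)    # triangular autocorrelation
R = np.zeros(tau.size)
R[tau == 0] = 1.0    # LOS peak at tau = 0 (the true user position)
R[tau == 4] = 0.4    # hypothetical scattered (foliage) component

C = np.convolve(R, A, mode="same")              # C(tau) = (R * A)(tau)

# With this geometry the LOS peak still dominates after convolution.
peak_delay = tau[np.argmax(C)]
```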
Assuming that statistical properties of R(τ,t) are known, pair-wise covariances for values of C(τ,t) can be computed:
Similarly, a noise covariance matrix can be computed
η(τi,ti,τj,tj)=E{n(τi,ti)·n*(τj,tj)}. [59]
This computation can be done once for each type of forest and stored at the receiver in the form of a lookup function for different pairs of angles of arrival and delays. (Even though the arguments of this covariance matrix include time, and not the angle of arrival per se, the statistical variation is mostly due to the change in the angle as time progresses and the satellite SoOP moves. From a pre-stored table, the receiver can extract the covariance matrices for a particular satellite pass using the specific geometry.)
The problem can be formulated as follows. Correlations between the signal and the waveform replica can be measured. These measurements comprise a realization C̃ of a random process. The receiver position can be estimated from this data.
An estimate for the overall offset of the delay is denoted with a hat, τ̂. At a particular time, τ̂ = τ − Δ
C̃(τ̂,t) = C(τ,t) + n(τ,t) [60]
is the most likely realization of the random process for C(τ,t) and n(τ,t).
For discrete values of τi, ti, equation [60] can be viewed as an under-determined system of linear equations for C(τi,ti) and n(τi,ti). If one long vector of unknowns is formed, x = {C(τi,ti) | n(τi,ti)}, then equation [60] can be written in the form:
Mx = C̃(τ̂,t) [61]
where matrix M has the following structure:
A solution of equation [61] is sought while minimizing the following quadratic form (this is what makes the solution the most likely one):
F=xWx* [63]
where the covariance matrix W is constructed from the covariance matrices for the signal and for the noise:
This is called a linear equality-constrained least squares problem, and there are efficient numerical routines to solve it. Since τ̂ depends on the user position error, the solution will also be a function of the position error (via equation [61]). Thus, the value of the quadratic form [63] will be a function of the user position error. By varying Δ
It may be emphasized that the above-described embodiments, particularly any “preferred” embodiments, are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiments of the disclosure without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and the present disclosure and protected by the following claims.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible program carrier for execution by, or to control the operation of, data processing apparatus. The tangible program carrier can be a propagated signal or a computer readable medium. The propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a computer. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter affecting a machine-readable propagated signal, or a combination of one or more of them.
The term “processors or processing” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The processor can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, to name just a few.
Computer readable media suitable for storing computer program instructions and data include all forms of non volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, input from the user can be received in any form, including acoustic, speech, or tactile input.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
While this specification contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
This application is a divisional of U.S. application Ser. No. 12/406,456 filed Mar. 18, 2009, which claims the priority of U.S. Provisional Patent Application No. 61/064,643 filed Mar. 18, 2008, the disclosure of which is incorporated by reference.
The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. W15P7T-07-C-P204.