The present application relates generally to the field of wireless location, i.e., systems and methods for estimating the position of a wireless device, and more particularly to a method using generalized error distributions.
As the Federal Communications Commission (FCC) moves towards a Public Safety Answering Point (PSAP)-level location accuracy mandate, improving methods for different location technologies becomes a necessity. The subject matter described herein relates to the fields of communications and location technology. It provides a means for improving the accuracy of location technologies such as the Global Positioning System (GPS), Uplink Time Difference of Arrival (UTDOA) and Advanced Forward Link Trilateration (AFLT).
A common approach to position estimation is to find a weighted least squares solution from measured quantities such as time differences, pseudoranges or power levels. The weighted least squares solution is known to achieve a maximum likelihood (ML) solution when input errors are independent and Gaussian (see J. Caffery, Wireless Location in CDMA Cellular Radio Systems, Boston-London: Kluwer Academic Publishers, 2000), but it cannot do this under the more general conditions encountered in practice. For example, TDOA errors have a tendency to be positive relative to the predicted leading edge of the multipath delay profile. As explained below, several factors such as imperfect leading edge detection and non-line-of-sight (NLOS) propagation contribute to these positive errors. As a result, the per-baseline error distribution is skewed. This skew reduces the accuracy of the basic weighted least squares method. In contrast, the method described herein exploits knowledge of this skew to obtain improved results. Moreover, correlation among these errors can often be found; for example, distinct multipath components can be received at the same sector, common NLOS conditions may exist at a site and common errors may be introduced by the reference signal. These correlations may be incorporated into a maximum a posteriori (MAP) algorithm as described below. This framework can also be used to incorporate an estimate of the a priori mobile position distribution in the location solution.
UTDOA is a network-based technology allowing for any signal transmitted from any type of mobile station (MS) to be received at any base station to obtain a UTDOA measurement. A reference base station measures the received signal at roughly the same time as each cooperating base station, as illustrated in FIG. 1.
Terrestrial wide-area wireless location techniques using uplink (device-to-network receiver) TDOA or TOA measurements include U-TDOA, U-TDOA/Angle of Arrival (AoA) hybrid and U-TDOA/Assisted GPS hybrid. U-TDOA and its hybrids currently function in CDMA (IS-95, IS-2000), GSM, UMTS and WiMAX (802.16e/m and 802.20) networks, and conceptually in the upcoming Long Term Evolution (LTE) OFDM-based wireless radio access network (RAN). Terrestrial uplink techniques require that the mobile device 102 transmissions 109 be measured by network-based receivers (in this case co-located within the cell sites 103, 104). Measurement data is then conveyed by backhaul 111 to a Position Determining Entity (PDE) 106 for conversion into a latitude, longitude, velocity, and in some cases an altitude. Regardless of the wireless location technique used, determination of the radio signal time-of-flight is key to accurate determination of the mobile device's 102 actual location.
FIGS. 2a, 2b, 2c and 2d illustrate how objects, such as a building, may block the direct path, creating a non-line-of-sight impairment in different location environments, including uplink, downlink, GNSS and hybrid GNSS/uplink systems (where GNSS stands for Global Navigation Satellite System). A diffracted path traveling around a building arrives at the receiver later than the highly attenuated or completely blocked direct path. Additionally, reflections from obstacles can cause scattering, which produces dispersion of the arrival times of different paths. FIG. 3 illustrates the resulting impairments to time-of-arrival detection using the following elements:
303=Transmit time
304=Detection threshold
305=Line-of-sight (LOS) time-of-flight
306=Lag time
307=Basis for reported TOA or TDOA
308=Delay spread
309=Missed signal components
The method described in U.S. Pat. No. 6,564,065, May 13, 2003, K. Chang et al., “Bayesian-update based location prediction method for CDMA Systems,” appears to predict power levels from CDMA pilot channel measurements with location decisions made from an a posteriori power distribution using simulation. The method described in U.S. Pat. No. 5,252,982, Oct. 12, 1993, E. Frei, “Method of precise position determination,” appears to assume Gaussian errors using a weighted least squares method that iteratively finds phase ambiguities for a GPS location solution using an a posteriori RMS error.
A method for improving the results of radio location systems that incorporate weighted least squares (WLS) optimization generalizes the WLS method by using maximum a posteriori (MAP) probability metrics to incorporate characteristics of the specific positioning problem (e.g., UTDOA). As discussed, WLS methods are typically used by TDOA and related location systems, including TDOA/AOA and TDOA/GPS hybrid systems. The incorporated characteristics include empirical information about TDOA errors and the probability distribution of the mobile position relative to other network elements. A technique is provided for modeling the TDOA error distribution and the a priori mobile position. A method for computing a MAP decision metric is provided using the new probability distribution models.
An illustrative implementation provides an error detection method comprising: obtaining field data; analyzing said field data to obtain (1) a signal correlation model and associated measurement parameters, (2) correlation matrix rules, and (3) a model for the a priori position, wherein said field data have baseline or location dependent values to be used in said signal correlation model; computing weights for the measurements based on an estimated variability of each measurement; using the weights along with the correlation matrix rules to generate a covariance matrix, and computing an inverse covariance matrix; performing an iterative search over a geographical region to find a location with a maximum a posteriori (MAP) metric; determining that a stopping condition has been reached; and reporting the geographic position with the largest MAP metric as the location solution.
The methods described herein include several key innovations, including but not necessarily limited to the following:
Analytical a priori distribution: Empirical data providing the actual locations are used to obtain a distribution for the normalized distance from the reference tower to the location solution in order to model the general shape of the a priori position relative to towers in the search area. An exponential distribution is shown to approximate the shape of the a priori position distribution and its variance is calculated from the empirical data.
Analytical TDOA error distribution: The double exponential distribution model is generalized to incorporate a skew and an arbitrary power in the exponent. Model parameters are estimated from empirical data.
Multipath/NLOS error indicators: Key indicators of the TDOA error distribution include the number of baselines, the predicted multipath correction (based on observed signal parameters and/or knowledge of the local RF environment) and the TDOA correlation of each baseline. Methods are provided to derive model parameters from these indicators by analyzing empirical data and generating conditional error distributions. For each baseline, model parameters such as the skew are computed from these indicators.
TDOA error correlation: Methods are provided for computing a posteriori probabilities for correlated errors between baselines that have the above analytical TDOA error distribution. These correlations are incorporated into the MAP algorithm through the corresponding joint error probability distribution.
Method for common bias mitigation: With more general distributions, it becomes difficult to find an analytical solution for the common bias that can exist in measurements. Methods for removal of the bias are provided along with various complexity-performance tradeoffs.
Iterative adjustment: An iterative procedure is developed that applies the above methods. This procedure includes initialization operations and estimation of residual values.
Other features of the inventive technology are described below.
The attached drawings include the following:
FIGS. 2a, 2b, 2c, and 2d: Illustration of impairments to LOS path delay estimation.
FIGS. 4a and 4b: Components of MAP error detection method.
FIGS. 4a-4b show the components of an illustrative implementation of the MAP error detection method. As shown, the MAP process is started at step 401. Field data 402 is analyzed 403 to obtain a set of signal correlation rules and models. These models and associated measurement parameters are developed from field data that can have baseline or location (or position, where the terms location and position are used interchangeably herein) dependent values to be used in the model. For example, the error skew may be higher for low-correlation UTDOA measurements. A table 405 may thus be generated; this table provides a mapping between the model parameter for the skew and the correlation value for the measurement. Similarly, a model and table are computed for the a priori location 406. The field data analysis process also analyzes the correlation between different receiver ports linking the location receiver (e.g., a Location Measuring Unit (LMU) or Signal Collection System (SCS)) to an external antenna, providing correlation values and rules for their application. For example, there may be a small correlation of errors on ports at the same location (co-site ports) due to NLOS effects. Once the field data is analyzed, weights are computed for the measurements based on an estimated variability of the measurement. Then the weights are used along with the correlation matrix rules to generate a port-by-port covariance matrix, which is inverted at 409.
As shown in FIG. 4b, an iterative search is then performed over a geographical region to find the location with the maximum a posteriori (MAP) metric. When a stopping condition has been reached, the geographic position with the largest MAP metric is reported as the location solution.
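As a concrete illustration of this overall flow, the following is a minimal sketch (not the patented implementation) of an iterative MAP search over a grid of candidate positions. The function and variable names are hypothetical; the per-baseline metric uses the independent-error form discussed later, and the skew convention (dividing positive residuals by the skew ratio r) and the common-bias grid search are assumptions made for illustration.

```python
import numpy as np

def map_location_search(candidates, predicted_tdoa, measured_tdoa,
                        weights, p, r, log_prior, bias_grid):
    """Sketch of a MAP grid search: for each candidate position, evaluate an
    independent-error log-likelihood plus the log a priori position density,
    and keep the candidate (and common bias B) with the largest total."""
    best = (-np.inf, None, None)
    for idx, xyz in enumerate(candidates):
        dtau = measured_tdoa - predicted_tdoa[idx]      # unbiased errors, shape (N,)
        for B in bias_grid:                             # remove assumed common bias
            e = dtau - B
            h = np.where(e >= 0.0, e / r, e)            # assumed skew convention
            # |sqrt(W_i) * h_i| ** p_i reduces to W_i*h_i^2 (Gaussian case) or
            # sqrt(W_i)*|h_i| (Laplace case), consistent with the bias solutions below
            log_like = -np.sum(np.abs(np.sqrt(weights) * h) ** p)
            total = log_like + log_prior(xyz)
            if total > best[0]:
                best = (total, xyz, B)
    return best  # (metric, position, bias)
```

The stopping condition of the illustrative implementation (for example, exhausting the search grid or detecting negligible metric improvement) would wrap this search.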
A goal of our inventive solutions is to model the a posteriori probability of the error and find the location solution that maximizes this probability. From Bayes' theorem (see A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw Hill Inc., New York, N.Y., 1984), the conditional probability density function of a random position vector, L, is given in terms of a vector of N measurement errors, e, as
f_{L|e}(L|e) = f_{e|L}(e|L)·f_L(L)/f_e(e) (1)
where the ith element of the error vector at a candidate position x,y,z with common bias B is
e_i(x,y,z,B) = τ̂_i − τ_i(x,y,z) − B
and
τ_i(x, y, z) is the LOS TDOA at point x,y,z for the ith baseline,
τ̂_i is the ith TDOA baseline measurement, and
B is a common bias that may exist in the measurements.
To simplify computations, the log of (1) can be maximized since the position that maximizes (1) is also the position that maximizes the log of (1). The natural logarithm of (1) is
ln(f_{L|e}(L|e)) = ln(f_{e|L}(e|L)) + ln(f_L(L)) − ln(f_e(e)) (4)
Since the last term does not depend on the location, it is constant over candidate locations and can be ignored. This leaves the following function to be maximized over all locations:
ln(f_{L|e}(L|e)) = ln(f_{e|L}(e|L)) + ln(f_L(L)) (5)
The first term is the log of the a posteriori error probability density and the second term is the log of the a priori probability density.
Error Distribution Modeling Process
The error distribution modeling process is shown in FIG. 5.
A Priori Distribution
The logic for finding an appropriate a priori distribution is described below.
Once the potential a priori distributions are computed for various model parameters, a model is selected. The candidate distributions are expressed in terms of the normalized distance from the reference tower to the candidate location, i.e., the distance from the point (x_ref, y_ref, z_ref) divided by R_max, where:
x_ref, y_ref, z_ref are the position coordinates of the reference tower, and
R_max is the maximum distance from the reference tower to the edge of the search region.
An exemplary model is chosen to be an exponential distribution in the normalized distance with rate parameter λ_α, where λ_α=11 was chosen to fit the field data.
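The exact form of the exemplary exponential model (equation (7)) is not reproduced above, so the following sketch assumes the standard exponential density in the normalized distance d from the reference tower, f(d) = λ_α·exp(−λ_α·d), with the value λ_α = 11 stated in the text; the function names and the use of a 3D Euclidean distance are illustrative assumptions.

```python
import numpy as np

LAMBDA_ALPHA = 11.0  # value stated in the text as fitting the field data

def normalized_distance(xyz, ref_xyz, r_max):
    """Distance from the reference tower, normalized by the maximum distance
    R_max from the reference tower to the edge of the search region."""
    return np.linalg.norm(np.asarray(xyz, float) - np.asarray(ref_xyz, float)) / r_max

def log_prior(xyz, ref_xyz, r_max, lam=LAMBDA_ALPHA):
    """Log of the assumed exponential a priori density f(d) = lam * exp(-lam * d)."""
    d = normalized_distance(xyz, ref_xyz, r_max)
    return np.log(lam) - lam * d
```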
Error Distribution
The field data is also analyzed to obtain models for the error distribution. Statistics are compiled as a function of the following parameters:
UTDOA correlation for each baseline,
Multipath correction factor for each baseline,
Number of measurements for each location.
Ranges and bin sizes for these parameters are determined for accumulating conditional and overall statistics. The conditional statistics and overall statistics are then compiled over all of the field data and stored for model determination.
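A minimal sketch of compiling such conditional statistics is shown below, binning per-baseline errors by UTDOA correlation; the bin edges, dictionary fields and the choice of mean and standard deviation as the stored statistics are assumptions made for illustration.

```python
import numpy as np

def conditional_error_stats(errors, correlations, corr_bins):
    """Accumulate the mean and standard deviation of the TDOA error conditioned
    on the UTDOA correlation of each baseline, one entry per correlation bin."""
    stats = []
    for lo, hi in zip(corr_bins[:-1], corr_bins[1:]):
        mask = (correlations >= lo) & (correlations < hi)
        if np.any(mask):
            stats.append({"corr_range": (lo, hi),
                          "count": int(np.sum(mask)),
                          "mean": float(np.mean(errors[mask])),
                          "std": float(np.std(errors[mask]))})
    return stats

# Example: ten equal-width correlation bins on [0, 1) (bin sizes are assumed).
# stats = conditional_error_stats(errors, correlations, np.linspace(0.0, 1.0, 11))
```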
A sample of the overall error distribution is shown in the drawings.
The overall distribution provides input to the coarse error model as shown in FIG. 5. In the coarse model of equation (8),
p_i is a model parameter that is an arbitrary exponential power greater than zero,
r_i is a model parameter that is a positive ratio indicating the skew of the distribution, and
σ_i is the standard deviation for the ith baseline.
Values for k and A are chosen so that the density is properly normalized and has standard deviation σ_i for a given r_i and p_i. For a Gaussian distribution, r_i=1, p_i=2 and k=½. For a double exponential or Laplace distribution, r_i=1, p_i=1 and k=√2.
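Equation (8) itself is not reproduced above. The sketch below shows one plausible form consistent with the stated special cases (k=½, p=2 Gaussian; k=√2, p=1 Laplace): a density proportional to exp(−k·|h/σ|^p), where the skew ratio r stretches the positive-error side. The exact placement of r and the normalization constant A are assumptions.

```python
import numpy as np

def skewed_error_density(e, sigma, p=1.0, r=1.0, k=np.sqrt(2.0)):
    """Assumed generalized error density: A * exp(-k * |h(e)/sigma|**p) with
    h(e) = e/r for e >= 0 (heavier positive tail when r > 1) and h(e) = e
    otherwise. With r=1, p=2, k=0.5 this has a Gaussian shape; with r=1, p=1,
    k=sqrt(2) it has a Laplace shape (scaling factor sqrt(2)/sigma)."""
    e = np.asarray(e, dtype=float)
    grid = np.linspace(-20.0 * sigma, 20.0 * sigma, 20001)   # numeric normalization
    hg = np.where(grid >= 0.0, grid / r, grid)
    A = 1.0 / np.trapz(np.exp(-k * np.abs(hg / sigma) ** p), grid)
    h = np.where(e >= 0.0, e / r, e)
    return A * np.exp(-k * np.abs(h / sigma) ** p)
```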
The coarse modeling step computes values for the model parameters in equation (8) that match the field data.
The conditional error distributions are used as input to the “determine fine error model” block 506 in FIG. 5.
The skew can be found in terms of the mean and standard deviation of the conditional distribution as follows. If the conditional error distribution is approximated as a double exponential, then the scaling factor in the exponent is
λ = √2/σ (9)
where, σ is the standard deviation of the conditional distribution.
To estimate r_i, two separate scaled exponential distributions are considered, where one of them is flipped around zero. Both components of the distribution are scaled to integrate to ½. As a result, the mean, m, of the conditional distribution can be put in terms of the scaling factors as
m = (1/λ_R − 1/λ_L)/2 (10)
where, λ_L and λ_R are the scaling factors in the exponents of the separate exponential distribution components on the left and right of zero, respectively. It is assumed that all of the skew is due to changes in λ_R relative to λ, allowing for the assumption λ_L≈λ. Solving equation (10) for λ_R and using (9) with λ_L≈λ gives
The skew ratio from (9) and (11) is then
Values for σ and m from the conditional error distribution can then be used to compute r_i using equation (12).
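Since equations (11) and (12) are not reproduced above, the following sketch estimates the skew ratio from the conditional mean and standard deviation by carrying through the derivation described in the text (two exponential halves, each integrating to ½, with λ_L ≈ λ = √2/σ); the closed-form expression is a reconstruction and may differ in detail from the patent's equation (12).

```python
import numpy as np

def skew_ratio(mean, std):
    """Skew ratio r from the conditional mean and standard deviation, assuming a
    two-sided exponential model:
        lam   = sqrt(2) / std                      # eq. (9)
        mean  = 0.5 * (1/lam_R - 1/lam_L)          # eq. (10), halves each of area 1/2
        lam_L ~= lam  ->  lam_R = lam / (1 + 2*mean*lam)
        r     = lam_L / lam_R ~= 1 + 2*sqrt(2)*mean/std
    A positive conditional mean (late-biased errors) yields r > 1."""
    lam = np.sqrt(2.0) / std
    return 1.0 + 2.0 * mean * lam
```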
An example of the skew ratio as a function of the UTDOA correlation is shown in the drawings.
Exemplary results of the fine model adjustments are also provided in the drawings.
Correlation Matrix
The field data is also used to analyze the correlation that exists between errors for different ports. Exemplary correlation rules include the following:
Apply a fixed correlation between ports on the same sector,
Apply a fixed correlation between ports on the same site,
Apply a fixed correlation between a cooperating port and the reference port.
For each rule, a normalized correlation value or correlation coefficient for the error is computed from the field data statistics (see A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw Hill Inc., New York, N.Y., 1984). The correlation values and rules provide input to the “populate covariance matrix and perform inversion” block 409 (FIGS. 4a-4b).
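A sketch of applying such rules to build the port-by-port correlation-coefficient matrix is given below; the specific coefficient values and the port descriptor layout (dictionaries with 'site' and 'sector' keys) are illustrative assumptions, not values derived from the field data.

```python
import numpy as np

def correlation_matrix(ports, rho_sector=0.5, rho_site=0.2, rho_ref=0.1, ref_idx=0):
    """Port-by-port correlation coefficients beta_ij built from fixed rules:
    one value for ports on the same sector, a (typically smaller) value for
    ports on the same site, and a value between each cooperating port and the
    reference port; all other pairs are treated as uncorrelated."""
    n = len(ports)
    beta = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            same_site = ports[i]["site"] == ports[j]["site"]
            same_sector = same_site and ports[i]["sector"] == ports[j]["sector"]
            if same_sector:
                b = rho_sector
            elif same_site:
                b = rho_site
            elif ref_idx in (i, j):
                b = rho_ref
            else:
                b = 0.0
            beta[i, j] = beta[j, i] = b
    return beta
```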
Weighting and Variance Computation
A weighting for each baseline is based on the RMS error from the Cramer-Rao bound (see R. McDonough, A. Whalen, Detection of Signals in Noise, 2nd Ed., Academic Press, San Diego, Calif., 1995). The lower bound on the TDOA RMS error in AWGN (additive white Gaussian noise) is a function of the signal bandwidth W, the coherent integration length T and the correlation ρ_i of the ith baseline. Since the mean error in AWGN is close to zero, the standard deviation of the error is approximately the RMS error. The weight W_i for the ith baseline is one over the RMS error squared, giving a theoretical weighting.
These exemplary weighting operations are performed after the field data analysis in the “Compute Weights” block 407.
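The Cramer-Rao expression itself is not reproduced above, so the sketch below simply encodes the stated relationship: the per-baseline standard deviation is taken to be the RMS error bound (a function of W, T and ρ_i), and the weight is its inverse square.

```python
import numpy as np

def baseline_weights(rms_error_bound):
    """Weight for each baseline: W_i = 1 / sigma_i**2, where sigma_i is
    approximated by the Cramer-Rao RMS error bound for that baseline."""
    sigma = np.asarray(rms_error_bound, dtype=float)
    return 1.0 / sigma ** 2
```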
Covariance Matrix Computation
A covariance matrix may be required for making decisions using the joint error density. The covariance matrix, C, is a port-by-port matrix whose entry for the ith and jth ports is computed as
c_ij = β_ij·σ_i·σ_j (15)
where β_ij is the correlation coefficient between the ith and jth ports from the correlation matrix.
Alternatively, this step may be bypassed for computational efficiency if the correlation levels between ports are deemed to be too small. An exemplary decision criterion is to use the covariance matrix if at least one of the β_ij exceeds a correlation threshold. If this threshold is not exceeded, a flag is set to use an independent error analysis.
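A sketch combining equation (15) with the exemplary bypass criterion follows; the correlation threshold value is an assumption.

```python
import numpy as np

def covariance_and_inverse(beta, sigma, corr_threshold=0.05):
    """Covariance matrix c_ij = beta_ij * sigma_i * sigma_j (eq. 15). If no
    off-diagonal correlation coefficient exceeds the threshold, the joint
    analysis is bypassed (use_joint=False) in favor of independent errors."""
    sigma = np.asarray(sigma, dtype=float)
    C = np.asarray(beta, dtype=float) * np.outer(sigma, sigma)
    off_diag = np.abs(beta - np.diag(np.diag(beta)))
    use_joint = bool(np.any(off_diag > corr_threshold))
    C_inv = np.linalg.inv(C) if use_joint else None
    return C, C_inv, use_joint
```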
MAP Decision Metric Computation
The MAP decision computation using the joint error density employs a further generalization for correlated UTDOA errors. Starting with joint Gaussian errors, the a posteriori probability is
f_{e|L}(e|L) = G·exp(−½·eᵀC⁻¹e) (16)
where, G is a constant. In terms of the individual UTDOA errors,
f_{e|L}(e|L) = G·exp(−½·Σ_i Σ_j d_ij·e_i·e_j) (17)
where, d_ij are elements of C⁻¹. Assuming the model in equation (8) for the marginal error probability density, the following generalization is made to equation (17) for the joint density,
Substituting (18) and the natural log of (7) into (5) gives
Since the objective is to find the x,y,z and B that maximizes (22), the terms that do not depend on x,y,z and B can be ignored, giving:
For computational efficiency, equation (23) may be divided by −k and the x,y,z that minimizes
is found as
The decision metric to be minimized is then
For locations where there is low (~0) cross-correlation between baselines, the covariance matrix is diagonal. Independence is assumed between UTDOA errors, simplifying (25) to
In terms of the pre-computed weights for each baseline, the metric is
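Equation (27) is not reproduced above; the following sketch shows one plausible per-baseline form for the independent-error metric, chosen so that it reduces to the Gaussian and exponential special cases used for the bias solutions below (W_i·e_i² for p_i=2 and √W_i·|e_i| for p_i=1). The skew convention for h and the omission of constant terms are assumptions.

```python
import numpy as np

def independent_metric(dtau, B, weights, p, r, prior_penalty=0.0):
    """Assumed decision metric for independent baseline errors:
        sum_i | sqrt(W_i) * h(r_i, dtau_i - B) | ** p_i  +  a priori penalty,
    where h(r, e) = e/r for e >= 0 and h(r, e) = e otherwise (so h(1, e) = e)."""
    e = np.asarray(dtau, dtype=float) - B
    r = np.asarray(r, dtype=float)
    p = np.asarray(p, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    h = np.where(e >= 0.0, e / r, e)
    return float(np.sum(np.abs(w * h) ** p) + prior_penalty)
```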
The Gaussian bias is found by setting p_i=2 and r_i=1 in (27). Note that for r_i=1, h(r_i=1, e_i(x,y,z,B)) = e_i(x,y,z,B), giving
where,
N is the number of baselines
Δτ_i ≡ τ̂_i − τ_i(x,y,z) is the unbiased error.
A minimum solution over the bias is found by setting the derivative of (28) with respect to B equal to zero and solving for B, giving
B = (Σ_i W_i·Δτ_i)/(Σ_i W_i) (29)
Equation (29) provides a bias when the error distributions are Gaussian.
The exponential bias is found by setting p_i=1 and r_i=1 in (27), giving
Σ_i √W_i·|Δτ_i − B| (30)
Again, a minimum solution over the bias is found by setting the derivative of (30) with respect to B equal to zero and solving for B. The derivative of each term with respect to B is
where, U(x) is a unit step function (see A. Oppenheim and A. Willsky, Signals and Systems, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1983). Setting the derivative of the sum equal to zero gives
Each term in (32) as a function of B is −√W_i until the value Δτ_i is reached, and then there is a step to √W_i for B greater than Δτ_i. Due to these discontinuities, there is no exact solution for B. However, a value for B can be found that provides an approximate solution.
An approximate solution is illustrated in the drawings (where N is the total number of baselines in the ordered summation). The weight and sample arrays are populated and sorted, and a threshold on the accumulated weight is computed to provide a stopping condition. The terms are accumulated in order from the term with the smallest transition point to the term with the largest transition point. At the point where the threshold is reached, the value Δτ_K is returned if there are an odd number of terms; otherwise, the Kth term's transition point is averaged with the prior term's transition point.
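Under the assumptions above, the two bias solutions can be sketched as follows: the Gaussian case is the weighted mean of equation (29), and the exponential case is a weighted-median-style accumulation over the sorted transition points. Taking the stopping threshold as half of the total accumulated √W_i weight, and the exact tie-handling rule, are assumptions consistent with (but not verbatim from) the procedure described.

```python
import numpy as np

def gaussian_bias(dtau, weights):
    """Common bias for Gaussian errors: weighted mean of the unbiased errors (eq. 29)."""
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * dtau) / np.sum(w))

def exponential_bias(dtau, weights):
    """Common bias for exponential (Laplace) errors: accumulate sqrt-weights over
    the sorted transition points dtau_i until half the total weight is reached;
    if the threshold is hit exactly, average with the next transition point."""
    dtau = np.asarray(dtau, dtype=float)
    sw = np.sqrt(np.asarray(weights, dtype=float))
    order = np.argsort(dtau)
    dtau, sw = dtau[order], sw[order]
    acc = np.cumsum(sw)
    threshold = 0.5 * acc[-1]
    k = int(np.searchsorted(acc, threshold))
    if np.isclose(acc[k], threshold) and k + 1 < len(dtau):
        return float(0.5 * (dtau[k] + dtau[k + 1]))
    return float(dtau[k])
```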
The first term in (27) is computed following the steps shown in the drawings.
Table 1 illustrates sample improvements relative to a weighted least squares algorithm. Using approximately 46,000 location measurements, the distribution of positioning errors was compiled using the weighted least squares algorithm and the algorithm described above. The parameters for the above model were chosen using a separate training data set of 32,000 locations. The table shows improvement of approximately 20 meters and 2 meters in the 95th and 67th percentiles, respectively. The average error improved by approximately 15 meters.
Conclusion
The present invention, and the scope of protection of the following claims, is by no means limited to the details described hereinabove. Those of ordinary skill in the field of wireless location will appreciate that various modifications may be made to the illustrative embodiments without departing from the inventive concepts disclosed herein.