The present invention relates generally to state estimation of systems, and more particularly, to state estimation of systems undergoing transitions between multiple state models.
Consistent and accurate methods for performing state estimation in a wide variety of systems are critical to the function of many processes and operations, both civilian and military. Systems and methods have been developed for state estimation of a system that may transition between different regimes of operation, which may be described or defined by a plurality of discrete models. These state estimation methods can be applied to various systems having sensory inputs including, by way of non-limiting example only, nuclear, chemical, or manufacturing factories or facilities, control processes subject to external parameter changes, space stations subject to vibrations, automobiles subject to road conditions, and the like. One particularly useful application for state estimation is tracking objects in flight, such as a multistage rocket transitioning back and forth between a ballistic model of flight and various thrust models, or an aircraft performing maneuvers mid-flight.
As will be set forth in greater detail below, current state estimation systems and their associated algorithms have difficulty estimating the state of a system transitioning between distinct regimes of operation (i.e. between different, non-interacting models). More recent algorithms may implement “interacting models”, including Interacting Multiple Models (IMM), which allow for transitioning from one model to another during the estimation process. These interacting model algorithms have the advantage of reducing the filter lag and/or noise when the system transitions from one model to another. However, they are often burdened by computational challenges. In addition, existing IMM-based estimators and their associated filtering arrangements suffer from significant drawbacks, as their designs typically require large amounts of simulation, making their implementation impractical.
Improved systems and methods for state estimation are desired.
A method for estimating the state of an object is provided. The method includes providing a plurality of interacting models, wherein each model represents a potential operating state of said object. The models are defined by at least one parameter constrained to lie within a predetermined range of values. Measurements of the state of the object are made, from which model-conditioned estimates of the object state are generated. A model probability is updated from the model-conditioned estimates and overall estimates of the object state and associated covariance are generated. The states of the models are mixed in response to the model probabilities and the overall estimate of the state and covariance, and an updated estimate of the state and covariance of the object is generated.
A system for estimating the state of an object is also provided. The system includes a sensor for measuring at least one characteristic of the object and a plurality of filters, each associated with one of a plurality of interacting models representing an operating state of the object. Each of the models is defined by at least one parameter having a known bounded value. The system further includes a model probability evaluator configured to determine the likelihood said object is operating within one of said plurality of interacting models, an estimate mixer responsive to said model probability evaluator for mixing estimates generated by each of the plurality of filters, and a combiner for estimating the state of the object by combining the output of said plurality of filters.
It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for purposes of clarity, many other elements found in state estimation systems, such as IMM-based state estimation systems. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements is not provided herein. The disclosure herein is directed to all such variations and modifications known to those skilled in the art.
In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the invention may be practiced. It is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. Furthermore, a particular feature, structure, or characteristic described herein in connection with one embodiment may be implemented within other embodiments without departing from the scope of the invention. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the scope of the invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled. In the drawings, like numerals refer to the same or similar functionality throughout several views.
Referring generally to
The measurements from radar systems 14, 16 are applied to a processing arrangement 22 for determining various target parameters, for example, course (i.e. direction of motion), speed, and target type. The estimated position of the target, and possibly other information, is provided to a utilization apparatus, for example, a radar display 24, for interpretation by an operator. The operator (or possibly automated decision-making equipment) can make decisions as to actions to be taken in response to the displayed information.
As set forth above, existing state-estimation systems and methods have difficulty estimating the state of a system transitioning between distinct modes of operation, such as the maneuvering aircraft 12. This problem has been addressed by implementing algorithms that allow transitions from one model to another (“interacting models”). These interacting models have the advantage of reducing the filter lag and/or noise when the operating regime of the system being modeled switches from one model to another. However, estimation algorithms for such models are often burdened by the computational challenge of accounting for the history of all possible model switches, and thus experience undesirable effects, such as lag.
IMM algorithms provide a solution for transitioning between models with reduced computation. In some systems, model transitions are handled by considering interacting models transitioning from one to another according to a Markov transition matrix. The Markov assumption leads to an interaction of the elemental filters representing each model that make up the IMM estimator. As a consequence of the Markov transition process between models, the IMM mixes the state estimates from all the filters, and feeds a different mixture to every filter at the start of each computation cycle. The IMM algorithm makes use of the individual state estimates and error characterization from each elemental state estimator and reports to the external user the overall state estimate and error characterization.
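As a brief, hypothetical sketch only (the two-model transition matrix, probabilities, and state values below are illustrative and not taken from the disclosure), the Markov mixing step common to IMM estimators can be expressed as:

```python
import numpy as np

# Hypothetical 2-model example: Markov transition matrix PI, where
# PI[i, j] = probability of switching from model i to model j per cycle.
PI = np.array([[0.95, 0.05],
               [0.05, 0.95]])
mu = np.array([0.7, 0.3])          # current model probabilities (illustrative)

# Predicted probability of being in model j after one transition.
c_j = PI.T @ mu

# Mixing weights w[i, j]: probability the system was in model i given it is
# now in model j; filter j is restarted from the mixture of elemental estimates.
w = (PI * mu[:, None]) / c_j[None, :]

x_hat = np.array([[100.0, 5.0],     # hypothetical state estimate from model 1
                  [102.0, 9.0]])    # hypothetical state estimate from model 2
x_mixed = w.T @ x_hat               # row j = mixed initial state fed to filter j
print(c_j, x_mixed)
```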
IMM estimators are typically built on a bank of elemental Kalman filters. Each Kalman filter is associated with a different model of the IMM (e.g. “tuned” to a different acceleration, etc.). Because each model's parameters cannot be assumed to have precisely known values, the Kalman filters use empirically-derived white process noise for system modeling. For example, a large value of process noise is used for a large acceleration, and a smaller value for a smaller acceleration. The outputs of the multiple models included in the IMM are weighted and combined according to the likelihood that a measurement fits the assumption of each of the models.
However, Kalman filter-based IMM estimators suffer from numerous drawbacks. For example, the filters must be individually designed for each type of model, making their implementation impractical. Further, their designs are cumbersome and often non-robust. More specifically, their implementation typically requires large amounts of empirical simulation to design the appropriate white process noise. Further still, these systems generally suffer from an inability to handle data received out-of-sequence or to account for biases between multiple sensors, as well as from inaccuracies in characterizing estimation errors for maneuvering objects.
Embodiments of the present invention include IMM architectures and algorithms utilizing alternative state estimation processes. More specifically, embodiments may replace existing Kalman filters and their associated processing methods with an “optimal reduced state estimation” (ORSE) IMM algorithm, in combination with elemental ORSE filters. This ORSE IMM architecture may avoid utilizing a specific system model, and instead considers the worst-case bounds for one or more input parameters of the system. These physical bounds replace the empirical white process noise utilized in the Kalman filtering operations of existing IMM state estimators, and offer improved system accuracy, as well as simplified state estimator design.
For a given system that can be represented by a plurality of models, each individual model may include both states, whose dynamics can be modeled, and extrinsic parameters, such as inputs to the system and sensor biases, whose dynamics cannot be modeled, but which extrinsic parameters are constrained to lie within a known range of values. Based on these principles, embodiments of the present invention avoid the need for the specialized filter designs of traditional IMM algorithms by analytic modeling of extrinsic parameters, for example, the physical bounds of target maneuvers as well as the sensor biases.
Exemplary ORSE IMM systems and methods according to embodiments of the present invention may perform an iterative process, wherein for a given time step, a linear combination of object (e.g. target) state estimates are input into each model. Current object measurements (e.g. from one or more sensors) are also input into each model, and residuals computed, along with likelihood functions. Normalized likelihood functions are used as weights in a linear combination of current model outputs to form the desired blended track state and covariance outputs. The residuals and model probabilities are calculated from the outputs of each model and each model's outputs are stored for the next iteration.
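The following is a structural sketch only of one such iteration cycle, under the assumption that each elemental filter exposes an update method returning its state Y, covariance S, residual zeta, and residual covariance Q; those names, and the mix and gaussian_likelihood callables, are hypothetical placeholders rather than elements of the disclosure:

```python
import numpy as np

def orse_imm_cycle(filters, tracks, z, transition, mix, gaussian_likelihood):
    """One hypothetical ORSE IMM cycle: mix stored estimates, update each
    elemental filter with the new measurement z, weight the model outputs by
    normalized likelihoods, and blend them into a track state and covariance.

    Assumptions (not prescribed by the disclosure): tracks[i].mu holds the
    prior model probability, mix(tracks, weights) forms the linear combination
    fed to a filter, and filters[j].update(mixed, z) returns an object with
    fields Y, S, zeta, and Q.
    """
    mu = np.array([t.mu for t in tracks])
    c = transition.T @ mu                                 # predicted model probabilities
    w = (transition * mu[:, None]) / c[None, :]           # mixing weights

    outputs, lik = [], []
    for j, f in enumerate(filters):
        mixed = f_input = mix(tracks, w[:, j])            # linear combination of stored estimates
        out = f.update(f_input, z)                        # time + measurement update for model j
        lik.append(gaussian_likelihood(out.zeta, out.Q))  # likelihood of this model's residual
        outputs.append(out)

    mu_new = np.array(lik) * c
    mu_new /= mu_new.sum()                                # normalized likelihood weights

    Y = sum(m * o.Y for m, o in zip(mu_new, outputs))     # blended track state
    S = sum(m * (o.S + np.outer(o.Y - Y, o.Y - Y))
            for m, o in zip(mu_new, outputs))             # blended covariance
    return Y, S, mu_new, outputs                          # per-model outputs stored for next cycle
```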
As set forth above, elemental ORSE filters may be used for tracking an object. In an exemplary aircraft tracking embodiment, using the physical bounds (i.e. maximum excursions) on various parameters, such as turn rate and tangential acceleration, ORSE filtering can provide for increased estimation consistency. Maximum accelerations produced by these bounded parameters, along the instantaneous normal and tangential airplane axes, bound all physically possible maneuvers. In an exemplary filter model used by an IMM of the present invention, these maximum accelerations are represented in an equivalent statistical model by, for example, a multivariate Gaussian distribution of constant accelerations, whose one-sigma ellipsoid best approximates the maximum accelerations. As a result, among all current estimators (including reduced state Kalman filters) with the same reduced states, ORSE filters utilized by IMM algorithms of embodiments of the present invention have the least covariance. This covariance is the minimal covariance achievable by linearly weighting the predicted states with a new measurement at each successive update of the filter. ORSE minimizes the mean-square and thus, the root-mean-square (RMS) estimation errors for the maximum excursions of the parameters in the truth model. Furthermore, because the bounds are included in the minimized covariance, embodiments of the present invention do not need white plant noise, as is required by Kalman filters, to cope with the reduced state.
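As a hedged illustration of this equivalent statistical model (the numerical bounds and the diagonal form of the covariance are assumptions, not values from the disclosure), the maximum-excursion bounds might be converted into the one-sigma covariance that replaces empirically tuned white process noise as follows:

```python
import numpy as np

# Hypothetical maneuver bounds along the instantaneous normal and tangential
# aircraft axes (m/s^2); the real values depend on the platform being tracked.
a_normal_max = 60.0       # e.g. bound implied by a maximum turn rate
a_tangential_max = 20.0   # bound on tangential (thrust/drag) acceleration

# Equivalent statistical model: a zero-mean Gaussian whose one-sigma ellipsoid
# matches the maximum excursions, i.e. sigma equals the bound on each axis.
Lambda = np.diag([a_normal_max ** 2, a_tangential_max ** 2])

# This covariance bounds the un-modeled parameters in the ORSE filter; it
# stands in for the white plant noise a Kalman design would require.
print(Lambda)
```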
Referring generally to
With reference to Tables 1 and 2, and the process illustrated in
where the function f(i)(•) describes the system dynamics, together with initial conditions given by Y(tk)=Y(k). This function depends on model Πi and is a function of the arbitrarily time-varying m-dimensional parameter vector λ, which has a known mean and is constrained to lie within a known, bounded range of values.
The system is observed by a sensor, and q-dimensional measurement vectors z(k) at the kth sample time tk are collected according to:
z(k)=HY(k)+n(k)+Jb (2)
where H is the q×p measurement selector matrix that selects the appropriate components of the states being measured at time tk for k=1, 2, 3, . . . . The q-dimensional white measurement noise n(k) has mean zero and covariance matrix N. The q×r bias selector matrix J selects the appropriate components of measurement bias. Similar to λ, the r-dimensional bias vector b has unknown time dependence, but is bounded in a symmetric region about zero. As in the case of λ, the ORSE filters model measurement biases b as random constants with mean zero, and a covariance matrix denoted as B, whose one-sigma values are set equal to the maximum excursions described by the bounds. The random vectors b, and the white noise n(k) for k=1,2,3, . . . are assumed to be mutually independent. The matrices H, J, and N are common to all models Πi, i=1, . . . , N.
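For concreteness, a small sketch of generating a measurement according to equation (2); the dimensions, bias bound, and noise levels are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

p, q, r = 6, 3, 3                      # state, measurement, and bias dimensions (assumed)
Y = rng.normal(size=p)                 # true state at time t_k (illustrative)

H = np.hstack([np.eye(q), np.zeros((q, p - q))])   # q x p measurement selector matrix
J = np.eye(r)                          # q x r bias selector matrix (here q == r)
N = np.eye(q) * 25.0                   # white measurement noise covariance
B = np.eye(r) * 4.0                    # bias covariance; one-sigma equals the bias bound

b = rng.multivariate_normal(np.zeros(r), B)        # bias, constant over the scenario
n = rng.multivariate_normal(np.zeros(q), N)        # white measurement noise at t_k

z = H @ Y + n + J @ b                  # equation (2): z(k) = H Y(k) + n(k) + J b
print(z)
```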
Of note, there are no states in the filter model for λ and b. Hence, λ and b are not estimated by the filter. They are represented in the filter only by their bounds expressed through their covariance matrices Λ and B. In one embodiment, a multivariate Gaussian distribution may be used for λ, b, and n(k) for k=1,2,3, . . . , as only the first and second order moments of their actual distributions are involved in the mean-square criterion.
Let Ŷ(i)(k|k) denote the measurement-updated state estimate provided by an ORSE filter tuned to model Πi at the kth sample time tk. Also at time tk, let M(i)(k|k) be the associated covariance of Ŷ(i)(k|k) due to measurement noise only, D(i)(k|k) be the associated bias coefficients due to un-modeled parameters, E(i)(k|k) be the associated bias coefficients due to measurement bias errors, R(i)(k|k) be the covariance matrix combining measurement noise and measurement bias effects only, S(i)(k|k) be the total covariance of Ŷ(i)(k|k), and μ(i)(k) be the probability that the object is in model Πi at time tk.
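To keep this notation straight, a hypothetical container for the per-model quantities carried between cycles might look as follows; the field names mirror the symbols above, but the structure itself is an implementation choice, not something prescribed by the disclosure:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModelTrack:
    """Per-model quantities for the ORSE filter tuned to model i at time t_k."""
    Y: np.ndarray    # Y_hat_i(k|k): measurement-updated state estimate (p,)
    M: np.ndarray    # M_i(k|k): covariance due to measurement noise only (p, p)
    D: np.ndarray    # D_i(k|k): bias coefficients for un-modeled parameters (p, m)
    E: np.ndarray    # E_i(k|k): bias coefficients for measurement biases (p, r)
    R: np.ndarray    # R_i(k|k): covariance with measurement noise and bias effects only (p, p)
    S: np.ndarray    # S_i(k|k): total covariance of the estimate (p, p)
    mu: float        # mu_i(k): probability that the object is in model i

p, m, r = 6, 2, 3    # assumed dimensions for illustration
track = ModelTrack(Y=np.zeros(p), M=np.eye(p), D=np.zeros((p, m)),
                   E=np.zeros((p, r)), R=np.eye(p), S=np.eye(p), mu=1.0 / 3)
```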
Step 1: Model-Conditioned Mixing
The size of the mixed state vector and its associated matrices must be the same for every model.
Thus, the state vectors of all models used in the exemplary IMM are considered to be of dimension p. Let the accelerating models be the first N−1 models and the non-accelerating model be the Nth model. For this non-accelerating model, the dimension d (d<p) of the state vector is increased to the dimension p of the state vector of the accelerating models by zero padding. A matrix γ given by:
γ = [ Id×d 0d×h ]
    [ 0h×d 0h×h ]
where h=p−d, is applied to the mixed state vector and matrices for the Nth model: the mixed state vector Ȳ(N)(k|k) and the mixed bias-coefficient matrices D̄(N)(k|k) and Ē(N)(k|k) are pre-multiplied by γ, and the mixed covariance matrices are pre-multiplied by γ and post-multiplied by γ′, to convert these matrices for the non-accelerating model (because the non-accelerating model is zero-padded). For the N−1 accelerating models, the mixed state vector and matrices are unchanged from their expressions above.
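A minimal sketch of this zero-padding projection, under the assumption that γ simply zeroes the padded components of whatever mixed quantities the mixing step produces; the dimensions and values are illustrative:

```python
import numpy as np

p, d = 9, 6                      # padded and un-padded state dimensions (assumed)
h = p - d

# gamma: identity on the first d components, zero on the h padded components.
gamma = np.block([[np.eye(d),        np.zeros((d, h))],
                  [np.zeros((h, d)), np.zeros((h, h))]])

# Hypothetical mixed quantities for the Nth (non-accelerating) model.
Y_mixed = np.arange(p, dtype=float)          # mixed state vector
S_mixed = np.eye(p)                          # mixed covariance matrix
D_mixed = np.ones((p, 2))                    # mixed bias-coefficient matrix

Y_bar = gamma @ Y_mixed                      # padded components forced back to zero
S_bar = gamma @ S_mixed @ gamma.T
D_bar = gamma @ D_mixed
print(Y_bar)
```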
The second step in the exemplary IMM ORSE process includes model-conditioned estimation using the ORSE bounded filters (e.g. filters 1, 2 . . . N in
Step 2: Model-Conditioned Estimation
Time Updating
This equation is a coupled differential equation over the time interval tk≤t≤tk+1, where the function f(i)(•) describes the system dynamics, together with initial conditions given by Ŷ(i)(tk)=Ȳ(i)(k|k) and
F(i)(tk)=I; G(i)(tk)=0 (21)
where I is the identity matrix.
These partial derivatives, ∂f(i)/∂Y and ∂f(i)/∂λ, which define F(i) and G(i), are evaluated at Ŷ(i)(tk)=Ȳ(i)(k|k) with λ at its known mean. The time-updated state estimate and associated matrices are then:
Ŷ(i)(k+1|k)=Ŷ(i)(tk+1) (22)
M(i)(k+1|k)=F(i)M̄(i)(k|k)F(i)′ (23)
D(i)(k+1|k)=F(i)D̄(i)(k|k)+G(i) (24)
E(i)(k+1|k)=F(i)Ē(i)(k|k) (25)
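A compact sketch of this model-conditioned time update per equations (22)-(25), using a hypothetical 1-D constant-velocity model for F and an assumed acceleration-input coefficient G; all numerical values are illustrative:

```python
import numpy as np

dt = 1.0
# Hypothetical 1-D constant-velocity model: state [position, velocity], with the
# un-modeled parameter lambda = acceleration bounded through its covariance Lambda.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
G = np.array([[0.5 * dt ** 2],
              [dt]])

# Mixed quantities from Step 1 (illustrative values).
Y_bar = np.array([100.0, 10.0])
M_bar = np.eye(2) * 4.0
D_bar = np.zeros((2, 1))
E_bar = np.zeros((2, 1))

# Equations (22)-(25): extrapolate state, noise covariance, and bias coefficients.
Y_pred = F @ Y_bar                       # (22), specialized to linear dynamics
M_pred = F @ M_bar @ F.T                 # (23)
D_pred = F @ D_bar + G                   # (24)
E_pred = F @ E_bar                       # (25)
print(Y_pred, D_pred.ravel())
```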
Next, the gain K(i) for the ith filter is computed:
P(i)(k+1|k)=M(i)(k+1|k)+D(i)(k+1|k)ΛD(i)(k+1|k)′ (26)
V(i)=HE(i)(k+1|k)+J (27)
U(i)=P(i)(k+1|k)H′+E(i)(k+1|k)BV(i)′ (28)
Q(i)=HP(i)(k+1|k)H′+V(i)BV(i)′+N (29)
K(i)=U(i)(Q(i))−1 (30)
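Continuing the same hypothetical 1-D example, the gain computation of equations (26)-(30) might be sketched as follows; the matrix values are illustrative only:

```python
import numpy as np

# Predicted quantities from the time update (illustrative values).
M_pred = np.array([[8.0, 4.0], [4.0, 4.0]])
D_pred = np.array([[0.5], [1.0]])
E_pred = np.zeros((2, 1))

H = np.array([[1.0, 0.0]])       # position-only measurement selector
J = np.array([[1.0]])            # measurement bias enters the position measurement
Lam = np.array([[9.0]])          # covariance bounding the un-modeled acceleration
B = np.array([[4.0]])            # covariance bounding the measurement bias
N = np.array([[25.0]])           # white measurement noise covariance

P = M_pred + D_pred @ Lam @ D_pred.T          # (26)
V = H @ E_pred + J                            # (27)
U = P @ H.T + E_pred @ B @ V.T                # (28)
Q = H @ P @ H.T + V @ B @ V.T + N             # (29)
K = U @ np.linalg.inv(Q)                      # (30)
print(K.ravel())
```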
Finally, the estimated state vector and associated matrices are measurement updated as follows:
ζ(i)(k+1)=z(k+1)−HŶ(i)(k+1|k) (31)
Ŷ(i)(k+1|k+1)=Ŷ(i)(k+1|k)+K(i)ζ(i)(k+1) (32)
L(i)=Ip×p−K(i)H (33)
D(i)(k+1|k+1)=L(i)D(i)(k+1|k) (34)
E(i)(k+1|k+1)=E(i)(k+1|k)−K(i)V(i) (35)
M(i)(k+1|k+1)=L(i)M(i)(k+1|k)L(i)′+K(i)NK(i)′ (36)
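The measurement update of equations (31)-(36) can be sketched in the same illustrative setting; K, H, V, and the predicted quantities below are hypothetical values of the kind produced by the gain sketch above:

```python
import numpy as np

# Quantities carried over from the prediction/gain sketch (illustrative).
Y_pred = np.array([110.0, 10.0])
M_pred = np.array([[8.0, 4.0], [4.0, 4.0]])
D_pred = np.array([[0.5], [1.0]])
E_pred = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
V = np.array([[1.0]])
N = np.array([[25.0]])
K = np.array([[0.6], [0.2]])

z = np.array([118.0])                               # new measurement z(k+1)
zeta = z - H @ Y_pred                               # (31) residual
Y_upd = Y_pred + (K @ zeta.reshape(-1, 1)).ravel()  # (32)
L = np.eye(2) - K @ H                               # (33)
D_upd = L @ D_pred                                  # (34)
E_upd = E_pred - K @ V                              # (35)
M_upd = L @ M_pred @ L.T + K @ N @ K.T              # (36)
print(Y_upd)
```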
Once the next-iteration target state estimates are produced (i.e. each model updated), new model probabilities (equation 39) are computed according to a model likelihood function (equation 38).
Step 3: Model Probability Update (the Notation |A| Denotes the Determinant of the Matrix A)
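The specific forms of the likelihood function (equation 38) and probability update (equation 39) are not reproduced above; the following is offered only as an assumed sketch of a common IMM formulation, in which a Gaussian likelihood of each model's residual is evaluated and the probabilities are renormalized:

```python
import numpy as np

def likelihood(zeta, Q):
    """Gaussian likelihood of residual zeta with residual covariance Q;
    |Q| denotes the determinant of Q (see the heading above)."""
    q = zeta.size
    return float(np.exp(-0.5 * zeta @ np.linalg.solve(Q, zeta))
                 / np.sqrt(((2.0 * np.pi) ** q) * np.linalg.det(Q)))

# Illustrative residuals and residual covariances from two elemental filters.
zetas = [np.array([2.0]), np.array([9.0])]
Qs = [np.array([[36.0]]), np.array([[36.0]])]
mu_prev = np.array([0.6, 0.4])            # model probabilities from the prior cycle

liks = np.array([likelihood(z, Q) for z, Q in zip(zetas, Qs)])
mu_new = liks * mu_prev                   # assumed form; some IMM variants use the
mu_new /= mu_new.sum()                    # Markov-predicted probabilities here instead
print(mu_new)
```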
Finally, overall state estimates, covariance of the state estimation error, and bias coefficients for each of the models may be combined using the updated model probability weights computed above.
Step 4: Overall Estimate and Covariance
δ(i)=Ŷ(i)(k+1|k+1)−Ŷ(k+1|k+1) (41)
R(k+1|k+1)=S(k+1|k+1)−D(k+1|k+1)ΛD(k+1|k+1)′ (45)
In one embodiment of the present invention, the outputs to the user of the algorithm are the state estimate Ŷ(k+1|k+1) (equation (40)), the matrix of combined model bias coefficients D(k+1|k+1) (equation (43)) and the covariance matrix R(k+1|k+1) (equation (45)) containing measurement noise and bias effects only, and the model probabilities μ(i)(k+1) (equation (39)). Note that μ(i)(k+1), S(i)(k+1|k+1), D(i)(k+1|k+1), and E(i)(k+1|k+1) are maintained (i.e. stored) and used as an input to the next computation cycle. However, Ŷ(k+1|k+1), D(k+1|k+1), and R(k+1|k+1) are not utilized in subsequent cycles.
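Finally, a hedged sketch of the Step 4 combination: only equations (41) and (45) appear explicitly above, so the probability-weighted sums shown for the remaining quantities are standard-IMM assumptions rather than quotations of equations (40) and (42)-(44):

```python
import numpy as np

# Model-conditioned outputs of two elemental ORSE filters (illustrative values).
mu = np.array([0.8, 0.2])
Y_i = [np.array([100.0, 10.0]), np.array([104.0, 14.0])]
S_i = [np.eye(2) * 4.0, np.eye(2) * 6.0]
D_i = [np.array([[0.5], [1.0]]), np.array([[0.6], [1.2]])]
Lam = np.array([[9.0]])

# Probability-weighted overall state estimate (assumed form of equation (40)).
Y = sum(m * y for m, y in zip(mu, Y_i))

# (41): per-model deviation from the overall estimate.
deltas = [y - Y for y in Y_i]

# Assumed forms of (42)-(44): weighted total covariance (with spread-of-means
# term) and weighted bias coefficients.
S = sum(m * (Si + np.outer(d, d)) for m, Si, d in zip(mu, S_i, deltas))
D = sum(m * Di for m, Di in zip(mu, D_i))

# (45): covariance containing measurement noise and measurement bias effects only.
R = S - D @ Lam @ D.T

# Reported to the user: Y, D, R, and the model probabilities; the per-model
# quantities (not Y, D, R) are what is stored for the next computation cycle.
print(Y, np.diag(R))
```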
Filtering based on the ORSE designs described herein provides consistency and optimality previously lacking in tracking filters, thus promoting better filter performance in the presence of model variations and biased measurements. IMM architectures using elemental ORSE filters instead of Kalman filters therefore bring the consistent and optimal properties of ORSE into adaptive state estimation.
Processing system 22 (
While the foregoing invention has been described with reference to the above-described embodiment, various modifications and changes can be made without departing from the spirit of the invention. Accordingly, all such modifications and changes are considered to be within the scope of the appended claims. Accordingly, the specification and the drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
This invention was made with Government support under Contract N00024-03-C-6110 awarded by the Department of the Navy. The Government has certain rights in this invention.
Number | Name | Date | Kind |
---|---|---|---|
5325098 | Blair et al. | Jun 1994 | A |
6226409 | Cham et al. | May 2001 | B1 |
6278401 | Wigren | Aug 2001 | B1 |
7009554 | Mookerjee et al. | Mar 2006 | B1 |
7180443 | Mookerjee et al. | Feb 2007 | B1 |
7277047 | Mookerjee et al. | Oct 2007 | B1 |
7705780 | Khoury | Apr 2010 | B1 |
7719461 | Mookerjee et al. | May 2010 | B1 |
8886394 | Noonan | Nov 2014 | B2 |
20050128138 | McCabe et al. | Jun 2005 | A1 |
20080120031 | Rosenfeld et al. | May 2008 | A1 |
20090231183 | Nettleton et al. | Sep 2009 | A1 |
Entry |
---|
Y. Bar-Shalom, M. Mallick, H. Chen, and R. Washburn, “One-Step Solution for the General Out-of-Sequence-Measurement Problem in Tracking,” Proceedings of 2002 IEEE Aerospace Conference Proceedings, vol. 4, pp. 1551-1559, 2002. |
G.J. Portmann, J.R. Moore, and W.G. Bath, “Separated Covariance Filtering,” Record of the IEEE 1990 International Radar Conference, 1990, pp. 456-460. |
Y. Bar-Shalom, “Update with Out-of-Sequence Measurements in Tracking: Exact Solution,” IEEE Transactions on Aerospace and Electronic Systems, pp. 769-778, vol. AES-38, No. 3, Jul. 2002. |
E. Mazor, A. Averbuch, Y. Bar-Shalom, and J. Dayan, “Interacting Multiple Model Methods in Target Tracking: A Survey,” IEEE Transactions on Aerospace and Electronic Systems, 34, 1 (Jan. 1998), 103-123. |
W.D. Blair and Y. Bar-Shalom, “Tracking Maneuvering Targets with Multiple Sensors: Does More Data Always Mean Better Estimates?” IEEE Transactions on Aerospace and Electronic Systems, pp. 450-456, vol. AES-32, No. 1, Jan. 1996. |
X.R. Li and V.P. Jilkov, “Survey of Maneuvering Target Tracking—Part V: Multiple-Model Methods,” IEEE Transactions on Aerospace and Electronic Systems, 41, 4 (Oct. 2005), 1255-1321. |