The present disclosure relates to imaging systems, and more particularly to detection of real world physical objects in images.
In imaging to ascertain what objects are within a given geographic region, analysts traditionally had to review images manually and identify the objects in them. For example, a traditional analyst could review photographs of the ground taken from aircraft or satellite borne imaging systems. With the advent of Automatic Target Recognition (ATR) algorithms, which in other applications may take the form of facial recognition, for example, automated systems could reduce the workload of the analyst by pre-identifying certain objects in the images. For example, the analyst could use ATR on a set of images to obtain a count of vehicles from the images of a given geographic region. In the event that the confidence level in the ATR results was low, and the need for a high confidence level was compelling, the analyst could direct physical assets to obtain more data, e.g. more or better images, of the geographic region. The additional data could be used to raise the confidence level in the ATR.
The conventional techniques have been considered satisfactory for their intended purpose. However, there is an ever present need for improved target recognition systems and methods. This disclosure provides a solution for this need.
A method includes receiving a directive from a user to find an object in a geographical area, wherein the object is identified with an input label selected from a set of labels, obtaining sensor data in response to the directive for a real world physical object in the geographical area using one or more sensors, processing the sensor data with a plurality of automatic target recognition (ATR) algorithms to assign a respective ATR label from the set of labels and a respective confidence level to the real world physical object, and receiving modeled relationships within the set of labels using a probabilistic model based on a priori knowledge encoded in a set of model parameters. The method includes inferring an updated confidence level that the real world physical object actually corresponds to the input label based on the ATR labels and confidences and based on the probabilistic model.
The directive can include a desired confidence level for the input label and the method can include comparing the desired confidence level to the updated confidence level. In the event that the updated confidence level is below the desired confidence level, the method can include directing or redirecting one or more physical assets to obtain further sensor data of the real world physical object. Obtaining, processing, receiving modeled relationships, and directing or redirecting can be performed by a non-human system to assist a human analyst. Directing or redirecting one or more physical assets can include surveillance activities such as following movement of the real world physical object. Directing or redirecting one or more physical assets can include moving an imaging device on a gimbal, routing an aircraft, moving a forward observer on the ground, and/or routing or controlling a space borne sensor system. The method can include, in the event that the updated confidence level is above the desired confidence level, targeting the real world physical object with a munition.
Modeling can include transforming a taxonomy tree of the set of labels into a complete graph with forward and reverse links between siblings and parents in the taxonomy tree. Inferring can include forming a reduced tree with the input label as a root and including all the ATR labels stemming from the root and intervening labels from the complete graph that are along the shortest paths between the respective ATR labels and the input label on the complete graph. The updated confidence of the input label can be inferred recursively by traversing the respective shortest paths from the ATR labels to the input label, wherein the confidences of all the intermediate labels in each respective shortest path are computed.
The method can include updating the set of model parameters based on feedback received from field observations to improve prediction capabilities. The set of model parameters can be computed using relative abundances of objects corresponding to the set of labels in a given geography. Obtaining sensor data can include obtaining sensor data that pre-existed the directive. It is also contemplated that obtaining sensor data can include obtaining sensor data that did not pre-exist the directive. It is contemplated that there can be more than one instance of the real world physical object in the geographical area and in the sensor data, wherein processing, modeling, and inferring are performed for each instance of the real world physical object. The probabilistic model can be a Markov Random Field (MRF). The modeled relationships within the set of labels using a probabilistic model can be established a priori before receiving the directive from the user.
A system includes an input device, an output device, and a processing device operatively connected to receive input from the input device and to provide output on the output device. The system also includes machine readable instructions in the processing device configured to cause the processing device to perform a method as disclosed above, including receiving input on the input device including a directive from a user as explained above and outputting information on the output device to the user indicative of the updated confidence level.
The processing device can be operatively connected to a network of physical assets, wherein the directive includes a desired confidence level for the input label and wherein the machine readable instructions further cause the processing device to compare the desired confidence level to the updated confidence level, and in the event that the updated confidence level is below the desired confidence level, direct or redirect one or more of the physical assets to obtain further sensor data of the real world physical object.
These and other features of the systems and methods of the subject disclosure will become more readily apparent to those skilled in the art from the following detailed description of the preferred embodiments taken in conjunction with the drawings.
So that those skilled in the art to which the subject disclosure appertains will readily understand how to make and use the devices and methods of the subject disclosure without undue experimentation, preferred embodiments thereof will be described in detail herein below with reference to certain figures, wherein:
Reference will now be made to the drawings wherein like reference numerals identify similar structural features or aspects of the subject disclosure. For purposes of explanation and illustration, and not limitation, a partial view of an embodiment of a portion of a system in accordance with the disclosure is shown in
Target taxonomy can be very useful for the purposes of target classification and identification from sensed data. In the context of target recognition, a target taxonomy is a hierarchical grouping of objects of interest according to their similarities and differences in features.
Although the taxonomy is useful to an analyst from an organizational point of view, the benefits of this type of data structure go further if applied to the area of Automatic Target Recognition (ATR). ATR algorithms typically extract relevant features from sensed data to help them identify/classify the object. If the taxonomy organizes the objects such that coarser features are sufficient to assign objects to the upper layers of the taxonomy (“classify”) and finer features are required to assign them to lower layers (“identify”), then the quality of the sensed data in terms of resolution and signal-to-noise ratio (SNR) dictates the finest level of the taxonomy that an ATR can possibly assign to an object. For example, if the resolution/SNR in sensed imagery is such that we can only make out a rough outline of the object, then we may be able to only classify it at the penultimate level of the tree shown in
In the context of an autonomous system that has access to a number of sensors and platforms operating in different modalities (SAR, EO, IR, MSI, HSI, FMV, etc.) with different imaging conditions and stand-off ranges, the quality of sensed data may vary continuously along with the ATR classification abilities. Consequently, we may have ATR detects on the same physical object classified with different labels of the taxonomy tree. Fusing ATR hits that have different labels but nevertheless correspond to the same physical object can be advantageous in two respects: First, the confidence of the detected object can be significantly improved by merging independent data from different modalities to quickly beat down the uncertainty inherent in individual ATR detects. Second, it allows us to infer target attributes that any individual sensor may be incapable of sensing. For example, a high frame-rate video sensor may not have the spatial resolution to uniquely identify the object but can give us a coarse category and an accurate estimate of the speed and heading of the object. A subsequent single-frame collect by a high-resolution EO sensor can identify the object but provides no information regarding motion. Combining the two detects across the taxonomy helps us to both uniquely identify the object and accurately determine its speed and heading.
To facilitate the fusion of ATR detects across the taxonomy, we need a probabilistic model of how the different labels in a taxonomy relate to each other. Such a model captures the relative distribution of the physical objects in a particular theater of operation and represents our a priori information even before we collect any data. The taxonomy provides a graphical structure for this model and can be invaluable in not only the representation of the model but also in inference, when the model is employed, and the learning of the model parameters both from data provided by human experts and/or data collected in the field.
In this disclosure, we formulate a probabilistic model of labels as a Markov Random Field (MRF) that has a very compact representation. The graphical structure of the MRF is adapted from the hierarchical structure of the taxonomy, which in addition to the conditional independence properties of the MRF also encodes the constraints that all the children are subsets of the parent node and mutually exclusive from each other. We derive efficient recursive algorithms for inference and fusion of ATR detections that scale linearly with the distance between the detected node and the desired node in the taxonomy tree. The method bears similarity to belief propagation in Bayesian networks but differs in the details as the algorithms developed for Bayesian networks are not directly applicable in our scenario (children of any node in the taxonomy are not conditionally independent given the parent node.) Our method can be regarded as a special case of sum-product message passing in clique trees. The specialization realizes efficiencies by directly constructing marginals on the nodes of the clique tree using the mutually exclusive property of the children rather than computing them. Further savings are realized by pre-computing and storing single node and pair-wise marginals to facilitate low overhead message passing during inference.
The disclosure is organized as follows: Section 2 formulates the label model that encodes the a priori information and extends it to include data provided by the ATR. Section 3 derives the inference and data fusion algorithms. The treatment proceeds by deriving the algorithm for a single ATR detect and then using it as a building block for the more complex fusion of multiple ATR detects. Section 4 shows a practical application of the algorithms on a toy model as well as a more realistic taxonomic model and illustrates the ability of the algorithms to cope with conflicting ATR data. Finally, Section 5 provides some concluding remarks.
In this section, we formulate a probabilistic model for labels in a taxonomy. The model encodes the relative occurrences of labels and their relationship with each other. This represents our a priori information and will enable us to perform inference on any label of the taxonomy given data collected in the field on some other label(s).
Given these multiple label assignments and their associated confidences, the problem is to compute the confidence of any user specified label that can be assigned to the object in a given taxonomy. Note that the query label may or may not be present in the assignment set given to us. For a query label already in the assignment set, the computed confidence represents our updated belief that this label can be assigned to this object. This updated confidence may be higher or lower than the one received from the ATR. Essentially, the addition of any label to assignment set alters our current belief of the true identity of the world object and we answer queries based on this evolving belief. For the example shown in
To answer queries of the type specified above, we need a model of all the labels in the taxonomy and how they relate to one another. Since a physical object in the world is assigned a subset of labels from the taxonomy, we model the labels as binary random variables that take on values in the set {1,0}, where the two values denote presence and absence of the label in the assignment set respectively. Let {l1, . . . , lnl} denote the set of binary random variables corresponding to the nl labels of the taxonomy.
Fortunately, the hierarchical structure of the taxonomy provides a map of conditional independence among the sets of labels, obviating the need for a full specification that spans all label combinations. Furthermore, the mutual exclusivity of the children of any node in the taxonomy makes the probability of a large number of label combinations trivially zero. For example, a physical T72 on the ground will have the associated labels of “T72”, “MainBattleTank”, “Military”, and “Ground” as seen in
To encode the properties of conditional independence and mutual exclusivity of the children nodes implied by a target taxonomy, we model the labels as a Markov Random Field (MRF) on an undirected graph G={V, E} with nodes V and edges E. The graph G is derived from the target taxonomy by retaining all the original edges of the tree and then augmenting it with a set of undirected edges between all pairs of children of any node of the taxonomy.
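For illustration, the following Python sketch builds such a graph from a small hypothetical taxonomy; the adjacency-list representation, the label names, and the function name are illustrative assumptions rather than part of the disclosed system.

from itertools import combinations

# Hypothetical toy taxonomy: each node maps to the list of its children.
taxonomy = {
    "Ground": ["Military", "Civilian"],
    "Military": ["MainBattleTank", "APC"],
    "MainBattleTank": ["T72", "M1A1"],
    "Civilian": [], "APC": [], "T72": [], "M1A1": [],
}

def build_mrf_graph(tree):
    # Return the undirected edge set: the original tree edges plus an edge
    # between every pair of children of the same node (the sibling cliques).
    edges = set()
    for parent, children in tree.items():
        for child in children:
            edges.add(frozenset((parent, child)))      # original tree edge
        for a, b in combinations(children, 2):
            edges.add(frozenset((a, b)))               # augmenting sibling edge
    return edges

print(sorted(tuple(sorted(e)) for e in build_mrf_graph(taxonomy)))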
Let 𝒞(G) denote the set of all maximal cliques in the graph G. For each clique C∈𝒞(G), let ϕC({li:i∈C}) be a non-negative, real-valued potential function that encodes the favorability of different combinations of assignments to the labels in the clique. Then the joint distribution for the labels of the MRF factorizes as follows

P(l1, . . . ,lnl)=(1/Z)ΠC∈𝒞(G) ϕC({li:i∈C}), (1)
where Z is the normalization constant known as the partition function and is given as

Z=Σl1, . . . ,lnl ΠC∈𝒞(G) ϕC({li:i∈C}). (2)
P(l1, . . . ,l11|l3)=P(l1,l2,l4,l6|l3)P(l7,l8,l9,l10,l11|l3). (3)
This property holds for all nodes of the graph. If 𝒮(m) denotes the set of indices of all the nodes of the child sub-tree of node lm, and 𝒮̄(m) denotes the indices of the remaining nodes, then

P({li:i∈𝒮(m)},{li:i∈𝒮̄(m)}|lm)=P({li:i∈𝒮(m)}|lm) P({li:i∈𝒮̄(m)}|lm), ∀m∈V (4)
In other words, each node partitions the graph into two subsets that are conditionally independent.
The MRF assumption dramatically reduces the parameterization complexity of the model.
As seen from Eq. 1, the number of variables directly interacting has been reduced from nl to the maximum number of children of any particular node. The latter is much smaller but can still be large in the worst case. We can reduce this further by formulating the potential function to encode the mutual exclusivity property. Note that the number of maximal cliques is equal to the number of nodes with children. Let 𝒫 denote the set of indices of all parent nodes and C(k) denote the set of indices of the children of node k∈𝒫. We choose the potential function to be the conditional probability of all the children given the parent. This choice not only satisfies the requirement for the potential function but also makes the computation of the partition function Z given by Eq. (2) trivial. This is important because the partition function in general is itself intractable to compute if the number of nodes is large. Substituting this choice of the potential function in Eq. 1, we obtain

P(l1, . . . ,lnl)=P(lr) Πk∈𝒫 P({lm:m∈C(k)}|lk), (5)
where the index r denotes the root node of the graph G. We have added the factor P(lr) as part of the potential function of the clique corresponding to the root node. This encodes the a priori probability of the root node, which should be 1 given that the object can be labeled using one of the nodes of the given taxonomy. Otherwise, it represents the chance that the object falls in this tree if multiple taxonomic trees are being considered. The partition function Z for this factorization is trivially 1 since it can be shown that

Σl1, . . . ,lnl P(lr) Πk∈𝒫 P({lm:m∈C(k)}|lk)=1. (6)
The proof involves performing the summation from the bottom of the tree and moving upwards. The summation operation can be moved inwards in the expression above for each set of leaf nodes until it operates just on the conditional probability of the leaf nodes given their parents. These sum to 1 by construction, which reduces the size of the tree by one level from the bottom. Applying the same operation repeatedly on the smaller sub-tree finally removes all layers of the tree and the result is unity.
The mutually exclusive property of the children is now explicitly encoded by specifying the conditional probability of the children given that the parent is present as follows

P({lm:m∈C(k)}|lk=1) = P(lm=1|lk=1), if lm=1 and lj=0 for all j∈C(k), j≠m;
P({lm:m∈C(k)}|lk=1) = 1−Σm∈C(k) P(lm=1|lk=1), if lm=0 for all m∈C(k);
P({lm:m∈C(k)}|lk=1) = 0, otherwise. (7)
Note that assignments where only one label is present and all other labels are absent are given a non-zero probability (first condition in Eq. (7)). All other assignments have zero probability. The assignment of all labels to absent can also have non-zero probability (second condition in Eq. (7)) if the conditional probabilities of all the children do not sum to 1, i.e., Σm∈C(k) P(lm=1|lk=1)<1. This models the scenario that there are other labels that can be assigned to the object but they are not explicitly modeled. The conditional probability of the children given that the parent is absent is trivially given by

P({lm:m∈C(k)}|lk=0) = 1, if lm=0 for all m∈C(k);
P({lm:m∈C(k)}|lk=0) = 0, otherwise. (8)
Clearly, no assignment can have any of the children to be present in this case. The remaining case of all labels absent is guaranteed.
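The following short Python sketch enumerates the clique table implied by Eqs. (7) and (8) for a hypothetical parent with two mutually exclusive children and verifies that each conditional distribution sums to one; the labels and probabilities are illustrative only.

from itertools import product

p_child = {"A": 0.5, "B": 0.3}   # P(l_A=1 | parent=1), P(l_B=1 | parent=1); 0.2 left for "other"

def clique_probability(assign, parent_present):
    # P(l_A, l_B | l_parent) following Eqs. (7) and (8).
    present = [c for c, v in assign.items() if v == 1]
    if not parent_present:                     # Eq. (8): no child can be present
        return 1.0 if not present else 0.0
    if len(present) == 1:                      # exactly one child present
        return p_child[present[0]]
    if len(present) == 0:                      # the "other" case
        return 1.0 - sum(p_child.values())
    return 0.0                                 # mutual exclusivity: two children never co-occur

for parent_present in (1, 0):
    total = sum(clique_probability(dict(zip("AB", vals)), parent_present)
                for vals in product((0, 1), repeat=2))
    print(parent_present, total)   # both totals are 1.0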
With these definitions, the entire model can be specified by just the conditional probability that a node is present given that its parent is present, P(lm=1|lp(m)=1) for all m∈V\r, where p(m) denotes the parent of m. There are nl−1 of these conditional probabilities, one for each node in the tree except for the root node. There is also the a priori probability of the root node P(lr). So we have successfully reduced the number of parameters required from 2^nl−1 for the original unconstrained model to just nl.
Both Eqs. (7) and (8) can be compactly written as
along with the following constraints

Σm∈C(k) P(lm=1|lk=1)≤1, ∀k∈𝒫 (10)
P(lm=1|lp(m)=0)=0, ∀m∈V\r. (11)
The first inequality, given by Eq. (10), allows room for an “other” category in the children of any node if it is strictly less than one. We can sweep into this “other” category all the labels that are in the theater of operations but that we are not interested in modeling or that the ATRs are not capable of detecting. The second constraint, given by Eq. (11), ensures that all the children labels are subsets of the parent label, so if the parent label is absent, none of the children labels can be present.
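As a minimal sketch, the model parameters could be stored as a dictionary of child-given-parent probabilities together with the root prior, and the constraint of Eq. (10) checked directly; the data layout, labels, and numbers below are assumptions made for illustration.

taxonomy = {"Ground": ["Military", "Civilian"], "Military": ["MainBattleTank", "APC"],
            "MainBattleTank": ["T72", "M1A1"], "Civilian": [], "APC": [], "T72": [], "M1A1": []}
cond_prob = {  # P(l_child = 1 | l_parent = 1) for every non-root node
    ("Military", "Ground"): 0.3, ("Civilian", "Ground"): 0.6,     # 0.1 left for an "other" class
    ("MainBattleTank", "Military"): 0.5, ("APC", "Military"): 0.5,
    ("T72", "MainBattleTank"): 0.7, ("M1A1", "MainBattleTank"): 0.3,
}
root_prior = 1.0  # the object is assumed to be describable by this taxonomy

def check_constraints(tree, cond_prob):
    # Eq. (10): the children probabilities of each parent sum to at most 1.
    for parent, children in tree.items():
        assert sum(cond_prob[(c, parent)] for c in children) <= 1.0 + 1e-9
    # Eq. (11) is enforced structurally by taking P(l_child = 1 | l_parent = 0) = 0.

check_constraints(taxonomy, cond_prob)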
2.1 Label Model Augmented with Data
We now have a probabilistic model for the labels that encodes our a priori information. In this section, we augment this model when data is observed.
Let d denote the raw data collected by a sensor. The ATRs operate on this raw data from one or more sensors to produce detections tagged with one of the labels in the taxonomy. Let 𝒟 denote the set of indices of the labels that have been detected in the data. For each detected label, the ATR returns the likelihood of that label, P(dk|lk), i.e., the probability of observing the data d given that the object 102 has the label lk. Let nd denote the number of detected labels. For convenience of notation, we have used the subscript k on the raw data d to denote the piece of data that was used for detecting label lk. The raw data could either be a single image/signal or multiple images/signals. In both cases, multiple detects can be made at the same physical location with different labels. Each of these detections can be regarded as a new data collect, hence the subscript k on the data.
The joint probability of all the labels and the data is given as

P(l1, . . . ,lnl,d1, . . . ,dnd)=P(d1, . . . ,dnd|l1, . . . ,lnl) P(l1, . . . ,lnl) (12)
=Πk∈𝒟 P(dk|lk) P(l1, . . . ,lnl) (13)
=Πk∈𝒟 P(dk|lk) P(lr) Πk∈𝒫 P({lm:m∈C(k)}|lk), (14)
where the second line assumes that the nd pieces of data are independent given the labels and third line uses the model from Eq. (5). Note that this is a good approximation even for multiple detects (each with a different label) on a single collection, i.e., the same portion of the data running through different ATR algorithms, since the noise in the detections is typically dominated by the model mismatch that the ATR employs for each label and not the measurement noise.
As seen previously, the model parameters comprise nl−1 conditional probabilities, namely, P(lm=1|lp(m)=1), ∀m∈V\r, and the a priori probability of the root, P(lr). These parameters are quite intuitive and can be elicited from expert knowledge for a particular theater of operation (TO). Alternatively, they can be computed from the relative frequency of occurrence of objects of different labels in a particular TO. Let N denote the total number of objects in a TO that can potentially be detected by ATRs. Let Nm denote the number of objects in this TO that can be assigned the label lm. Then using the frequentist interpretation of probability, we obtain

P(lm=1|lp(m)=1)=Nm/Np(m), ∀m∈V\r (15)
P(lr=1)=Nr/N. (16)
The estimates provided by Eqs. (15) and (16) can also be employed for a dynamic update to the model in the field once ground truth data is provided on detections produced by the system. So we initialize the model with best estimates of N and {Nm:m∈V} for a particular TO. Then at a later time, we are provided confirmation that an object in the TO has been identified by other independent means (such as a visual confirmation by personnel in the field) and the object can be assigned labels in a set 𝒜. Using this information, the number of objects for each label is updated as follows
N←N+1 (17)
Nm←Nm+1, ∀m∈𝒜 (18)
The updated object counts are used in Eqs. (15) and (16) to obtain an updated label model. In this manner, the label model evolves and becomes more accurate over time as feedback is provided from the field.
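A minimal Python sketch of this counting-based estimation and online update, assuming simple count dictionaries and illustrative label names (the estimates follow the spirit of Eqs. (15)-(18)), is given below.

# Hypothetical object counts in a theater of operation; labels and numbers are illustrative.
counts = {"Ground": 1000, "Military": 300, "Civilian": 600,
          "MainBattleTank": 150, "APC": 150, "T72": 100, "M1A1": 50}
parent_of = {"Military": "Ground", "Civilian": "Ground",
             "MainBattleTank": "Military", "APC": "Military",
             "T72": "MainBattleTank", "M1A1": "MainBattleTank"}
N = 1200  # total number of detectable objects in the theater, including unmodeled ones

def estimate_parameters(counts, parent_of, N, root="Ground"):
    # Frequentist estimates in the spirit of Eqs. (15)-(16).
    cond_prob = {(m, p): counts[m] / counts[p] for m, p in parent_of.items()}
    root_prior = counts[root] / N
    return cond_prob, root_prior

def confirm_observation(counts, labels, N):
    # Online update in the spirit of Eqs. (17)-(18): one field-confirmed object
    # that carries all the labels in the given set.
    for m in labels:
        counts[m] += 1
    return counts, N + 1

cond_prob, root_prior = estimate_parameters(counts, parent_of, N)
counts, N = confirm_observation(counts, ["T72", "MainBattleTank", "Military", "Ground"], N)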
Armed with the label model, we are now in a position to infer probabilities that were not directly measured. For instance, if a measurement is made on lk by an ATR detect, P(dk|lk), we can use the model to infer P(dk|lm) for all m∈V. Here we are leveraging the power of the a priori model that encodes the relative distribution of the labels. For example, if we get an ATR detect on “MainBattleTank” in the taxonomy shown in
The model can also be employed to perform data fusion over multiple collects or multiple ATR algorithms on a single collect. So if nd pieces of data are collected and we are given P(dk|lk) for all k∈𝒟, we can infer the fused probability P(d1, . . . ,dnd|lm) for all m∈V. This is extremely powerful as the independent collects can quickly beat down the uncertainty and increase/decrease our confidence on any label of interest in the taxonomy.
The fused probability of the data given any label is obtained simply by marginalizing out all the other labels from the joint probability given by Eq. (14) and an appropriate normalization

P(d1, . . . ,dnd|lm)=(1/P(lm)) Σ{lk:k∈V\m} P(l1, . . . ,lnl,d1, . . . ,dnd). (20)
Although simple to derive, the expression in Eq. (20) is formidable to compute directly and becomes quickly intractable as nl grows. The number of operations required to compute this fused probability is O(2^nl).
As we will see in the subsequent sections, the fast algorithm for computing the fused data probability requires the use of the conditional probabilities associated with all edges and the a priori probability of all the nodes in the graph. These are used over and over again and it is most efficient to compute these once at start up and store them in the graph for future use. We derive an efficient recursive algorithm for computing these quantities in this section.
Let p(m) denote the parent node of node m. Then the a priori probability of any node m is easy to compute given the a priori probability of its parent using

P(lm)=Σlp(m) P(lm|lp(m)) P(lp(m)). (21)
Since P(lm|lp(m)) is a known model parameter, all the a priori probabilities can be computed recursively by starting at the root node of the graph and traversing the children nodes. As each sub-tree of the graph is visited, the a priori probability of the root node is known and can be plugged in Eq. (21).
The model parameters give the conditional probability of the child given the parent. The inverse conditional probability, i.e., the conditional probability of the parent given the child can be computed using Bayes rule once all the a priori probabilities are known

P(lp(m)|lm)=P(lm|lp(m)) P(lp(m))/P(lm). (22)
Finally, the conditional probability between siblings is given as

P(li|lj)=(1/P(lj)) Σlk P(li,lj|lk) P(lk), where k=p(i)=p(j), (23)
where P(li, lj|lk), the joint conditional probability of the sibling pair can be computed by marginalizing out all the other siblings from Eq. (9) as follows

P(li,lj|lk)=Σ{lm:m∈C(k)\{i,j}} P({lm:m∈C(k)}|lk). (24)
Alternatively, we may directly compute it using the same logic as in Eq. (9)
The set of Eqs. (21)-(25) have to be executed in a certain order to ensure that all quantities required in their computation have previously been computed. Algorithm 1 shows a recursive routine P
Algorithm 1: Pre-computation of graph probabilities
Input: graph G; k, the index of the root node of the sub-graph
for all children m of node k do
  compute the a priori probability P(lm) using Eq. (21)
  compute the inverse conditional probability P(lk|lm) using Eq. (22)
  update the graph with the a priori and conditional probabilities
for all pairs of children (i, j) of node k do
  compute the sibling conditional probability P(li|lj) using Eq. (23)
  update the graph
for all children m of node k do
  recurse on the sub-graph rooted at m
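A Python sketch of this pre-computation pass, restricted for brevity to the a priori probabilities of Eq. (21), is shown below; the reverse (Eq. (22)) and sibling (Eq. (23)) conditionals can be filled in during the same traversal via Bayes rule. The data structures, labels, numbers, and function name are illustrative assumptions.

taxonomy = {"Ground": ["Military", "Civilian"], "Military": ["MainBattleTank"],
            "Civilian": [], "MainBattleTank": []}
cond_prob = {("Military", "Ground"): 0.3, ("Civilian", "Ground"): 0.6,
             ("MainBattleTank", "Military"): 0.5}

def precompute_priors(tree, cond_prob, root, root_prior=1.0):
    # Depth-first traversal from the root filling in P(l_m = 1) for every node.
    # Because P(child = 1 | parent = 0) = 0 (Eq. (11)), Eq. (21) reduces to a product.
    priors = {root: root_prior}
    def visit(node):
        for child in tree[node]:
            priors[child] = cond_prob[(child, node)] * priors[node]
            visit(child)
    visit(root)
    return priors

print(precompute_priors(taxonomy, cond_prob, "Ground"))
# {'Ground': 1.0, 'Military': 0.3, 'MainBattleTank': 0.15, 'Civilian': 0.6}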
After P
3.2 Inference with a Single Observation
This section considers the case when there is a single observation (nd=1). The algorithm in this section will be used as a building block for the more complicated case of fusing multiple observations nd>1.
Let j denote the node on which the observation is made. Then the inference task is to compute P(dj|lk) for all k∈V. First compute the shortest path between nodes j and k using a breadth-first search on the directed graph G. Let the set of nodes {j1→ . . . →jn} denote the n intermediate nodes in the path between j and k. Augment this path with the starting and ending nodes to obtain {j0→ . . . →jn+1}, where j0=j and jn+1=k. Then the conditional probability of the end node given the start node is given as
The direct computation of this quantity is O(2^n) and is not feasible. However, using the conditional independence of the MRF, the conditional probability can be computed in a recursive fashion as follows
which reduces this cost to O(n) and has a linear rather than exponential scaling with the length of the path.
Given the ability to quickly compute the conditional probability between any two nodes j and k, the inferred probability of the data on node k given the data on node j is
Substituting the recursion of Eq. (27) for the conditional probability, we obtain
Note that the sum-product in the recursion of Eq. (28) can be written as a vector-matrix product and makes for a more compact notation. Towards this end, let
denote a 2×2 matrix containing the conditional probability of lj given lk. Note the variable in the subscript/superscript spans the columns/rows of the matrix respectively. Similarly, let
pj=[P(d|lj=1) P(d|lj=0)] (30)
denote a 1×2 row vector of data probabilities. Note that we have omitted the subscript for the data d in this vector. This gives us the flexibility to use the same notation for the ATR detected probability and the inferred probability on any node, except for using a hat (ˆ) on the latter. For example, pj=P(dj|lj), where the subscripts on d and l match (ATR detect), and p̂j≡P(dk|lj), where the subscripts on d and l do not match (inferred probability). Using this notation, Eq. (28) can be rewritten as
The above matrix multiplication can be done by starting from the left and multiplying the matrices on the right one by one or starting from the right and multiplying the matrices on the left one by one. Even though the final result remains the same, the multiplication from the left is more computationally efficient than the one from the right. In the first case, a vector is propagated from the left to the right, whereas, a matrix is propagated from the right to the left until it is collapsed to a vector right at the very end. The former order of multiplication is a factor of 2 more efficient than the latter.
The expressions above give the propagation of the data likelihood P(dj|lj). What is more useful from a decision perspective is the posterior probability of the label given the data, P(lj|dj). Bayes rule allows us to convert between the two

P(lj|dj)=P(dj|lj) P(lj)/P(dj), where P(dj)=Σlj P(dj|lj) P(lj). (32)
This can be done as soon as the ATR detects are received and all inference can then be performed in the posterior space. We use
a column vector to denote the posterior probability. The inferred posterior probability is then given as
Note that the matrix multiplication order in Eq. (34) is reversed from that in Eq. (31). It looks as if Eq. (34) is the transpose of Eq. (31), but that is misleading since the matrix of conditional probabilities of lk given lj is not, in general, the transpose of the matrix of conditional probabilities of lj given lk.
Algorithm 2 captures the recursion derived above to obtain the inferred probability on any node of the graph. Note that the input to the routine InferProbability
Algorithm 2: InferProbability
Compute P(d|lk) given P(d|lj)
find the shortest path from node j to node k using a breadth-first search on the directed graph
for all nodes in the path (from j toward k) do
  propagate the probability vector across the edge using the stored conditional probabilities
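The propagation step of Algorithm 2 can be sketched in Python as a sequence of 1×2 by 2×2 vector-matrix products, assuming the conditional-probability matrices along the path have already been pre-computed and stored (as in Algorithm 1); the function and variable names are illustrative.

def infer_likelihood(path, cond_matrices, p_obs):
    # Propagate the data likelihood [P(d|l=1), P(d|l=0)] from the observed node
    # (path[0]) to the query node (path[-1]) by repeated 1x2-by-2x2 products,
    # i.e. the left-to-right order of multiplication described above.
    vec = list(p_obs)
    for a, b in zip(path, path[1:]):
        M = cond_matrices[(a, b)]  # M[r][c] = P(l_a = v[r] | l_b = v[c]) with v = (1, 0)
        vec = [vec[0] * M[0][0] + vec[1] * M[1][0],
               vec[0] * M[0][1] + vec[1] * M[1][1]]
    return vec  # [P(d | l_query = 1), P(d | l_query = 0)]

# Example: observation on a child node, query on its parent, with
# P(child = 1 | parent = 1) = 0.5 and P(child = 1 | parent = 0) = 0.
M = {("child", "parent"): [[0.5, 0.0], [0.5, 1.0]]}
print(infer_likelihood(["child", "parent"], M, [0.9, 0.2]))  # approximately [0.55, 0.2]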
3.3 Fusion with Multiple Observations
The general expression for the likelihood of all the observed data conditioned on a node m, as shown in Eq. (20), requires marginalization over all the nodes of the taxonomy. This expression can be simplified and the marginalization can be confined to a smaller subset of nodes. Towards this end, use the directed graph G to find the shortest path from node m to all nodes in the set 𝒟. From these paths, construct a tree T with node m as the root and the nodes in the set 𝒟 as leaves or parent nodes such that the shortest path from the set 𝒟 to node m in graph G is preserved in tree T. Let T(i) denote the subset of nodes that comprise the sub-tree with root node i of the overall tree T. Therefore T(m) denotes all the nodes in the tree T. Similarly, define
T𝒟(i)≡𝒟∩T(i), (35)

which denotes the subset of nodes in the sub-tree with root node i that belong to set 𝒟, i.e., nodes on which we have ATR detection data. Note that not all nodes in the tree T are in the set 𝒟 since there may be intermediate nodes in the shortest path between m and the set 𝒟. Consequently, T𝒟(i)⊆T(i).
T(4)={2,3,4,6,7,11}  T𝒟(4)={3,6,11}
T(2)={2,3,7,11}  T𝒟(2)={3,11}
T(3)={3,7,11}  T𝒟(3)={3,11}
T(7)={7,11}  T𝒟(7)={11}
T(11)={11}  T𝒟(11)={11}
T(6)={6}  T𝒟(6)={6}
Using these set definitions, divide all the nodes in the taxonomy, V, into two sets, namely, T(m) and T̄(m)=V\T(m). Then Eq. (20) can be simplified as follows
to only consider nodes in the set T(m).
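For illustration, the tree T can be assembled from breadth-first shortest paths as in the following Python sketch; the adjacency-list representation and the names are assumptions made for this example.

from collections import deque

def shortest_path(adj, start, goal):
    # Breadth-first search returning the node sequence from start to goal.
    prev, frontier, seen = {start: None}, deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            break
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                frontier.append(nxt)
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def build_fusion_tree(adj, m, detected):
    # Child lists of the tree T rooted at m that preserves the shortest paths
    # from m to every detected node.
    children = {}
    for k in detected:
        path = shortest_path(adj, m, k)
        for parent, child in zip(path, path[1:]):
            children.setdefault(parent, set()).add(child)
    return children

adj = {"m": ["a", "b"], "a": ["m", "c"], "b": ["m"], "c": ["a"]}
print(build_fusion_tree(adj, "m", ["c", "b"]))  # e.g. {'m': {'a', 'b'}, 'a': {'c'}}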
Before proceeding further, we will need to define a few more sets that will be useful in reducing Eq. (37) to a recursion. Recall that C(m) is the set of all children of node m in the directed graph G. Along the same lines, let CT(m) denote the set of children of node m on the tree T. Note that the tree is dependent on the set of nodes on which data is collected, as described earlier, and keeps changing as new data comes in. Using these two sets of children nodes, define the sets ST(m) and S̄T(m) as
ST(m)≡CT(m)∩C(m) (38)
S̄T(m)≡CT(m)\C(m) (39)
Essentially, the sets ST(m) and S̄T(m) divide the set of children nodes in CT(m) such that each set belongs to a single clique. The set ST(m) belongs to the clique parented by node m and the set S̄T(m) belongs to the clique parented by the parent of node m. The nodes in these two sets are conditionally independent given m; a property we will leverage in the derivation of the recursion.
Given these set definitions and using the conditional independence properties of the MRF, Eq. (37) can be derived to be equivalent to the following recursion
Note that the set

T𝒟(ST(m))≡∪i∈ST(m) T𝒟(i),

which is the set of all nodes with ATR data that are in the sub-trees spanned by ST(m). A similar definition holds for T𝒟(S̄T(m)).
The conditional probability of the nodes in the sets ST(m) and S̄T(m) given node m is required in the above recursion. Since ST(m)⊆C(m), these nodes belong to the clique associated with parent node m and the conditional probability for this set can be computed along the same lines as Eq. (9)
For the set S̄T(m), the computation is a little more involved and requires first the computation of the conditional probability of the nodes in the set S̄T(m)∪{m} given their parent j=p(m), followed by a subsequent marginalization over j, if j∉T(m),
The conditional probabilities for the two cases above can be computed along the same lines as Eq. (43).
Algorithm 3 shows the pseudo-code for the complete algorithm. The function FuseData returns the posterior probability of the node given all the data, which is a more useful quantity for decision making purposes. The function G
Formally, the compact tree is obtained by eliminating all nodes i from the full tree such that i∉ and the parent of i, k=p(i), has null sets T(k)≡0 and T(k)≡0 and only (k)≠0. In the example shown in
Algorithm 3: Data fusion over multiple ATR detections
FuseData: compute P(lm|{dk:k∈𝒟})
  build the tree T with root m and shortest paths to the nodes i∈𝒟
  call the recursive routine below to compute P({dk:k∈𝒟}|lm)
  convert the result to the posterior probability using Bayes rule
Recursive routine: compute P({dk}|lm)
  if data is available on the current node, initialize with its likelihood
  for all independent children, recurse and multiply in the inferred likelihoods
  loop over the correlated groups of children, combining their contributions using Eqs. (41) or (42)
Forming the children sets for the current node m:
  ST(m)=CT(m)∩C(m)
  S̄T(m)=CT(m)\C(m)
  initialize the set of independent nodes, ℐ(m)
  if only one node remains in ST(m): add it to ℐ(m) and remove it from ST(m), leaving ST(m)=∅
  if only one node remains in S̄T(m): add it to ℐ(m) and remove it from S̄T(m), leaving S̄T(m)=∅
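As a much-simplified Python sketch of the fusion step, the special case in which every detection reaches the query node through conditionally independent branches reduces to multiplying the inferred likelihood vectors; the full Algorithm 3 additionally handles correlated sibling groups via Eqs. (41) and (42). The vectors and numbers below are illustrative only.

def fuse_independent(likelihood_vectors):
    # Combine per-detection likelihood vectors [P(d_k | l_m = 1), P(d_k | l_m = 0)]
    # at a common query node m, assuming the detections are conditionally
    # independent given l_m (the "independent children" case of Algorithm 3).
    fused = [1.0, 1.0]
    for v in likelihood_vectors:
        fused = [fused[0] * v[0], fused[1] * v[1]]
    return fused

def to_posterior(fused, prior_present):
    # Convert the fused likelihood into the posterior P(l_m = 1 | all data) via Bayes rule.
    num = fused[0] * prior_present
    return num / (num + fused[1] * (1.0 - prior_present))

# Two detections inferred onto the same node, e.g. from an EO and an IR collect.
print(to_posterior(fuse_independent([[0.55, 0.2], [0.6, 0.3]]), 0.3))  # approximately 0.702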
When the function G
P(d3,d6,d11|l4=Σl
P(d3,d11|l2)=(Σl
P(d3,d11|l3)=P(d3|l3)(Σl
P(d11|l7)=(Σl
where the computations occurring inside InferProbability are shown in parentheses. Also, note that only the non-empty sets are listed on the right-hand side and the empty sets are omitted.
P(d3,d6,d11|l4)=Σl
P(d3,d11|l2)=(Σl
P(d3,d11|l3)=P(d3|l3)(Σl
The original computation corresponding to node l7 still occurs but without the overhead of the recursion. These savings can become significant if long linear runs are eliminated from the tree. The unfolded recursion for the tree shown in
P(d6,d11|l4)=Σl
P(d11|l2)Σ(Σl
We formulate a relatively small and simple model to illustrate the algorithms that were developed in this disclosure.
The inference can be done either as data likelihoods or as posterior probabilities after the data is observed. For decision purposes, it is the posterior probability that is more useful as it is properly normalized. However, it is instructive to look at data likelihoods as well to gain an understanding of the workings of the algorithm.
All the other probabilities in the tree are intuitive as well. The parent probabilities are now the sum of their children's probabilities throughout, except for Tank, where the probabilities of the children did not add up to unity in order to make room for an “other” class. The probabilities now increase as we go to the coarser categories that encompass the data. The probabilities for both T17v1 and T17v2 are reduced from their raw detected values due to their opposing nature. The Truck side of the taxonomy has low probability given that the observed data fell on the Tank side.
For all the subsequent results, we will only show the posterior probability.
Notice how the tree in
For the example shown in
A more realistic taxonomy 112 is shown in
If the number of hierarchical levels in a taxonomy is large and/or there are a large number of children for any one node, the a priori probability of a node may become very small. The model is encoding the fact that, in the absence of data, it may be very unlikely to observe a particular label at the finer levels of the taxonomy. This a priori belief acts as a bias, and it takes an overwhelming amount of evidence to overcome it. For example, the a priori probability of node M2v2 in the example of
One solution is to stay throughout in the data likelihood domain. The data fusion as given by Eq. (40) occurs in the likelihood domain, so there is no need to go back to the posterior probability and the a priori probabilities are never explicitly used. However, we saw that the data likelihoods are not normalized properly and do not make much intuitive sense.
A better solution is to use a non-informative prior, i.e., P(l=1)=0.5, to convert ATR detects to the posterior domain initially. In this case, the posterior probability of an ATR detect remains the same as the data likelihood generated by the ATR, P(l|d)=P(d|l), and we introduce no bias. All the inference computation in the taxonomy is then done in the posterior domain. When conversion to the data likelihood domain is necessary for fusing multiple detects, the a priori probability of the model is employed. This works reasonably well and yields intuitive results. The computation remains invariant whether done in the likelihood or the posterior domain as long as the model a priori probabilities are employed subsequently. The net effect of this method is that the raw data likelihoods received from the ATR end up getting modified, since the initial conversion to the posterior probability uses a non-informative prior whereas the subsequent conversions back to the likelihood domain use the actual model a priori probabilities. Note that the data shown in
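A minimal Python sketch of this conversion scheme, using the two-element likelihood and posterior vectors from the earlier sketches, is given below; the function names are illustrative and normalization constants that cancel in subsequent steps are ignored.

def atr_to_posterior(lik):
    # Interpret a raw ATR likelihood [P(d|l=1), P(d|l=0)] as a posterior using the
    # non-informative prior P(l=1) = 0.5; up to normalization the numbers are unchanged.
    s = lik[0] + lik[1]
    return [lik[0] / s, lik[1] / s]

def posterior_to_likelihood(post, model_prior):
    # Switch back to the likelihood domain for fusion using the model a priori
    # probability; the omitted constant factor cancels in subsequent normalization.
    return [post[0] / model_prior, post[1] / (1.0 - model_prior)]

print(posterior_to_likelihood(atr_to_posterior([0.8, 0.1]), 0.15))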
If the a priori probabilities are really tiny, the above method (interpreting the received data likelihoods from the ATR as posterior probabilities) becomes very sensitive to input evidence, and small increases in detection probability tend to peg the object with very high confidence. This is due to the conversion to the likelihood domain for fusion using the model's tiny a priori probabilities. One way to address this problem is to use an a priori probability that is higher than the model-computed a priori probability when the latter is tiny. Note that any inferred result from the model still uses the model a priori probability for conversion and this exception is only made for received data. However, using altered values of the a priori probability for the raw data makes the result dependent on whether it is computed in the likelihood or the posterior domain. For example, working exclusively in the likelihood domain and then converting the final result to the posterior domain at the very end will give a different result than working primarily in the posterior domain (with switchbacks to the likelihood domain for fusion). We find the latter to work better in practice and it yields more intuitive results.
As pointed out earlier, the example in
Target taxonomies have been used in the past to hierarchically organize objects into classes based on functionality and similarity in features. These classes can also be labels that are assigned by ATR algorithms to objects in sensed data. Depending on the resolution and SNR of the sensed data and the level of fine details that can be discerned, an ATR may assign labels from different levels of the taxonomy to the same physical object. There is value in fusing the data for these different labels if it indeed corresponds to the same physical object. The uncertainty inherent in any individual ATR detect can be beaten down by fusing independent detects and a larger set of target attributes can be inferred by merging ATR detects from different modalities.
In this disclosure, we modeled the labels as binary random variables and showed that the graphical structure of the taxonomy can be used to formulate a compact parameterization of the model. In particular, the labels were modeled as a Markov Random Field (MRF) on the undirected graph derived from the hierarchical structure of the taxonomy. Unlike a Bayesian network, the children of any node in the graph are all dependent on each other in our framework to capture the mutually exclusive property of the children. The constraints imposed by the MRF and the mutual exclusivity of the children nodes were the primary drivers that allowed the joint relationship of n nodes in a taxonomy to be specified with just n parameters.
Using this model formulation, we derived very efficient recursive algorithms for inference and data fusion. We showed that the posterior probability of any label in the taxonomy can be computed with O(n) operations, where n is the length of the path between the detected label and the desired label. The complexity therefore scales linearly with the number of edges between the detected and desired labels. Similarly, for multiple ATR inputs, the complexity scales linearly with the number of edges in the tree formed with the desired label at the root and the shortest paths to all the ATR inputs as its branches. The efficiencies were realized by pre-computing certain properties of the graph and storing them for future use.
Finally, it was shown that the parameters of the label model can be estimated by simply counting the number of objects of a particular label in a theater of operation. Online learning and adaptation of the model in the field is also possible if feedback is provided on the fused and inferred results. This makes it possible to evolve the label model over time as more ground truth data is made available, improving the performance in the field.
With reference now to
The directive 904 can include a desired confidence level 918 for the input label and the method 900 can include comparing (indicated with box 920) the desired confidence level 918 to the updated confidence level. In the event that the updated confidence level is at or above the desired confidence level, the method 900 can include outputting (indicated with box 922) information on the output device to the user indicative of the updated confidence level, and any relevant information regarding the object in the directive 904. In the event that the updated confidence level is below the desired confidence level, the method can include directing or redirecting (indicated with box 924) one or more physical assets to obtain further sensor data 906 of the real world physical object. Obtaining 906, processing 910, receiving modeled relationships 914, and directing or redirecting 924 can be performed by a non-human system 1000 (shown in
Modeling as in modeled relationships 912 can include transforming a taxonomy tree of the set of labels into a complete graph (as described above) with forward and reverse links between siblings and parents in the taxonomy tree. Inferring 916 can include forming a reduced tree with the input label as a root and including all the ATR labels stemming from the root and intervening labels from the complete graph that are along the shortest paths between the respective ATR labels and the input label on the complete graph, as described above. The updated confidence of the input label can be inferred (as part of inferring 916) recursively by traversing the respective shortest paths from the ATR labels to the input label, wherein the confidences of all the intermediate labels in each respective shortest path are computed, as described above.
The method 900 can include updating the set of model parameters based on feedback 926 received from field observations to improve prediction capabilities. The set of model parameters can be computed using relative abundances of objects corresponding to the set of labels in a given geography. Obtaining sensor data 906 can include obtaining sensor data that pre-existed the directive 904, e.g., no new images necessarily need be obtained in response to the directive 904 if recent images are available for the desired geographic region of interest. It is also contemplated that obtaining sensor data 906 can include obtaining sensor data that did not pre-exist the directive, e.g., if recent images are not available, obtaining sensor data 906 can include directing physical assets to obtain new images or data. It is contemplated that there can be more than one instance of the real world physical object in the geographical area and in the sensor data, wherein processing, modeling, and inferring are performed for each instance of the real world physical object. The probabilistic model can be a Markov Random Field (MRF). The modeled relationships within the set of labels using a probabilistic model can be established a priori before receiving the directive from the user.
With reference now to
The processing device 1006 can be operatively connected to a network 1010 of physical assets 1012, wherein the directive 904 (of
The methods and systems of the present disclosure, as described above and shown in the drawings, provide for target identification with superior properties including increased confidence levels on ATR and decreased load on human analysts. While the apparatus and methods of the subject disclosure have been shown and described with reference to preferred embodiments, those skilled in the art will readily appreciate that changes and/or modifications may be made thereto without departing from the scope of the subject disclosure.