1. Field of the Invention
The invention relates to the field of identification of objects such as structures and vehicles and, in particular, to a process for the identification of structures and vehicles using laser radars.
2. Description of Related Art
The identification of targets under battlefield conditions is a major problem. Of course, direct visual contact by trained personnel is the most accurate, but it exposes them to possible attack and significantly increases personnel workload. Thus, in recent years, unmanned surveillance vehicles, particularly unmanned aircraft, have been used for battlefield surveillance. However, to avoid constant monitoring of the unmanned vehicles, they are being equipped with autonomous systems that identify and classify potential targets and inform the remotely located operator only when such a target is identified.
Traditional laser radar identification techniques have limitations in identifying articulated targets because of the large number of potential target states arising from target articulation, variation, and pose. The use of invariant features for model-based matching of the entire target will reduce the search space but will not yield reliable estimates of target identification and pose. One approach is to use a laser radar system to map the vehicle and record invariant parameters. These observed parameters are compared to those stored in a database to find a match. However, this method has proved cumbersome to implement because the whole structure, typically a vehicle such as a tank or missile launcher, must be compared to every other structure in the database.
Thus, it is a primary object of the invention to provide a process for the identification of objects without human intervention.
It is another primary object of the invention to provide a process for the identification of unknown objects without human intervention that uses a laser radar for illumination.
It is a further object of the invention to provide a process for the identification of objects without human intervention that uses a laser radar for illumination and which provides optimum identification with minimum computing time.
The invention is a process for identifying an unknown object. In detail, the process includes the steps of:
1. Compiling data on selected features on a plurality of segments of a plurality of known structures. Preferably, the plurality of segments includes the top and bottom or the top, middle and bottom of the object. This also includes the step of making piecewise pixel-pair invariant measurements of each of the plurality of segments of the known structures.
2. Illuminating the unknown structure with a laser radar system;
3. Dividing the unknown structure into a plurality of segments corresponding to each of the segments of the known structures;
4. Sequentially measuring selected features of each of the plurality of segments of the unknown structure. This includes the steps of making piecewise pixel-pair invariant measurements of each of the plurality of segments of the unknown structure, and comparing these measurements to those of the known structures until a match is found. The measurements include the distance between the first and second pixels of the pixel-pairs, the angle between the normals to the surface areas about the first and second pixels, the normalized distance between the first and second pixels projected along the vector that is the sum of the two normals, and the normalized distance between the first and second pixels along the vector that is the cross product of the two normals.
The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages thereof, will be better understood from the following description in connection with the accompanying drawings in which the presently preferred embodiment of the invention is illustrated by way of example. It is to be expressly understood, however, that the drawings are for purposes of illustration and description only and are not intended as a definition of the limits of the invention.
The invention is a process for identifying potential targets by means of a laser radar system that does not require human involvement. It is designed for use on an unmanned surveillance vehicle. This process is used after the vehicle has determined that a potential target exists.
In a surveillance mission, the unmanned vehicle is sent to a target area based on cues from gathered intelligence or from other long-range surveillance platforms. Due to target location error and potential target movement, the unmanned vehicle needs to perform its own search upon arrival at the target area using wide-footprint sensors such as Synthetic Aperture Radar (SAR) or a wide field-of-view infrared sensor. Upon detection of potential regions of interest (ROIs), the laser radar sensor is then cued to these ROIs to re-acquire and identify the target and select the appropriate aim point to enhance weapon effectiveness and reduce the risk of fratricide.
Referring to
Step 10—Set Up a Library of Target Descriptions
Step 10A—Divide Objects into Segments That Can Be Articulated. All structures of interest are first scanned by a laser radar system using simulation or actual data collection. The object is divided into sections. For example, referring to
Step 10B—Scan Object Segment. Thereafter, each section is scanned by the laser radar system from various positions in a spherical pattern, at approximately two-degree steps, as illustrated in
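The spherical scan pattern can be illustrated with a short sketch. The following Python snippet is only a hypothetical example of generating sensor viewpoints on a sphere at approximately two-degree steps; the function name and the sampling scheme are assumptions, not part of the process described above.

```python
import numpy as np

def spherical_viewpoints(step_deg=2.0, radius=1.0):
    """Candidate laser radar positions on a sphere around the object,
    spaced at roughly step_deg increments in azimuth and elevation."""
    azimuths = np.deg2rad(np.arange(0.0, 360.0, step_deg))
    elevations = np.deg2rad(np.arange(-90.0, 90.0 + step_deg, step_deg))
    el, az = np.meshgrid(elevations, azimuths, indexing="ij")
    x = radius * np.cos(el) * np.cos(az)
    y = radius * np.cos(el) * np.sin(az)
    z = radius * np.sin(el)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

views = spherical_viewpoints(step_deg=2.0)  # one row per scan aspect
```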
Step 10C—Compute Angle Between Pixel Pairs: The angle between the surface normals of every pixel pair is computed using the dot product of the two normals, cos−1(nI·nJ),
Where:
nI=the unit vector representing the surface normal at the first pixel
nJ=the unit vector representing the surface normal at the second pixel
·=the dot product of two vectors
Only those pixel pairs having angles between 80 and 100 degrees are saved.
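As a minimal sketch of the pixel-pair screening just described (assuming unit-length normals as inputs; the names are illustrative):

```python
import numpy as np

def keep_pixel_pair(n_i, n_j, lo_deg=80.0, hi_deg=100.0):
    """Return True when the angle between the two unit surface normals
    falls inside the retained 80-to-100-degree band."""
    cos_angle = np.clip(np.dot(n_i, n_j), -1.0, 1.0)   # dot product of the normals
    angle_deg = np.degrees(np.arccos(cos_angle))
    return lo_deg <= angle_deg <= hi_deg
```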
The process continues with the following steps:
Step 10D—Calculate Invariants: After each measurement, four invariant features are recorded for each pixel pair. Referring to
1. The distance between the first (P) and second (Q) pixels of the pixel-pairs,
AINV=∥P−Q∥ (1)
Where: AINV=the distance between the pixel points
2. The angle between the normals, N̂P and N̂Q, to the surface areas about the first and second pixels,
BINV=cos−1(N̂P·N̂Q) (2)
Where:
N̂P=the normalized normal vector to the surface at pixel P, computed using neighboring pixels
N̂Q=the normalized normal vector to the surface at pixel Q, computed using neighboring pixels
BINV=the angle between N̂P and N̂Q
3. The normalized distance between the first and second pixels along N̂P+N̂Q,
Where:
CINV=the normalized distance between the first and second pixels along N̂P+N̂Q
4. The normalized distance between the first and second pixels along N̂P×N̂Q.
DINV=(P−Q)N·(N̂P×N̂Q)
Where:
DINV=the normalized distance between the first and second pixels along the cross product of the two normals, N̂P×N̂Q
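A compact sketch of the four invariants follows. The A and B invariants transcribe equations (1) and (2); how the C and D "normalized distances" are normalized is not fully specified above, so the projections of the unit separation vector used here are an assumption.

```python
import numpy as np

def pixel_pair_invariants(P, Q, n_p, n_q):
    """Four pixel-pair invariants of Step 10D for 3-D points P, Q with
    unit surface normals n_p, n_q (all numpy arrays of length 3)."""
    d = P - Q
    a_inv = np.linalg.norm(d)                        # (1) distance between pixels
    cos_b = np.clip(np.dot(n_p, n_q), -1.0, 1.0)
    b_inv = np.arccos(cos_b)                         # (2) angle between the normals
    d_hat = d / a_inv                                # unit separation vector (assumed)
    s = n_p + n_q                                    # sum of the normals
    c = np.cross(n_p, n_q)                           # cross product of the normals
    c_inv = np.dot(d_hat, s / np.linalg.norm(s))     # projection along the sum
    d_inv = np.dot(d_hat, c / np.linalg.norm(c))     # projection along the cross product
    return a_inv, b_inv, c_inv, d_inv
```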
The normal N̂P is determined by the use of the least-squared-error method as illustrated in
Patch Offset(i,j)=Patch Normal(i,j)·Patch Centroid(i,j) (6)
nP(i,j)=one of the predefined normals closest to n′P (7)
Where XK, YK, ZK are the Cartesian coordinates of pixel (i,j)
Thus, once the normal vectors for the two points (P, Q) are determined, the other invariant features are computed. Only those pixel pairs whose normals form an angle between 80 and 100 degrees are used. This significantly reduces computational complexity and reduces classification ambiguities due to excessive data.
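Because the least-squared-error fit itself is not reproduced here, the following is only a sketch of one standard way to obtain a patch normal, using a plane fit (via SVD) to the neighboring pixel coordinates; the quantization to predefined normals of equation (7) is omitted.

```python
import numpy as np

def patch_normal(neighbors):
    """Least-squared-error plane fit to an (N, 3) array of neighboring
    pixel coordinates (the X_K, Y_K, Z_K of the patch around a pixel).
    Returns the unit normal and the plane offset of equation (6)."""
    centroid = neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbors - centroid)
    normal = vt[-1] / np.linalg.norm(vt[-1])     # direction of least variance
    offset = float(np.dot(normal, centroid))     # Patch Offset = normal · centroid
    return normal, offset
```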
The process continues with the following steps:
Step 10E—Prepare Multi-Dimensional Histograms. The four measured invariants are first normalized by dividing all values of each invariant by the largest value of that invariant, creating numbers varying from zero to one. Then a four-dimensional array of 81 bins (3×3×3×3=81) is constructed, that is, three bins for each invariant:
Bin Size AINV=(max(AINV)−min(AINV))/3
Bin Size BINV=(max(BINV)−min(BINV))/3
Bin Size CINV=(max(CINV)−min(CINV))/3
Bin Size DINV=(max(DINV)−min(DINV))/3
A bin is determined for each pair of pixels:
Index AINV=(INT)((AINV−min(AINV))/BinSizeAINV)
Index BINV=(INT)((BINV−min(BINV))/BinSizeBINV)
Index CINV=(INT)((CINV−min(CINV))/BinSizeCINV)
Index DINV=(INT)((DINV−min(DINV))/BinSizeDINV)
The number of bins is somewhat arbitrary, but testing has shown that excellent results are obtained using only 3 bins.
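A sketch of the 81-bin histogram construction, following the bin-size and index formulas above (the handling of values that fall exactly at the maximum, which are clipped into the last bin, is an assumption):

```python
import numpy as np

def invariant_histogram(invariants, bins_per_axis=3):
    """Build the 3x3x3x3 (81-bin) histogram of Step 10E from an (N, 4)
    array of (A, B, C, D) invariant values, one row per pixel pair."""
    inv = np.asarray(invariants, dtype=float)
    lo, hi = inv.min(axis=0), inv.max(axis=0)
    bin_size = np.where(hi > lo, (hi - lo) / bins_per_axis, 1.0)
    idx = ((inv - lo) / bin_size).astype(int)
    idx = np.clip(idx, 0, bins_per_axis - 1)          # max values land in the last bin
    hist = np.zeros((bins_per_axis,) * 4, dtype=int)
    for i in idx:
        hist[tuple(i)] += 1
    return hist.ravel()                               # 81-element feature vector
```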
Step 10F—Determine If All Segments Measured. If no, go to Step 10G; if yes, go to Step 10H.
Step 10G—Go to Next Segment—The program returns to Step 10B
Step 10H—Determine if All Aspects Covered. In this step, a determination is made as to whether all aspects have been covered by taking readings at two degree increments around and over the object as illustrated in
Step 10I—Go to next aspect—The laser radar is repositioned and the program returns to Step 10B
Step 10J—Determine If There Is Another Object. A determination is made as to whether another object is to be added to the program; if yes, go to Step 10K, if no, go to Step 10L.
Step 10K—A new object is selected and the program returns to Step 10A.
The process continues with the following steps:
Step 10L—Create Analysis Tool. The data created during Step 10E is fed to the neural net shown in
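As an illustration only (the actual network is defined by the figure referenced above), a small multi-layer perceptron could be trained on the histogram vectors; the scikit-learn classifier, the layer sizes, and the stand-in data below are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in training data: one 81-element histogram per (object, segment,
# aspect) sample, with the object/segment label as the target class.
rng = np.random.default_rng(0)
X_train = rng.random((200, 81))               # would come from Step 10E
y_train = rng.integers(0, 5, size=200)        # five hypothetical library classes

net = MLPClassifier(hidden_layer_sizes=(40,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
scores = net.predict_proba(rng.random((1, 81)))   # per-class scores in [0, 1]
```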
Referring to
Step 34—Determine Normal To Ground Patch Around Vehicle—Still referring to
Step 58—Rotate Object. The tank is mathematically rotated by use of the following equations:
x′=x*cos(β)+y*sin(α)*sin(β)+z*cos(α)*sin(β)
y′=y*cos(α)−z*sin(α)
z′=y*sin(α)*cos(β)+z*cos(α)*cos(β)−x*sin(β)
where x, y, z represent the original x, y, z coordinates and x′, y′, z′ represent the newly rotated coordinates
α=the rotation angle about the x axis,
β=the rotation angle about the y axis.
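The rotation equations above translate directly into code; the following sketch applies them to an array of points (angles in radians, names illustrative):

```python
import numpy as np

def rotate_points(points, alpha, beta):
    """Rotate an (N, 3) array of (x, y, z) points by alpha about the x axis
    and beta about the y axis, exactly as in the Step 58 equations."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xr = x*np.cos(beta) + y*np.sin(alpha)*np.sin(beta) + z*np.cos(alpha)*np.sin(beta)
    yr = y*np.cos(alpha) - z*np.sin(alpha)
    zr = y*np.sin(alpha)*np.cos(beta) + z*np.cos(alpha)*np.cos(beta) - x*np.sin(beta)
    return np.stack([xr, yr, zr], axis=1)
```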
Step 62—Select an Object From the Library, for example, the tank 12 (
Step 64—Set Height Boundary. The height of the target segment is set along the normal. The bottom segment is first selected.
Step 66—Make Pixel Measurements. Pixel measurements are made during the laser radar scanning of the object; only one snapshot is required.
Step 68—Compare Angles Between Normals. The normal to the surface around every detected pixel is estimated using the procedure previously discussed in setting up the library. Only pixel pairs whose angle between the normals is between 80 and 100 degrees are considered for the classification process.
The process continues with the following steps:
Step 70—Compute Invariant Features.
Step 72—Prepare Multi-Dimensional Histograms.
Step 74—Analyze Data. Using the trained neural net shown in
Step 76—All Segments Examined? If all segments have been examined, go to Step 80; if not, return to Step 64 and repeat the process for the next segment until all segments of the object have been analyzed.
Step 78—All Known Objects Examined? If no, return to Step 62 and repeat the process for the next stored object in the reference library. If yes, go to Step 80.
Step 80—Determine Unknown Object. The score from the neural net can vary from 0 to 1; a score above 0.90, with all other scores below 0.20, can be considered a positive identification. Several events can occur:
1. The object is identified
2. No identification is made
3. Multiple possible identities are produced.
In the first case, no further processing is required. In the second and third cases, the process can be terminated with a conclusion of "no identification possible." Some targets are very similar (such as the BTR-60, BTR-70, and BTR-80 trucks). A classifier based on the laser radar resolution may not be capable of detecting adequate detail to discriminate among these target types. Therefore, if the classification belongs to an ambiguous class, a refined sensor resolution is requested. A sensor modality change is requested to change resolution and re-image and re-segment the target area. The segmented target is then fed to the software to resolve the ambiguity among the similar targets. The corresponding model of the target, with the computed articulation state, is rendered using the sensor parameter file and the Irma system (a Government multi-sensor simulation used to simulate target signatures from target models and sensor parameters). The simulated target signature is compared with the sensed target for final validation.
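One possible reading of the Step 80 decision rule, sketched below with illustrative names and the 0.90 / 0.20 thresholds from the text (how ties and intermediate scores are handled is an assumption):

```python
import numpy as np

def classify_from_scores(scores, labels, accept=0.90, reject_others=0.20):
    """Apply the positive-identification rule of Step 80 to per-object
    neural-net scores in [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    best = int(np.argmax(scores))
    others = np.delete(scores, best)
    if scores[best] > accept and np.all(others < reject_others):
        return labels[best]                        # case 1: the object is identified
    candidates = [l for l, s in zip(labels, scores) if s > accept]
    if len(candidates) > 1:
        return candidates                          # case 3: several possible identities
    return None                                    # case 2: no identification made
```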
Additionally, the process starting at Step 64 can be repeated starting with the top segment and working downward to achieve a higher confidence level. Working from the top down may also establish the identification of a target object when the bottom segment is obscured by mud or foliage.
The process can also be used to assess battle damage or variation to a given target segment by computing a transformation, consisting of a rotation and a translation, to compare the target piece to the model piece. The transformation equation is
Y=AX+b (9)
Where:
A=rotation matrix
b=translation vector
X and Y=pixels on the target piece and the model piece, respectively
The above transformation is applied to the target piece to line it up with the corresponding model piece. The target piece is then subtracted from the model piece, and the residual represents variation or battle damage. The residual can be used to infer the size of the variation or battle damage. All the classification hypotheses corresponding to the various segments, as the target is sliced up (or down), are combined using Bayesian or Dempster-Shafer evidence theory.
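A minimal sketch of the alignment-and-subtraction step, assuming the rotation matrix A and translation b of equation (9) are already known and that target and model pixels are in corresponding order (both assumptions; computing A and b would require a separate registration step):

```python
import numpy as np

def variation_residual(target_pts, model_pts, A, b):
    """Align the target piece with Y = A X + b, subtract it from the model
    piece, and return the per-pixel residual magnitudes (variation/damage)."""
    aligned = target_pts @ A.T + b          # apply equation (9) to every target pixel
    residual = model_pts - aligned          # what remains is variation or damage
    return np.linalg.norm(residual, axis=1)
```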
Thus, it can be seen that the process can be used to identify objects of interest, such as vehicles and missile launchers. Test results provided in
While the invention has been described with reference to a particular embodiment, it should be understood that the embodiment is merely illustrative as there are numerous variations and modifications, which may be made by those skilled in the art. Thus, the invention is to be construed as being limited only by the spirit and scope of the appended claims.
The invention has applicability to the surveillance systems industry.
This invention was made under US Government Contract No.: FZ6830-01-D-002 issued by the US Air Force dated March 2004. Therefore, the US Government has the rights to the invention granted thereunder.