1. Technical Field of the Invention
The present invention is directed to Threat Image Projection (TIP) and more particularly to a method and a system for transforming images of threats to be projected into images of scanned objects.
2. Description of the Prior Art
Digital radiography scanners are widely used at security checkpoints for scanning carry-on luggage, clothing and other carry-on items to make sure that they do not contain threats (e.g., weapons and explosives) or other prohibited items. Statistically, events involving real threats in luggage or other carry-on items happen rarely. As a result, the operators at the checkpoint can lose their concentration and miss real threats. In order to help the operator stay alert and to monitor the operator's performance, threat image projections (TIPs—images of threat objects) can be artificially inserted into the checkpoint images by the scanning system software (SW). In most prior art systems, TIP data is generated by scanning various threat objects at certain positions in the tunnel of the scanner, and the resulting images (overhead and side) are recorded and stored in the TIP database. The prior art software takes one of the TIP images from the TIP database, inserts it into an empty space of the luggage image and presents the combined image on the operator's screen.
This approach to TIP data collection and presentation suffers from several drawbacks. It requires the system to include a large database of TIP data for each object at each possible location within the luggage or other carry-on item. While this approach provides for more realistic and indistinguishable presentation of TIPs, it adds to the operational overhead of the system, requiring large amounts of data storage and significant time to search for and retrieve the most appropriate TIP image data. Further, it requires significant effort to scan objects at many different orientations in order to generate the TIP database. Alternatively, a smaller database of TIP data can be used, but with a correspondingly limited ability to insert images in many locations and the possibility that the inserted images will not appear realistic and thus will be easily distinguishable from a real threat.
It is possible to scan the same threat object at many different locations in the tunnel's cross-section and store the scanned images in a TIP database. But this approach is time consuming, as it requires many scans to be taken of each threat at different positions and elevations inside the tunnel. Further, an average-size TIP database contains several thousand threats, and this brute-force approach requires an enormous amount of time to scan all the threat objects at each position and results in a very large TIP database.
The present invention is directed to threat and explosives detection systems and methods that can project images of threats into images of objects being scanned. These systems can include a system or subsystem that projects the image of the threat into the image of the object at a predefined or random location, either randomly or after a predetermined time or number of objects scanned. In accordance with the invention, only a relatively small number of scans of each threat object need to be obtained and stored in the TIP database, and the stored TIP image data can be transformed to present the correct orientation for any point in the tunnel cross-section. Preferably, the transformed TIP image should be visually indistinguishable from one obtained by direct scanning at that position, and the algorithm should be able to perform the TIP transformation in real time, so as not to delay presentation of the scanner image and possibly indicate the presence of a TIP.
The present invention is directed to a method and system for TIP data collection and a method and system for threat image transformation, including an algorithm that enables the system to create, in real time, a realistic TIP at any location in the tunnel cross-section. In accordance with the invention, an object scanner includes a conveyor belt that transports the object through the scanner, which directs radiation at the object and includes one or more arrays of detectors that measure transmission of the radiation through the object. The detector array(s) and the conveyor are connected to a computer that is adapted to control the movement of the conveyor and receive data signals from the detector array(s). The computer can include computer software that can process the data signals and produce an image of the object, allowing the operator to view the contents of the object. In order to project images of threats into the images of objects scanned by the scanner, a database of threat images is created by scanning real or simulated threat objects using the scanner. In accordance with the invention, the object is held in position using a jig or fixture that supports the object on the conveyor in the desired orientation. In addition, the jig or fixture can be constructed of a material that is substantially transparent to the radiation or can be easily identified and removed from the resulting image.
In accordance with one embodiment of the invention, the real or simulated threat object can be positioned in the center of the conveyor belt and scanned by the scanner to produce an image in a first angular orientation that can be stored in the TIP database along with information indicating the angle of orientation. In accordance with the invention, the jig or fixture can include an adjustable platform that allows the real or simulated threat object to be positioned at different angles and images of the threat object in several angular orientations can be created and stored in the TIP database along with information indicating the associated angle of orientation. Alternatively, the threat object can be held at the same angular orientation and scanned at different positions on the belt horizontally transverse to the direction of motion to simulate different angles of orientation. In accordance with one embodiment, the range of angles of orientation can be limited to maximum angles of the fan beams of radiation that are used by the scanner.
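By way of illustration only, the following sketch shows one way such a small per-threat image set might be represented in software; the record fields, identifiers and the use of Python/NumPy are illustrative assumptions and not part of the described system.

```python
# Illustrative TIP database record: one entry per stored scan of a threat
# object, keyed by view and by the angular orientation used during collection.
# All field names and types are hypothetical.
from dataclasses import dataclass, field
from typing import List
import numpy as np


@dataclass
class TipRecord:
    threat_id: str      # hypothetical identifier, e.g. "handgun_small"
    view: str           # "overhead" or "side"
    angle: float        # angular orientation at collection time, in degrees
    image: np.ndarray   # transmission image with the jig/fixture removed


@dataclass
class TipDatabase:
    records: List[TipRecord] = field(default_factory=list)

    def images_for(self, threat_id: str, view: str) -> List[TipRecord]:
        """All stored scans of one threat for a given view."""
        return [r for r in self.records
                if r.threat_id == threat_id and r.view == view]
```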
The database of TIP images can be stored in non-volatile memory in the scanner computer system and accessed randomly under software control. The computer system can include algorithms that can determine whether or not a TIP will be applied to any given object that is scanned, as well as the specific type of threat to be used and the location of the TIP in the scanned object. In accordance with the invention, the computer system software can select a location for the TIP and then analyze the object image to determine whether it would be appropriate to insert the TIP at the selected location. For example, the system would select a different location if it were determined that there is a dense object in the selected location that would be incompatible with the insertion of the threat object.
In accordance with one embodiment of the invention, a system is provided for scanning objects and providing an image of the object on a display. The object can, for example, include carry-on items such as luggage or baggage of any shape or size. The system scans the object with radiation, uses sensors to detect the radiation passing through the object and generates an image of the contents of the object. In addition, the system can be provided with a Threat Image Projection (TIP) system or subsystem that can insert an image of a threat into a selected location of the image of the object at a predefined or random time. The TIP System determines the location and the orientation of the threat in the object being scanned. The TIP System includes a TIP database containing one or more, and preferably three or more, images of the threat object, each image at a different angle of orientation.
In accordance with one embodiment of the invention, after the TIP System determines the location and angle of illumination of the threat, the TIP System searches its threat database for images of the threat object that were taken at an angle of illumination close to the determined angle of illumination of the TIP. The TIP System can select the image of the threat object that corresponds to the angle of illumination with the smallest difference from the determined angle of illumination. Next, the TIP System can scale the size of the image of the threat as a function of the determined location of the threat in the image of the scanned object.
In accordance with the invention, the TIP system can determine the threat type of the TIP image to be projected into the image of the object and select the location of the TIP image. In accordance with the invention, the TIP system can use the selected location of the TIP image to determine the angle of illumination of the threat as well as the scaling factor for the TIP image. Next, the TIP system can compare the determined angle of illumination with the illumination angles associated with the images in the TIP database and select the image from the database having the closest illumination angle. Next, the TIP system can apply the scaling factor to reduce or enlarge the size of the TIP image and insert the TIP image into the image of the scanned object. The user interface of the scanning system can include a button, switch or other control that allows the operator (the security screener) to indicate the presence of a threat in the object. Actuating the button, switch or control can provide an indication to the operator that the threat was a projected image. If the operator fails to actuate the button, switch or control, the system can indicate that the operator missed the TIP and can display a highlighted (e.g., brighter intensity or blinking) image of the TIP to help the operator learn from the mistake.
Thus, the present invention provides systems and methods for projecting realistic images of threats into images of scanned objects that assist in keeping the security screening personnel alert and, at the same time, provide more realistic training. These and other objects of the invention will be apparent from the drawings and description provided herein.
The present invention is directed to threat and explosives detection systems and methods that can project images of threats into images of objects being scanned. These systems can include a system or subsystem that projects the image of the threat into the image of the object at a predefined or random location, either randomly or after a predetermined time or number of objects scanned. These systems can include a TIP database, a database of images of various threat objects or categories of objects. Some of these scanning systems can take multiple views of the object as it travels along the conveyor belt, for example, including a top or overhead view as well as a side view of the object. These systems can include TIP images in the TIP database for all views (e.g., top and side view images).
In each view, the size and appearance of the TIP image depend on the location of the threat in the tunnel's cross-section with respect to the illumination (radiation) source. This concept is illustrated
In addition to the angle of illumination, the distance or height of the object in relation to the conveyor belt and to the radiation/illumination source also affects the size of the image produced.
Thus, both the size and the intensity distribution of the TIP image depend on the selected location of the threat image in the tunnel. If the TIP system software does not modify the TIP image in accordance with its location in the tunnel, the TIP does not look realistic on the operator's screen. This can lead to the following negative consequences: 1) the unrealistic appearance of the TIP surrounded by the real bag background could make it easier for the operator to distinguish the TIP than it would be for a real threat; and 2) the use of TIPs is intended to keep the operator alert and to train the operator about the appearance of many different threats, but unrealistic or easily distinguished TIPs can mislead the operator and do not provide effective training.
In accordance with one embodiment of the invention, the tunnel T cross-section can be considered to be orthogonal to the belt and to coincide with the central overhead detector array of the multi-view CT scanning system. The multi-view scanning system can include a single radiation source Ob as shown in
In accordance with the invention, the system can transform the image taken of the threat at point O1 to an arbitrarily chosen point O3 in the tunnel and obtain the X-coordinate geo-corrected image of the threat at this point. The exact threat geometry can be rather complex, and the requirement to know the geometry for image transformation can be burdensome. To overcome this problem, in accordance with one embodiment of the invention, the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object. From the physical point of view, the system does not change, but from the mathematical point of view, the problem becomes simplified. The extended threat has a rectangular shape, and the problem is reduced to obtaining the relation between the geo-corrected images for rectangular boxes located at points O1 and O3.
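By way of illustration only, the sketch below shows the "extend with air" idea applied to a two-dimensional transmission image; the normalized transmission convention (values near 1.0 represent air) and the threshold are assumptions made for the example.

```python
# Illustrative "extend with air" step on a 2-D transmission image: find the
# rectangle that encompasses the threat and treat that rectangle (threat plus
# surrounding air) as the new object boundary. The normalized transmission
# convention (values near 1.0 are air) and the threshold are assumptions.
import numpy as np


def extend_to_box(threat_image: np.ndarray, air_threshold: float = 0.98):
    occupied = threat_image < air_threshold          # non-air (threat) pixels
    rows = np.any(occupied, axis=1)
    cols = np.any(occupied, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]              # first/last occupied row
    c0, c1 = np.where(cols)[0][[0, -1]]              # first/last occupied column
    box = threat_image[r0:r1 + 1, c0:c1 + 1].copy()  # rectangular box; pixels
                                                     # inside it that are not
                                                     # threat remain air
    return box, (r0, r1, c0, c1)
```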
The object transformation between these points can be done in numerous ways in accordance with the invention. Some transformations can lead to significant image intensity distortion (as shown by
The second transformation involves the rotation of the image in the tunnel T cross-section plane about the center of rotation at the point Ob. Due to the mathematical property of rotation (it preserves the angles of orientation with respect to the center point), this transformation does not introduce any image intensity distortion at all.
We can transform the rectangular box from the point O1 to the point O3 in the tunnel using the following steps.
r̃i = ObO2 − ObO1 + ri   (2)
These formulas help to describe the relationship between the vertices of the rectangular boxes at the points O1 and O3. The intersections of lines Obri with the X-coordinate geo-corrected plane Gx provide interval AB for the overhead image of the threat located at point O1, whereas the intersections of lines Obr̃i with the X-coordinate geo-corrected plane Gx provide interval A′B′ for the overhead image of the threat located at point O3. Intervals AB and A′B′ define the projections of the overhead images in the geo-corrected plane for the different threat locations. Linear mapping between these intervals (the ratio of A′B′ to AB) provides the image scaling factor for the overhead image transformation.
In operation, the image of the threat object is represented as a set of intensity values of the radiation that passed through the threat object as measured by the detector array. In accordance with one embodiment of the invention, the image of the threat at the correct angle of orientation (correct angle of illumination) can be scaled by the ratio A′B′/AB. In accordance with one embodiment of the invention, interval AB can be determined by projecting lines from Ob through each vertex or corner of the rectangular box at point O1 and selecting the leftmost intersection point (min.) and the rightmost intersection point (max.) as the points that define interval AB. Similarly, interval A′B′ can be determined by projecting lines from Ob through each vertex of the rectangular box at point O3 and selecting the leftmost intersection point (min.) and the rightmost intersection point (max.) as the points that define interval A′B′. In this embodiment, the orientation of the rectangular box at O3 is going to be different than the orientation of the rectangular box at O1, because, as explained herein, the angle of illumination at O1 differs from the angle of illumination at O3 by α.
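By way of illustration only, the interval construction and the resulting scaling factor could be computed as in the sketch below; the coordinate conventions, the assumption that the geo-corrected plane Gx lies at a fixed height, and all names are illustrative and not taken from the actual system.

```python
# Illustrative computation of intervals AB and A'B' and of the overhead
# scaling factor A'B'/AB. Points are (x, y) in the tunnel cross-section,
# Ob is the bottom radiation source, and the X-coordinate geo-corrected
# plane Gx is assumed to lie at a fixed height y_plane.
from typing import List, Tuple

Point = Tuple[float, float]


def project_x(source: Point, vertex: Point, y_plane: float) -> float:
    """X-coordinate where the ray from the source through a box vertex
    crosses the geo-corrected plane y = y_plane."""
    sx, sy = source
    vx, vy = vertex
    t = (y_plane - sy) / (vy - sy)       # parametric position along the ray
    return sx + t * (vx - sx)


def interval_on_plane(source: Point, box_vertices: List[Point],
                      y_plane: float) -> Tuple[float, float]:
    """(leftmost, rightmost) intersection points, e.g. interval AB."""
    xs = [project_x(source, v, y_plane) for v in box_vertices]
    return min(xs), max(xs)


def overhead_scaling_factor(source: Point, box_at_o1: List[Point],
                            box_at_o3: List[Point], y_plane: float) -> float:
    """Ratio A'B'/AB used to scale the stored overhead TIP image."""
    a, b = interval_on_plane(source, box_at_o1, y_plane)      # interval AB
    a2, b2 = interval_on_plane(source, box_at_o3, y_plane)    # interval A'B'
    return (b2 - a2) / (b - a)
```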
In order to determine the appropriate orientation (angle of illumination) of the image of the threat object (taken while the threat object was oriented at an arbitrary angle γ and located at O1) to be inserted at point O3, the scan of the threat object at the point O1 must be done at the angle γ-α. In accordance with the invention, the image of the threat object selected for insertion at O3 should be the image of the threat object taken at O1 where the object is oriented at the angle γ-α. This corresponds to the angle of orientation at which an object located at O3 is illuminated. Once the image taken at the proper angle of illumination is selected, the image can be scaled according to the scaling factor and inserted in the image of the object, such as by merging or overlaying one image on another.
In accordance with one embodiment of the invention, the same image transformation method can be used for the side view, as shown in
The intersections of lines Osri with the Y-coordinate geo-corrected plane provide interval CD for the side image of the threat located at point O1, whereas the intersections of lines Osr̃i with the Y-coordinate geo-corrected plane provide interval C′D′ for the side image of the threat located at point O3. Intervals CD and C′D′ define the position of the side images in the Y-coordinate geo-corrected plane for the different threat locations, O1 and O3. Linear mapping between these intervals (the ratio of C′D′ to CD) provides the image scaling factor for the side image transformation.
In operation, the image of the threat object is represented as a set of intensity values of the radiation that passed through the threat object as measured by the detector array. In accordance with one embodiment of the invention, the image of the threat at the correct angle of orientation (correct angle of illumination) can be scaled by the ratio C′D′/CD. In accordance with one embodiment of the invention, interval CD can be determined by projecting lines from Os through each vertex or corner of the rectangular box at point O1 and selecting the topmost intersection point (max.) and the bottommost intersection point (min.) as the points that define interval CD. Similarly, interval C′D′ can be determined by projecting lines from Os through each vertex of the rectangular box at point O3 and selecting the topmost intersection point (max.) and the bottommost intersection point (min.) as the points that define interval C′D′. In this embodiment, the orientation of the rectangular box at O3 is going to be different than the orientation of the rectangular box at O1, because, as explained herein, the angle of illumination at O1 differs from the angle of illumination at O3 by β.
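Because the side-view construction mirrors the overhead one, the routine sketched above can be reused by exchanging the roles of the X and Y coordinates, as in the assumed usage below (Os and the plane position are again illustrative).

```python
# Illustrative reuse for the side view: swap the x and y coordinates so the
# rays from the side source Os are intersected with the vertical plane used
# for the Y-coordinate geo-corrected image, giving intervals CD and C'D'.
def swap(p):
    return (p[1], p[0])


def side_scaling_factor(side_source, box_at_o1, box_at_o3, x_plane):
    """Ratio C'D'/CD for the side-view TIP image."""
    return overhead_scaling_factor(swap(side_source),
                                   [swap(v) for v in box_at_o1],
                                   [swap(v) for v in box_at_o3],
                                   x_plane)
```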
In accordance with one embodiment of the invention, the overhead and side images for the rectangular box located at the point O3 and oriented at an arbitrary angle γ can be determined from the images scanned at point O1 as follows: 1) the overhead image can be taken at point O1 at angle γ-α; and 2) the side image can be taken at angle γ-β, as shown in
In accordance with the invention, the values of angles α and β depend on the location of the illumination (radiation) sources and the locations of point O1 and point O3. Point O3 can have an arbitrary location inside the tunnel T cross-section. For a fixed value of the orientation angle γ, the angles γ-α and γ-β are continuous functions. In accordance with an alternative embodiment of the invention, the system can store in the TIP database only TIP images taken at a discrete set of angles γ-αi and γ-βj at point O1, and the angular orientation (angle of illumination) of the rectangular box at an arbitrary point O3 can be approximated by selecting a TIP image from the corresponding set of images taken at discrete angles. As one of ordinary skill will appreciate, there will be some loss of accuracy depending on the available angles in the set. However, the desired level of accuracy and maximum error can be used to define the set of images, with the system selecting the image that provides the least error (the smallest difference between the correct angle of illumination and the closest angle in the set of images). In accordance with the invention, the selection of the set of images can be used to balance accuracy with efficiency.
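By way of illustration only, the sketch below shows how a maximum allowed angular error could be turned into a discrete set of collection angles; the function and the printed example are assumptions, although the resulting five angles match the example grid described later in the text.

```python
# Illustrative choice of a discrete set of collection angles from a maximum
# allowed angular error: with a spacing of twice the allowed error, the
# nearest stored angle is always within the error bound.
import math


def angle_grid(angle_range_deg: float, max_error_deg: float):
    spacing = 2.0 * max_error_deg
    n = math.ceil(angle_range_deg / spacing) + 1
    start = -angle_range_deg / 2.0
    return [start + i * spacing for i in range(n)]


# Example: a 60-degree range with at most 7.5 degrees of error yields the
# five collection angles -30, -15, 0, +15 and +30 used as an example later.
print(angle_grid(60.0, 7.5))
```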
In accordance with the invention, at an arbitrarily chosen point O1 the sets of angles γ-αi and γ-βj are independent, and both sets of angles can be used to collect overhead and side images. However, in accordance with an alternative embodiment of the invention, the system can select a point O1 that allows the system to use a single set of angles with the CT scanning system, for example, having a 40×60 tunnel as shown in
In accordance with one embodiment of the invention, the TIP database can be generated by scanning one or more threat objects or simulated threat objects at a central location in the tunnel T, such as shown in
In this embodiment, the system can also take into consideration that, with a multi-view system, the detector arrays are separated in the Z direction (the direction of the conveyor belt). A multi-view CT system is disclosed in U.S. Patent Application Publication No. 2009-0285353, which is hereby incorporated by reference in its entirety. The CT system according to the invention can include a stationary radiation source and one or more detector arrays for measuring radiation passing through objects in the tunnel T. In the multi-view CT system, the CT system can include two or more detector arrays displaced in the Z direction (the direction of the conveyor) and additional radiation sources displaced in the Z direction. Whereas a standard CT system can include a single radiation source located below the conveyor, the multi-view CT system can include multiple radiation sources at different locations, including, for example, one below the conveyor and one to the side of the tunnel (either above or below the conveyor).
In applying the invention to a multi-view overhead CT scanning system, the image transformations in the other overhead views remain the same. Optionally, a correction can be used to account for the shifts in the Z direction (the direction of the belt) between the central overhead view and one or more of the angled overhead views. These shifts in the overhead views can depend on the elevation of point O1 (or point O3) above the bottom source, and the Zi-coordinate of the image in the ith overhead view is related to the Z-coordinate of the image in the central overhead view by the relation Z−Zi=H tan(φi), where φi is the angle between the central and ith angled overhead planes and H is the elevation of point O1 (or point O3) above the bottom source.
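By way of illustration only, this Z-shift relation could be evaluated as follows (variable names assumed):

```python
# Illustrative evaluation of Z - Zi = H * tan(phi_i): the Z-shift of the
# i-th angled overhead view relative to the central overhead view.
import math


def z_shift(elevation_above_source: float, view_angle_deg: float) -> float:
    return elevation_above_source * math.tan(math.radians(view_angle_deg))
```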
After the object is scanned and the location and threat type have been determined, the TIP software 730 can generate the appropriate image to be inserted for each view of the system. The TIP system software 730 can include data that provides the location of each radiation source and the center point for each TIP image in the TIP database, at 810. In accordance with the invention, at 812, the TIP software 730, knowing the location and orientation of the TIP, uses the location of the radiation source and the center point of the TIP image to determine the correct angle of illumination (e.g., α or β) for the TIP. Next, at 814, the TIP software 730 searches the TIP database for the TIP image having the closest angle of illumination to the determined angle of illumination. This can be accomplished by subtracting the determined angle of illumination from each of the TIP image angles of illumination and selecting the image corresponding to the smallest difference. In one embodiment of the invention, the TIP database 740 can include TIP images taken at 5 different angles of illumination (for example, −30°, −15°, 0°, +15°, +30°), allowing for fast selection of the TIP image. Next, at 816, the TIP software 730 can determine the scaling factor, for example, by determining the intervals on a geo-corrected plane. At 818, the scaling factor can be used to scale the TIP image data prior to combining it with the object image data, at 824, to produce the final image of the TIP inserted in the image of the object.
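By way of illustration only, steps 812 through 824 could be organized roughly as in the sketch below. The angle convention, the nearest-neighbour rescaling, the multiplicative merge of transmission images, and the reuse of the TipRecord objects from the earlier sketch are all assumptions; the actual TIP software 730 is not limited to this form.

```python
# Illustrative organization of steps 812-824. Images are assumed to be
# normalized transmission images in [0, 1]; tip_records are the TipRecord
# objects sketched earlier; the scaling factor is assumed to have been
# obtained from the geo-corrected interval ratio. Bounds checks are omitted.
import math
import numpy as np


def illumination_angle(source_xy, point_xy) -> float:
    """Step 812: angle (degrees) of the ray from the radiation source to the
    TIP center point; 0 degrees means the point is directly above the source."""
    dx = point_xy[0] - source_xy[0]
    dy = point_xy[1] - source_xy[1]
    return math.degrees(math.atan2(dx, dy))


def nearest_angle_image(tip_records, required_angle):
    """Step 814: stored image whose collection angle is closest to the
    required angle of illumination."""
    return min(tip_records, key=lambda rec: abs(rec.angle - required_angle))


def rescale_nearest(image, scale):
    """Steps 816-818: nearest-neighbour rescale by the scaling factor."""
    h, w = image.shape
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    rows = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    return image[np.ix_(rows, cols)]


def project_tip(object_image, tip_records, source_xy, tip_center_xy,
                scale, row, col):
    """Steps 812-824: angle, nearest image, rescale, multiplicative merge."""
    angle = illumination_angle(source_xy, tip_center_xy)
    record = nearest_angle_image(tip_records, angle)
    tip = rescale_nearest(record.image, scale)
    out = object_image.copy()
    h, w = tip.shape
    out[row:row + h, col:col + w] *= tip     # step 824: combine the images
    return out
```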
In an alternative embodiment of the invention, the system software 720 or the TIP software 730 can optionally, at 820, analyze the data of the image in the region where the TIP image is to be inserted to determine whether the density in the region is above a predefined threshold, at 822, indicating that a solid object is present in the region, and allow the system to select a new location, at 828. This helps avoid inserting the TIP in a location within the object that contains an incompatible element or component. For example, where the object is carry-on luggage and the threat is a gun or a knife, this avoids inserting a TIP where the threat appears to project through a laptop computer or a supporting element of the luggage. The process can be repeated until a location is identified where the threat can appropriately be inserted.
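By way of illustration only, the density check and location retry of steps 820, 822 and 828 could look like the following sketch; the density-image convention and threshold are assumptions.

```python
# Illustrative density check (steps 820/822) and location retry (step 828).
# A "density image" in which larger values mean denser material and the
# threshold value are assumptions made for the example.
import numpy as np


def region_is_clear(density_image, row, col, height, width,
                    density_threshold: float = 0.5) -> bool:
    """True if no pixel in the candidate region exceeds the density threshold,
    i.e. no solid object (such as a laptop) occupies the insertion region."""
    region = density_image[row:row + height, col:col + width]
    return not bool(np.any(region > density_threshold))


def choose_location(density_image, tip_shape, propose_location, max_tries=20):
    """Keep proposing candidate locations until one is clear of dense objects."""
    for _ in range(max_tries):
        r, c = propose_location()            # e.g. a random in-bag position
        if region_is_clear(density_image, r, c, *tip_shape):
            return r, c
    return None                              # no suitable location found
```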
This process can be repeated for each view of the CT system, producing multiple TIP images, at least one for each view.
After the location is cleared at 822 (or a new one is selected at 828), either the system software 720 or the TIP software 730 can combine the images at 824, as is well known, such that a combined image can be displayed on the screen to the operator, at 826. The system software 720 can wait for an indication from the operator, such as by pressing a button or touching a location on a touch screen (e.g., where the TIP is found), to allow the system to continue.
The system software 720 can include training features, such that if the operator does not indicate the presence of the TIP and clears the object, the system alerts the operator and highlights the TIP, such as by showing a box around the object on the screen or causing the TIP to glow (e.g., continuously increase or decrease in intensity). The system can also keep track of the operator's performance for later review.
As one of ordinary skill will appreciate, the TIP database can be populated with various sets of image data depending upon the desired speed and accuracy with which the TIP insertion process is to be performed. In accordance with one embodiment of the invention, the maximum range in angle of illumination is approximately 60 degrees and the TIP database includes 5 sets of image data, taken at 15-degree intervals (e.g., −30°, −15°, 0°, +15°, +30°). In alternate embodiments, the TIP database can include more or fewer sets of image data. For example, the interval can be 5 degrees, with 13 sets of image data. Alternatively, the system can include only one set of image data and only perform the scaling step as described herein.
In accordance with one embodiment of the invention, for a fixed orientation of the threat at angle γ, located at the point O1, several scans at different angles within the range (−30°+γ, +30°+γ) can be taken and saved in the TIP database. This database can be selected and used to obtain transformed images to be inserted at an arbitrary point O3 inside the tunnel according to the following process:
In accordance with an alternative embodiment of the invention, the TIP images in the TIP database can be used with other multi-view CT scanning systems with no or limited modification. This allows the TIP image database collected on a first multi-view CT scanning system (such as one having a 40×60 tunnel T) to be used with a second multi-view CT scanning system (such as one having a 55×75 tunnel T). This embodiment significantly reduces the effort associated with TIP image database collection. The second multi-view CT scanning system can have, for example, a larger tunnel, different locations of the bottom and side X-ray sources and different positions of the X- and Y-geo-corrected planes than the first multi-view CT scanning system.
We assume that the location of point O1 for the first multi-view CT scanning system was chosen as specified earlier (substantially the center of the tunnel T) and that TIP images for the database were collected. The location of point O2 for the second multi-view CT scanning system can be selected using the following relation: Ob2O2∥Ob1O1 and Os2O2∥Os1O1. From this relation, it follows that the bottom and side radiation sources of the second multi-view CT scanning system illuminate an arbitrary rectangular box with a center at point O2 at the same angles as the bottom and side radiation sources of the first multi-view CT system illuminate a rectangular box with the center at point O1. Knowing the vertices of the rectangular box at O2, r̃i = ri + O1O2, the system can determine the intersections of lines Ob2r̃i with the X-coordinate geo-corrected plane for the second multi-view CT scanning system, which define interval A2B2 (the intersections of lines Ob1ri with the X-coordinate geo-corrected plane for the first multi-view CT scanning system define interval A1B1), and scale the overhead image from interval A1B1 to interval A2B2 according to the ratio of the intervals.
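By way of illustration only, the parallel-ray condition Ob2O2∥Ob1O1 and Os2O2∥Os1O1 determines O2 as the intersection of two lines, which could be computed as in the sketch below (two-dimensional cross-section coordinates assumed).

```python
# Illustrative computation of point O2 from the parallel-ray condition:
# O2 lies on the line through Ob2 with direction (O1 - Ob1) and on the line
# through Os2 with direction (O1 - Os1). Two-dimensional cross-section
# coordinates (x, y) are assumed.
import numpy as np


def locate_o2(ob1, os1, o1, ob2, os2):
    ob1, os1, o1, ob2, os2 = map(np.asarray, (ob1, os1, o1, ob2, os2))
    d_b = o1 - ob1                            # bottom-source viewing direction
    d_s = o1 - os1                            # side-source viewing direction
    # Solve ob2 + t*d_b = os2 + s*d_s for the scalars t and s.
    a = np.column_stack((d_b, -d_s))
    t, _ = np.linalg.solve(a, os2 - ob2)
    return ob2 + t * d_b
```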
In accordance with one embodiment of the invention, the Z-shift correction for overhead images in the second multi-view CT scanning system can be determined by the following formula, which follows from simple geometric relations:
Z̃ − Z̃i = (Z − Zi) * HO2b / HO1b,
where
HO1b is the elevation of the point O1 over the source Ob1 in first multi-view CT scanning system, and
HO2b is the elevation of the point O2 over the source Ob2 in the second multi-view CT scanning system.
The same approach can be used to recalculate the side images for the TIP database in the second multi-view CT scanning system.
In order to verify the theoretical results developed above, we collected a sample TIP database (for the first multi-view CT scanning system, Array CT 40×60), which consisted of two guns of different sizes (in horizontal and vertical orientations). The point O1 was chosen as described above. To verify the image transformation algorithm, we scanned the same objects at different locations in the tunnel. We used several objects for scanning: a cell phone, a screwdriver, a simulated explosive and a steak knife arbitrarily oriented inside the box. We used a 12° angle increment in database collection for these objects. Point O3 was shifted from the point O1 by 10 cm in the X-direction and by 5 cm in the Y-direction. In all pictures shown below, the top row of images shows the scanned results at point O3, whereas the bottom row of images shows the transformed results at the same point.
The present invention enables a system that includes a limited library or database of threat images to insert the threat images into a virtually unlimited number of locations within an image and to provide realistic resulting images with improved performance.
Other embodiments are within the scope and spirit of the invention. For example, due to the nature of software, functions described above can be implemented using software, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
Further, while the description above refers to the invention, the description may include more than one invention.
This application claims any and all benefits as provided by law of U.S. Provisional Application No. 61/170,462 filed Apr. 17, 2009, which is hereby incorporated by reference in its entirety.