The present invention relates generally to security systems and, more particularly, to methods and systems for screening receptacles including, for example, luggage, mail parcels, or cargo containers to identify certain objects located therein, or for screening persons to identify objects located thereon.
Security in airports, train stations, ports, office buildings, and other public or private venues is becoming increasingly important particularly in light of recent violent events.
Typically, security screening systems make use of devices generating penetrating radiation, such as x-ray devices, to scan receptacles such as, for example, individual pieces of luggage, mail parcels or cargo containers to generate an image conveying contents of the receptacle. The image is displayed on a screen and is examined by a human operator whose task it is to detect and possibly identify, on the basis of the image, potentially threatening objects located in the receptacle. In certain cases, some form of object recognition technology may be used to assist the human operator.
A deficiency with current systems is that they are mostly reliant on the human operator to detect and identify potentially threatening objects. However, the performance of the human operator varies greatly according to factors such as the quality of his or her training and fatigue. As such, the detection and identification of threatening objects is highly susceptible to human error. Furthermore, it will be appreciated that failure to identify a threatening object, such as a weapon for example, may have serious consequences, such as property damage, injuries and fatalities.
Another deficiency with current systems is that the labour costs associated with such systems are significant since human operators must view the images.
Consequently, there is a need in the industry for a method and system for use in screening receptacles (such as luggage, mail parcels, or cargo containers) or persons to detect certain objects, which alleviate, at least in part, the deficiencies of prior systems and methods.
As embodied and broadly described herein, the present invention provides an apparatus for screening a receptacle. The apparatus comprises an input for receiving an image signal associated with the receptacle, the image signal conveying an input image related to contents of the receptacle. The apparatus also comprises a processing unit in communication with the input. The processing unit is operative for: processing the image signal in combination with a plurality of data elements associated with a plurality of target objects in an attempt to detect a presence of at least one of the target objects in the receptacle; and generating a detection signal in response to detection of the presence of at least one of the target objects in the receptacle. The apparatus also comprises an output for releasing the detection signal.
The present invention also provides an apparatus for screening a person. The apparatus comprises an input for receiving an image signal associated with the person, the image signal conveying an input image related to objects carried by the person. The apparatus also comprises a processing unit in communication with the input. The processing unit is operative for: processing the image signal in combination with a plurality of data elements associated with a plurality of target objects in an attempt to detect a presence of at least one of the target objects on the person; and generating a detection signal in response to detection of the presence of at least one of the target objects on the person. The apparatus also comprises an output for releasing the detection signal.
The present invention also provides a computer readable storage medium storing a database suitable for use in detecting a presence of at least one target object in a receptacle. The database comprises a plurality of entries, each entry being associated to a respective target object whose presence in a receptacle it is desirable to detect during security screening. An entry for a given target object comprises a group of sub-entries, each sub-entry being associated to the given target object in a respective orientation. At least part of each sub-entry is suitable for being processed by a processing unit implementing a correlation operation to attempt to detect a representation of the given target object in an image of the receptacle.
The present invention also provides a computer readable storage medium storing a program element suitable for execution by a CPU, the program element implementing a graphical user interface for use in detecting a presence of one or more target objects in a receptacle. The graphical user interface is adapted for: displaying first information conveying an image associated with the receptacle, the image conveying contents of the receptacle; displaying second information conveying a presence of at least one target object in the receptacle, the second information being displayed simultaneously with the first information; and providing a control allowing a user to cause third information to be displayed, the third information conveying at least one characteristic associated to the at least one target object.
The present invention also provides an apparatus for screening a receptacle. The apparatus comprises an input for receiving an image signal associated with the receptacle, the image signal conveying an input image related to contents of the receptacle, the image signal having been produced by a device that is characterized by introducing distortion into the input image. The apparatus also comprises a processing unit in communication with the input. The processing unit is operative for: applying a distortion correction process to the image signal to remove at least part of the distortion from the input image, thereby to generate a corrected image signal conveying at least one corrected image related to the contents of the receptacle; processing the corrected image signal in combination with a plurality of data elements associated with a plurality of target objects in an attempt to detect a presence of at least one of the target objects in the receptacle; and generating a detection signal in response to detection of the presence of at least one of the target objects in the receptacle. The apparatus also comprises an output for releasing the detection signal.
The present invention also provides an apparatus for performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The apparatus comprises an input for receiving image data conveying an image of contents of a currently screened receptacle, the image data being derived from a device that scans the currently screened receptacle with penetrating radiation. The apparatus also comprises a processing unit for determining whether the image depicts at least one prohibited object. The apparatus also comprises a storage component for storing history image data associated with images of contents of receptacles previously screened by the apparatus. The apparatus also comprises a graphical user interface for displaying a representation of the contents of the currently screened receptacle on a basis of the image data. The graphical user interface is adapted for displaying a representation of the contents of each of at least one of the receptacles previously screened by the apparatus on a basis of the history image data.
The present invention also provides a computer implemented graphical user interface for use in performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The computer implemented graphical user interface comprises a component for displaying a representation of contents of a currently screened receptacle, the representation of contents of a currently screened receptacle being derived from image data conveying an image of the contents of the currently screened receptacle, the image data being derived from a device that scans the currently screened receptacle with penetrating radiation. The computer implemented graphical user interface is adapted for displaying a representation of contents of each of at least one of a plurality of previously screened receptacles, the representation of contents of each of at least one of a plurality of previously screened receptacles being derived from history image data associated with images of the contents of the previously screened receptacles.
The present invention also provides a method for performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The method comprises receiving image data conveying an image of contents of a currently screened receptacle, the image data being derived from a device that scans the currently screened receptacle with penetrating radiation; processing the image data to determine whether the image depicts at least one prohibited object; storing history image data associated with images of contents of previously screened receptacles; displaying on a graphical user interface a representation of the contents of the currently screened receptacle on a basis of the image data; and displaying on the graphical user interface a representation of the contents of each of at least one of the previously screened receptacles on a basis of the history image data.
The present invention also provides an apparatus for performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The apparatus comprises an input for receiving image data conveying an image of contents of a receptacle, the image data being derived from a device that scans the receptacle with penetrating radiation. The apparatus also comprises a processing unit for determining whether the image depicts at least one prohibited object. The apparatus also comprises a graphical user interface for: displaying a representation of the contents of the receptacle on a basis of the image data; and providing at least one control allowing a user to select whether or not the graphical user interface highlights on the representation of the contents of the receptacle a location of each of at least one prohibited object deemed to be depicted in the image.
The present invention also provides a computer implemented graphical user interface for use in performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The computer implemented graphical user interface comprises a component for displaying a representation of contents of a receptacle, the representation of contents of a receptacle being derived from image data conveying an image of the contents of the receptacle, the image data being derived from a device that scans the receptacle with penetrating radiation. The computer implemented graphical user interface also comprises a component for providing at least one control allowing a user to select whether or not the computer implemented graphical user interface highlights on the representation of the contents of the receptacle a location of each of at least one prohibited object deemed to be depicted in the image.
The present invention also provides a method for performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The method comprises receiving image data conveying an image of contents of a receptacle, the image data being derived from a device that scans the receptacle with penetrating radiation; processing the image data to determine whether the image depicts at least one prohibited object; displaying on a graphical user interface a representation of the contents of the receptacle on a basis of the image data; and providing on the graphical user interface at least one control allowing a user to select whether or not the graphical user interface highlights on the representation of the contents of the receptacle a location of each of at least one prohibited object deemed to be depicted in the image.
The present invention also provides an apparatus for performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The apparatus comprises an input for receiving image data conveying an image of contents of a receptacle, the image data being derived from a device that scans the receptacle with penetrating radiation. The apparatus also comprises a processing unit for: processing the image data to detect depiction of one or more prohibited objects in the image; and responsive to detection that the image depicts at least one prohibited object, deriving a level of confidence in the detection. The apparatus also comprises a graphical user interface for displaying: a representation of the contents of the receptacle derived from the image data; and information conveying the level of confidence.
The present invention also provides a computer implemented graphical user interface for use in performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The computer implemented graphical user interface comprises a component for displaying a representation of contents of a receptacle, the representation of contents of a receptacle being derived from image data conveying an image of the contents of the receptacle, the image data being derived from a device that scans the receptacle with penetrating radiation. The computer implemented graphical user interface also comprises a component for displaying information conveying a level of confidence in a detection that the image depicts at least one prohibited object, the detection being performed by a processing unit processing the image data.
The present invention also provides a method for performing a security screening operation on receptacles to detect presence of one or more prohibited objects in the receptacles. The method comprises receiving image data conveying an image of contents of a receptacle, the image data being derived from a device that scans the receptacle with penetrating radiation; processing the image data to detect depiction of one or more prohibited objects in the image; responsive to detection that the image depicts at least one prohibited object, deriving a level of confidence in the detection; displaying on a graphical user interface a representation of the contents of the receptacle derived from the image data; and displaying on the graphical user interface information conveying the level of confidence.
For the purpose of this specification, the expression “receptacle” is used to broadly describe an entity adapted for receiving objects therein such as, for example, a luggage item, a cargo container or a mail parcel.
For the purpose of this specification, the expression “luggage item” is used to broadly describe luggage, suitcases, handbags, backpacks, briefcases, boxes, parcels or any other similar type of item suitable for containing objects therein.
For the purpose of this specification, the expression “cargo container” is used to broadly describe an enclosure for storing cargo such as would be used, for example, on a ship, train, truck, or any other suitable means of transporting cargo.
These and other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
A detailed description of embodiments of the present invention is provided herein below, by way of example only, with reference to the accompanying drawings, in which:
In the drawings, the embodiments of the invention are illustrated by way of examples. It is to be expressly understood that the description and drawings are only for the purpose of illustration and are an aid for understanding. They are not intended to be a definition of the limits of the invention.
The image generation device 102 generates an image signal 150 associated with the receptacle 104. The image signal 150 conveys an input image 800 related to contents of the receptacle 104.
The apparatus 106 receives the image signal 150 and processes the image signal 150 in combination with a plurality of data elements associated with a plurality of target objects in an attempt to detect a presence of one or more target objects in the receptacle 104. In this embodiment, the data elements associated with the plurality of target objects are stored in a database 110.
In response to detection of the presence of one or more target objects in the receptacle 104, the apparatus 106 generates a detection signal 160 which conveys the presence of one or more target objects in the receptacle 104. Examples of the manner in which the detection signal 160 can be generated are described later on. The output module 108 conveys information derived at least in part on the basis of the detection signal 160 to a user of the system 100.
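The overall flow described above can be illustrated with a short sketch. This is not code prescribed by the specification; the function and variable names (`detect_target_objects`, `correlation_score`, the 0.8 threshold, and the toy database layout) are all assumptions introduced here for illustration, and the normalized cross-correlation score stands in for whatever correlation operation the apparatus 106 actually implements.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Detection:
    """One element of the 'detection signal' 160: which target object
    was found and how strongly it correlated with the input image."""
    target_object_id: str
    score: float

def correlation_score(image: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between two same-size images
    (a stand-in for the correlation operation of the apparatus)."""
    a = (image - image.mean()).ravel()
    b = (template - template.mean()).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def detect_target_objects(input_image, database, threshold=0.8):
    """Process the input image against every stored data element and
    return the detections (the basis of the detection signal 160)."""
    detections = []
    for object_id, templates in database.items():
        score = max(correlation_score(input_image, t) for t in templates)
        if score >= threshold:
            detections.append(Detection(object_id, score))
    return detections

# Toy usage: the "gun" template matches the input image exactly.
img = np.array([[0., 1.], [1., 0.]])
db = {"gun":   [np.array([[0., 1.], [1., 0.]])],
      "knife": [np.array([[1., 0.], [0., 1.]])]}
signal = detect_target_objects(img, db)
print([d.target_object_id for d in signal])  # ['gun']
```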
Advantageously, the system 100 provides assistance to human security personnel using the system 100 in detecting certain target objects and decreases the susceptibility of the screening process to human error.
Image Generation Device 102
In this embodiment, the image generation device 102 uses penetrating radiation or emitted radiation to generate the image signal 150. Examples of such devices include, without being limited to, x-ray, gamma ray, computed tomography (CT scans), thermal imaging, and millimeter wave devices. Such devices are known in the art and as such will not be described further here. In a non-limiting example of implementation, the image generation device 102 comprises a conventional x-ray machine and the input image 800 related to the contents of the receptacle 104 is an x-ray image of the receptacle 104 generated by the x-ray machine.
The input image 800 related to the contents of the receptacle 104, which is conveyed by the image signal 150, may be a two-dimensional (2-D) image or a three-dimensional (3-D) image, and may be in any suitable format such as, without limitation, VGA, SVGA, XGA, JPEG, GIF, TIFF, and bitmap amongst others. The input image 800 related to the contents of the receptacle 104 may be in a format that can be displayed on a display screen.
In some embodiments (e.g., where the receptacle 104 is large, as is the case with a cargo container), the image generation device 102 may be configured to scan the receptacle 104 along various axes to generate an image signal conveying multiple input images related to the contents of the receptacle 104. Scanning methods for large objects are known in the art and as such will not be described further here. Each of the multiple images is then processed in accordance with the method described herein below to detect the presence of one or more target objects in the receptacle 104.
In some cases, the image generation device 102 may introduce distortion into the input image 800. More specifically, different objects appearing in the input image 800 may be distorted to different degrees, depending on a given object's position within the input image 800 and on the given object's height within the receptacle 104 (which sets the distance between the given object and the image generation device 102).
Database 110
In this embodiment, the database 110 includes a plurality of entries associated with respective target objects that the system 100 is designed to detect. A non-limiting example of a target object is a weapon. The entry in the database 110 that is associated with a particular target object includes data associated with the particular target object.
The data associated with the particular target object may comprise one or more images of the particular target object. The format of the one or more images of the particular target object will depend upon one or more image processing algorithms implemented by the apparatus 106, which is described later. Where plural images of the particular target object are provided, these images may depict the particular target object in various orientations.
The data associated with the particular target object may also or alternatively comprise the Fourier transform of one or more images of the particular target object. The data associated with the particular target object may also comprise characteristics of the particular target object. Such characteristics may include, without being limited to, the name of the particular target object, its associated threat level, the recommended handling procedure when the particular target object is detected, and any other suitable information. The data associated with the particular target object may also comprise a target object identifier.
In this embodiment, the database 110 comprises a plurality of entries 4021-402N, each entry 402n (1≦n≦N) being associated to a respective target object whose presence in a receptacle it is desirable to detect.
The types of target objects having entries in the database 110 will depend upon the application in which the database 110 is being used and on the target objects the system 100 is designed to detect.
For example, if the database 110 is used in the context of luggage screening in an airport, it will be desirable to detect certain types of target objects that may present a security risk. As another example, if the database 110 is used in the context of cargo container screening at a port, it will be desirable to detect other types of target objects. For instance, these other types of objects may include contraband items, items omitted from a manifest, or simply items which are present in the manifest associated to the cargo container. In the example shown in
The entry 402n associated with a given target object comprises data associated with the given target object.
More specifically, in this embodiment, the entry 402n associated with a given target object comprises a group 416 of sub-entries 4181-418K. Each sub-entry 418k (1≦k≦K) is associated to the given target object in a respective orientation. For instance, in the example shown in
The number of sub-entries 4181-418K in a given entry 402n may depend on a number of factors including, but not limited to, the type of application in which the database 110 is intended to be used, the given target object associated to the given entry 402n, and the desired speed and accuracy of the overall screening system in which the database 110 is intended to be used. More specifically, certain objects have shapes that, owing to their symmetric properties, do not require a large number of orientations in order to be adequately represented. For example, images of a spherical object will look substantially identical to one another irrespective of the spherical object's orientation, and the group of sub-entries 416 may therefore include a single sub-entry for such an object. However, an object having a more complex shape, such as a gun, would require multiple sub-entries in order to represent its different appearances in different orientations. The greater the number of sub-entries in the group of sub-entries 416 for a given target object, the more precise the attempt to detect a representation of the given target object in an image of a receptacle can be; however, a larger number of sub-entries must then be processed, which increases the time required to complete the processing. Conversely, the smaller the number of sub-entries in the group of sub-entries 416 for a given target object, the faster the processing can be performed but the less precise the detection of that target object in an image of a receptacle. As such, the number of sub-entries in a given entry 402n reflects a trade-off between the desired speed and accuracy, and may depend on the target object itself. In certain embodiments, the group of sub-entries 416 may include four or more sub-entries 4181-418K.
In this example, each sub-entry 418k in the entry 402n associated with a given target object comprises data suitable for being processed by a processing unit implementing a correlation operation to attempt to detect a representation of the given target object in an image of the receptacle 104.
More particularly, in this embodiment, each sub-entry 418k in the entry 402n associated with a given target object comprises a data element 414k (1≦k≦K) regarding a filter (hereinafter referred to as a “filter data element”). The filter can also be referred to as a template, in which case “template data element” may sometimes be used herein. In one example of implementation, each filter data element is derived based at least in part on an image of the given target object in a certain orientation. For instance, the filter data element 414k may be indicative of the Fourier transform (or Fourier transform complex conjugate) of the image of the given target object in the certain orientation. The Fourier transform may be stored in mathematical form or as an image of the Fourier transform of the image of the given target object in the certain orientation. In another example of implementation, each filter data element is derived based at least in part on a function of the Fourier transform of the image of the given target object in the certain orientation. In yet another example of implementation, each filter data element is derived based at least in part on a function of the Fourier transform of a composite image, the composite image including at least the image of the given target object in the certain orientation. Examples of the manner in which a given filter data element may be derived will be described later on.
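A filter data element of the kind just described, i.e. the complex conjugate of the Fourier transform of a template image, is the classical matched filter, and its use in a correlation operation can be sketched as follows. This is an illustrative sketch only; the function names and the toy scene are assumptions, not part of the specification.

```python
import numpy as np

def make_filter_data_element(template: np.ndarray, shape: tuple) -> np.ndarray:
    """Zero-pad the template image to the scene size and return the
    complex conjugate of its 2-D Fourier transform (the stored filter)."""
    padded = np.zeros(shape)
    padded[:template.shape[0], :template.shape[1]] = template
    return np.conj(np.fft.fft2(padded))

def correlate(scene: np.ndarray, filter_element: np.ndarray) -> np.ndarray:
    """Correlation plane: inverse transform of the product of the scene
    spectrum and the stored filter. A bright peak marks a likely match."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * filter_element))

# Toy scene: a 4x4 bright "object" placed at rows 10-13, columns 20-23.
scene = np.zeros((64, 64))
scene[10:14, 20:24] = 1.0
template = np.ones((4, 4))

plane = correlate(scene, make_filter_data_element(template, scene.shape))
peak = np.unravel_index(np.argmax(plane), plane.shape)
print(peak)  # (10, 20): the correlation peak lands at the object's corner
```

Storing the conjugate spectrum rather than the raw image is what lets the detection stage replace a sliding-window comparison with two FFTs and an element-wise product.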
In this embodiment, each sub-entry 418k in the entry 402n associated with the given target object also comprises a data element 412k (1≦k≦K) regarding an image of the given target object in the certain orientation corresponding to that sub-entry (hereinafter referred to as an “image data element”). This image may be the one on which the filter corresponding to the filter data element 414k is based.
It will be appreciated that, in some embodiments, the image data element 412k of each of one or more of the sub-entries 4181-418K may be omitted. Similarly, in other embodiments, the filter data element 414k of each of one or more of the sub-entries 4181-418K may be omitted.
The entry 402n associated with a given target object may also comprise data 406 suitable for being processed by a computing apparatus to derive a pictorial representation of the given target object. Any suitable format for storing the data 406 may be used. Examples of such formats include, without being limited to, bitmap, jpeg, gif, or any other suitable format in which a pictorial representation of an object may be stored.
The entry 402n associated with a given target object may also comprise additional information 408 associated with the given target object. The additional information 408 will depend upon the type of given target object as well as the specific application in which the database 110 is intended to be used. Thus, the additional information 408 can vary from one implementation to another. Examples of the additional information 408 include, without being limited to:
In one example, the risk level associated to the given target object (first example above) may convey the relative risk level of the given target object compared to other target objects in the database 110. For example, a gun would be given a relatively high risk level while a metallic nail file would be given a relatively low risk level, and a pocket knife would be given a risk level between that of the nail file and the gun.
In another example, information regarding the monetary value associated with the given target object may be an actual monetary value, such as the actual value of the given target object or its value for customs purposes, or information allowing such a monetary value to be computed (e.g., a weight or size associated to the given target object). Such a monetary value is particularly useful in applications where the value of the contents of a receptacle is of importance such as, for example, mail parcel delivery and customs applications.
The entry 402n associated with a given target object may also comprise an identifier 404. The identifier 404 allows each entry 402n in the database 110 to be uniquely identified and accessed for processing.
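The structure of an entry 402n described above can be summarized in code. The layout below is a hypothetical in-memory rendering: the field names echo the reference numerals in the text, but everything else (class names, the dictionary keys, the sample values) is an assumption introduced for illustration.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SubEntry:
    """One sub-entry 418k: the target object in one orientation."""
    filter_data: Optional[np.ndarray]  # 414k: e.g. conjugate FFT of the image
    image_data: Optional[np.ndarray]   # 412k: the image itself (may be omitted)

@dataclass
class TargetObjectEntry:
    """One entry 402n of the database 110."""
    identifier: str                    # 404: unique key for access
    sub_entries: list                  # 416: group of sub-entries 418_1..418_K
    pictorial_data: Optional[bytes] = None  # 406: e.g. a bitmap/JPEG payload
    additional_info: dict = field(default_factory=dict)  # 408: name, risk, etc.

# Minimal database: one object with two orientations and some metadata.
img = np.eye(4)
entry = TargetObjectEntry(
    identifier="obj-001",
    sub_entries=[SubEntry(np.conj(np.fft.fft2(img)), img),
                 SubEntry(np.conj(np.fft.fft2(img.T)), img.T)],
    additional_info={"name": "gun", "risk_level": "high"},
)
database = {entry.identifier: entry}
print(len(database["obj-001"].sub_entries))  # 2
```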
As mentioned previously, the database 110 may be stored on a computer readable storage medium that is accessible by a processing unit. Optionally, the database 110 may be provided with a program element implementing an interface adapted to interact with an external entity. Such an embodiment is depicted in
Although the database 110 has been described with reference to
Also, although the database 110 is shown in
Referring now to
The image generation device 702 is adapted for generating image signals associated with a given target object whose presence in a receptacle it is desirable to detect. The image generation device 702 may be similar to the image generation device 102 described above.
The apparatus 704 is in communication with the image generation device 702 and with a memory unit storing the database 110. The apparatus 704 receives at an input the image signals associated with the given target object from the image generation device 702.
The apparatus 704 comprises a processing unit in communication with the input. In this embodiment, the processing unit of the apparatus 704 processes the image signals associated with the given target object to generate respective filter data elements (such as the filter data elements 4141-414K described above). The generated filter data elements are suitable for being processed by a device implementing a correlation operation to attempt to detect a representation of the given target object in an image of a receptacle. For example, the filter data elements may be indicative of the Fourier transform (or Fourier transform complex conjugate) of an image of the given target object. The filter data elements may also be referred to as templates. Examples of other types of filters that may be generated by the apparatus 704 and the manner in which they may be generated will be described later on. The filter data elements are then stored in the database 110 in connection with an entry associated with the given target object (such as one of the entries 4021-402N described above).
In this embodiment, the system 700 comprises the positioning device 706 for positioning a given target object in two or more distinct orientations such as to allow the image generation device 702 to generate an image signal associated with the given target object in each of the two or more distinct orientations.
The apparatus 704 may include a second input (not shown) for receiving supplemental information associated with a given target object and for storing that supplemental information in the database 110 in connection with an entry associated with the given target object (such as one of the entries 4021-402N described above). The second input may be implemented as a data connection to a memory device or as an input device such as a keyboard, mouse, pointer, voice recognition device, or any other suitable type of input device. Examples of supplemental information that may be provided include, but are not limited to:
With reference to
At step 250, an image of a given target object in a given orientation is obtained. The image may have been pre-stored on a computer readable medium and in that case obtaining the image of the given target object in the given orientation involves extracting data corresponding to the image of the given target object in the given orientation from that computer readable medium. Alternatively, at step 250, a given target object is positioned in a given orientation on the positioning device 706 in the viewing field of the image generation device 702 and an image of the given target object in the given orientation is then obtained by the image generation device 702. At step 252, the image of the given target object in the given orientation obtained at step 250 is processed by the apparatus 704 to generate a corresponding filter data element. As previously indicated, the generated filter data element is suitable for being processed by a processing unit implementing a correlation operation to attempt to detect a representation of the given target object in an image of a receptacle.
At step 254, a new sub-entry associated to the given target object (such as one of the sub-entries 4181-418K described above) is created in the database 110 and the filter data element generated at step 252 is stored as part of that new sub-entry. Optionally, the image of the given target object in the given orientation obtained at step 250 may also be stored as part of the new sub-entry (e.g., as one of the image data elements 4121-412K described above).
At step 256, it is determined whether another image of the given target object in a different orientation is required. The requirements may be generated automatically (e.g., there is a pre-determined number of orientations required for the given target object or for all target objects) or may be provided by a user using an input device.
If another image of the given target object in a different orientation is required, step 256 is answered in the affirmative and the method proceeds to step 258. At step 258, the next orientation is selected, leading to step 250 where an image of the given target object in the next orientation is obtained. The image of the given target object in the next orientation may have been pre-stored on a computer readable medium and in that case selecting the next orientation at step 258 involves locating the corresponding data on the computer readable medium. Alternatively, at step 258 the next orientation of the given target object is determined.
If no other image of the given target object in a different orientation is required, step 256 is answered in the negative and the method proceeds to step 262. At step 262, it is determined whether there remains any other target object(s) to be processed. If there remains one or more other target objects to be processed, step 262 is answered in the affirmative and the method proceeds to step 260 where the next target object is selected and then to step 250 where an image of the next target object in a given orientation is obtained. If at step 262 there are no other target objects that remain to be processed, step 262 is answered in the negative and the process is completed. In some cases, step 262 may be preceded by an additional step (not shown) in which the aforementioned supplemental information may be stored in the database 110 in association with the entry corresponding to the given target object.
As indicated above with reference to step 250, the images of the target objects may have been obtained and pre-stored on a computer readable medium prior to the generation of data for the entries of the database 110. In such a case, step 250 may be preceded by another step (not shown). This other step would include obtaining a plurality of images of the given target object by sequentially positioning the given target object in different orientations and obtaining an image of the given target object in each of the different orientations using the image generation device 702. These images would then be stored on a computer readable storage medium.
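The database-population loop of steps 250 through 262 may be sketched as follows. This is a minimal sketch, assuming an in-memory dictionary stands in for the database 110, random arrays stand in for object images, and the filter generation of step 252 is the conjugate-FFT approach described further below; none of these names are part of the system described.

```python
import numpy as np

def generate_filter(image):
    """Sketch of step 252: derive a filter data element from an object
    image (here, the complex conjugate of its 2D Fourier transform)."""
    return np.conj(np.fft.fft2(image))

# Hypothetical in-memory stand-in for the database 110: one entry per
# target object, one sub-entry (filter + image) per orientation.
database = {}

# Dummy data: two target objects, three orientations each.
target_objects = {
    "obj_A": [np.random.rand(8, 8) for _ in range(3)],
    "obj_B": [np.random.rand(8, 8) for _ in range(3)],
}

for object_id, orientation_images in target_objects.items():  # step 260
    entry = {"sub_entries": []}
    for image in orientation_images:                          # steps 250/258
        filter_data = generate_filter(image)                  # step 252
        entry["sub_entries"].append(                          # step 254
            {"filter": filter_data, "image": image})
    database[object_id] = entry
```

The inner loop corresponds to cycling through orientations of one object (steps 250-258) and the outer loop to cycling through objects (step 260).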
Once the database 110 has been created by a process such as the one described above, it can be incorporated into a system such as the system 100 shown in
Therefore, the example method described in connection with
As described above, the apparatus 704 is adapted for processing an image of a given target object in a given orientation to generate a corresponding filter data element.
Optionally, image processing and enhancement can be performed on the image of the given target object to obtain better matching performance depending on the environment and application.
Many methods for generating filters are known and a few such methods will be described later on.
For example, in one case, the generation of the reference template or filter data element may be performed in a few steps. First, the background is removed from the image of the given target object. In other words, the image is extracted from the background and the background is replaced by a black background. The resulting image is then processed through a Fourier transform function. The result of this transform is a complex image. The resulting Fourier transform (or its complex conjugate) may then be used as the filter data element corresponding to the image of the given target object.
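The filter-generation steps just described (background removal, Fourier transform, complex conjugate) may be sketched as follows; the mask-based segmentation, array sizes, and function name are illustrative assumptions:

```python
import numpy as np

def make_matched_filter(object_image, background_mask):
    """Sketch of the steps described above: the background is replaced
    by black (zeros), the result is processed through a Fourier
    transform, and the complex conjugate of that transform serves as
    the filter data element."""
    segmented = np.where(background_mask, 0.0, object_image)  # black background
    spectrum = np.fft.fft2(segmented)                         # complex image
    return np.conj(spectrum)

image = np.random.rand(16, 16)
mask = np.zeros((16, 16), dtype=bool)
mask[:4, :] = True  # hypothetical background region
filt = make_matched_filter(image, mask)
```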
Alternatively, the filter data element may be derived on the basis of a function of a Fourier transform of the image of the given target object in the given orientation. For example, a phase only filter (POF) may be generated by the apparatus 704. A phase only filter (POF) contains the complex conjugate of the phase information (between zero and 2π), which is mapped to values in a 0 to 255 range. These 256 values correspond in fact to the 256 levels of gray of an image. The reader is invited to refer to the following document, which is hereby incorporated by reference herein, for additional information regarding phase only filters (POF): “Phase-Only Matched Filtering”, Joseph L. Horner and Peter D. Gianino, Appl. Opt. Vol. 23 no. 6, 15 Mar. 1984, pp. 812-816.
As another possible alternative, the filter may be derived on the basis of a function of a Fourier transform of a composite image, the composite image including a component derived from the given target object in the given orientation. For example, in order to reduce the amount of data needed to represent the whole range of 3D orientations that a single target object can take, the apparatus 704 may be operative for generating a MACE (Minimum Average Correlation Energy) filter for a given target object. Typically, the MACE filter combines several different 2D projections of a given object and encodes them in a single MACE filter instead of having one 2D projection per filter. One of the benefits of using MACE filters is that the resulting database 110 would take less space since it would include fewer items. Also, since the number of correlation operations needed to identify a single target object would be reduced, the total processing time to determine whether a given object is present would also be reduced. The reader is invited to refer to the following document, which is hereby incorporated by reference herein, for additional information regarding MACE filters: Mahalanobis, A., B. V. K. Vijaya Kumar, and D. Casasent (1987); Minimum average correlation energy filters, Appl. Opt. 26 no. 17, 3633-3640.
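A MACE filter of the kind referred to above may be sketched in Python with NumPy. The closed form h = D⁻¹X(X^H D⁻¹X)⁻¹u follows Mahalanobis et al.; the function and variable names, image sizes, and use of random arrays as stand-ins for 2D projections are illustrative assumptions:

```python
import numpy as np

def mace_filter(training_images, u=None):
    """Sketch of a MACE filter combining several 2D projections of one
    object into a single frequency-domain filter:
        h = D^-1 X (X^H D^-1 X)^-1 u
    where the columns of X are the FFTs of the training images, D is
    the diagonal average power spectrum, and u holds the desired
    correlation peak value for each image (all ones by default)."""
    shape = training_images[0].shape
    X = np.stack([np.fft.fft2(img).ravel() for img in training_images], axis=1)
    n = X.shape[1]
    if u is None:
        u = np.ones(n)
    d = np.mean(np.abs(X) ** 2, axis=1)     # diagonal of D
    Dinv_X = X / d[:, None]                 # D^-1 X
    A = X.conj().T @ Dinv_X                 # X^H D^-1 X  (n x n)
    h = Dinv_X @ np.linalg.solve(A, u)      # filter in frequency domain
    return h.reshape(shape)

rng = np.random.default_rng(0)
views = [rng.random((8, 8)) for _ in range(4)]  # four 2D projections
h = mace_filter(views)
```

By construction, the filter satisfies the constraint X^H h = u, i.e., it produces the prescribed correlation peak value for every encoded projection, which is what allows several orientations to share a single database entry.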
It will readily be appreciated that various other types of templates or filters can be generated.
Output Module 108
In this embodiment, the output module 108 conveys to a user of the system 100 information derived at least in part on the basis of the detection signal 160.
The output controller 200 receives from the apparatus 106 the detection signal 160 conveying the presence of one or more target objects (hereinafter referred to as “detected target objects”) in the receptacle 104. In one embodiment, the detection signal 160 conveys information regarding the position and/or orientation of the one or more detected target objects within the receptacle 104. The detection signal 160 may also convey one or more target object identifier data elements (such as the identifier data elements 404 of the entries 4021-402N in the database 110 described above), which permit identification of the one or more detected target objects.
The output controller 200 then releases a signal for causing the output device 202 to convey information related to the one or more detected target objects to a user of the system 100.
In one embodiment, the output controller 200 may be adapted to cause a display of the output device 202 to convey information related to the one or more detected target objects. For example, the output controller 200 may generate image data conveying the location of the one or more detected target objects within the receptacle 104. The output controller 200 may also extract characteristics of the one or more detected target objects from the database 110 on the basis of the target object identifier data element and generate image data conveying the characteristics of the one or more detected target objects. As another example, the output controller 200 may generate image data conveying the location of the one or more detected target objects within the receptacle 104 in combination with the input image 800 generated by the image generation device 102.
In another embodiment, the output controller 200 may be adapted to cause an audio unit of the output device 202 to convey information related to the one or more detected target objects. For example, the output controller 200 may generate audio data conveying the presence of the one or more detected target objects, the location of the one or more detected target objects within the receptacle 104, and the characteristics of the one or more detected target objects.
The output device 202 may be any device suitable for conveying information to a user of the system 100 regarding the presence of one or more target objects in the receptacle 104. The information may be conveyed in visual format, audio format, or as a combination of visual and audio formats.
For example, the output device 202 may include a display adapted for displaying in visual format information related to the presence of the one or more detected target objects.
In another example, the output device 202 may include a printer adapted for displaying in printed format information related to the presence of the one or more detected target objects. In yet another example, the output device 202 may include an audio unit adapted for releasing an audio signal conveying information related to the presence of the one or more detected target objects. In yet another example, the output device 202 may include a set of visual elements, such as lights or other suitable visual elements, adapted for conveying in visual format information related to the presence of the one or more detected target objects.
It will be appreciated that other suitable types of output devices may be used in other embodiments.
In one embodiment, which will now be described with reference to
An example of a method implemented by the apparatus 1510 is illustrated in
In this case, the apparatus 1510 comprises a first input 1512, a second input 1502, a third input 1504, a user input 1550, a processing unit 1506, and an output 1508.
The first input 1512 is adapted for receiving an image signal associated with a receptacle, the image signal conveying an input image related to contents of the receptacle (e.g., the image signal 150 associated with the receptacle 104 and conveying the input image 800 related to contents of the receptacle 104).
The second input 1502 is adapted for receiving a detection signal conveying a presence of at least one target object in the receptacle (e.g., the detection signal 160 conveying the presence of one or more target objects in the receptacle 104). Various information can be received at the second input 1502 depending on the specific implementation of the apparatus 106. Examples of information that may be received include information about a position of each of the at least one detected target object within the receptacle, information about a level of confidence of the detection, and information allowing identification of each of the at least one detected target object.
The third input 1504 is adapted for receiving from the database 110 additional information regarding the one or more target objects detected in the receptacle. Various information can be received at the third input 1504 depending on contents of the database 110. Examples of information that may be received include images depicting each of the one or more detected target objects and/or characteristics of the target object. Such characteristics may include, without being limited to, the name of the detected target object, dimensions of the detected target object, its associated threat level, the recommended handling procedure when such a target object is detected, and any other suitable information.
The user input 1550 is adapted for receiving signals from a user input device, the signals conveying commands for controlling the information displayed by the graphical user interface or for modifying (e.g., annotating) the displayed information. Any suitable user input device for providing user commands may be used such as, for example, a mouse, keyboard, pointing device, speech recognition unit, touch sensitive screen, etc.
The processing unit 1506 is in communication with the first input 1512, the second input 1502, the third input 1504, and the user input 1550 and implements the graphical user interface.
The output 1508 is adapted for releasing a signal for causing the output device 202 to display the graphical user interface implemented by the processing unit 1506.
An example of the graphical user interface implemented by the apparatus 1510 is now described with reference to
In this example, the graphical user interface displays first information 1604 conveying an input image related to contents of a receptacle, based on an image signal received at the input 1512 of the apparatus 1510. The input image may be in any suitable format and may depend on the format of the image signal received at the input 1512. For example, the input image may be of type x-ray, gamma-ray, computed tomography (CT), TeraHertz, millimeter wave, or emitted radiation, amongst others.
The graphical user interface also displays second information 1606 conveying a presence of one or more target objects in the receptacle based on the detection signal received at the input 1502 of the apparatus 1510. The second information 1606 is derived at least in part based on the detection signal received at the second input 1502. The second information 1606 may be displayed simultaneously with the first information 1604. In one case, the second information 1606 may convey position information regarding each of the at least one detected target object within the receptacle. The second information 1606 may convey the presence of one or more target objects in the receptacle in textual format, in graphical format, or as a combination of graphical information and textual information. In textual format, the second information 1606 may appear in a dialog box with a message such as “A ‘target_object_name’ has been detected.” or any conceivable variant. In the example shown in
The graphical user interface may also provide a control 1608 allowing a user to cause third information to be displayed, the third information conveying at least one characteristic associated to the one or more detected target objects. For example, the control 1608 may allow the user to cause the third information to be displayed by using an input device such as, for example, a mouse, keyboard, pointing device, speech recognition unit, touch sensitive screen, etc. In the example shown in
The first information 1604 and the second information 1606 may be displayed in a first viewing window 1602 as shown in
With reference to
For example, in this case, the third information conveys, for each detected target object, an image 1632 and object characteristics 1638 including a description, a risk level, and a level of confidence for the detection. Other types of information that may be conveyed include, without being limited to: a handling procedure when such a target object is detected, dimensions of the detected target object, or any other information that could assist the user in validating the information provided, confirming the presence of the detected target object, or facilitating its handling. The third information may be conveyed in textual format, graphical format, or both. For instance, the third information may include information related to the level of confidence for the detection using a color scheme. An example of a possible color scheme that may be used is:
As another example, the third information may include information related to the level of confidence for the detection using a shape scheme. Such a shape-based scheme may be particularly useful for individuals who are color blind or for use with monochromatic displays. An example of a possible shape scheme that may be used is:
In one embodiment, the processing unit 1506 is adapted to transmit a query signal to the database 110, on a basis of information conveyed by the detection signal received at the input 1502, in order to obtain certain information associated to one or more detected target objects, such as an image, a description, a risk level, and a handling procedure, amongst others. In response to the query signal, the database 110 transmits the requested information to the processing unit 1506 via the input 1504. Alternatively, a signal conveying information associated with the one or more detected target objects can be automatically provided to the apparatus 1510 without requiring a query.
With continued reference to
Each entry in the detected target object list 1634 may include information conveying a level of confidence associated to the presence of the corresponding target object in the receptacle. The information conveying a level of confidence may be extracted from the detection signal received at input 1502. For example, the processing unit 1506 may process a data element indicative of the level of confidence received in the detection signal in combination with a detection sensitivity level. When the level of confidence associated to the presence of a particular target object in the receptacle conveyed by the data element in the detection signal is below the detection sensitivity level, the second information 1606 associated with the particular target object is omitted from the graphical user interface. In addition, the particular target object is not listed in the detected target object list 1634. In other words, in that example, only information associated to target objects for which detection levels of confidence exceed the detection sensitivity level is provided by the graphical user interface.
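The omission of low-confidence detections described above may be sketched as follows; the entry fields and the threshold value are illustrative assumptions about what the detection signal conveys:

```python
# Hypothetical detection entries as they might arrive in the detection
# signal: each carries an identifier and a level of confidence (0-1).
detections = [
    {"name": "gun", "confidence": 0.92},
    {"name": "nail file", "confidence": 0.41},
    {"name": "pocket knife", "confidence": 0.77},
]

DETECTION_SENSITIVITY_LEVEL = 0.5  # assumed operator-set threshold

# Only detections whose confidence exceeds the sensitivity level are
# kept for display in the detected target object list.
displayed = [d for d in detections
             if d["confidence"] > DETECTION_SENSITIVITY_LEVEL]
```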
Each entry in the detected target object list 1634 may include information conveying a threat level (not shown) associated to the corresponding detected target object. The information conveying a threat level may be extracted from the signal received from the database 110 at the third input 1504. The threat level information associated to a particular detected object may convey the relative threat level of the particular detected target object compared to other target objects in the database 110. For example, a gun would be given a relatively high threat level while a metallic nail file would be given a relatively low threat level, and perhaps a pocket knife would be given a threat level between that of the nail file and the gun.
Functionality may be provided to a user for allowing the user to sort the entries in the detected target object list 1634 based on one or more selection criteria. Such criteria may include, without being limited to, the detection levels of confidence and/or the threat level. For example, such functionality may be enabled by displaying a control (not shown) on the graphical user interface in the form of a pull-down menu providing a user with a set of sorting criteria and allowing the user to select the criteria via an input device. In response to the user's selection, the entries in the detected target object list 1634 are sorted based on the criteria selected by the user. Other manners for providing such functionality will become apparent and as such will not be described further here.
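The sorting functionality may be sketched as follows; the entry fields are illustrative assumptions about what the detection signal and database provide:

```python
detected = [
    {"name": "pocket knife", "confidence": 0.77, "threat_level": 2},
    {"name": "gun", "confidence": 0.92, "threat_level": 3},
    {"name": "nail file", "confidence": 0.41, "threat_level": 1},
]

# Sort by threat level (highest first), breaking ties by confidence,
# as a user might request through the hypothetical pull-down menu.
by_threat = sorted(detected,
                   key=lambda d: (d["threat_level"], d["confidence"]),
                   reverse=True)
```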
Functionality may also be provided to the user for allowing the user to add and/or remove one or more entries in the detected target object list 1634. Removing an entry may be desirable, for example, when screening personnel observe the detection results and decide that the detection was erroneous or, alternatively, that the object detected is not particularly problematic. Adding an entry may be desirable, for example, when screening personnel observe, on the displayed image, the presence of a target object that was not detected. When an entry from the detected target object list 1634 is removed/added, the user may be prompted to enter information conveying a reason why the entry was removed/added from/to the detected target object list 1634. Such information may be entered using any suitable input device such as, for example, a mouse, keyboard, pointing device, speech recognition unit, or touch sensitive screen, to name a few.
In this embodiment, the graphical user interface enables a user to select one or more entries from the detected target object list 1634 for which third information is to be displayed in the second viewing window 1630. For example, the user can select one or more entries from the detected target object list 1634 by using an input device. A signal conveying the user's selection is received at the user input 1550. In response to receiving that signal at the user input 1550, information associated with the one or more entries selected in the detected target object list 1634 is displayed in the second viewing window 1630.
The graphical user interface may be adapted for displaying a second control (not shown) for allowing a user to cause the second information to be removed from the graphical user interface.
The graphical user interface may also be adapted for displaying one or more additional controls 1636 for allowing a user to modify a configuration of the graphical user interface. For example, the graphical user interface may display a control window in response to actuation of a control button 1680 allowing a user to select screening options. An example of such a control window is shown in
It is to be understood that other options may be provided to a user and that some of the above example options may be omitted in certain embodiments.
In addition, certain options may be selectively provided to certain users or, alternatively, may require a password to be provided. For example, the setting threshold sensitivity/confidence level 1660 may be made available only to users having certain privileges (e.g., screening supervisors or security directors). As such, the graphical user interface may include some type of user identification/authentication functionality, such as a login process, to identify/authenticate a user. Alternatively, the graphical user interface, upon selection by a user of the setting threshold sensitivity/confidence level 1660 option, may prompt the user to enter a password for allowing the user to modify the detection sensitivity level of the screening system.
The graphical user interface may be adapted to allow a user to add complementary information to the information being displayed on the graphical user interface. For example, the user may be enabled to insert markings in the form of text and/or visual indicators in an image displayed on the graphical user interface. The markings may be used, for example, to emphasize certain portions of the receptacle. The marked-up image may then be transmitted to a third party location, such as a checking station, so that the checking station is alerted to verify the marked portion of the receptacle to potentially locate a target object. In such an implementation, the user input 1550 receives signals from an input device, the signals conveying commands for marking the image displayed in the graphical user interface. Any suitable input device for providing user commands may be used such as, for example, a mouse, keyboard, pointing device, speech recognition unit, touch sensitive screen, etc.
The apparatus 1510 may be adapted to store a history of the image signals received at the first input 1512 conveying information related to the contents of previously screened receptacles. The image signals may be stored in association with the corresponding detection signals received at the input 1502 and any corresponding user input signals received at the input 1550. The history of prior images may be accessed through a suitable control (not shown) provided on the graphical user interface. The control may be actuated by a user to cause a list of prior images to be displayed to the user. The user may then be enabled to select one or more entries in the list of prior images. For instance, the selection may be effected on the basis of the images themselves or by allowing the user to specify either a time or time period associated to the images in the history of prior images. In response to a user selection, the one or more images from the history of prior images may then be displayed to the user along with information regarding the target objects detected in those images. When multiple images are selected, the selected images may be displayed concurrently with one another or may be displayed separately.
The apparatus 1510 may also be adapted to assign a classification to a receptacle depending upon the detection signal received at the second input 1502. The classification criteria may vary from one implementation to another and may be further conditioned on a basis of external factors such as national security levels. The classification may be a two level classification, such as an “ACCEPTED/REJECTED” type of classification, or alternatively may be a multi-level classification. An example of a multi-level classification is a three level classification where receptacles are classified as “LOW/MEDIUM/HIGH RISK”. The classifications may then be associated to respective handling procedures. For example, receptacles classified as “REJECTED” may be automatically assigned to be manually inspected while receptacles classified as “ACCEPTED” may proceed without such an inspection. In one embodiment, each class is associated to a set of criteria. Examples of criteria may include, without being limited to: a threshold confidence level associated to the detection process, the level of risk associated with the target object detection, and whether a target object was detected. It will be appreciated that other criteria may be used.
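A three level classification of the kind mentioned above may be sketched as follows; the specific thresholds, field names, and the mapping of criteria to classes are illustrative assumptions, not values specified herein:

```python
def classify_receptacle(detections, confidence_threshold=0.5):
    """Sketch of a three-level classification based on the example
    criteria above: whether a target object was detected, the level of
    confidence of the detection, and the associated threat level
    (1 = low, 2 = medium, 3 = high). All thresholds are assumptions."""
    confident = [d for d in detections
                 if d["confidence"] >= confidence_threshold]
    if not confident:
        return "LOW RISK"
    if any(d["threat_level"] >= 3 for d in confident):
        return "HIGH RISK"
    return "MEDIUM RISK"

label = classify_receptacle(
    [{"name": "gun", "confidence": 0.92, "threat_level": 3}])
```

Each resulting class would then be associated to a handling procedure, e.g., routing "HIGH RISK" receptacles to manual inspection.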
Apparatus 106
With reference to
The first input 310 is adapted for receiving the image signal 150 associated with the receptacle 104 from the image generation device 102. It is recalled that the image signal 150 conveys the input image 800 related to the contents of the receptacle 104. The second input 314 is adapted for receiving data elements from the database 110, more specifically, filter data elements 4141-414K or image data elements 4121-412K associated with target objects. That is, in some embodiments, a data element received at the second input 314 may be a filter data element 414k while in other embodiments, a data element received at the second input 314 may be an image data element 412k. It will be appreciated that in embodiments where the database 110 is part of the apparatus 106, the second input 314 may be omitted. The output 312 is adapted for releasing, towards the output module 108, the detection signal 160 conveying the presence of one or more target objects in the receptacle 104.
Generally speaking, the processing unit of the apparatus 106 receives the image signal 150 associated with the receptacle 104 from the first input 310 and processes the image signal 150 in combination with the data elements associated with target objects (received from the database 110 at the second input 314) in an attempt to detect the presence of one or more target objects in the receptacle 104. In response to detection of one or more target objects (hereinafter referred to as “detected target objects”) in the receptacle 104, the processing unit of the apparatus 106 generates and releases at the output 312 the detection signal 160 which conveys the presence of the one or more detected target objects in the receptacle 104.
The functional entities of the processing unit of the apparatus 106 implement a process, an example of which is depicted in
Step 500
As mentioned above, in this embodiment, the correlation operation is performed by a digital correlator. Two examples of implementation of a suitable correlator 302 are shown in
In a first example of implementation, now described with reference to
In a second example of implementation, now described with reference to
In this second example of implementation, the data element accessed at step 503 thus conveys a particular filter 804′ for a particular image 804. Thus, in a modified version of step 504, and with continued reference to
More specifically, the detection signal generator module 306 is adapted for processing the correlation output to detect peaks. A strong intensity peak in the correlation output indicates a match between the input image 800 related to the contents of the receptacle 104 and the particular image 804. The location of the peak also indicates the location of the center of the particular image 804 in the input image 800 related to the contents of the receptacle 104.
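The peak-detection processing may be sketched as follows; the threshold is an assumed tuning parameter, and the synthetic correlation plane stands in for an actual correlation output:

```python
import numpy as np

def find_peak(correlation_output, peak_threshold):
    """Sketch of the peak detection performed by the detection signal
    generator module: locate the strongest intensity in the correlation
    plane; if it exceeds the (assumed) threshold, report a match and
    the peak's location, which indicates where the center of the target
    image lies within the input image."""
    magnitude = np.abs(correlation_output)
    idx = np.unravel_index(np.argmax(magnitude), magnitude.shape)
    value = magnitude[idx]
    return (value >= peak_threshold), idx, value

# Synthetic correlation plane with a single strong peak at (5, 9).
plane = np.zeros((16, 16))
plane[5, 9] = 10.0
matched, location, strength = find_peak(plane, peak_threshold=5.0)
```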
The result of this processing is then conveyed to the user by the output module 108.
For more information regarding Fourier transforms, the reader is invited to consider B. V. K. Vijaya Kumar, Marios Savvides, Krithika Venkataramani, and Chunyan Xie, “Spatial frequency domain image processing for biometric recognition”, Biometrics ICIP Conference 2002 or alternatively J. W. Goodman, Introduction to Fourier Optics, 2nd Edition, McGraw-Hill, 1996, which is hereby incorporated by reference herein.
Fourier Transform and Spatial Frequencies
The Fourier transform as applied to images will now be described in general terms. The Fourier transform is a mathematical tool used to convert the information present within an object's image into its frequency representation. In short, an image can be seen as a superposition of various spatial frequencies and the Fourier transform is a mathematical operation used to compute the intensity of each of these frequencies within the image. The spatial frequencies represent the rate of variation of image intensity in space. Consequently, a smooth or uniform pattern mainly contains low frequencies. Sharply contoured patterns, by contrast, exhibit a higher frequency content.
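The claim that smooth patterns are dominated by low frequencies, while sharply contoured patterns have higher frequency content, can be illustrated numerically. This is a minimal sketch; the energy measure used here (fraction of spectral energy outside the zero-frequency bin) is an illustrative choice:

```python
import numpy as np

# A smooth (constant) pattern and a sharply contoured (checkerboard)
# pattern of the same size.
n = 16
smooth = np.ones((n, n))
checker = np.indices((n, n)).sum(axis=0) % 2

def high_freq_energy(img):
    """Fraction of spectral energy outside the DC (zero-frequency) bin."""
    spectrum = np.abs(np.fft.fft2(img)) ** 2
    return 1.0 - spectrum[0, 0] / spectrum.sum()

# The smooth pattern has essentially no energy outside DC, while the
# checkerboard concentrates much of its energy at high frequencies.
e_smooth = high_freq_energy(smooth)
e_checker = high_freq_energy(checker)
```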
The Fourier transform of an image f(x,y) is given by:
F(u,v) = ∫∫ f(x,y) e^(−j2π(ux+vy)) dx dy  (1)
where u, v are the coordinates in the frequency domain. Thus, the Fourier transform is a global operator: changing a single frequency of the Fourier transform affects the whole object in the spatial domain.
A correlation operation can be mathematically described by:

C(ε,ξ) = ∫∫ f(x,y) h*(x−ε, y−ξ) dx dy  (2)

where ε and ξ represent the pixel coordinates in the correlation plane, C(ε,ξ) stands for the correlation, x and y identify the pixel coordinates of the input image, f(x,y) is the original input image, and h* is the complex conjugate of the correlation filter h.
In the frequency domain, the same expression takes a slightly different form:
C(ε,ξ) = ℑ^(−1)(F(u,v) H*(u,v))  (3)
where ℑ is the Fourier transform operator, u and v are the pixel coordinates in the Fourier plane, F(u,v) is the Fourier transform of the image f(x,y), and H*(u,v) is the complex conjugate of the Fourier transform of the template (or filter). Thus, the correlation between an input image and a template (or filter) is equivalent, in mathematical terms, to the multiplication of their respective Fourier transforms, provided that the complex conjugate of the template (or filter) is used. Consequently, the correlation can be defined in the spatial domain as the search for a given pattern (template/filter), or in the frequency domain, as a filtering operation with a specially designed matched filter.
In order to speed up the computation of the correlation, the Fourier transform of a particular image can be computed beforehand and submitted to the correlator as a filter (or template). This type of filter is called a matched filter.
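As an illustration of equation (3) and of the matched-filter idea, the frequency-domain correlation may be sketched as follows; the image contents and sizes are arbitrary, and the template is simply a circularly shifted copy of the input image so that the expected peak location is known:

```python
import numpy as np

# Correlate an input image with a template by multiplying the image's
# Fourier transform with the pre-computed matched filter (the
# template's conjugate FFT), then inverse-transforming.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
template = np.roll(image, (-4, -7), axis=(0, 1))  # shifted copy

H = np.conj(np.fft.fft2(template))   # matched filter, computed beforehand
F = np.fft.fft2(image)
C = np.real(np.fft.ifft2(F * H))     # correlation plane, equation (3)

# The correlation peak location recovers the circular shift between
# the input image and the template.
peak = np.unravel_index(np.argmax(C), C.shape)
```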
Generation of Filters (or Templates)
Matched filters, as their name implies, are specifically adapted to respond to one image in particular: they are optimized to respond to an object with respect to its energy content. Generally, the contour of an object corresponds to its high frequency content. This can be easily understood, as contours represent areas where the intensity varies rapidly (hence a high frequency content).
In order to emphasize the contour of an object, the matched filter can be divided by its modulus (i.e., normalized) over the whole Fourier transform image. The resulting filter is called a Phase-Only Filter (POF) and is defined by:
HPOF(u,v)=H*(u,v)/|H(u,v)|  (4)
The reader is invited to refer to the following document, which is hereby incorporated herein by reference, for additional information regarding phase-only filters (POF): “Phase-Only Matched Filtering”, Joseph L. Horner and Peter D. Gianino, Appl. Opt. Vol. 23, No. 6, 15 Mar. 1984, pp. 812-816.
Because these filters are defined in the frequency domain, normalizing over the whole spectrum of frequencies implies that each of the frequency components is considered with the same weight. In the spatial domain (i.e., the usual real-world domain), this means that the emphasis is given to the contours (or edges) of the object. As such, the POF filter provides a higher degree of discrimination, sharper correlation peaks, and higher energy efficiency.
The discrimination provided by the POF filter, however, has some disadvantages. In particular, the input images are expected to be properly sized and registered; otherwise, the features might not be matched properly. To understand this requirement, imagine a filter defined from a given instance of a ‘2’. If that filter is applied to a second instance of a ‘2’ whose contour is slightly different, the correlation peak will be significantly reduced as a result of the sensitivity of the filter to the original shape. A different type of filter, termed a composite filter, was introduced to overcome these limitations. The reader is invited to refer to the following document, which is hereby incorporated herein by reference, for additional information regarding composite filters: H. J. Caulfield and W. T. Maloney, “Improved Discrimination in Optical Character Recognition”, Appl. Opt. 8, 2354, 1969.
In accordance with specific implementations, filters can be designed by:
The latter procedure forms the basis for the generation of composite filters. Composite filters are thus composed of a weighted combination of individual POF filters, each built from a different instance of the same symbol. Mathematically, this can be expressed by:
hcomp(x,y)=αaha(x,y)+αbhb(x,y)+ . . . +αxhx(x,y)  (5)
where αa, αb, . . . , αx are weighting coefficients and ha, hb, . . . , hx are the individual filters.
A filter generated in this fashion is likely to be more robust to minor signature variations as the irrelevant high frequency features will be averaged out. In short, the net effect is an equalization of the response of the filter to the different instances of a given symbol.
Composite filters can also be used to reduce the response of the filter to other classes of symbols. In equation (5) above, if the coefficient αb, for example, is set to a negative value, then the filter response to a symbol of class b will be significantly reduced. In other words, the correlation peak will be high if an instance of class a is present in the input image, and low if an instance of class b is present. A typical implementation of composite filters is described in: Optical Character Recognition (OCR) in Uncontrolled Environments Using Optical Correlators, Andre Morin, Alain Bergeron, Donald Prevost and Ernst A. Radloff, Proc. SPIE Int. Soc. Opt. Eng. 3715, 346 (1999), which is hereby incorporated herein by reference.
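As an illustrative sketch (not from the disclosure), equation (5) can be realized as a weighted sum of per-instance phase-only filters; the instance images, weights, and helper names below are assumptions chosen for the example:

```python
import numpy as np

def phase_only_filter(template, shape, eps=1e-12):
    """POF helper: conjugate spectrum normalized by its modulus."""
    H = np.fft.fft2(template, s=shape)
    return np.conj(H) / (np.abs(H) + eps)

def composite_filter(instances, weights, shape):
    """Equation (5): h_comp = a_a*h_a + a_b*h_b + ... as a weighted
    sum of phase-only filters built from individual instances."""
    return sum(w * phase_only_filter(inst, shape)
               for w, inst in zip(weights, instances))

shape = (32, 32)
# Two slightly different instances of the same symbol (a solid block,
# and the same block missing one corner pixel), equal positive weights.
a1 = np.ones((6, 6))
a2 = np.ones((6, 6)); a2[0, 0] = 0.0
h_comp = composite_filter([a1, a2], [0.5, 0.5], shape)

image = np.zeros(shape); image[8:14, 5:11] = a1   # instance a1 at (8, 5)
C = np.abs(np.fft.ifft2(np.fft.fft2(image) * h_comp))
```

Setting one of the weights negative (e.g. −0.5 for a filter built from a rejection-class instance) suppresses the composite filter's response to that class, as described for the coefficient αb above.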
Screening of People
It will be appreciated that the concepts described above can also be readily applied to the screening of people. For example, in an alternative embodiment, a system for screening people is provided. The system includes components similar to those described in connection with the system depicted in
Examples of Physical Implementation
It will be appreciated that, in some embodiments, certain functionality of various components described herein (including the apparatus 106) can be implemented on a general purpose digital computer 1300, an example of which is shown in
In other embodiments, certain functionality of various components described herein (including the apparatus 106) can be implemented using pre-programmed hardware or firmware elements (e.g., application specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.) or other related elements.
It will also be appreciated that the system 100 depicted in
The server system 1610 includes a program element 1616 for execution by a CPU. Program element 1616 includes functionality to implement methods described above and includes the necessary networking functionality to allow the server system 1610 to communicate with the client systems 1602, 1604, 1606 and 1608 over network 1612. In a specific implementation, the client systems 1602, 1604, 1606 and 1608 include display units responsive to signals received from the server system 1610 for displaying information to viewers of these display units.
Although the present invention has been described in considerable detail with reference to certain preferred embodiments thereof, variations and refinements are possible without departing from the spirit of the invention. Therefore, the scope of the invention should be limited only by the appended claims and their equivalents.
This application claims the benefit under 35 USC 120 and is a continuation-in-part of: U.S. patent application Ser. No. 11/407,217 filed on Apr. 20, 2006; U.S. patent application Ser. No. 11/431,719 filed on May 11, 2006; U.S. patent application Ser. No. 11/431,627 filed on May 11, 2006; and International Application PCT/CA2005/000716 designating the U.S. and filed on May 11, 2005. This application also claims the benefit under 35 USC 119(e) of U.S. Provisional Patent Application No. 60/865,340 filed on Nov. 10, 2006. These related applications are hereby incorporated by reference herein.
| Number | Date | Country |
|---|---|---|
| 60865340 | Nov 2006 | US |
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11407217 | Apr 2006 | US |
| Child | 11785116 | Apr 2007 | US |
| Parent | 11431719 | May 2006 | US |
| Child | 11785116 | Apr 2007 | US |
| Parent | 11431627 | May 2006 | US |
| Child | 11785116 | Apr 2007 | US |
| Parent | PCT/CA05/00716 | May 2005 | US |
| Child | 11785116 | Apr 2007 | US |