The embodiments described herein relate generally to computed tomography, and more particularly to grouping and classifying objects that are detected in a computed tomography system.
In at least some known computed tomography (“CT”) imaging systems used for baggage scanning in airports, for example, a human operator (“user”) separately identifies each object that passes through a CT scanner. That is, in these known CT systems, multiple identical or similar objects are each individually reviewed by a user and classified as either contraband or non-contraband. For example, if one hundred similar bottles pass through the scanner, either sequentially or in one large container, one bottle may contain an explosive substance whereas the other bottles do not. The user of the scanner must nonetheless review each individual bottle and determine whether it represents contraband or non-contraband. Moreover, the presence of a large number of nuisance alarms reduces the probability that a screener or user will correctly identify the true contraband item.
In one aspect, a method for classifying objects in volumetric computed tomography (CT) data is provided. The method is implemented by a computing device having a processor and a memory coupled to the processor. The method includes receiving, by the computing device, one or more volumetric CT data sets and identifying, by the computing device, a first object in the one or more volumetric CT data sets. The method additionally includes identifying, by the computing device, a second object in the one or more volumetric CT data sets, determining, by the computing device, a first similarity amount between the first object and the second object, identifying, by the computing device, a first group comprising at least the first object and the second object, based at least in part on the first similarity amount, and designating, by the computing device, all of the objects in the first group as non-contraband.
In another aspect, a computing device comprising a processor and a memory coupled to the processor is provided. The memory includes computer-executable instructions that, when executed by the processor, cause the computing device to receive one or more volumetric CT data sets. The instructions additionally cause the computing device to identify a first object in the one or more volumetric CT data sets, identify a second object in the one or more volumetric CT data sets, determine a first similarity amount between the first object and the second object, identify a first group comprising at least the first object and the second object, based at least in part on the first similarity amount, and designate all of the objects in the first group as non-contraband.
In another aspect, a computer-readable storage device having computer-executable instructions embodied thereon is provided. When executed by a computing device having a processor and a memory coupled to the processor, the computer-executable instructions cause the computing device to perform the steps of receiving one or more volumetric CT data sets and identifying a first object in the one or more volumetric CT data sets. The computer-executable instructions additionally cause the computing device to perform the steps of identifying a second object in the one or more volumetric CT data sets, determining a first similarity amount between the first object and the second object, identifying a first group comprising at least the first object and the second object, based at least in part on the first similarity amount, and designating all of the objects in the first group as non-contraband.
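For purposes of illustration only, the following Python sketch outlines the classification flow summarized in the aspects above: objects are identified, a similarity amount is computed between objects, objects are grouped based at least in part on that similarity, and an entire group may be designated non-contraband. The data structures, characteristic set, similarity metric, and threshold are hypothetical assumptions, not a definitive implementation of the embodiments.

```python
# Illustrative sketch only; the characteristics, similarity metric, and
# threshold below are assumptions rather than requirements of the embodiments.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DetectedObject:
    mass: float               # assumed units: grams
    volume: float             # assumed units: cubic centimeters
    density: float            # assumed units: g/cc
    contraband: bool = True   # in some embodiments, objects start as contraband

@dataclass
class Group:
    members: List[DetectedObject] = field(default_factory=list)

def similarity(a: DetectedObject, b: DetectedObject) -> float:
    # Similarity expressed as the inverse of a simple characteristic distance.
    distance = (abs(a.mass - b.mass)
                + abs(a.volume - b.volume)
                + abs(a.density - b.density))
    return 1.0 / (1.0 + distance)

def group_objects(objects: List[DetectedObject], threshold: float = 0.5) -> List[Group]:
    """Place each object into the first group it is sufficiently similar to."""
    groups: List[Group] = []
    for obj in objects:
        for group in groups:
            if similarity(obj, group.members[0]) >= threshold:
                group.members.append(obj)
                break
        else:
            groups.append(Group(members=[obj]))
    return groups

def designate_group_non_contraband(group: Group) -> None:
    """Designate all of the objects in a group as non-contraband."""
    for obj in group.members:
        obj.contraband = False
```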
Computing device 104 also includes at least one media output component 206 for presenting information to user 208. Media output component 206 is any component capable of conveying information to user 208. In some embodiments, media output component 206 includes an output adapter such as a video adapter and/or an audio adapter. An output adapter is operatively coupled to processor 202 and operatively couplable to an output device such as a display device (e.g., a liquid crystal display (LCD), organic light emitting diode (OLED) display, cathode ray tube (CRT), or “electronic ink” display) or an audio output device (e.g., a speaker or headphones). In some embodiments, at least one such display device and/or audio device is included in media output component 206.
In some embodiments, computing device 104 includes an input device 210 for receiving input from user 208. Input device 210 may include, for example, a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a gyroscope, an accelerometer, a position detector, or an audio input device. A single component such as a touch screen may function as both an output device of media output component 206 and input device 210.
Computing device 104 may also include a communication interface 212, which is communicatively couplable to a remote device such as scanner 102. Communication interface 212 may include, for example, a wired or wireless network adapter or a wireless data transceiver for use with a mobile phone network (e.g., Global System for Mobile communications (GSM), 3G, 4G or Bluetooth) or other mobile data network (e.g., Worldwide Interoperability for Microwave Access (WIMAX)).
Stored in memory area 204 are, for example, processor-executable instructions for providing a user interface to user 208 via media output component 206 and, optionally, receiving and processing input from input device 210. Memory area 204 may include, but is not limited to, any computer-operated hardware suitable for storing and/or retrieving processor-executable instructions and/or data. Memory area 204 may include random access memory (RAM) such as dynamic RAM (DRAM) or static RAM (SRAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and non-volatile RAM (NVRAM). Further, memory area 204 may include multiple storage units such as hard disks or solid state disks in a redundant array of inexpensive disks (RAID) configuration. Memory area 204 may include a storage area network (SAN) and/or a network attached storage (NAS) system.
In some embodiments, memory area 204 includes memory that is integrated in computing device 104. In some embodiments, memory area 204 includes a database, for example a relational database. For example, computing device 104 may include one or more hard disk drives as memory area 204. Memory area 204 may also include memory that is external to computing device 104 and may be accessed by a plurality of computing devices. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of processor-executable instructions and/or data. Computing device 104 contains, within memory area 204, processor-executable instructions for receiving one or more sets of volumetric CT data from scanner 102, and identifying, grouping, and classifying objects in the received volumetric CT data. As will be understood by those skilled in the art of object identification, an object may be created by an automatic examination of a CT volume to find contiguous voxels that can be formed into an object. Alternatively, an object could be defined by empty space around it, or by detecting a regular array of objects.
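The following Python sketch illustrates one hypothetical way to form objects from contiguous voxels in a CT volume, as noted above; the density threshold and the use of SciPy connected-component labeling are assumptions made only for illustration.

```python
# Minimal sketch of forming objects from contiguous voxels in a CT volume.
# The density threshold and the use of scipy.ndimage.label are assumptions
# for illustration, not features of the embodiments.
import numpy as np
from scipy import ndimage

def extract_objects(ct_volume: np.ndarray, density_threshold: float):
    """Return one boolean voxel mask per contiguous object in the volume."""
    foreground = ct_volume > density_threshold          # voxels dense enough to be material
    labeled, num_objects = ndimage.label(foreground)    # 3-D connected-component labeling
    return [labeled == i for i in range(1, num_objects + 1)]
```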
In some embodiments, if all or a certain subset of characteristics for two or more objects are less than corresponding thresholds stored in memory area 204, computing device 104 determines that the objects are in the same group. In other embodiments, computing device 104 generates a total “distance” measurement by applying a weighting factor to each of the characteristics of objects scanned in scanner 102. In this context, “distance” is the inverse of “similarity”; that is, less “distance” means more “similarity”. If the total “distance” is less than a threshold value, computing device 104 determines that the objects are in the same group. As a group is built, computing device 104 determines an average value for each of the characteristics, such that a composite representation of the group is generated in memory area 204. Characteristics of subsequent objects must be within established thresholds (i.e., plus or minus a given amount) of the average values to be included in a particular group. Such a method prevents “creep” of the average values of the characteristics associated with a given group, which could otherwise occur when a first object is compared with a second object on an outside edge of characteristic values defining objects in the group. Characteristics that are evaluated in the methods described above include at least one of a mass, a volume, a density, a surface texture, a ratio of a surface area to a volume, a first dimension, a ratio of the first dimension to a second dimension, and a contour of a projection.
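As an illustration of the weighted-distance grouping and composite (average) representation described above, the following Python sketch compares each candidate object against a group's running average of characteristics; the specific characteristics, weighting factors, and threshold are assumptions.

```python
# Illustrative sketch of weighted-distance grouping with a per-group composite
# (running average); the weights and threshold are hypothetical values.
import numpy as np

# Assumed characteristic ordering: mass, volume, density, surface-to-volume ratio.
WEIGHTS = np.array([1.0, 0.5, 2.0, 1.0])   # hypothetical weighting factors
DISTANCE_THRESHOLD = 1.0                    # hypothetical grouping threshold

def weighted_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Less "distance" means more "similarity".
    return float(np.sum(WEIGHTS * np.abs(a - b)))

class ObjectGroup:
    def __init__(self, first_object: np.ndarray):
        self.members = [first_object]
        self.composite = first_object.copy()   # average characteristics of the group

    def try_add(self, candidate: np.ndarray) -> bool:
        # Compare against the composite, not an individual member, to prevent
        # "creep" of the group's average characteristic values.
        if weighted_distance(candidate, self.composite) < DISTANCE_THRESHOLD:
            self.members.append(candidate)
            self.composite = np.mean(self.members, axis=0)
            return True
        return False
```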
The object groups 304, 306, 308, and 310 displayed in overview 302 are located in container 106 (
User interface 300 includes a first field 322 that displays a total number of object groups 304, 306, 308, and 310 under review. More specifically, first field 322 displays the total number of groups into which computing device 104 sorted the objects in container 108, based on grouping methods such as those described above. User interface 300 additionally includes a second field 324 that displays a selected group number. User interface 300 also includes a third field 326 that displays a number of objects within the selected group. A decrease button 328 and an increase button 330 included in user interface 300 allow user 208 to increase or decrease the selected group number. When the selected group number is changed, computing device 104 causes overview 302 to be updated to visually indicate the selected group and causes section 320 to be updated to display a representative object from the selected group.
User interface 300 additionally includes a clear group button 332. When computing device 104 determines that user 208 has pressed clear group button 332, computing device 104 designates all objects in the selected group as non-contraband and stores the designation in memory area 204. Accordingly, user 208 is relieved of having to individually view each object in a group and determine whether each object in the group represents contraband or non-contraband. In some embodiments, computing device 104 performs a further step of decreasing the total number of groups in first field 322, such that the cleared group (i.e., formerly the selected group) is no longer selectable. In some embodiments, all objects are initially designated as contraband and one or more of the objects are subsequently designated as non-contraband as described above. User interface 300 additionally includes a radio button 334. When user 208 selects radio button 334, computing device 104 displays a user interface similar to user interface 400 (
When receiving volumetric CT data pertaining to one or more non-contraband objects, computing device 104, in some embodiments, excludes the non-contraband objects from user interfaces 300 and 400, such that user 208 is not presented with them. In other embodiments, computing device 104 may display a notification through media output component 206 that the objects have been identified as non-contraband. By maintaining a library of non-contraband objects in memory area 204, comparing new objects to the library of non-contraband objects, and designating one or more new objects as non-contraband, user 208 is relieved of having to determine whether each object entering scanner 102 represents contraband or non-contraband.
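By way of illustration, the following Python sketch screens newly scanned objects against a stored library of known non-contraband items; the feature layout, distance function, and matching tolerance are assumptions rather than elements of the embodiments.

```python
# Sketch of screening new objects against a library of known non-contraband
# items; the distance function and tolerance are illustrative assumptions.
import numpy as np

def characteristic_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Simple unweighted distance over an object's characteristic vector.
    return float(np.sum(np.abs(a - b)))

def screen(new_objects, library, max_distance=1.0):
    """Split new objects into library matches (auto-cleared) and objects needing review."""
    cleared, needs_review = [], []
    for features in new_objects:
        if any(characteristic_distance(features, entry) <= max_distance for entry in library):
            cleared.append(features)       # excluded from review; the user may be notified instead
        else:
            needs_review.append(features)
    return cleared, needs_review
```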
To aid in comparing objects to each other, user interface 400 includes a set reference button 410 and a compare button 412. When computing device 104 determines that user 208 has pressed set reference button 410, computing device 104 stores a designation in memory area 204 that the selected object in section 320 is a reference object. By using decrease button 404 and/or increase button 406, user 208 may then select another object from the selected group (e.g., first group 304). User 208 may then press compare button 412. When computing device 104 determines that compare button 412 has been pressed, computing device 104 displays, through media output component 206, a comparison of the reference object and the selected object. In some embodiments, computing device 104 displays a comparison of the reference object and the selected object by alternately displaying the reference object and the selected object, such that differences and similarities between the reference object and the selected object may be readily perceived by user 208. In other embodiments, computing device 104 displays the reference object and the selected object adjacent to each other or with one overlaid on top of the other. In other embodiments, computing device 104 additionally or alternatively displays, through media output component 206, a listing of numerical values for the characteristics of the reference object and the selected object, such that user 208 may numerically compare the characteristics of the reference object and the selected object.
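The numerical comparison of characteristics mentioned above might be rendered as a simple listing of values; the following Python sketch is one hypothetical formatting of such a listing, with the characteristic names chosen only for illustration.

```python
# Illustrative sketch of a numerical listing comparing a reference object and
# a selected object; characteristic names and formatting are assumptions.
def compare_characteristics(reference: dict, selected: dict) -> str:
    lines = [f"{'characteristic':<20}{'reference':>12}{'selected':>12}{'difference':>12}"]
    for name, ref_val in reference.items():
        sel_val = selected[name]
        lines.append(f"{name:<20}{ref_val:>12.3f}{sel_val:>12.3f}{sel_val - ref_val:>12.3f}")
    return "\n".join(lines)

# Example usage:
# print(compare_characteristics({"mass": 250.0, "volume": 300.0},
#                               {"mass": 248.5, "volume": 301.2}))
```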
It should be understood that processor as used herein means one or more processing units (e.g., in a multi-core configuration). The term processing unit, as used herein, refers to microprocessors, microcontrollers, reduced instruction set circuits (RISC), application specific integrated circuits (ASIC), logic circuits, and any other circuit or device capable of executing instructions to perform functions described herein.
It should be understood that references to memory mean one or more devices operable to enable information such as processor-executable instructions and/or other data to be stored and/or retrieved. Memory may include one or more computer readable media, such as, without limitation, hard disk storage, optical drive/disk storage, removable disk storage, flash memory, non-volatile memory, ROM, EEPROM, random access memory (RAM), and the like.
Additionally, it should be understood that communicatively coupled components may be in communication through being integrated on the same printed circuit board (PCB), in communication through a bus, through shared memory, through a wired or wireless data communication network, and/or other means of data communication. It should further be understood that data communication networks referred to herein may be implemented using Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), or the like, and the underlying connections may comprise wired connections and corresponding protocols, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.3 and/or wireless connections and associated protocols, for example, an IEEE 802.11 protocol, an IEEE 802.15 protocol, and/or an IEEE 802.16 protocol.
A technical effect of systems and methods described herein includes at least one of: (a) receiving, by a computing device, one or more volumetric CT data sets; (b) identifying, by the computing device, a first object in the one or more volumetric CT data sets; (c) identifying, by the computing device, a second object in the one or more volumetric CT data sets; (d) determining, by the computing device, a first similarity amount between the first object and the second object; (e) identifying, by the computing device, a first group comprising at least the first object and the second object, based at least in part on the first similarity amount; and (f) designating, by the computing device, all of the objects in the first group as non-contraband.
Exemplary embodiments of systems and methods for grouping and classifying objects in computed tomography data are described above in detail. The methods and systems are not limited to the specific embodiments described herein; rather, components of the systems and/or steps of the methods may be utilized independently and separately from other components and/or steps described herein. For example, the methods may also be used in combination with other imaging systems and methods, and are not limited to practice with only the systems as described herein.
Although specific features of various embodiments of the invention may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the invention, any feature of a drawing may be referenced and/or claimed in combination with any feature of any other drawing.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.