The disclosure relates to, but is not limited to, generating a classifier configured to detect an object corresponding to a type of interest in an inspection image generated using penetrating radiation. The disclosure also relates to, but is not limited to, determining whether or not an object corresponding to a type of interest is present in an inspection image generated using penetrating radiation. The disclosure also relates to, but is not limited to, producing a device configured to determine whether or not an object corresponding to a type of interest is present in an inspection image generated using penetrating radiation. The disclosure also relates to, but is not limited to, corresponding devices and computer programs or computer program products.
Inspection images of containers containing cargo may be generated using penetrating radiation. In some examples, a user may want to detect objects corresponding to a type of interest, such as a threat (such as a weapon, an explosive material or a radioactive material) or a contraband product (such as drugs or cigarettes) on the inspection images. Detection of such objects may be difficult. In some cases, the object may not be detected at all. In cases where the detection is not clear from the inspection images, the user may inspect the container manually, which may be time consuming for the user.
Aspects and embodiments of the disclosure are set out in the appended claims. These and other aspects and embodiments of the disclosure are also described herein.
Any feature in one aspect of the disclosure may be applied to other aspects of the disclosure, in any appropriate combination. In particular, method aspects may be applied to device and computer program aspects, and vice versa.
Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
Embodiments of the present disclosure will now be described, by way of example, with reference to the accompanying drawings, in which:
In the figures, similar elements bear identical numerical references.
The disclosure discloses an example method for generating a classifier configured to detect an object corresponding to a type of interest in an inspection image generated using penetrating radiation (e.g. X-rays, but other penetrating radiation is envisaged). In typical examples, the type of interest may be a threat, such as a weapon (e.g. a gun or a rifle), an explosive material or a radioactive material, and/or the type of interest may be a contraband product, such as drugs or cigarettes, as non-limiting examples.
The disclosure also discloses an example method for determining whether or not an object corresponding to the type of interest is present in an inspection image generated using penetrating radiation.
The disclosure also discloses an example method for producing a device configured to determine whether or not an object corresponding to the type of interest is present in an inspection image generated using penetrating radiation.
The disclosure also discloses corresponding devices and computer programs or computer program products.
The method 100 includes:
generating, at S1, training data including the plurality of training images 101 including the objects 110 corresponding to the type of interest; and
training, at S2, the classifier 1 by applying a machine learning algorithm, using the generated training data.
As disclosed in greater detail later, the training of the classifier 1 may be performed either using the generated training data or a combination of observed (i.e. real) training data and the generated training data.
As illustrated in
As illustrated in
Referring back to
As described in more detail later in reference to
As described above, the classifier 1 is derived from the training data using a machine learning algorithm, and is arranged to produce an output indicative of detection or not of an object 11 corresponding to the type of interest in the inspection image 1000.
The classifier 1 is arranged to facilitate detection of the presence or absence of the object 11 after it is stored in the memory 151 of the device 15 (as shown in
Once configured, the device 15 may provide an accurate detection of the object 11 by applying the classifier 1 to the inspection image 1000. The detection process is illustrated (as process 300) in
Computer System and Detection Device
The computer system 10 of
The system 10 may be configured to communicate with one or more devices 15, via the interface 13 and a link 30 (e.g. Wi-Fi connectivity, but other types of connectivity may be envisaged).
The memory 11 is configured to store, at least partly, data, for example for use by the processor 12. In some examples the data stored on the memory 11 may include data such as the training data (and the data used to generate the training data) and/or the GAN 2 and/or the machine learning algorithm.
In some examples, the processor 12 of the system 10 may be configured to perform, at least partly, at least some of the steps of the method 100 of
The detection device 15 of
In a non-limiting example, the device 15 may also include an apparatus 3 acting as an inspection system, as described in greater detail later. The apparatus 3 may be integrated into the device 15 or connected to other parts of the device 15 by wired or wireless connection.
In some examples, as illustrated in
In other words the apparatus 3 may be used to acquire the plurality of observed images 102 used to generate the training data and/or to acquire the inspection image 1000.
In some examples, the processor 152 of the device 15 may be configured to perform, at least partly, at least some of the steps of the method 100 of
Obtaining the Training Data
Referring back to
The classifier 1 is trained using the training data, each item of which corresponds to an instance for which the classification (e.g. “object present” or “object absent”) is known. As described in greater detail later, the training data may include:
the training images 101, in which an object 110 corresponding to the type of interest is present (i.e. “object present”); and
observed images 102 where no object is present (i.e. “object absent”).
Referring back to
In some examples, during the training at S12, the generator 21 may generate synthesized images 103 (i.e. fake images) including synthesized objects 123 (i.e. fake objects), from at least random values (e.g. random noise 124), based on the initial training data including observed images 102 (i.e. real images) including an observed object 120 (i.e. a real object).
The observed images 102 may correspond to past images acquired in situ by the same device 15 that is being configured, e.g. using the apparatus 3 of the device 15 as depicted in
In some examples, during the training at S12, the discriminator 22 may aim to classify images, including the synthesized images 103 (i.e. fake images) and the observed images 102 (i.e. real images), according to their nature, i.e. real or fake.
The generator 21 is configured to try to fool the discriminator 22 into classifying the synthesized images 103 as real. The discriminator 22 is configured to try to classify the synthesized images 103 as fake and the observed images 102 as real.
The generator 21 and the discriminator 22 compete with each other, based on a loss function. The quantification of the success of the generator 21 and of the discriminator 22 during the training at S12 is defined by the loss function. The discriminator 22 is configured to try to minimize the loss function. The generator 21 is configured to try to maximize the loss function.
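Purely as an illustration of this adversarial training step, the following is a minimal sketch written with PyTorch; the framework, the placeholder network shapes, the optimiser settings and the use of a conventional binary cross-entropy loss (rather than the specific loss functions described below) are all assumptions made for the example and are not features stated in the disclosure.

```python
import torch
import torch.nn as nn

# Placeholder generator 21 and discriminator 22 operating on flattened 64x64
# images; the real architectures are described elsewhere in the disclosure.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1)
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(observed_images: torch.Tensor) -> None:
    """One adversarial update from a batch of observed images 102 (real images)."""
    batch = observed_images.size(0)
    noise = torch.randn(batch, 100)            # random noise 124
    synthesized = generator(noise)             # synthesized images 103 (fake images)

    # Discriminator 22: classify observed images as real, synthesized images as fake.
    d_loss = bce(discriminator(observed_images), torch.ones(batch, 1)) + \
             bce(discriminator(synthesized.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator 21: try to fool the discriminator into classifying fakes as real.
    g_loss = bce(discriminator(synthesized), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In this conventional formulation the discriminator is trained to separate real images from fake images while the generator is trained to make that separation fail, which is the competition described above.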
In some examples, the loss function may be a function of the Euler-Lagrange type.
In some examples, the loss function may include a least mean square function and/or a combination of weighted Gaussian kernels.
In some examples, the combination of weighted Gaussian kernels includes a combination C(P,Q,D) such that:
where: n is the number of kernels,
E[X] is the expectation of a function X,
D(x) is the output of the discriminator 22 for an image x,
D(y) is the output of the discriminator 22 for an image y,
P is the probability distribution of the observed images 102,
Q is the probability distribution of the synthesized images 103,
(w_k) is a set of positive real numbers such that Σ_k w_k = 1, and
(σ_k) is a set of strictly positive numbers acting as standard deviation parameters for the Gaussian kernels.
In some examples P and Q may be absolutely continuous with respect to a measure. In a non-limiting example, Q may be absolutely continuous with respect to a Lebesgue measure, and P may or may not be absolutely continuous with respect to the Lebesgue measure.
In some examples, the combination C(P,Q,D) includes a barycenter of any number of Gaussian kernels, for example, as non-limiting examples, from one to thirteen Gaussian kernels, such as three, five, seven or ten Gaussian kernels.
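The expression of the combination itself is not reproduced above. Purely as an illustrative sketch consistent with the quantities listed, assuming a maximum mean discrepancy between P and Q evaluated on the discriminator outputs with a mixture of n weighted Gaussian kernels, the combination could take the form:

```latex
C(P,Q,D) \;=\; \mathbb{E}_{x,x'\sim P}\!\left[\kappa\big(D(x),D(x')\big)\right]
\;-\; 2\,\mathbb{E}_{x\sim P,\,y\sim Q}\!\left[\kappa\big(D(x),D(y)\big)\right]
\;+\; \mathbb{E}_{y,y'\sim Q}\!\left[\kappa\big(D(y),D(y')\big)\right],
\qquad
\kappa(a,b) \;=\; \sum_{k=1}^{n} w_k \exp\!\left(-\frac{(a-b)^{2}}{2\sigma_k^{2}}\right).
```

This form is an assumption made for illustration only; the exact combination used in the disclosure may differ, but all of the listed symbols (n, E[·], D(·), P, Q, w_k, σ_k) appear in a combination of this kind.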
Referring back to
As illustrated in
Other architectures are also envisaged for the generator 21. For example, deeper architectures may be envisaged and/or an architecture of the same shape as the architecture shown in
Referring back to
The discriminator 22 may include a first module 221, a second module 222, a third module 223, a fourth module 224, and a fifth module 225. Each of the first module 221, the second module 222 and the third module 223 may have, e.g., a kernel size of 4×4 and a stride of 2×2; the fourth module 224 may have, e.g., a kernel size of 4×4 and a stride of 1×1. The fifth module 225 may be a fully connected layer with linear activation.
Other architectures are also envisaged for the discriminator 22. The discriminator 22 is closely tied to the generator 21, and may be modified similarly to the generator 21. Hence, the discriminator 22 may be deeper than the discriminator 22 shown in
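As a concrete illustration of the module sizes given above, a sketch of such a discriminator in PyTorch might look as follows; the channel counts, the padding, the LeakyReLU activations and the global average pooling before the fully connected module are assumptions made for the example, not features stated in the disclosure.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the discriminator 22: four convolutional modules with 4x4 kernels
    (strides 2, 2, 2 and 1) followed by a fully connected, linear-activation module."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),   # first module 221
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),           # second module 222
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),          # third module 223
            nn.LeakyReLU(0.2),
            nn.Conv2d(256, 256, kernel_size=4, stride=1, padding=1),          # fourth module 224
            nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Linear(256, 1)  # fifth module 225: fully connected, linear activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        h = h.mean(dim=(2, 3))   # global average pooling so the sketch works for any input size
        return self.classifier(h)
```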
Referring back to
As illustrated in
Referring back to
In such an example, in the training at S12 of the generator 21 and of the discriminator 22, one or more observed images 102 may include a representation 410 of a container 4. In such an example, generating the synthesized images 103 (and the training images 101 as described below) may include using at least a part of the representation 410 of the container 4 in the one or more observed images 102. In some examples the whole of the representation 410 may be used in the synthesized images 103.
In some examples, generating the synthesized images 103 (and the training images 101) may further include using one or more synthesized objects 123. As illustrated in
In some examples, generating the synthesized images 103 may include using the Beer-Lambert law for combining the at least a part of the representation 410 of the container 4 in the one or more observed images 102 and the one or more synthesized objects 123.
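A minimal sketch of such a Beer-Lambert combination is given below, assuming (for this example only) that both inputs are normalised transmission images with values in (0, 1], where 1 means no attenuation.

```python
import numpy as np

def combine_beer_lambert(container_image: np.ndarray, object_image: np.ndarray) -> np.ndarray:
    """Combine at least a part of a container representation 410 with a synthesized
    object 123 under the Beer-Lambert law.  With normalised transmission images,
    the attenuations of stacked materials multiply."""
    eps = 1e-6
    container = np.clip(container_image, eps, 1.0)
    obj = np.clip(object_image, eps, 1.0)
    # The mu*t attenuation terms add along the beam path, so transmissions multiply.
    return np.exp(np.log(container) + np.log(obj))
```

Because the attenuation terms add along the beam path, multiplying the two transmission images places the synthesized object "inside" the container representation in a physically plausible way.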
The synthesized images 103 classified as real by the discriminator 22 may be used as the training images 101 as part of the training data for the classifier 1, e.g. as part of the training data where the object 110 is present.
Once the generator 21 has been trained, the generator 21 may generate synthesized images 103 that are realistic enough to be used as the training images 101 as part of the training data for the classifier 1, e.g. as part of the training data where the object 110 is present.
The GAN 2 enables generating a large number of training images 101. In some examples the GAN 2 may generate several thousand training images 101, for example several hundred thousand training images 101, for example several million training images 101, from a relatively small amount of initial training data including the observed images 102 (such as a few tens of observed images 102 or fewer). The GAN 2 enables generating training images 101 including realistic objects 110. The GAN 2 also enables generating training images 101 including objects 110 which are not in the observed images 102, and enables adding variability within a range of objects 110 and/or obtaining a greater range of objects 110 in the type of interest.
The generated training images 101, given their large number and/or their realistic objects 110 and/or the variability of their objects 110, enable improving the classifier 1.
Generating the Classifier
Referring back to
The learning process is typically computationally intensive and may involve large volumes of training data. In some examples, the processor 12 of the system 10 may include greater computational power and memory resources than the processor 152 of the device 15. The classifier generation may therefore be performed, at least partly, remotely from the device 15, at the computer system 10. In some examples, at least steps S1 and/or S2 of the method 100 are performed by the processor 12 of the computer system 10. However, if sufficient processing power is available locally, the classifier learning could be performed, at least partly, by the processor 152 of the device 15.
The machine learning step involves inferring behaviours and patterns based on the training data and encoding the detected patterns in the form of the classifier 1.
Referring back to
The classification labels for the training data (specifying known presence or absence states) may be known in advance. In some examples, the training data includes the generated training data. The training data may include the training images 101 including one or more objects 110 (as illustrated in
The training data also includes training images that do not include any objects corresponding to the type of interest. Such training images may correspond to past data (e.g. past images) acquired in situ by the same device 15 that is being configured, e.g. using the apparatus 3 of the device 15 as depicted in
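By way of illustration, a minimal sketch of such a supervised training step is given below, again written with PyTorch; the image size, the small convolutional architecture standing in for the classifier 1, and the hyper-parameters are assumptions made for the example, and the disclosure does not prescribe a particular machine learning algorithm.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical tensors standing in for the training data: images_present are the
# generated training images 101 ("object present"), images_absent are images with
# no object of the type of interest ("object absent").  Sizes are assumptions.
images_present = torch.rand(1000, 1, 64, 64)
images_absent = torch.rand(1000, 1, 64, 64)
images = torch.cat([images_present, images_absent])
labels = torch.cat([torch.ones(1000), torch.zeros(1000)])

classifier = nn.Sequential(                      # placeholder for the classifier 1
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1),
)
optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for x, y in DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True):
        loss = loss_fn(classifier(x).squeeze(1), y)   # learn "object present" vs "object absent"
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```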
Device Manufacture
The example method for producing the device 15 includes:
obtaining, at S21, a classifier 1 generated by the method 100 according to any aspects of the disclosure; and
storing, at S22, the obtained classifier 1 in the memory 151 of the device 15.
The classifier 1 may be stored, at S22, in the detection device 15. The classifier 1 may be created and stored using any suitable representation, for example as a data description including data elements specifying classification conditions and their classification outputs (e.g. presence or absence of the object 11). Such a data description could be encoded e.g. using XML or using a bespoke binary representation. The data description is then interpreted by the processor 152 of the device 15 when applying the classifier 1.
Alternatively, the machine learning algorithm may generate the classifier 1 directly as executable code (e.g. machine code, virtual machine byte code or interpretable script). This may be in the form of a code routine that the device 15 can invoke to apply the classifier 1.
Regardless of the representation of the classifier 1, the classifier 1 effectively defines a decision algorithm (including a set of rules) for classifying a presence status of the object 11 based on input data (i.e. the inspection image 1000).
After the classifier 1 is generated, the classifier 1 is stored in the memory 151 of the device 15. The device 15 may be connected temporarily to the system 10 to transfer the generated classifier (e.g. as a data file or executable code) or transfer may occur using a storage medium (e.g. memory card). In one approach, the classifier is transferred to the device 15 from the system 10 over the network connection 30 (this could include transmission over the Internet from a central location of the system 10 to a local network where the device 15 is located). The classifier 1 is then installed at the device 15. The classifier could be installed as part of a firmware update of device software, or independently.
Installation of the classifier 1 may be performed once (e.g. at time of manufacture or installation) or repeatedly (e.g. as a regular update). The latter approach can allow the classification performance of the classifier to be improved over time, as new training data becomes available.
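As an illustration of such a transfer and installation, assuming the classifier 1 is represented as a PyTorch module (an assumption for the example; the disclosure also envisages data descriptions such as XML or executable code), the classifier parameters could be serialised on the system 10 and installed on the device 15 as follows. The helper `build_classifier` is hypothetical and shown only to keep the snippet self-contained.

```python
import torch
import torch.nn as nn

def build_classifier() -> nn.Module:
    """Hypothetical helper returning the classifier 1 architecture; a trivial
    placeholder is used here so that the example is self-contained."""
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))

# On the computer system 10: serialise the trained classifier 1 so it can be
# transferred to the device 15 over the link 30 or on a storage medium.
classifier = build_classifier()
torch.save(classifier.state_dict(), "classifier_1.pt")

# On the detection device 15: recreate the architecture and install the
# transferred parameters, e.g. during a firmware or software update.
device_classifier = build_classifier()
device_classifier.load_state_dict(torch.load("classifier_1.pt", map_location="cpu"))
device_classifier.eval()
```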
Applying the Classifier to Perform Object Detection
Presence or absence classification is based on the classifier 1.
After the device 15 has been configured with the classifier 1, the device 15 can apply the classifier 1 to locally acquired inspection images 1000 to detect whether or not an object 11 is present in the inspection images 1000.
In general, the classifier 1 is configured to detect an object 11 corresponding to a type of interest in an inspection image 1000 generated using penetrating radiation, the inspection image 1000 including one or more features at least similar to the training data used to generate the classifier 1 by the machine learning algorithm.
The method 300 includes:
obtaining, at S31, an inspection image 1000;
applying, at S32, to the obtained image 1000, the classifier 1 generated by the method according to any aspects of the disclosure; and
determining, at S33, whether or not an object corresponding to the type of interest is present in the inspection image, based on the applying.
The classifier includes a plurality of output states. In some examples the classifier is configured to output one of: a state corresponding to a presence of an object corresponding to the type of interest in the inspection image, and a state corresponding to an absence of an object corresponding to the type of interest in the inspection image.
Optionally the method 300 may further include outputting, e.g. at S34, trigger data to trigger an alarm in response to detecting an object 11 corresponding to the type of interest in the inspection image 1000.
The alarm may include an alarm signal (visual or aural), e.g. for triggering a further detection (e.g. manual inspection) of the container 4 (e.g. for verification).
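A minimal sketch of steps S32 to S34 is given below, assuming (for this example only) that the classifier 1 is a PyTorch module producing a single logit and that a sigmoid threshold is used as the decision rule; `acquire_inspection_image` and `raise_alarm` are hypothetical device 15 routines.

```python
import torch

def detect(classifier: torch.nn.Module, inspection_image: torch.Tensor,
           threshold: float = 0.5) -> bool:
    """Apply the classifier 1 to an inspection image 1000 (steps S32 and S33) and
    return True when an object 11 of the type of interest is detected.  The
    single-channel image shape and the sigmoid/threshold rule are assumptions."""
    with torch.no_grad():
        score = torch.sigmoid(classifier(inspection_image.unsqueeze(0))).item()
    return score >= threshold

# Step S34 (optional): output trigger data for an alarm when an object is detected.
# if detect(classifier, acquire_inspection_image()):
#     raise_alarm()
```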
The disclosure may be advantageous in, but is not limited to, customs and/or security applications.
The disclosure typically applies to cargo inspection systems (e.g. sea or air cargo).
The apparatus 3 of
The container 4 configured to contain the cargo may be, as a non-limiting example, placed on a vehicle. In some examples, the vehicle may include a trailer configured to carry the container 4.
The apparatus 3 of
The radiation source 5 is configured to cause the inspection of the cargo through the material (usually steel) of walls of the container 4, e.g. for detection and/or identification of the cargo. Alternatively or additionally, a part of the inspection radiation may be transmitted through the container 4 (the material of the container 4 being thus transparent to the radiation), while another part of the radiation may, at least partly, be reflected by the container 4 (called “back scatter”).
In some examples, the apparatus 3 may be mobile and may be transported from one location to another (the apparatus 3 may include an automotive vehicle).
In the source 5, electrons are generally accelerated to energies between 100 keV and 15 MeV.
In mobile inspection systems, the energy of the X-ray source 5 may be, e.g., between 100 keV and 9.0 MeV, typically, e.g., 300 keV, 2 MeV, 3.5 MeV, 4 MeV, or 6 MeV, for a steel penetration capacity of, e.g., between 40 mm and 400 mm, typically, e.g., 300 mm (12 in).
In static inspection systems, the energy of the X-ray source 5 may be, e.g., between 1 MeV and 10 MeV, typically, e.g., 9 MeV, for a steel penetration capacity of, e.g., between 300 mm and 450 mm, typically, e.g., 410 mm (16.1 in).
In some examples, the source 5 may emit successive x-ray pulses. The pulses may be emitted at a given frequency, between 50 Hz and 1000 Hz, for example approximately 200 Hz.
According to some examples, detectors may be mounted on a gantry, as shown in
It should be understood that the inspection radiation source may include sources of other penetrating radiation, such as, as non-limiting examples, sources of ionizing radiation, for example gamma rays or neutrons. The inspection radiation source may also include sources which are not adapted to be activated by a power supply, such as radioactive sources, for example sources using Co60 or Cs137. In some examples, the inspection system includes detectors, such as X-ray detectors, and optional gamma and/or neutron detectors, e.g., adapted to detect the presence of gamma- and/or neutron-emitting radioactive materials within the load, e.g., simultaneously with the X-ray inspection. In some examples, detectors may be placed to receive the radiation reflected by the container 4.
In the context of the present disclosure, the container 4 may be any type of container, such as a holder or a box, etc. The container 4 may thus be, as non-limiting examples, a pallet (for example a pallet of European standard, of US standard or of any other standard) and/or a train wagon and/or a tank and/or a boot of the vehicle and/or a “shipping container” (such as a tank or an ISO container or a non-ISO container or a Unit Load Device (ULD) container).
In some examples, one or more memory elements (e.g., the memory of one of the processors) can store data used for the operations described herein. This includes the memory element being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in the disclosure.
A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in the disclosure. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof.
As one possibility, there is provided a computer program, computer program product, or computer readable medium, including computer program instructions to cause a programmable computer to carry out any one or more of the methods described herein. In example implementations, at least some portions of the activities related to the processors may be implemented in software. It is appreciated that software components of the present disclosure may, if desired, be implemented in ROM (read only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques.
Other variations and modifications of the system will be apparent to those skilled in the art in the context of the present disclosure, and various features described above may have advantages with or without other features described above. The above embodiments are to be understood as illustrative examples, and further embodiments are envisaged. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the disclosure, which is defined in the accompanying claims.
This patent application is a National Stage Entry of PCT/GB2020/050081 filed on Jan. 15, 2020, which claims priority to GB Application No. 1900672.5 filed on Jan. 19, 2019, the disclosures of which are hereby incorporated by reference herein in their entirety as part of the present application.