The present disclosure relates to an imaging element, an imaging method, an imaging device, and an image processing system.
Recent developments in artificial intelligence (AI) and related technologies have made it easy to skillfully falsify still images and moving images. Against the background of social problems such as fake news that uses falsified images, image falsification detection has become increasingly important.
As technologies for image falsification detection, a technology of embedding an encryption code or the like in an original image, a technology of superimposing a digital watermark on an original image, and the like are known. In these technologies, if the image sensor and the processing unit that performs each kind of processing are separate, there is a possibility that a differential attack, for example one that takes over the communication channel between the image sensor and the processing unit, may analyze the embedded or superimposed information. There is also a possibility that an already falsified image is input to the processing unit and is then guaranteed as an unfalsified image in the first place.
Therefore, a technology using an image sensor and a processing unit that are integrally configured has been proposed. For example, Patent Literature 1 describes a technology in which an image sensor includes a pixel substrate including a sensor unit and a signal processing substrate on which an image information processing unit is arranged to process an electrical signal output from the sensor unit, the two substrates being stacked and integrally configured, so that identity between acquired image information and captured image information is guaranteed. According to the configuration of Patent Literature 1, the falsification prevention processing is performed inside the image sensor, and the image sensor is therefore unlikely to be subject to a differential attack.
Patent Literature 1: JP 2017-184198 A
However, even in a configuration in which the falsification prevention processing is performed inside the image sensor, there is a possibility that an intentionally crafted input image, such as a saturated image or a low-gain image, is generated to defeat the falsification prevention processing, allowing a differential attack to analyze the information embedded by that processing.
The present disclosure provides an imaging element, an imaging method, an imaging device, and an image processing system that enable falsification prevention processing with higher resistance to attacks.
For solving the problem described above, an imaging element according to one aspect of the present disclosure has an imaging unit that outputs image information according to received light; an embedding information generation unit that obtains a feature amount of a predetermined area of an image based on the image information, determines whether to embed embedding information in the predetermined area based on the feature amount, and generates the embedding information based on the image information of the predetermined area into which the embedding information is determined to be embedded; and an embedding unit that embeds the embedding information into the predetermined area.
For solving the problem described above, an imaging method according to one aspect of the present disclosure comprises, performed by a processor: an imaging step of outputting image information according to received light; an embedding information generation step of obtaining a feature amount of a predetermined area of an image based on the image information, determining whether to embed embedding information in the predetermined area based on the feature amount, and generating the embedding information based on the image information of the predetermined area into which the embedding information is determined to be embedded; and an embedding step of embedding the embedding information into the predetermined area.
For solving the problem described above, an imaging device according to one aspect of the present disclosure has an imaging unit that outputs image information according to received light; an optical unit that guides light from a subject to the imaging unit; an embedding information generation unit that obtains a feature amount of a predetermined area of an image based on the image information, determines whether to embed embedding information in the predetermined area based on the feature amount, and generates the embedding information based on the image information of the predetermined area into which the embedding information is determined to be embedded; an embedding unit that embeds the embedding information into the predetermined area; and a recording unit that records the image information into which the embedding information is embedded by the embedding unit.
For solving the problem described above, an image processing system according to one aspect of the present disclosure has an image processing apparatus; and an information processing apparatus connected to the image processing apparatus via a network, wherein the information processing apparatus includes a falsification detection unit that acquires, from the image processing apparatus through the network, image information of an image for which whether to embed embedding information in a predetermined area has been determined based on a feature amount of the predetermined area, extracts the embedding information from the acquired image information, detects the presence or absence of falsification of the image information based on the extracted embedding information, adds falsification detection information indicating the presence or absence of the detected falsification to the image information, and transmits the falsification detection information to the image processing apparatus, and the image processing apparatus includes an image processing unit that, when the falsification detection information added to the image information transmitted from the information processing apparatus indicates the absence of falsification, performs image processing on the image information and performs image falsification prevention processing on the image information subjected to the image processing, and, when the falsification detection information indicates the presence of falsification, adds information indicating the presence of falsification to the image information.
Embodiments of the present disclosure will be described in detail below with reference to the drawings. Note that in the following embodiments, the same portions are denoted by the same reference numerals and symbols, and redundant description thereof will be omitted.
Hereinafter, the embodiments of the present disclosure will be described in the following order.
First, an outline of the present disclosure will be described. The present disclosure relates to a technology of embedding digital watermark information for preventing falsification, as embedding information, in a captured image (image information) captured by an imaging element.
More specifically, in the imaging element 10, the captured image of a subject 30 captured by the imaging unit is supplied to the digital watermark generation unit 200 and an embedding unit 202 via an input unit 201. The digital watermark generation unit 200 determines, on the basis of a feature amount, the predetermined area of the captured image into which the embedding information is to be embedded. Furthermore, the digital watermark generation unit 200 generates the embedding information, as the digital watermark information, on the basis of the captured image supplied from the input unit 201. The embedding information and the information about the predetermined area into which the embedding information is embedded are passed to the embedding unit 202.
On the basis of the embedding information and the information about the predetermined area passed from the digital watermark generation unit 200, the embedding unit 202 embeds the embedding information into the image information supplied from the input unit 201. The embedding unit 202 outputs the image information in which the embedding information has been embedded, as output information 40.
In each embodiment of the present disclosure configured as described above, the generation and embedding of the embedding information for detecting the presence or absence of falsification of the captured image information are incorporated in the imaging element 10 together with the imaging unit, preventing takeover of the image information. At the same time, the imaging element determines the predetermined area into which the embedding information is embedded on the basis of the feature amount of that area, and can therefore resist a differential attack using a saturated image or the like.
Therefore, in each embodiment of the present disclosure, as described with reference to
The image falsification prevention technology according to the present disclosure is preferably applied to, for example, images or video for important uses that affect people's lives. For example, the image falsification prevention technology is expected to be applied to falsification prevention for captured images of a monitoring camera that can be used as evidence of a crime or the like.
Furthermore, in fields where medical images from an endoscope or a digital X-ray imaging device are handled, the image falsification prevention technology is also expected to be applied to preventing falsification of the association between an image and an electronic medical record or a user ID in remote medical care or the like. Note that the application of the image falsification prevention technology according to the present disclosure is not limited thereto.
Next, a configuration applicable to each embodiment of the present disclosure will be described.
The imaging element 10 has a light receiving surface, converts an analog image signal according to light received by the light receiving surface into digital image data, and outputs the image data as the image information. The optical unit 11 is provided to apply light from the subject to the light receiving surface of the imaging element 10, and includes one or more lenses, a focus mechanism, a diaphragm mechanism, and the like. A nonvolatile recording medium such as a hard disk drive or a flash memory, for example, is applicable to the recording unit 12, which records the image information output from the imaging element 10.
The output unit 13 is an interface for outputting the image information output from the imaging element 10 to the outside of the imaging device 1. The output unit 13 may be connected to an external device through wired communication using a cable or wireless communication. Furthermore, the output unit 13 may be configured to be connected to an external network such as the Internet or a local area network (LAN).
The control unit 14 controls the operations of the entire imaging device 1. For example, the control unit 14 includes a central processing unit (CPU), and memories such as a read only memory (ROM) and a random access memory (RAM), and controls the entire operations of the imaging device 1 by using the RAM as a work memory, for example, according to programs stored in the ROM. Furthermore, the control unit 14 is configured to generate a clock for driving the imaging element 10 or the like.
The element control unit 105 includes, for example, a processor, and controls the operations of the entire imaging element 10 according to an instruction from the control unit 14. Furthermore, the element control unit 105 generates a clock signal used by the drive unit 101 to drive the pixel array unit 100.
The pixel array unit 100 includes a pixel array having pixel circuits arranged in a matrix, each pixel circuit including a light receiving element, such as a photodiode, that generates a charge by photoelectric conversion according to the received light, and a reading circuit that converts the charge generated by the light receiving element into a pixel signal, which is an electrical signal, and reads out the pixel signal. The pixel array unit 100 further includes a conversion unit that converts the analog pixel signal read from each pixel circuit into the digital image data (image information).
The drive unit 101 controls exposure and read operations in the pixel array unit 100 on the basis of the clock signal supplied from the element control unit 105. The image information output from the pixel array unit 100 is passed to the signal processing unit 102. The signal processing unit 102 performs predetermined signal processing on the image information passed from the pixel array unit 100. The signal processing unit 102 performs, for example, level adjustment processing, white balance adjustment processing, and the like, on the image information.
The falsification prevention processing unit 103 performs the falsification prevention processing according to each embodiment of the present disclosure, on the image information subjected to the signal processing by the signal processing unit 102. More specifically, the falsification prevention processing unit 103 generates the embedding information on the basis of the image information, embeds the generated embedding information in the predetermined area of the image based on the image information, and the like.
The output I/F 104 is an interface for outputting the image information subjected to the falsification prevention processing by the falsification prevention processing unit 103, to the outside of the imaging element 10. As the output I/F 104, for example, Mobile Industry Processor Interface (MIPI) can be applied.
To the imaging element 10, a CMOS image sensor (CIS) can be applied that is obtained by integrally forming the units included in the imaging element 10 by using a complementary metal oxide semiconductor (CMOS). The imaging element 10 can be formed on a single substrate. The imaging element 10 is not limited to this configuration and may be a stacked CIS in which a plurality of semiconductor chips are stacked and integrally formed. Note that the imaging element 10 is not limited to this example, and may be another type of optical sensor, such as an infrared sensor that performs imaging using infrared light.
In an example, the imaging element 10 can be formed by a stacked CIS having a two-layer structure in which the semiconductor chips are stacked in two layers.
The pixel unit 2020a includes at least the pixel array unit 100 in the imaging element 10. The memory+logic unit 2020b can include, for example, the drive unit 101, the signal processing unit 102, the falsification prevention processing unit 103, the output I/F 104, and the element control unit 105. The memory+logic unit 2020b can further include a memory that stores the image information.
As illustrated on the right side of
In another example, the imaging element 10 can be formed into a three-layer structure in which the semiconductor chips are stacked in three layers.
As illustrated on the right side of
(3-1. Configuration According to First Embodiment)
Next, a first embodiment of the present disclosure will be described.
In
The block division unit 1030 corresponds to the input unit 201 in
The embedding information generation unit 1031 corresponds to the digital watermark generation unit 200 in
As the feature amount used by the embedding information generation unit 1031 to determine whether to embed the embedding information, a dispersion of the pixel values of the pixels included in the block can be applied. As the dispersion, a variance, a standard deviation, a range, or the like can be used. The feature amount is not limited thereto; an average value may be used, or a relative value with respect to the maximum output value may be used.
The embedding information generation unit 1031 performs threshold determination by comparing the obtained feature amount with a threshold. Among the blocks passed from the block division unit 1030, the embedding information generation unit 1031 determines each block whose dispersion exceeds the threshold as a block into which the embedding information is embedded. The threshold is preferably optimized according to the use case in which falsification is to be prevented.
As described above, the embedding information generation unit 1031 sets a block having a feature amount exceeding the threshold as a block into which the embedding information is embedded, and sets a block having a feature amount equal to or less than the threshold as a block into which no embedding information is embedded. This prevents the embedding information from being embedded in flat portions of the image, and resistance to the differential attack can be enhanced.
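As a concrete illustration of this block selection, the following is a minimal Python sketch assuming 16x16 blocks, a 2-D array of m-bit pixel values, the range (maximum minus minimum) as the dispersion, and an illustrative threshold; none of these values or names come from the disclosure.

import numpy as np

BLOCK = 16        # assumed block size in pixels
THRESHOLD = 8     # illustrative threshold; to be optimized per use case

def blocks_to_embed(image: np.ndarray) -> list:
    """Return (row, col) grid indices of blocks whose feature amount
    (range of bits [m-1:1]) exceeds the threshold.
    `image` is assumed to be a 2-D array of m-bit pixel values."""
    selected = []
    h, w = image.shape
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            block = image[by:by + BLOCK, bx:bx + BLOCK]
            upper = block >> 1                          # bits [m-1:1], LSB dropped
            feature = int(upper.max()) - int(upper.min())
            if feature > THRESHOLD:                     # flat blocks are skipped
                selected.append((by // BLOCK, bx // BLOCK))
    return selected

A flat block, such as part of a saturated area, yields a range near zero and is excluded, which is the property that blunts the differential attack.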
Furthermore, the embedding information generation unit 1031 generates the embedding information, on the basis of each block passed from the block division unit 1030. The embedding information generation unit 1031 generates information for identifying the image information, as the embedding information, on the basis of the image information output from the pixel array unit 100.
For example, the embedding information generation unit 1031 is configured to generate a cyclic redundancy check (CRC) value, a hash value, a total value of the pixel values, or the like, on the basis of the pixel values of the pixels included in each block, and to generate the embedding information using the generated value. In this configuration, for example, when the pixel data of each pixel has a bit length of m bits, the embedding information can be generated using the values of the upper (m−1) bits, that is, from the most significant bit down to bit 1. Excluding the least significant bit from this computation corresponds to the fact that, in the embedding processing described later, the embedding information is embedded at the least significant bit position.
The present disclosure is not limited to this configuration, and in the embedding information generation unit 1031, the embedding information can also include supplementary information, such as an imaging element ID for identifying the imaging element 10 itself, information acquired from outside indicating the imaging time and imaging location at which the image was captured, and a program ID for identifying the program implementing the embedding information generation unit 1031. The embedding information generated by the embedding information generation unit 1031 is passed to the embedding unit 1032.
The embedding unit 1032 embeds the embedding information generated by the embedding information generation unit 1031 into each block into which the embedding information generation unit 1031 has determined that the embedding information is to be embedded. At this time, the embedding unit 1032 embeds the embedding information into a pixel at a predetermined position (referred to as a specific pixel) among the plurality of pixels included in the block. Furthermore, the embedding unit 1032 embeds the embedding information into the least significant bit of the specific pixel. The embedding unit 1032 is not limited to this configuration, and may embed the embedding information at a bit position several bits (e.g., 2 bits) above the least significant bit, within a range that does not affect the image.
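The following sketch combines the two steps above under stated assumptions: the embedding information is a CRC computed over bits [m−1:1] of the block, and one bit of it is written into the least significant bit of each of two assumed specific-pixel positions. The CRC choice, the positions, and all names are illustrative, not the disclosure's method.

import zlib
import numpy as np

SPECIFIC_PIXELS = [(0, 0), (7, 7)]   # assumed positions inside a block

def generate_embedding_bits(block: np.ndarray, n_bits: int) -> list:
    """One possible identifier: a CRC over bits [m-1:1] of the block."""
    upper = (block >> 1).astype(np.uint16)
    crc = zlib.crc32(upper.tobytes())
    return [(crc >> i) & 1 for i in range(n_bits)]

def embed(block: np.ndarray) -> np.ndarray:
    """Write one bit of the embedding information into the LSB of each
    specific pixel; bits [m-1:1] are left untouched."""
    bits = generate_embedding_bits(block, len(SPECIFIC_PIXELS))
    out = block.copy()
    for (y, x), bit in zip(SPECIFIC_PIXELS, bits):
        out[y, x] = (int(out[y, x]) & ~1) | bit
    return out

Because the CRC is computed only over bits [m−1:1], overwriting the least significant bits of the specific pixels does not change the value that a verifier would later recompute from the embedded block.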
(3-2. Details of Processing According to First Embodiment)
Next, the processing according to the first embodiment will be described in more detail with reference to
Next, in Step S101 of
In the drawing, each of the data data_1 to data_N has a data length of m bits. The embedding information generation unit 1031 calculates the feature amount on the basis of the values [m−1:1], from the most significant bit (MSB) down to bit 1, of the respective data data_1 to data_N. In this example, for the sake of description, it is assumed that the range is used as the feature amount, and the embedding information generation unit 1031 calculates, as the feature amount, the difference between the maximum value [m−1:1] and the minimum value [m−1:1] among the values [m−1:1] of the pixels 60 included in the block 51.
Next, in Step S102 of
In Step S103 of
The processing of generating the embedding information performed in Step S103 and the processing of embedding the embedding information into the specific pixel 60em performed in Step S104 in the flowchart of
In Step S120, the falsification prevention processing unit 103 calculates a total value of the data in the target block 51a, by using the embedding information generation unit 1031.
As illustrated in the section (a) of
In Step S121 of
In Step S122 of
In this example, 1 bit of the embedding information is embedded in the least significant bit of each specific pixel 60em. For example, in a case where two specific pixels 60em are set in the block 51a, the embedding information of 2 bits can be embedded in the block 51a. This is the reason why the lower 2 bits of the total value sum are acquired as the embedding information in Step S121.
Note that the embedding information is not limited to the lower 2 bits of the total value sum; a wider range of bits, for example the lower 3 bits or the lower 4 bits of the total value sum, may be acquired as the embedding information.
Furthermore, here, the embedding information acquired from the target block 51a is embedded in that same target block 51a, but this is not a limitation. In other words, the embedding information acquired from a certain block 51a may be embedded in the specific pixel 60em of another, different block 51a. This configuration makes it possible to prevent falsification more robustly.
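As a worked illustration of Steps S120 to S122, the sketch below takes the lower 2 bits of the total value sum, computed over bits [m−1:1], and writes one bit into the least significant bit of each of two assumed specific pixels 60em; the positions and names are illustrative.

import numpy as np

SPECIFIC_PIXELS = [(0, 0), (7, 7)]   # two assumed specific pixels 60em

def embed_sum_bits(block: np.ndarray) -> np.ndarray:
    total = int((block >> 1).sum())        # Step S120: total over bits [m-1:1]
    bits = [total & 1, (total >> 1) & 1]   # Step S121: lower 2 bits of sum
    out = block.copy()
    for (y, x), bit in zip(SPECIFIC_PIXELS, bits):
        out[y, x] = (int(out[y, x]) & ~1) | bit   # Step S122: write the LSB
    return out

Because the total is taken over bits [m−1:1] only, it is unchanged by the embedding itself, so a verifier can recompute the same total from the embedded block.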
Returning to
When the process according to the flowchart of
The left side of
In the example of
The processing method indicates the method (a method of obtaining the CRC value, hash value, total value, feature amount, or the like) used to generate the embedding information in Step S103 in
Note that, in a case where the processing method, the information about the pixels and bits used for the processing, and the position information of the specific pixel are fixed as default information for each image 50, this information can be omitted. Omitting the fixed information can reduce or eliminate the encryption processing time.
In the falsification inspection information 510, the threshold information indicates the threshold for comparison with the feature amount in Step S102 in the flowchart of
The imaging location is information (e.g., latitude, longitude, and altitude information) indicating a location where the image 50 is captured.
As described above, it can be said that the falsification inspection information 510 is extraction information used to extract the embedding information from the image.
In the first embodiment, part or all of the falsification inspection information 510 is encrypted and added to the output information. The falsification prevention processing unit 103 encrypts part or all of the falsification inspection information 510 with a public key, for example, by using the embedding unit 1032. In the example of
The encrypted falsification inspection information 510a is added to the image 50 as, for example, the header information 52.
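A sketch of generating the encrypted falsification inspection information 510a is shown below. RSA-OAEP via the Python cryptography package is one possible public-key scheme; the disclosure only requires encryption with a public key, and the field names of the inspection information are assumptions.

import json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def make_encrypted_header(public_key_pem: bytes, inspection_info: dict) -> bytes:
    """Encrypt the falsification inspection information with a public key;
    the plaintext must fit within the RSA-OAEP message size limit."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    plaintext = json.dumps(inspection_info).encode()
    return public_key.encrypt(
        plaintext,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

# Example with illustrative field names: the inspection information might
# hold the processing method, the pixels/bits used, the specific-pixel
# positions, and the threshold:
#   make_encrypted_header(pem, {"method": "sum", "bits": "m-1:1",
#                               "specific_pixels": [[0, 0], [7, 7]],
#                               "threshold": 8})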
Meanwhile, the falsification prevention code 520 is embedded in the image 50 as described with reference to
As described above, in the first embodiment, whether to embed the embedding information is determined on the basis of the feature amount of each block 51, as the predetermined area, obtained by dividing the captured image 50. This provides strong resistance to a differential attack using a saturated image or the like. Furthermore, in the first embodiment, the embedding information is generated on the basis of the image (pixel values) of the target block 51a into which it is embedded; therefore, when falsification is detected, it is possible to readily identify which part of the image 50 has been falsified. Furthermore, in the first embodiment, the information for extracting and restoring the embedding information from the image 50 is encrypted with the public key and added to the image 50 to generate the output information 500, which makes it extremely difficult to analyze the embedding information embedded in the image 50.
(3-3. First Modification of First Embodiment)
Next, a first modification of the first embodiment will be described. In the first embodiment described above, the block 51 obtained by dividing the image 50 is used as the predetermined area for determining whether to embed the embedding information. Meanwhile, in the first modification of the first embodiment, object detection is performed on the image 50, an area corresponding to the detected object is set as the predetermined area, and whether to embed the embedding information is determined on the basis of the feature amount of the predetermined area.
The object detection unit 1033 detects the object included in the image, on the basis of the image information supplied from the pixel array unit 100. The detection of the object by the object detection unit 1033 may be performed by pattern matching against a predetermined object image prepared in advance, or by using a model trained by machine learning with the predetermined object image as training data. Facial recognition may also be used for the detection of the object by the object detection unit 1033.
The object detection unit 1033 passes information indicating an object detection area in the image that includes the detected object, to an embedding information generation unit 1031a and an embedding unit 1032a, together with the image. At this time, as the object detection area, the minimum rectangular region including the detected object may be used, or a rectangular region with a predetermined margin added to the minimum rectangular region may be used. In addition, the object detection unit 1033 passes an object detection value indicating the likelihood of the detected object to the embedding information generation unit 1031a.
The embedding information generation unit 1031a performs threshold determination on the object detection values passed from the object detection unit 1033, and generates the embedding information, on the basis of pixel information about an object detection area having an object detection value exceeding the threshold. The embedding information generation unit 1031a passes the information indicating the object detection area and the corresponding embedding information, to the embedding unit 1032a.
The embedding unit 1032a embeds the embedding information at a predetermined position of a specific pixel in the object detection area, on the basis of the image and the information indicating the object detection area passed from the object detection unit 1033, and the information indicating the object detection area and the corresponding embedding information passed from the embedding information generation unit 1031a. Here, the position of the specific pixel in the object detection area can be determined in advance as, for example, a pixel position relative to the upper, lower, left, and right ends of the object detection area, which is a rectangular region.
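The following sketch illustrates this modification, assuming the detector returns (x0, y0, x1, y1, score) bounding boxes and reusing a block-style embedding function (such as embed_sum_bits above, whose specific-pixel positions are relative to the region's upper-left corner); the threshold value and all names are illustrative.

import numpy as np

DETECTION_THRESHOLD = 0.8   # illustrative likelihood threshold

def embed_in_detections(image: np.ndarray, detections, embed_fn):
    """detections: iterable of (x0, y0, x1, y1, score) bounding boxes;
    embed_fn: a region-embedding function such as embed_sum_bits above."""
    out = image.copy()
    for x0, y0, x1, y1, score in detections:
        if score <= DETECTION_THRESHOLD:    # Step S141, "No": skip this area
            continue
        out[y0:y1, x0:x1] = embed_fn(out[y0:y1, x0:x1])   # Steps S142-S143
    return out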
In Step S140, the falsification prevention processing unit 103 performs object detection processing of detecting the object included in the image based on the image information supplied from the pixel array unit 100, by using the object detection unit 1033.
In the next Step S141, the falsification prevention processing unit 103 determines whether the object detection value indicating the likelihood exceeds a threshold, for one of the objects detected in Step S140, by using the object detection unit 1033. When the falsification prevention processing unit 103 determines that the object detection value is equal to or less than the threshold (Step S141, “No”), the process proceeds to Step S144.
On the other hand, when the falsification prevention processing unit 103 determines that the object detection value exceeds the threshold (Step S141, “Yes”), the process proceeds to Step S142. In the example of
In Step S142, the embedding information generation unit 1031a generates the embedding information, on the basis of the pixel information (pixel value) in each of the object detection areas including the object having an object detection value exceeding the threshold. Here, the method described in Step S103 of the flowchart of
In the next Step S143, the falsification prevention processing unit 103 embeds the embedding information generated in Step S142 into the predetermined position of the specific pixel in the object detection area, by using the embedding unit 1032a. Note that this processing is skipped for object detection areas that are not targets for embedding the embedding information (object detection areas including objects whose object detection value is equal to or less than the threshold).
In the next Step S144, the falsification prevention processing unit 103 determines whether the object detection area processed in Steps S141 to S143 is the last object detection area processed in the image 50. When the falsification prevention processing unit 103 determines that the object detection area is not the last object detection area (Step S144, “No”), the process returns to Step S141 and the processing of a next object detection area in the image 50 is performed. On the other hand, when the falsification prevention processing unit 103 determines that the object detection area is the last object detection area (Step S144, “Yes”), a series of process steps according to the flowchart of
As described above, setting only the object detection areas including objects whose object detection value exceeds the threshold as target areas for generation and embedding of the embedding information narrows the target area, and a falsified portion can be more readily identified.
Note that, in Step S141 described above, whether to set the object detection area as a target area for generation and embedding of the embedding information is determined on the basis of comparison of the object detection value with the threshold, but the determination is not limited to this example. For example, whether to set an area as a target for embedding may also be determined according to the type of the detected object (person, vehicle, cloud, bird, etc.).
(3-4. Second Modification of First Embodiment)
Next, a second modification of the first embodiment will be described. The second modification of the first embodiment is a combination of the first embodiment and the first modification of the first embodiment described above. In other words, in the second modification of the first embodiment, the image supplied from the pixel array unit 100 is divided into the blocks 51, object detection is performed on the image, and, of the blocks 51, those including at least part of an object detection area whose object detection value exceeds the threshold are set as the target blocks 51 into which the embedding information is embedded.
On the basis of the image information supplied from the pixel array unit 100, the object detection/block division unit 1034 divides the image based on the image information into the blocks 51 and detects the object included in the image 50. For the object detection method and the like, the method according to the first modification of the first embodiment described above can be applied directly, so the description thereof is omitted here.
The object detection/block division unit 1034 passes the information indicating the object detection areas each including the detected object in the image and the image divided into the blocks 51, to an embedding information generation unit 1031b and an embedding unit 1032b. Furthermore, the object detection/block division unit 1034 also passes the object detection value corresponding to each object detection area, to the embedding information generation unit 1031b.
The embedding information generation unit 1031b performs threshold determination on the object detection value passed from the object detection/block division unit 1034, and extracts each object detection area whose object detection value exceeds the threshold. Then, the embedding information generation unit 1031b extracts, from the blocks 51 into which the image is divided, each block 51 including at least part of an extracted object detection area.
Furthermore, in
The embedding information generation unit 1031b passes information indicating the blocks 51a and the embedding information corresponding to each block 51a, to the embedding unit 1032b.
The embedding unit 1032b embeds the embedding information into the predetermined position of the specific pixel of each block 51a, on the basis of the image passed from the object detection/block division unit 1034, the information about each target block 51a into which the embedding information is embedded, and the embedding information corresponding to each block 51a.
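A small sketch of the block-selection rule of this second modification is given below: every grid block that intersects a high-likelihood object detection area becomes a target block 51a. The grid geometry and names are assumptions.

def overlapping_blocks(box, block=16):
    """Yield the (row, col) grid index of every block that includes at least
    part of the detection area box = (x0, y0, x1, y1), in pixel coordinates."""
    x0, y0, x1, y1 = box
    for row in range(y0 // block, (y1 - 1) // block + 1):
        for col in range(x0 // block, (x1 - 1) // block + 1):
            yield row, col

# For example, a detection area (10, 10, 40, 25) with 16-pixel blocks yields
# the grid cells (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2).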
As described above, according to the second modification of the first embodiment, each block 51a including at least part of the object detection area based on the object detection is set as the target block into which the embedding information is embedded, and thus, the falsified portion can be readily identified as in the first modification of the first embodiment described above. In addition, a larger area in which the embedding information can be embedded is provided as compared with the first modification of the first embodiment described above, and it is possible to embed the embedding information having a larger data amount.
Next, a second embodiment of the present disclosure will be described. The second embodiment of the present disclosure is an example of using the image included in the output information 500 into which the embedding information has been embedded according to the first embodiment or the modifications thereof. In the second embodiment, the embedding information is extracted from the output information 500 and the presence or absence of falsification of the image is detected on the basis of the extracted embedding information.
(4-1. Existing Technology)
Prior to the description of the second embodiment, for ease of understanding, an existing technology related to falsification prevention will be schematically described.
For example, the image processing software 700a extracts the digital watermark information from the input output information in falsification prevention processing 801a, and compares the extracted digital watermark information with digital watermark information obtained in advance. When the two match, the image processing software 700a determines that the output information (image) has not been falsified and outputs the output information (image) from the PC. The output information output from the PC is transmitted, for example, to another PC, where it is similarly subjected to falsification prevention processing 801b by image processing software 700b.
In such a configuration, when a saturated image or an image captured with low noise and low gain is input to the image processing software 700a, there is a possibility that the position where the digital watermark information is embedded, and the embedded digital watermark information itself, are analyzed by comparing the input image with the output image in a differential attack 802.
(4-2. Configuration According to Second Embodiment)
An input image is input, for example, to a personal computer (PC) 20 serving as an image processing apparatus. This input image is data that has a configuration similar to that of the output information 500 described with reference to
The server 22 decrypts the encrypted falsification inspection information 510a included in the input image with a secret key, by using the falsification inspection software 90. The server 22 then checks the presence or absence of falsification of the output information 500, on the basis of the falsification inspection information 510 obtained by the decryption, by using the falsification inspection software 90 (Step S200).
The server 22 transmits a result of the checking of the presence or absence of falsification by the falsification inspection software 90, to the PC 20 via the network 21. The result of the checking of the presence or absence of falsification is acquired by the image processing software 70 in the PC 20. In Step S201, the PC 20 determines whether the acquired result of the checking of the presence or absence of falsification indicates the presence of falsification, by using the image processing software 70. When the PC 20 determines that the result of the checking indicates the absence of falsification, by using the image processing software 70 (Step S201, “absent”), the process proceeds to Step S202.
In Step S202, the PC 20 can perform image processing (1) on the input image according to the result of the checking, by using the image processing software 70. Here, image processing (1) is processing that does not correspond to falsification of the input image. Examples of such processing include contrast correction, white balance adjustment, and image format conversion.
In the next Step S204, the PC 20 performs falsification prevention processing for preventing falsification by an external device, on the input image, by using the image processing software 70. Here, as the falsification prevention processing, the processing of generating and embedding the embedding information according to the first embodiment or the modifications thereof described above can be applied. After the processing of Step S204, a series of process steps according to the flowchart of
On the other hand, when the PC 20 determines that the result of the checking indicates the presence of falsification, by using the image processing software 70 (Step S201, "present"), the process proceeds to Step S203. In Step S203, the image processing software 70 can perform image processing (2) on the input image according to the result of the checking. In this case, the input image has already been falsified, and therefore any processing may be performed as image processing (2). The image processing software 70 does not perform the falsification prevention processing on the image subjected to image processing (2).
(4-3. Details of Processing According to Second Embodiment)
Next, the processing according to the second embodiment will be described in more detail.
In the flowchart of
The server 22 receives the input image transmitted from the PC 20 (Step S231). In Step S240, the server 22 decrypts the header information 52 of the received input image with the secret key by using the falsification inspection software 90 to restore the falsification inspection information 510. Then, the falsification inspection software 90 acquires processing information included in the falsification inspection information 510, such as the processing method, information about pixels and bits used for the processing, and position information of the specific pixel.
In the next Step S241, the server 22 performs processing of generating the embedding information on the input image received in Step S231, according to the processing information acquired in Step S240 by using the falsification inspection software 90. In other words, the processing is the same as the processing of generating the embedding information performed in the falsification prevention processing unit 103 of the imaging device 1.
In the next Step S242, the server 22 extracts, on the basis of the processing information acquired in Step S240, the embedded information from the input image received from the PC 20 in Step S231, by using the falsification inspection software 90.
In the next Step S243, the server 22 compares the embedding information generated in Step S241 with the embedded information extracted from the input image in Step S242, and determines whether the two are the same, by using the falsification inspection software 90. When the server 22 determines that they are the same, by using the falsification inspection software 90 (Step S243, "Yes"), the process proceeds to Step S244, and it is determined that the image received in Step S231 has not been falsified (absence of falsification).
On the other hand, when the server 22 determines that they are not the same, by using the falsification inspection software 90 (Step S243, "No"), the process proceeds to Step S245, and it is determined that the image received in Step S231 has been falsified (presence of falsification).
After the processing of Step S244 or Step S245, the process proceeds to Step S246, and the server 22 transmits the result of the determination in Step S244 or Step S245, to the PC 20, by using the falsification inspection software 90. This result of the determination is received by the PC 20 in Step S232, and is input to the image processing software 70.
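Under the same assumptions as the embedding sketches above (per-block totals over bits [m−1:1], two specific pixels per block, RSA-OAEP for the header), the server-side check of Steps S240 to S246 might look as follows; the JSON field names are illustrative, not the disclosure's format.

import json
import numpy as np
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

BLOCK = 16   # same assumed block size as in the embedding sketches

def check_image(private_key_pem: bytes, header: bytes, image: np.ndarray) -> bool:
    """Return True for absence of falsification, False for presence."""
    private_key = serialization.load_pem_private_key(private_key_pem, password=None)
    info = json.loads(private_key.decrypt(          # Step S240: restore 510
        header,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None)))
    for by, bx in info["embedded_blocks"]:          # illustrative field name
        blk = image[by * BLOCK:(by + 1) * BLOCK, bx * BLOCK:(bx + 1) * BLOCK]
        total = int((blk >> 1).sum())               # Step S241: regenerate
        expected = [total & 1, (total >> 1) & 1]
        read = [int(blk[y, x]) & 1 for y, x in info["specific_pixels"]]  # S242
        if expected != read:                        # Step S243: compare
            return False                            # Step S245: falsified
    return True                                     # Step S244: not falsified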
In Step S220, the PC 20 determines whether the target input image has been subjected to the falsification prevention processing, on the basis of, for example, the header information 52, by using the image processing software 70. When the PC 20 determines that the target input image has not been subjected to the falsification prevention processing (Step S220, "No"), the process proceeds to Step S226. In this case, it cannot be guaranteed that the input image has not been falsified, and therefore, in Step S226, any processing may be performed as image processing (2). After the processing of Step S226, the PC 20 finishes a series of process steps according to the flowchart of
On the other hand, when the PC 20 determines that the falsification prevention processing has been performed, by using the image processing software 70 in Step S220 (Step S220, “Yes”), the process proceeds to Step S221.
In Step S221, the PC 20 checks whether the input image has been falsified, on the basis of the result of the determination transmitted from the server 22 in Step S232 of the flowchart of
In Step S227, the PC 20 adds information indicating the "presence of falsification" to the input image, by using the image processing software 70, and finishes a series of process steps according to the flowchart of
On the other hand, when the PC 20 determines the absence of falsification on the basis of the result of the determination, by using the image processing software 70 (Step S222, “absent”), the process proceeds to Step S223.
In Step S223, the PC 20 can perform image processing (1), described above as processing that does not correspond to falsification of the image, on the input image according to the result of the checking, by using the image processing software 70. In the next Step S224, the PC 20 determines whether the image processing performed in Step S223 corresponds to falsification processing, by using the image processing software 70. When the PC 20 determines that image processing (1) performed in Step S223 corresponds to the falsification processing, by using the image processing software 70 (Step S224, "Yes"), a series of process steps according to the flowchart of
On the other hand, when the PC 20 determines that image processing (1) performed in Step S223 does not correspond to the falsification processing, by using the image processing software 70 (Step S224, "No"), the process proceeds to Step S225. In Step S225, the PC 20 performs the falsification prevention processing on the input image by using the image processing software 70. Here, the processing described in the first embodiment or the modifications thereof can be applied to the falsification prevention processing. The falsification prevention processing is not limited to this configuration and may be performed by another method.
As described above, in the second embodiment, the encrypted falsification inspection information 510a, obtained by encrypting with the public key the falsification inspection information 510 used to determine the presence or absence of falsification of the image, is transmitted from the PC 20 as the image processing apparatus to the server 22 as the information processing apparatus. The server 22 decrypts the encrypted falsification inspection information 510a with the secret key, and the processing of checking the presence or absence of falsification with the decrypted falsification inspection information 510 is performed on the server 22. This configuration prevents external decryption of the encrypted falsification inspection information 510a, and the presence or absence of falsification of the image can be checked with high confidentiality.
(4-4. Modifications of Second Embodiment)
Next, a modification of the second embodiment will be described. The modification of the second embodiment is an example in which only the information necessary for checking falsification, rather than the entire image, is transmitted from the PC 20 to the server 22. Transmitting only such intermediate information from the PC 20 to the server 22 can reduce the load on the network 21.
In the flowchart of
In the next Step S252, the PC 20 acquires the encrypted falsification inspection information 510a included in the header information 52 of the input image by using the image processing software 70, and transmits the acquired encrypted falsification inspection information 510a, the intermediate information of the embedding information generated in Step S251, and the data of the least significant bit of the input image (when the embedding information is embedded in the least significant bit) to the server 22.
In Step S260, the server 22 receives each piece of information transmitted from the PC 20 in Step S252. Each piece of the received information is input to the falsification inspection software 90.
In the next Step S261, the server 22 decrypts, with the secret key, the encrypted falsification inspection information 510a among the pieces of information received in Step S260, by using the falsification inspection software 90, and restores the falsification inspection information 510. Then, the falsification inspection software 90 acquires the processing information included in the falsification inspection information 510, such as the processing method, the information about the pixels and bits used for the processing, and the position information of the specific pixel.
In the next Step S262, the server 22 acquires the intermediate information from the pieces of information received from the PC 20 in Step S260, by using the falsification inspection software 90, and generates a final value of the embedding information on the basis of the acquired intermediate information and the processing information acquired in Step S261. The final value generated here corresponds to the embedding information embedded in the output information 500 captured, for example, by the imaging device 1 and generated according to the first embodiment or the modifications thereof by the falsification prevention processing unit 103 of the imaging device 1, and is information guaranteed to come from an unfalsified image.
In the next Step S263, the server 22 reproduces the embedding information from the least significant bit data of the input image, among the pieces of information received from the PC 20 in Step S260, and the position information of the specific pixel acquired in Step S261, by using the falsification inspection software 90. The embedding information reproduced here corresponds to the embedding information that was embedded in the input image input to the PC 20.
In the next Step S264, the server 22 compares the final value of the embedding information generated in Step S262 with the embedding information reproduced in Step S263, and determines whether the two are the same, by using the falsification inspection software 90. When the server 22 determines that they are the same, by using the falsification inspection software 90 (Step S264, "Yes"), the process proceeds to Step S265, and it is determined that the image corresponding to the information received in Step S260 has not been falsified (absence of falsification).
On the other hand, when the server 22 determines that they are not the same, by using the falsification inspection software 90 (Step S264, "No"), the process proceeds to Step S266, and it is determined that the image has been falsified (presence of falsification).
After the processing of Step S265 or Step S266, the process proceeds to Step S267, and the server 22 transmits the result of the determination in Step S265 or Step S266, to the PC 20, by using the falsification inspection software 90. This result of the determination is received by the PC 20 in Step S253, and is input to the image processing software 70.
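The following sketch illustrates the split of this modification under the same assumptions as before: the PC 20 sends only per-block totals (the intermediate information), the least-significant-bit plane, and the encrypted header, and the server 22 completes and compares the embedding information. All names and the payload format are assumptions.

import numpy as np

BLOCK = 16   # same assumed block size as before

def pc_side(image: np.ndarray, encrypted_header: bytes) -> dict:
    """Steps S251/S252: build the payload sent to the server instead of the image."""
    h, w = image.shape
    totals = [[int((image[y:y + BLOCK, x:x + BLOCK] >> 1).sum())
               for x in range(0, w, BLOCK)]
              for y in range(0, h, BLOCK)]
    return {"header": encrypted_header,
            "totals": totals,                         # intermediate information
            "lsb": (image & 1).astype(np.uint8)}      # embedded-bit plane

def server_side(payload: dict, info: dict) -> bool:
    """Steps S262-S266; `info` is the decrypted inspection information 510."""
    for by, bx in info["embedded_blocks"]:
        total = payload["totals"][by][bx]
        expected = [total & 1, (total >> 1) & 1]      # Step S262: final value
        read = [int(payload["lsb"][by * BLOCK + y, bx * BLOCK + x])
                for y, x in info["specific_pixels"]]  # Step S263: reproduce
        if expected != read:                          # Step S264: compare
            return False                              # Step S266: falsified
    return True                                       # Step S265: not falsified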
The subsequent processing in the PC 20 is the same as the processing described with reference to
It should be noted that the effects described herein are merely examples and are not intended to restrict the present disclosure, and other effects may be provided.
Note that the present technology can also have the following configurations.
Priority application: JP 2020-189017, filed November 2020, Japan (national).
PCT filing: PCT/JP2021/040586, filed November 4, 2021 (WO).