The present disclosure relates to an image processing method, an image processing system, and a recording medium storing a program, the method, the system, and the program performing processing on an image to which an annotation is added.
To construct learning data for machine learning and so on, annotations, such as labels, are added to image data so that the content of the image data can be recognized. For example, Japanese Unexamined Patent Application Publication No. 2013-161295 discloses a technology for performing labeling on image data.
Annotations are added to subjects, such as people and objects, included in images. The image data to which annotations are added in order to construct learning data for machine learning is large in quantity and varied in kind. Thus, in the process of annotation processing, it is necessary to protect privacy information related to people, such as the people captured in the images and the photography locations.
One non-limiting and exemplary embodiment provides an image processing method, an image processing system, and a program for enhancing the privacy protection for images in the process of annotation processing.
In one general aspect, the techniques disclosed here feature an image processing method including: generating a plurality of privacy-protected images by performing privacy-protection image processing on each of a plurality of images; dividing each of the privacy-protected images into a plurality of areas to generate a plurality of divided images and ordering the divided images belonging to the same privacy-protected image so that the divided images form a continuous image; rearranging an order of the ordered divided images; and outputting, as processed images for annotation, the divided images ordered according to the rearranged order.
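By way of non-limiting illustration only, the sequence of operations recited above can be sketched as follows in Python. The function names, the use of NumPy arrays and Gaussian blurring as the privacy-protection image processing, and the division into vertical strips are assumptions for illustration, not limitations of the disclosure.

```python
import random
from typing import List, Tuple

import numpy as np
from scipy.ndimage import gaussian_filter


def protect(image: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    # Privacy-protection image processing; Gaussian blurring is one of the
    # obscuring operations contemplated by the disclosure.
    return gaussian_filter(image, sigma=(sigma, sigma, 0))


def divide(image: np.ndarray, n: int) -> List[np.ndarray]:
    # Divide the privacy-protected image into n vertical strips, ordered left
    # to right so that, replayed in that order, they form a continuous image.
    return list(np.array_split(image, n, axis=1))


def process_for_annotation(images: List[np.ndarray],
                           n: int = 5) -> List[Tuple[int, int, np.ndarray]]:
    pieces: List[Tuple[int, int, np.ndarray]] = []
    for i, img in enumerate(images):
        for j, strip in enumerate(divide(protect(img), n)):
            pieces.append((i, j, strip))  # (source image, strip order, strip)
    random.shuffle(pieces)   # rearrange the order of the ordered divided images
    return pieces            # processed images for annotation, in the new order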
The image processing method and so on according to the present disclosure can improve privacy protection for images in the process of annotation processing.
It should be noted that general or specific embodiments may be implemented as a system, a method, an integrated circuit, a computer program, a storage medium, or any selective combination thereof.
Additional benefits and advantages of the disclosed embodiments will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various embodiments and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
The inventors according to the present disclosure, that is, the present inventors, have studied utilization of a technology using a neural network of deep learning and so on in order to improve the accuracies of recognition and detection of subjects, such as people, in images. Recognition of subjects by deep learning requires a large amount of image data for learning. In the image data for learning, information including the type, the position, and the area of each subject is added to the subject as annotation information, that is, is annotated thereto. Typically, in the annotation, a person manually sets the area of a subject in an image, for example, by surrounding the subject in the image. Also, a company or the like, such as the creator of image data for learning, is conducting a study on outsourcing annotation processing to an outside contractor. In addition, a study is being conducted on outsourcing the annotation processing to a large number of unspecified contractors by using crowdsourcing.
The present inventors also have studied employing digital image data clipped from digital moving images as a large amount of image data to be annotated. In particular, in order to obtain a large amount of image data, the present inventors have studied employing moving images obtained by a photographic device, such as a security camera or a vehicle-mounted camera, for capturing long-duration moving images. Images obtained from such moving images can include unspecified people and things associated with the people. Thus, the present inventors have raised, as an issue, the necessity of preventing a large number of unspecified contractors from recognizing privacy information regarding subjects, such as features of faces or the like of people in an image, things associated with people in an image, and the photography location thereof. In order to address the issue, the present inventors have found a technology for image pre-processing to be performed on an image to which an annotation is to be added.
An embodiment that the present inventors disclose based on the above-described knowledge will be described in detail with reference to the accompanying drawings.
The embodiment described below represents a general or specific example. Numerical values, shapes, materials, constituent elements, the arrangement positions and connections of constituent elements, steps, the order of steps, and so on described in the embodiment below are examples and are not intended to limit the present disclosure. Of the constituent elements in the embodiment described below, the constituent elements not set forth in the independent claims that represent the broadest concept will be described as optional constituent elements. In terms of expression, the ordinal numbers, such as first, second, and third, may be added to constituent elements, as appropriate.
An expression including “generally”, such as “generally parallel” or “generally orthogonal”, may be used in the following description of the embodiment. For example, the expression “generally parallel” not only means “being completely parallel” but also means “being substantially parallel”, that is, including a difference of, for example, a few percent. This also applies to other expressions including “generally”. Each accompanying figure is a schematic diagram and is not necessarily strictly depicted. In addition, in each figure, substantially the same constituent elements are denoted by the same reference numerals, and a redundant description may be omitted or may be briefly given herein.
The configuration of an image processing system 100 according to an embodiment will now be described with reference to
The image processing apparatus 10 may transmit the processed image data to an annotation processing apparatus 30 of an annotation processor or may transmit the processed image data to an annotation relaying apparatus 40, which relays transmission/reception of image data and annotation information between the annotation processing apparatus 30 and the server apparatus 20. The annotation relaying apparatus 40 executes, for example, issuing a request for annotation processing to the annotation processing apparatus 30 of the annotation processor, obtaining image data to which an annotation is to be added from the server apparatus 20, transmitting the image data to the annotation processing apparatus 30, receiving information regarding the added annotation from the annotation processing apparatus 30, associating the information with the image data, and transmitting the associated information and image data to the server apparatus 20. Communication between the annotation processing apparatus 30 and the server apparatus 20 may be communication that is similar to that between the image processing apparatus 10 and the server apparatus 20 or may be implemented by other wireless communication or wired communication.
Communication between the annotation processing apparatus 30 and the annotation relaying apparatus 40 may be communication that is similar to that between the annotation processing apparatus 30 and the server apparatus 20 or may be implemented by other wireless communication or wired communication. For example, the communication between the annotation processing apparatus 30 and the annotation relaying apparatus 40 may employ a mobile communication standard utilized in a third-generation (3G) mobile communication system, a fourth-generation (4G) mobile communication system, or a mobile communications system based on LTE®.
The server apparatus 20 is configured so as to communicate with the image processing apparatus 10. The server apparatus 20 may be an information processing apparatus, such as a computer. The server apparatus 20 may include one or more server apparatuses or may constitute a cloud system. The server apparatus 20 includes a controller 21 that controls the entire server apparatus 20, a communicator 22 that communicates with the image processing apparatus 10, and a data accumulator 23 that accumulates various types of data therein. The communicator 22 communicates with the image processing apparatus 10 through a communication network, such as the Internet. The communicator 22 may be a communication circuit including a communication interface. For example, the communication between the communicator 22 and the image processing apparatus 10 may be implemented by a wireless local area network (LAN), such as Wi-Fi® (Wireless Fidelity), may be implemented by wired communication using a cable, or may be implemented by other wireless communication or wired communication.
The data accumulator 23 is configured with, for example, a hard disk. The data accumulator 23 includes a pre-processing image data accumulator 24, a processed-image data accumulator 25, a processing-parameter accumulator 26, and an annotation-data accumulator 27. Pre-processing image data, which are image data obtained by various photographic devices, are stored in the pre-processing image data accumulator 24 in conjunction with image identifiers (IDs) thereof. Processed image data, which are obtained by executing image processing on the pre-processing image data in the pre-processing image data accumulator 24, are stored in the processed-image data accumulator 25 in conjunction with image IDs thereof. Information regarding the image processing executed on the pre-processing image data is stored in the processing-parameter accumulator 26 in conjunction with the image IDs of the pre-processing image data and the processed image data. Information regarding annotations added to the processed image data is stored in the annotation-data accumulator 27.
The controller 21 controls the communicator 22 and the data accumulator 23. In response to a request from the image processing apparatus 10, the controller 21 retrieves data from the pre-processing image data accumulator 24, the processing-parameter accumulator 26, and the annotation-data accumulator 27 in the data accumulator 23 and transmits the data via the communicator 22. The controller 21 executes storage of corresponding data in the processed-image data accumulator 25 and the processing-parameter accumulator 26 via the communicator 22, the data being received from the image processing apparatus 10. The controller 21 also executes storage of corresponding data in the annotation-data accumulator 27, the data being received from the annotation processing apparatus 30 of the annotation processor. The controller 21 may execute storage of corresponding data in the annotation-data accumulator 27, the data being received from the annotation relaying apparatus 40.
The image processing apparatus 10 may singularly constitute one apparatus or may be incorporated into an information processing apparatus, such as a computer, or another apparatus. For example, the image processing apparatus 10 may be incorporated into the annotation relaying apparatus 40 or an apparatus including the annotation relaying apparatus 40. The image processing apparatus 10 includes a controller 11, a communicator 12, an image obscuring converter 13, an image divider 14, an image rearranger 15, a storage unit 16, and an input unit 17. The controller 11 controls the entire image processing apparatus 10. The input unit 17 is an element that receives various inputs, such as instructions. The storage unit 16 is an element in which various types of information are stored. The storage unit 16 may be configured with a semiconductor memory or the like or may be configured with a volatile memory, a nonvolatile memory, or the like.
The communicator 12 communicates with the communicator 22 in the server apparatus 20, as described above. The communicator 12 may be a communication circuit including a communication interface. A router that is a communication apparatus for relaying a communication between the communicators 12 and 22 may be provided therebetween. The router may relay a communication between the communicator 12 and the communication network.
Under the control of the controller 11, the image obscuring converter 13 obtains pre-processing image data and its image ID from the pre-processing image data accumulator 24 in the server apparatus 20 and performs obscuring processing on the pre-processing image. This obscuring processing is image processing for privacy protection of subjects in an image. More specifically, the image obscuring converter 13 performs obscuring processing, such as blurring processing, mosaic processing, and pixelization processing. Pixelization refers to reducing the effective pixel density of an image, for example, by enlarging each pixel so that a block of adjacent pixels shares a single pixel value. The image obscuring converter 13 may obscure an image having low pixel density by increasing the resolution thereof or may obscure an image having high pixel density by reducing the resolution thereof. The image obscuring converter 13 associates the image ID of the pre-processing image data with the obscured image data resulting from the obscuring processing.
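A minimal sketch of the obscuring operations named above, using the Pillow library, may look as follows; the radius and block parameters are illustrative intensities, not values prescribed by the disclosure.

```python
from PIL import Image, ImageFilter


def blur(img: Image.Image, radius: float = 8.0) -> Image.Image:
    # Blurring processing: Gaussian blur whose intensity is given by radius.
    return img.filter(ImageFilter.GaussianBlur(radius))


def pixelize(img: Image.Image, block: int = 16) -> Image.Image:
    # Mosaic/pixelization processing: downsample, then upsample with nearest
    # neighbour so that each block of pixels shares a single value.
    w, h = img.size
    small = img.resize((max(1, w // block), max(1, h // block)),
                       Image.Resampling.BILINEAR)
    return small.resize((w, h), Image.Resampling.NEAREST)
```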
In the present embodiment, for example, by using blurring processing, the image obscuring converter 13 converts an entire pre-processing image A illustrated in
Before performing the blurring processing, the image obscuring converter 13 executes character recognition on the pre-processing image A. The image obscuring converter 13 recognizes, for example, characters, symbols, and so on, such as those illustrated at the lower left in
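As one hedged example, the deletion of recognized character areas before the blurring might be sketched as below. The bounding boxes are assumed to be supplied by an OCR step whose choice the disclosure leaves open; the blotting color is likewise an assumption.

```python
from PIL import Image, ImageDraw


def delete_character_areas(img: Image.Image, boxes) -> Image.Image:
    # boxes: (left, top, width, height) tuples for recognized characters and
    # symbols, assumed to come from a separate character-recognition step.
    out = img.copy()
    draw = ImageDraw.Draw(out)
    for left, top, width, height in boxes:
        # Blot the character area out before the whole image is blurred, so
        # that place names and the like cannot be recovered afterwards.
        draw.rectangle([left, top, left + width, top + height], fill="black")
    return out
```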
The image obscuring converter 13 associates details, the intensity, and so on of the obscuring processing executed on the pre-processing image A with the image ID of the pre-processing image A and transmits the associated information as processing parameters to the processing-parameter accumulator 26 in the server apparatus 20 for storage. Details of the obscuring processing can include obscuring processing involving resolution changing, in addition to the processing, such as blurring processing, mosaic processing, or pixelization processing.
Under the control of the controller 11, the image divider 14 divides an image resulting from the obscuring processing into a plurality of divided images. This makes it difficult to identify elements related to subjects in the image. In addition, the image divider 14 orders the divided images in accordance with the order of arrangement of the divided images that constitute the image resulting from the obscuring processing. That is, the divided images are ordered so that, when arranged in that order, they form a continuous image.
For example, the image divider 14 divides the image B resulting from the obscuring processing into a plurality of images, as illustrated in
Before dividing the image B, the image divider 14 sets a coordinate system therefor. Specifically, as illustrated in
The image divider 14 then sets the upper left corners of the respective divided images C1 to C5 as reference points E1 to E5 and determines the coordinates of each reference point. The image divider 14 also determines the sizes of the respective divided images C1 to C5. In the present embodiment, pixels are used as indicators of the size of each image. The size of the image B is 1080 (height)×1920 (width) pixels. The size of each of the divided images C1 to C5 is 1080 (height)×384 (width) pixels. The coordinates of the reference points of the divided images C1 to C5 and the sizes thereof constitute division position data of the divided images C1 to C5. The division position data of the divided images C1 to C5 may be coordinates indicating the division lines D1 to D4. In addition, the image divider 14 sets new image IDs for the respective divided images C1 to C5. For example, as illustrated in
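The division position data for this example can be reproduced arithmetically, as in the sketch below; the upper-left corner is assumed as the origin of the coordinate system, and the image-ID format is hypothetical.

```python
def division_position_data(image_id: str, height: int = 1080,
                           width: int = 1920, n: int = 5):
    strip_w = width // n                          # 1920 / 5 = 384 pixels
    records = []
    for k in range(n):
        records.append({
            "divided_image_id": f"{image_id}-C{k + 1}",  # hypothetical ID scheme
            "order": k + 1,                       # ordering so strips form image B
            "reference_point": (k * strip_w, 0),  # upper-left corner (x, y)
            "size": (strip_w, height),            # width x height in pixels
        })
    return records


# For image B this yields reference points E1..E5 at x = 0, 384, 768, 1152, 1536.
```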
Under the control of the controller 11, the image divider 14 transmits the processing parameters of the divided images C1 to C5 to the processing-parameter accumulator 26 in the server apparatus 20 for storage. Under the control of the controller 11, the image divider 14 may also temporarily store the divided images C1 to C5 in the storage unit 16 in conjunction with the respective image IDs and the image ID of the pre-processing image A and may transmit the divided images C1 to C5 and the image IDs to the processed-image data accumulator 25 in the server apparatus 20 for storage.
Under the control of the controller 11, the image rearranger 15 mixes divided images whose original images are the same and divided images whose original images are different from those divided images and arbitrarily rearranges the mixed divided images, that is, randomly shuffles the divided images. This makes it difficult to identify the location of photography of the image. In addition, the image rearranger 15 newly orders all the divided images in accordance with the arrangement order of all the randomly shuffled divided images.
For example, as illustrated in
In accordance with the arrangement order of the divided images illustrated in state (6b), the image rearranger 15 newly orders the divided images C1 to C5, Ca1 to Ca5, Cb1 to Cb5, and so on. The image rearranger 15 may obtain, from the storage unit 16, the divided images Ca1 to Ca5, Cb1 to Cb5, and so on whose original images are the images Aa, Ab, and so on, the divided images and so on being pre-stored in the storage unit 16, or may obtain the divided images Ca1 to Ca5, Cb1 to Cb5, and so on from the processed-image data accumulator 25, the images and so on being pre-stored in the processed-image data accumulator 25 in the server apparatus 20.
Under the control of the controller 11, the image rearranger 15 transmits a plurality of divided images obtained by mixing the divided images C1 to C5, Ca1 to Ca5, Cb1 to Cb5, and so on and rearranging the order thereof to the processed-image data accumulator 25 in the server apparatus 20 as one divided-image group, in conjunction with the newly set order of the divided images, and the divided-image group is stored in the processed-image data accumulator 25. Thus, the divided-image group includes the divided images, the image IDs, and the order thereof. For example, when divided images are supplied to the annotation processor, the divided images in the divided-image group are supplied on the basis of the divided-image group and in accordance with the above-described order.
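A sketch of the mixing and random shuffling is given below, assuming the divided images are referenced by hypothetical IDs such as "A-C1"; the stored divided-image group records each divided image together with its newly set order.

```python
import random


def rearrange(divided_groups: dict) -> list:
    # divided_groups maps a source image ID to its ordered divided-image IDs.
    mixed = [sid for strips in divided_groups.values() for sid in strips]
    random.shuffle(mixed)   # arbitrary rearrangement across source images
    # The new order of each divided image is its position in the shuffled list.
    return [{"divided_image_id": sid, "new_order": i + 1}
            for i, sid in enumerate(mixed)]


divided_image_group = rearrange({
    "A":  ["A-C1", "A-C2", "A-C3", "A-C4", "A-C5"],
    "Aa": ["Aa-C1", "Aa-C2", "Aa-C3", "Aa-C4", "Aa-C5"],
    "Ab": ["Ab-C1", "Ab-C2", "Ab-C3", "Ab-C4", "Ab-C5"],
})
```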
The controller 21 in the server apparatus 20 and the constituent elements, namely, the controller 11, the image obscuring converter 13, the image divider 14, and the image rearranger 15, in the image processing apparatus 10 may be implemented by dedicated hardware or may be implemented by executing a software program appropriate for the individual constituent elements. In this case, each constituent element may have, for example, a computational processing unit (not illustrated) and a storage unit (not illustrated) in which a control program is stored. Examples of the computational processing unit include a micro processing unit (MPU), a central processing unit (CPU), and so on. One example of the storage unit is a memory. Each constituent element may be constituted by a single element for performing centralized control or may be constituted by a plurality of elements for performing distributed control in cooperation with each other. The software program may be provided as application software via communication through a communication network such as the Internet, communication based on a mobile communication standard, or the like.
Each constituent element may be implemented by a circuit, such as a large-scale integrated (LSI) circuit or a system LSI circuit. A plurality of constituent elements may constitute one circuit as a whole or may constitute respective individual circuits. Each circuit may be a general-purpose circuit or a dedicated circuit.
The system LSI is a super-multifunctional LSI manufactured by integrating a plurality of constituent elements on one chip and is, specifically, a computer system that includes a microprocessor, a read-only memory (ROM), a random-access memory (RAM), and so on. A computer program is stored in the RAM. The microprocessor operates in accordance with the computer program, so that the system LSI realizes its functions. The system LSI and the LSI may each be a field programmable gate array (FPGA) that can be programmed after manufacture of an LSI or may include a reconfigurable processor that allows reconfiguration of connections and settings of circuit cells inside an LSI.
Some or all of the above-described constituent elements may be implemented by a detachable integrated circuit (IC) card or a single independent module. The IC card or the module may be a computer system including a microprocessor, a ROM, a RAM, and so on. The IC card or the module may include the above-described LSI or system LSI. The microprocessor operates in accordance with the computer program, so that the IC card or the module realizes its functions. The IC card or the module may be tamper-proof.
One example of the operation of the image processing system 100 will be described with reference to
An apparatus that is independent from the image processing system 100 stores various types of image data in the pre-processing image data accumulator 24 in the server apparatus 20. For example, an image provider that has a contract with the creator sends image data of captured moving images or the like, obtained with a security camera, a vehicle-mounted camera, or the like, to the pre-processing image data accumulator 24. In this case, when the server apparatus 20 constitutes a cloud system, the image data can be easily stored.
With respect to the operation of the image processing system 100, the controller 11 in the image processing apparatus 10 issues a request for pre-processing image data to the server apparatus 20, in accordance with an instruction that an operator (who may be the above-described creator) of the image processing apparatus 10 inputs to the input unit 17 (step S101).
When the server apparatus 20 receives the request, the controller 21 therein transmits the pre-processing image data stored in the pre-processing image data accumulator 24 and the image ID set for the image data to the image processing apparatus 10. As a result, the controller 11 in the image processing apparatus 10 obtains the pre-processing image data and the image ID (step S102).
The controller 11 in the image processing apparatus 10 causes the image obscuring converter 13 to execute character recognition for detecting characters, symbols, and so on displayed in the pre-processing image. In addition, for example, as illustrated in
Next, the controller 11 causes the image obscuring converter 13 to execute obscuring processing for obscuring the entire image resulting from the deletion of the character area (step S104). In the present embodiment, the image obscuring converter 13 executes blurring processing on the entire image, for example, as illustrated in
In addition, the controller 11 causes the image obscuring converter 13 to transmit, to the server apparatus 20 as processing parameters, information regarding the details and the intensity of the obscuring processing executed on the image resulting from the deletion of the character area. The server apparatus 20 then receives the processing parameters, and the controller 21 therein stores the processing parameters in the processing-parameter accumulator 26 (step S105).
Thereafter, the controller 11 causes the image divider 14 to divide the image resulting from the obscuring processing into a plurality of images (step S106). For example, as illustrated in
Also, based on a coordinate system that the image obscuring converter 13 or the image divider 14 sets for the pre-division image, the image divider 14 determines the coordinates of reference points and the areas of the images, that is, the sizes of the images, for the divided images. For example, as illustrated in
Next, the controller 11 causes the image divider 14 to temporarily store the divided images in the storage unit 16 in the image processing apparatus 10, in conjunction with the image ID, the order, and the image ID of the pre-processing image, which is the original image of the divided images (step S107).
The controller 11 also causes the image divider 14 to transmit division position data, which includes the image IDs of the respective divided images, the order of the divided images, the coordinates of the reference points and the sizes of the divided images, and the image ID of the pre-processing image of the divided images, to the server apparatus 20 as processing parameters. The server apparatus 20 then receives the processing parameters, and the controller 21 therein stores the processing parameters in the processing-parameter accumulator 26 (step S108).
Next, with respect to the divided images stored in the storage unit 16, the controller 11 checks the number of pre-processing images, which are original images of the divided images, that is, the number of pre-processing images from which the divided images were generated. If the number of pre-processing images is larger than or equal to a predetermined number (“yes” in step S109), the process proceeds to step S110. The controller 11 may check, for example, the image IDs of pre-processing images corresponding to the divided images and may determine whether or not the number of image IDs of the pre-processing images is larger than or equal to a predetermined number. If the number of pre-processing images from which the divided images were generated is smaller than the predetermined number (“no” in step S109), the process returns to step S101, in which the controller 11 issues a request for other pre-processing image data to the server apparatus 20. Then, the processes in steps S102 to S108 are repeated, so that new divided images whose original images are the other pre-processing images are generated and are stored in the storage unit 16. As a result, the number of pre-processing images from which the divided images are generated increases. The predetermined number can be selected from values that are larger than or equal to 2.
In step S110, the controller 11 causes the image rearranger 15 to obtain divided images whose original images are a plurality of pre-processing images stored in the storage unit 16, to mix the obtained divided images, and to execute image rearrangement processing for arbitrarily rearranging the arrangement order of the divided images, that is, for randomly shuffling the divided images, for example, as illustrated in
In addition, the controller 11 causes the image rearranger 15 to transmit the group of divided images on which the image rearrangement processing was executed, together with the image IDs of the divided images and the new order thereof, to the server apparatus 20 as processed image data. When the server apparatus 20 then receives the processed image data, the controller 21 therein stores the processed image data in the processed-image data accumulator 25 as image data for annotation (step S111).
With the processed image data obtained by executing the processes in steps S101 to S111 in the manner described above, the image obscuring processing makes it difficult to identify detailed subject features, such as the faces of people in each image, for example, as in the divided images illustrated in
The processing for adding annotations to the image data stored in the server apparatus 20 is executed as described below. Referring to
The processed images stored in the processed-image data accumulator 25 in the server apparatus 20 are supplied to the annotation processing apparatus 30 as images to which annotation is to be added. Although, in the present embodiment, the annotation relaying apparatus 40 transmits the processed image data in the server apparatus 20 to the annotation processing apparatus 30, the annotation processing apparatus 30 may directly obtain processed image data from the server apparatus 20.
The images supplied to the annotation processing apparatus 30 are based on a divided-image group that includes divided images whose original images are pre-processing images and on which the image rearrangement processing described above was performed. Images in the same divided-image group, that is, in the same processed image group, are sequentially supplied to the annotation processing apparatus 30 in accordance with the arrangement order resulting from the image rearrangement processing. Images in one processed image group may be supplied to a plurality of annotation processing apparatuses 30. With respect to the processed images that are sequentially supplied, the operator of the annotation processing apparatus 30 identifies a subject area, for example, by surrounding a subject, such as the person H4, with a frame An, as illustrated in the divided image C2 in
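The annotation information returned for each processed image might, for example, take a form such as the following; the field names and values are assumptions for illustration, not a format set forth in the disclosure.

```python
from dataclasses import dataclass


@dataclass
class Annotation:
    divided_image_id: str   # processed image that was annotated, e.g. "A-C2"
    subject_type: str       # e.g. "person" for the person H4
    frame: tuple            # (left, top, width, height) of the frame An


record = Annotation("A-C2", "person", (120, 310, 80, 220))  # illustrative values
```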
The supply of the processed images to the annotation processing apparatus 30 may be executed so that a plurality of processed images do not exist in the annotation processing apparatus 30 at the same time or may be executed so that a plurality of processed images are permitted to exist in the annotation processing apparatus 30 at the same time. However, it is desirable that the processed images be supplied so that associations therebetween are not identified.
In addition, by using the processing parameters stored in the processing-parameter accumulator 26 in the server apparatus 20, the creator can deactivate the obscuring processing on the processed image data stored in the processed-image data accumulator 25 and can further associate the processed image data with the annotation information stored in the annotation-data accumulator 27. Thus, the processed image data can be used as learning image data for machine learning.
The description below will be given of a first modification of the operation of the image processing system 100. When the image processing apparatus 10 divides an image resulting from the obscuring processing into a plurality of images, the division lines set for the image resulting from the obscuring processing in the embodiment described above correspond to the borders of the divided images, and thus the divided images do not overlap each other. However, in this modification, the divided images overlap each other, which is a difference from the embodiment. Differences from the embodiment will be mainly described below with respect to this modification.
Referring to
The image divider 14 sets five horizontally arranged divided images C21, C22, C23, C24, and C25 so that they extend beyond the corresponding division lines D1 to D4. Specifically, the divided image C21 has one end at a position beyond the division line D1. The divided image C22 has two opposite ends at positions beyond the division lines D1 and D2. The divided image C23 has two opposite ends at positions beyond the division lines D2 and D3. The divided image C24 has two opposite ends at positions beyond the division lines D3 and D4. The divided image C25 has one end at a position beyond the division line D4. Hence, the divided images C21 and C22 overlap each other in an overlap area F1 along the division line D1, and the divided images C22 and C23 overlap each other in an overlap area F2 along the division line D2. The divided images C23 and C24 overlap each other in an overlap area F3 along the division line D3. The divided images C24 and C25 overlap each other in an overlap area F4 along the division line D4.
The image divider 14 calculates the coordinates of reference points of the divided images C21 to C25 and the sizes thereof and further orders the divided images C21 to C25. The image divider 14 then associates the coordinates of the reference points, the sizes, and the order of the divided images C21 to C25 with the image IDs set for the divided images C21 to C25 and the image ID of the pre-processing image, which is the original image thereof, and stores the associated information in the processing-parameter accumulator 26 in the server apparatus 20 as processing parameters. Other operations of the image processing apparatus 10 are substantially the same as those in the embodiment.
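A sketch of the overlapping division is given below, assuming each strip is extended by a fixed margin beyond its division lines; the margin of 48 pixels is an illustrative value, not one prescribed by the disclosure.

```python
def overlapping_strips(width: int = 1920, height: int = 1080,
                       n: int = 5, margin: int = 48):
    # Each strip is widened by `margin` pixels beyond its division lines,
    # producing the overlap areas F1..F4 along D1..D4.
    strip_w = width // n
    strips = []
    for k in range(n):
        left = max(0, k * strip_w - margin)             # extend beyond D_k
        right = min(width, (k + 1) * strip_w + margin)  # extend beyond D_(k+1)
        strips.append({"order": k + 1,
                       "reference_point": (left, 0),
                       "size": (right - left, height)})
    return strips
```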
As a result of providing the overlap areas F1 to F4 in the divided images C21 to C25, for example, each of the people H1, H2, and H3 included in the people H1 to H4 and located on the corresponding division lines D1, D3, and D4 is entirely or generally entirely displayed in at least one of two divided images including the corresponding overlap area F1, F3, or F4, as illustrated in
Now, when a description is given of the person H2 by way of example, the area and the coordinates of an annotation added to the person H2 can be determined based on the processing parameters and the annotation information of the divided image C23, that is, the processed image C23. The coordinates are based on a coordinate system set for the pre-processing image or the pre-division image. Also, the area and the coordinates of an annotation added to the person H2 can be determined based on the processing parameters and the annotation information of the processed image C24. The areas of the two annotations mostly overlap each other. As a result, the annotations added to the person H2 in the processed images C23 and C24 can be identified as being annotations for the same person. Hence, the accuracy of annotation increases.
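The identification of the two annotations as annotations for the same person can be sketched as follows: each annotation box is mapped back into the pre-division coordinate system using the reference point of its divided image, and the overlap of the mapped areas is tested. The boxes, reference points (consistent with the 48-pixel margin assumed above), and the overlap threshold are hypothetical values.

```python
def to_global(box, reference_point):
    # Map a box from divided-image coordinates back into the coordinate
    # system of the pre-division image, using the strip's reference point.
    (x, y, w, h), (rx, ry) = box, reference_point
    return (x + rx, y + ry, w, h)


def iou(a, b) -> float:
    # Intersection-over-union of two (x, y, width, height) boxes.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0


# Hypothetical annotations for the person H2 in processed images C23 and C24.
box23, ref23 = (412, 300, 40, 120), (720, 0)   # box in C23; C23's reference point
box24, ref24 = (28, 300, 40, 120), (1104, 0)   # box in C24; C24's reference point
same_person = iou(to_global(box23, ref23), to_global(box24, ref24)) > 0.5
```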
On the other hand, as illustrated in
Thus, provision of overlap areas in divided images in the manner described above increases the accuracy of annotation for a subject displayed in the vicinity of a border between the divided images.
The following description will be given of a second modification of the operation of the image processing system 100. The operation of the image processing system 100 according to the second modification differs from that of the above-described embodiment in that, after the annotation processor adds annotations to processed images on which the first image processing was performed, second image processing is executed on the annotated processed images and annotation processing is performed again. Differences from the embodiment will be mainly described below with respect to this modification.
Next, annotation addition processing on processed images resulting from the first image processing is executed via the annotation processing apparatus 30 of the annotation processor (step S201). Annotation information added to the processed images is then stored in the annotation-data accumulator 27 in the server apparatus 20. In this case, for example, annotations are added to the people H1, H3, and H4 illustrated in
Thereafter, the controller 11 in the image processing apparatus 10 issues, to the server apparatus 20, a request for the processed image data on which the first image processing was performed and to which the annotations are added, processing parameters of the processed images, and annotation information of the processed images. The controller 21 in the server apparatus 20 transmits, to the image processing apparatus 10, corresponding processed images, processing parameters, and annotation information stored in the processed-image data accumulator 25, the processing-parameter accumulator 26, and the annotation-data accumulator 27. Thus, the image processing apparatus 10 obtains the requested data (step S202).
The controller 11 in the image processing apparatus 10 causes the image obscuring converter 13 to identify the ranges and the positions of the respective areas to which the annotations are added in the processed images. In addition, the image obscuring converter 13 executes obscuring processing on the identified annotation areas by using a degree of obscuration that is higher than or equal to that in the first obscuring processing. Specifically, the image obscuring converter 13 executes deletion processing for blotting out the identified annotation areas (step S203). For example, the annotation areas for the people H1, H3, and H4 illustrated in
After step S203, based on the processing parameters, the controller 11 causes the image obscuring converter 13 to identify the details of the first obscuring processing executed on the processed images. In addition, based on the identified information, the image obscuring converter 13 deactivates the first obscuring processing on the processed images and executes other obscuring processing using a relatively low degree of obscuration, which is lower than that of the first obscuring processing, that is, second obscuring processing, on the resulting processed images (step S204). In this case, since the first obscuring processing on the areas on which the first obscuring processing was executed is deactivated in the processed images, the obscuring processing for the annotation areas on which the obscuring processing was executed in step S203 is maintained. Thus, the degree of obscuration of the obscuring processing in step S203 may be equivalent to that of the first obscuring processing. Although the second obscuring processing is executed on the areas on which the first obscuring processing was deactivated, the second obscuring processing may be executed on the entire processed images. In this case, the obscuring processing on the annotation areas is also maintained. Herein, the second obscuring processing using the degree of obscuration that is lower than that of the first obscuring processing is one example of privacy-protection image processing by using a third intensity.
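One possible sketch of this second image processing pass is shown below. Deactivation of the first obscuring processing is modeled here by starting again from the stored pre-processing image identified via the processing parameters; the blur radius stands in for the third intensity and is an assumption.

```python
from PIL import Image, ImageDraw, ImageFilter


def second_pass(original: Image.Image, annotation_boxes,
                third_intensity_radius: float = 3.0) -> Image.Image:
    out = original.copy()
    draw = ImageDraw.Draw(out)
    for left, top, w, h in annotation_boxes:
        # Deletion processing (step S203): blot out every already-annotated
        # area with a degree of obscuration at least that of the first pass.
        draw.rectangle([left, top, left + w, top + h], fill="black")
    # Second obscuring processing (step S204) with a lower intensity than the
    # first, so subjects that were previously unclear can now be annotated.
    return out.filter(ImageFilter.GaussianBlur(third_intensity_radius))
```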
The relatively low degree of obscuration is, for example, a degree of obscuration with which the annotation processor can determine that the person H2 illustrated in
In addition, the controller 11 causes the image obscuring converter 13 to store information regarding the details and intensity of the second obscuring processing in the processing-parameter accumulator 26 in the server apparatus 20 as processing parameters (step S205). The controller 11 also causes the image obscuring converter 13 to store the processed image resulting from the second image processing in the processed-image data accumulator 25 in the server apparatus 20 as image data for annotation (step S206).
Next, annotation addition processing on the processed images resulting from the second image processing is executed via the annotation processing apparatus 30 of the annotation processor (step S207). Then, annotation information added to the processed images is stored in the annotation-data accumulator 27 in the server apparatus 20. In this case, for example, an annotation is added to the person H2 illustrated in
The description below will be given of a third modification of the operation of the image processing system 100. In the operation of the image processing system 100 according to the third modification, image processing is executed twice on some of a plurality of pre-processing images to be subjected to the image processing, as in the second modification. Then, image processing is further executed once on the remaining pre-processing images by using obscuring processing based on the obscuring processing executed on some of the pre-processing images. Differences from the embodiment and the first and second modifications will be mainly described with respect to this modification.
The pre-processing images stored in the pre-processing image data accumulator 24 have been ordered with numbers or the like given thereto. The first pre-processing images are some pre-processing images selected from the pre-processing images to be subjected to the image processing, and the remaining pre-processing images, excluding the first pre-processing images, are second pre-processing images. In this modification, the first pre-processing images are the pre-processing images that remain after extracting one pre-processing image per two pre-processing images from the pre-processing images to be subjected to the image processing. Thus, the first pre-processing images and the second pre-processing images each have every other number.
The selection method for the first pre-processing images is not limited to the above-described method and may be any method. For example, the number of first pre-processing images may be equal to the number of second pre-processing images, as described above, or may be different therefrom. The first pre-processing images may also be pre-processing images selected so as to have continuous numbers, for example, pre-processing images that remain after extracting one pre-processing image per three or more pre-processing images. Alternatively, the first pre-processing images may be selected so that both the first pre-processing images and the second pre-processing images have continuous numbers, for example, pre-processing images that remain after extracting two pre-processing images with continuous numbers per four pre-processing images. Also, the first pre-processing images may be selected so that only the second pre-processing images have continuous numbers. Since the information regarding the obscuring processing on the first pre-processing images is used in the image processing of the second pre-processing images, it is desirable that the number of first pre-processing images be larger than or equal to the number of second pre-processing images.
After step S206, if the controller 11 in the image processing apparatus 10 determines that the second image processing using the second obscuring processing on all the first pre-processing images is completed (“yes” in step S301), the process proceeds to step S302 in order to execute image processing on the second pre-processing images. On the other hand, upon determining that the second image processing on all the first pre-processing images is not completed (“no” in step S301), the process returns to step S101, in which the controller 11 executes the image processing on an unprocessed first pre-processing image. Alternatively, if there is a first pre-processing image on which only the first image processing is completed, the controller 11 returns to the process in step S202.
In step S302, the controller 11 in the image processing apparatus 10 issues a request for second pre-processing image data to the server apparatus 20. The controller 11 then obtains the second pre-processing image data stored in the pre-processing image data accumulator 24 and transmitted from the controller 21 in the server apparatus 20.
Next, in step S303, the controller 11 in the image processing apparatus 10 issues, to the server apparatus 20, a request for information regarding the details of the first obscuring processing and the second obscuring processing executed on the first pre-processing images previous to and subsequent to the obtained second pre-processing image. The controller 11 then obtains the information stored in the processing-parameter accumulator 26 in the server apparatus 20 and transmitted by the controller 21. For example, the number of the second pre-processing image is denoted by n, and the numbers of the first pre-processing images previous to and subsequent to the second pre-processing image are denoted by n−1 and n+1.
Next, the controller 11 in the image processing apparatus 10 causes the image obscuring converter 13 to determine the details of third obscuring processing, which is obscuring processing to be executed on the second pre-processing image with number n, based on the details of the first obscuring processing and the second obscuring processing on the first pre-processing images with numbers n−1 and n+1 (step S304). In this modification, a degree of obscuration indicating the intensity of the third obscuring processing is determined. The degree of obscuration for the third obscuring processing is selected from values between the two degrees of obscuration of the first obscuring processing on the first pre-processing images with numbers n−1 and n+1 and the two degrees of obscuration of the second obscuring processing thereon. The average value, median, or the like of the two degrees of obscuration of the first obscuring processing and the two degrees of obscuration of the second obscuring processing is used as the selected value. The average value may be an arithmetic mean, geometric mean, harmonic mean, weighted mean, or the like.
The details of the first obscuring processing and the second obscuring processing on the first pre-processing images with numbers close to number n may be used to determine the details of the third obscuring processing on the second pre-processing image with number n. The number of first pre-processing images used to determine the details of the third obscuring processing is not limited to two and may be three or more. For example, not only the first pre-processing images with numbers n−1 and n+1 but also the first pre-processing images with numbers close to number n, such as numbers n−3 and n+3, may be used to determine the details of the third obscuring processing on the second pre-processing image with number n. Also, instead of the first pre-processing images with numbers n−1 and n+1 previous to and subsequent to the second pre-processing image with number n, only the first pre-processing images with previous numbers, such as numbers n−3 and n−1, or only the first pre-processing images with subsequent numbers, such as numbers n+1 and n+3, may be used to determine the details of the third obscuring processing on the second pre-processing image with number n.
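A sketch of deriving the degree of obscuration for the third obscuring processing from the stored degrees of the neighboring first pre-processing images is given below; the numeric degrees are illustrative values only.

```python
from statistics import mean, median


def third_obscuration_degree(first_prev: float, second_prev: float,
                             first_next: float, second_next: float,
                             how: str = "mean") -> float:
    # Select a value between the stored first- and second-pass degrees of
    # obscuration of the first pre-processing images with numbers n-1 and n+1.
    values = [first_prev, second_prev, first_next, second_next]
    return mean(values) if how == "mean" else median(values)


degree_n = third_obscuration_degree(8.0, 3.0, 8.0, 3.0)  # illustrative degrees
```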
After determining the details of the third obscuring processing, the controller 11 in the image processing apparatus 10 generates processed images resulting from the image processing using the third obscuring processing, by executing the processes in steps S103 to S111, as in the embodiment. Then, annotation addition processing is executed on the generated processed images.
As a result of the processing described above, annotation addition processing is executed twice on the same image with respect to the first pre-processing images, and annotation addition processing is executed once on the same image with respect to the second pre-processing images. Hence, the number of times the annotation addition processing is executed decreases, compared with a case in which the annotation addition processing is executed twice on all images in the manner of the second modification. In this modification, at least one of the division processing on the image resulting from the obscuring processing and the rearrangement processing on the divided images, as in the processes in steps S106 to S110 in the embodiment, may be omitted.
As described above, the image processing system 100 according to the embodiment includes the image obscuring converter 13 and the image divider 14. The image obscuring converter 13 serves as an image converter that generates a privacy-protected image by performing obscuring processing, which is privacy-protection image processing, on an image. The image divider 14 generates a plurality of divided images by dividing the privacy-protected image into a plurality of areas. The image divider 14 orders the divided images that form the privacy-protected image. The image processing system 100 further includes the image rearranger 15 that rearranges the order of the ordered divided images and that newly orders the divided images according to the rearranged order.
In the above-described configuration, since the privacy-protected image of each image is an image that has been subjected to the obscuring processing, which is privacy-protection image processing, it is difficult to identify subjects' privacy information, such as features of subjects, in each privacy-protected image. In addition, since each privacy-protected image is divided, it is difficult to identify the subjects' privacy information, such as the photography location of the image. Additionally, since the order of divided images is rearranged, it is more difficult to identify the subjects' privacy information, such as the photography location of the image.
Also, the image processing system 100 according to the second modification of the embodiment includes the image obscuring converter 13 and the controller 11. The image obscuring converter 13 serves as a first image converter that generates a first privacy-protected image by performing the first obscuring processing, which is privacy-protection image processing using a first intensity, on an image. The controller 11 serves as a first output that outputs the first privacy-protected image as an image for annotation. The image obscuring converter 13 also functions as a second image converter that generates a second privacy-protected image by performing deletion processing, which is privacy-protection image processing using a second intensity that is higher than or equal to the first intensity, on an area to which an annotation is added and that is included in the first privacy-protected image to which the annotation is added. In addition, the image obscuring converter 13 functions as a third image converter that generates a third privacy-protected image by deactivating the first obscuring processing, performed by the first image converter, on the second privacy-protected image and performing the second obscuring processing, which is privacy-protection image processing using a third intensity lower than the first intensity. The controller 11 also functions as a second output that outputs the third privacy-protected image as an image for annotation.
In the above-described configuration, since an image is subjected to the first obscuring processing, the second obscuring processing, and the deletion processing, which are privacy-protection image processing, it is difficult to identify subjects' privacy information, such as subjects' features, in the image. In addition, the first privacy-protected image resulting from the first obscuring processing allows an annotation to be added to a clear subject in a pre-processing image while making it difficult to identify the subject's features. Additionally, it is possible to make it difficult to add an annotation to an unclear subject in the pre-processing image. The second privacy-protected image resulting from the deletion processing maintains or increases the difficulty of identifying the features of a subject to which an annotation is added in the first privacy-protected image. The third privacy-protected image resulting from the second obscuring processing allows an annotation to be added to a subject to which adding an annotation was difficult in the first privacy-protected image, yet makes it difficult to identify the subject's features. Identifying the features of a subject to which an annotation is added in the first privacy-protected image remains difficult. This can make it difficult to identify the privacy information of all subjects in an image, while making it possible to add annotations to all the subjects.
In the image processing system 100 according to the third modification of the embodiment, the image obscuring converter 13 obtains a fourth intensity for the third obscuring processing, which is privacy-protection image processing, based on the first intensity for the first obscuring processing and the third intensity for the second obscuring processing. In addition, the image obscuring converter 13 executes the third obscuring processing on an image on which the image processing has not been executed. The controller 11 then executes division processing, divided-image rearrangement processing, and so on on the image resulting from the third obscuring processing.
In the above-described configuration, some of a plurality of images to be subjected to the image processing are subjected to the image processing using the first obscuring processing and the second obscuring processing and are subjected to annotation processing twice, that is, between the first obscuring processing and the second obscuring processing and after the second obscuring processing. Other images to be subjected to the image processing are subjected to the image processing using the third obscuring processing and are subjected to annotation processing once after the third obscuring processing. Hence, compared with a case in which annotation processing is executed twice on all images, the number of times the annotation processing is executed decreases, thus making it possible to simplify and expedite the image processing and the annotation processing.
Also, an image processing method according to the embodiment includes: generating a plurality of privacy-protected images by performing obscuring processing, which is privacy-protection image processing, on each of a plurality of images; dividing each of the privacy-protected images into a plurality of areas to generate a plurality of divided images and ordering the divided images belonging to the same privacy-protected image so that the divided images form a continuous image; rearranging an order of the ordered divided images; and outputting, as processed images for annotation, the divided images ordered according to the rearranged order. The above-described method provides an advantage that is the same as or similar to that of the image processing system 100 according to the embodiment.
An image processing method according to the second modification of the embodiment further includes: obtaining, as first processed images, the processed images to which an annotation is added, after the outputting of the divided images; performing deletion processing on an area that is included in the first processed images and to which the annotation is added, the deletion processing being privacy-protection image processing using a second intensity higher than or equal to a first intensity of first obscuring processing that has been performed; performing, on the first processed images resulting from the deletion processing, second obscuring processing using a third intensity lower than the first intensity after deactivating the first obscuring processing; and outputting, as second processed images for annotation, the first processed images resulting from the second obscuring processing. The above-described method provides an advantage that is the same as or similar to that of the image processing system 100 according to the second modification.
An image processing method according to the second modification of the embodiment includes generating a first privacy-protected image by performing first obscuring processing using a first intensity on an image; outputting the first privacy-protected image as an image for annotation; obtaining the first privacy-protected image to which an annotation is added, after the outputting of the first privacy-protected image; generating a second privacy-protected image by performing, on an area to which the annotation is added and that is included in the first privacy-protected image to which the annotation is added, deletion processing using a second intensity higher than or equal to the first intensity; generating a third privacy-protected image by performing, on the second privacy-protected image, second obscuring processing using a third intensity lower than the first intensity after deactivating the first obscuring processing; and outputting the third privacy-protected image as an image for annotation. The above-described method also provides an advantage that is the same as or similar to that of the image processing system 100 according to the second modification.
An image processing method according to the third modification of the embodiment further includes: obtaining a fourth intensity of privacy-protection image processing, based on the first intensity of the first obscuring processing and the third intensity of the second obscuring processing; performing third obscuring processing, which is privacy-protection image processing using the fourth intensity, on an unprocessed image that is included in the plurality of images and on which the privacy-protection image processing is not executed; dividing the unprocessed image resulting from the third obscuring processing into a plurality of areas to generate a plurality of divided images and ordering the divided images belonging to the same privacy-protected image so that the divided images form a continuous image; rearranging an order of the ordered divided images; and outputting, as processed images for annotation, the divided images ordered according to the rearranged order. The above-described method also provides an advantage that is the same as or similar to that of the image processing system 100 according to the third modification.
In the image processing system 100 and the image processing method according to the second and third modifications, the privacy-protection image processing using the second intensity is blotting out or deleting the area to which the annotation is added. Thus, during annotation addition processing on an image for annotation after the second obscuring processing, redundant addition of an annotation to a subject that was already annotated after the first obscuring processing is suppressed.
In the image processing system 100 and the image processing method according to the embodiment and the modifications, the obscuring processing is mosaic processing, blurring processing, or pixelization processing. This makes it possible to easily change the intensity of the obscuring processing.
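For example, with pixelization the intensity maps naturally onto a single block-size parameter; the block sizes below are illustrative.

```python
import numpy as np

def pixelize(image: np.ndarray, block: int) -> np.ndarray:
    """Pixelization whose intensity is the block size: each block-by-block
    region is replaced by its mean color, so a larger `block` obscures more."""
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = \
                image[y:y + block, x:x + block].mean(axis=(0, 1))
    return out

# Changing the intensity is merely changing the parameter:
#   weak = pixelize(img, 4); strong = pixelize(img, 16)
```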
In the image processing system 100 and the image processing method according to the embodiment and the modifications, the divided images belonging to the privacy-protected images, that is, the pre-processing images, are randomly shuffled in the rearranging processing of the divided images. This makes it difficult to associate the divided images belonging to the same pre-processing image and also makes it difficult for the annotation processor to restore the pre-processing image. In addition, since the order of the divided images belonging to the same pre-processing image is rearranged so that the divided images are not continuous, it is more difficult to associate the divided images belonging to the same pre-processing image.
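One simple way to realize both properties, assumed here only for illustration and not mandated by the disclosure, is to shuffle and retry until no two neighboring tiles in the output come from the same pre-processing image.

```python
import random

def rearrange(tiles, seed=None, max_tries=1000):
    """`tiles` are (image_id, order, tile) triples. Shuffle randomly, then
    retry until tiles of the same pre-processing image are never adjacent
    in the output order (a retry strategy chosen only for illustration;
    it presumes tiles from two or more source images)."""
    rng = random.Random(seed)
    order = list(tiles)
    for _ in range(max_tries):
        rng.shuffle(order)
        if all(a[0] != b[0] for a, b in zip(order, order[1:])):
            return order
    return order  # fall back to the last random shuffle
```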
In the image processing system 100 and the image processing method according to the first modification, the divided images belonging to the same privacy-protected image, that is, to the pre-processing image, have a partly overlapping area. Thus, borders between the divided images can overlap each other. Hence, subjects in the vicinity of the borders can be easily and accurately identified, and the accuracy of annotation improves.
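A minimal sketch of such overlapping division, with the overlap width as an assumed parameter:

```python
import numpy as np

def divide_with_overlap(image: np.ndarray, rows: int = 4, cols: int = 4,
                        overlap: int = 16) -> list:
    """Each tile is grown by `overlap` pixels on every interior border, so a
    subject straddling a border appears whole in at least one tile."""
    h, w = image.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            y0 = max(r * h // rows - overlap, 0)
            y1 = min((r + 1) * h // rows + overlap, h)
            x0 = max(c * w // cols - overlap, 0)
            x1 = min((c + 1) * w // cols + overlap, w)
            tiles.append(image[y0:y1, x0:x1])
    return tiles
```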
In the image processing system 100 and the image processing method according to the embodiment and the modifications, character recognition is executed on pre-processing images before the obscuring processing, and deletion processing, which is privacy-protection image processing, is executed on the recognized characters in the pre-processing images. Since the degrees of clarity of characters, symbols, and so on in a pre-processing image differ from those of the other subjects therein, performing the obscuring processing on the characters and symbols separately from the obscuring processing on the subjects makes it possible to execute obscuring processing suited to each target. As a result, the accuracy of adding annotations to subjects increases.
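As an illustration of this character-first pipeline, the sketch below uses the pytesseract OCR wrapper to locate character regions and blots them out before the remaining subjects are obscured at their own intensity; the choice of OCR engine and the black fill are assumptions.

```python
import numpy as np
import pytesseract  # an off-the-shelf OCR wrapper, used here only as an example

def delete_characters(image: np.ndarray) -> np.ndarray:
    """Run character recognition before the obscuring processing and apply
    deletion processing (here, a black fill) to each recognized region."""
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    out = image.copy()
    for text, x, y, w, h in zip(data["text"], data["left"], data["top"],
                                data["width"], data["height"]):
        if text.strip():  # a region in which characters were recognized
            out[y:y + h, x:x + w] = 0  # signage, license plates, and so on
    return out

# The rest of the image can then be obscured at an intensity chosen for the
# non-character subjects, e.g. blur(delete_characters(img), 2.0).
```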
The above-described method may also be implemented by an MPU, a CPU, a processor, a circuit such as an LSI circuit, an IC card, a single independent module, or the like.
In addition, the processing in the embodiment and modifications may be realized by a software program or digital signals provided by a software program. For example, the processing in the embodiment can be realized by a program as described below.
That is, this program is a program to be executed by a computer and causes the computer to execute: generating a plurality of privacy-protected images by performing obscuring processing on each of a plurality of images; dividing each of the privacy-protected images into a plurality of areas to generate a plurality of divided images; ordering the divided images belonging to the same privacy-protected image so that the divided images form a continuous image; rearranging an order of the ordered divided images; and outputting, as images for annotation, the divided images ordered according to the rearranged order.
Also, the processing in the second modification is implemented by a program as described below.
That is, this program is a program to be executed by a computer and causes the computer to execute: generating a first privacy-protected image by performing first obscuring processing using a first intensity on an image; outputting the first privacy-protected image as an image for annotation; obtaining the first privacy-protected image to which an annotation is added; generating a second privacy-protected image by performing, on an area to which the annotation is added and that is included in the first privacy-protected image to which the annotation is added, deletion processing using a second intensity higher than or equal to the first intensity; generating a third privacy-protected image by deactivating, on the second privacy-protected image, the first obscuring processing and performing second obscuring processing using a third intensity lower than the first intensity; and outputting the third privacy-protected image as an image for annotation.
The above-described program and the digital signals provided by the program may be recorded on computer-readable recording media, for example, a flexible disk, a hard disk, a compact disc read-only memory (CD-ROM), a magneto-optical (MO) disk, a DVD, a DVD-ROM, a DVD-RAM, a Blu-ray® Disc (BD), and a semiconductor memory.
The above-described program and the digital signals provided by the program may be transmitted over a telecommunication channel, a wireless or wired communication channel, a network typified by the Internet, data broadcasting, or the like.
The above-described program and the digital signals provided by the program may be realized by another independent computer system through transportation of the recording medium on which the program and the digital signals are recorded or transfer thereof over the network or the like.
The embodiment and the modifications have been described above as examples of the technology disclosed herein. The technology in the present disclosure, however, is not limited thereto, and can be applied to a modification of the embodiment or another embodiment obtained by making a change, replacement, addition, omission, or the like. Also, the constituent elements described in the embodiment and the modifications may be combined into a new embodiment or modification.
Although the image processing apparatus 10, the server apparatus 20, the annotation processing apparatus 30, and the annotation relaying apparatus 40 in the image processing system 100 according to the embodiment and the modifications are independent elements and are arranged apart from each other, the present disclosure is not limited thereto. For example, the image processing apparatus 10 and the annotation relaying apparatus 40 may constitute one apparatus. Alternatively, the server apparatus 20 and at least one of the image processing apparatus 10 and the annotation relaying apparatus 40 may constitute one apparatus. The annotation processing apparatus 30 and the annotation relaying apparatus 40 may constitute one apparatus.
Although the image processing system 100 according to the embodiment and the modifications has been used above to construct a large amount of image data for learning in a neural network or the like of deep learning, the present disclosure is not limited thereto, and the image processing system 100 may be applied to any configuration for constructing image data.
General or specific aspects of the present disclosure may be implemented by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium, such as a CD-ROM. Also, general or specific aspects of the present disclosure may be implemented by any selective combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
The embodiment and the modifications have been described above as examples of the technology in the present disclosure, and the accompanying drawings and the detailed description have been given to that end. Accordingly, the constituent elements set forth in the accompanying drawings and the detailed description can include not only constituent elements essential for addressing the issue but also constituent elements that are not essential for addressing the issue and are set forth merely to illustrate the above-described technology. Therefore, the mere fact that such non-essential constituent elements are set forth in the accompanying drawings and the detailed description should not be construed as certifying that they are essential. In addition, the above-described embodiment and modifications merely exemplify the technology in the present disclosure, and thus various changes, replacements, additions, and omissions can be made within the scope of the present disclosure or a scope equivalent thereto.
The present disclosure is applicable to a technology for adding annotations to an image.
Foreign application priority data:

Number | Date | Country | Kind
---|---|---|---
2016-209036 | Oct 2016 | JP | national