This nonprovisional application claims priority under 35 U.S.C. §119(a) on Patent Application No. 2011-183252 filed in Japan on Aug. 25, 2011, the entire contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus and an image processing method for performing image processing.
2. Description of Related Art
There is a case where an unneeded object (a smudge or a mole on a face, or an electric wire in the background) regarded as unnecessary by a user is depicted in a digital image, for example, one obtained by photography using a digital camera. Concerning this, there is proposed an unneeded object elimination function for eliminating an unneeded object from an input image. In the unneeded object elimination function, the unneeded object elimination is realized by correcting an unneeded region specified by the user using an image signal in the periphery thereof. For instance, an image signal in the unneeded region is replaced with an image signal in the periphery, or the image signal in the periphery is mixed into the image signal in the unneeded region, and hence the object in the unneeded region (namely, the unneeded object) can be eliminated from the input image.
In a first conventional method, random numbers are utilized so as to set a replacing pixel with respect to a pixel of interest specified by the user. Then, the pixel of interest is replaced with the replacing pixel in accordance with a luminance difference between the pixel of interest and the replacing pixel so that the unneeded object elimination is realized.
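As an illustrative sketch only (the dictionary pixel representation, the function name, and the luminance-difference threshold are assumptions, not details given in the specification), the first conventional method may be modeled as follows:

```python
import random

def replace_pixel_of_interest(image, target, max_luma_diff=30, tries=20, seed=0):
    # `image` maps (x, y) coordinates to luminance values; `target` is the
    # pixel of interest specified by the user.
    rng = random.Random(seed)
    candidates = [p for p in image if p != target]
    for _ in range(tries):
        # Utilize random numbers to set a candidate replacing pixel.
        cand = rng.choice(candidates)
        # Replace in accordance with the luminance difference between the
        # pixel of interest and the replacing pixel (threshold assumed here).
        if abs(image[cand] - image[target]) <= max_luma_diff:
            image[target] = image[cand]
            return True
    return False  # no suitable replacing pixel was found
```

The sketch makes the criticism that follows concrete: acceptance of the replacing pixel depends only on random selection and a single luminance threshold, with no regard for structures such as contours.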
In addition, there is a second conventional method in which the entire image region of the input image is separated into a region of interest and an unneeded region based on edge intensity, and a rectangular region that does not include the unneeded region is clipped from the input image so that the unneeded object elimination is realized.
However, in the conventional methods including the first conventional method, the image signal to be used for correction is determined based on a simple criterion such as the random numbers and the luminance difference. Therefore, there is a possibility of performing an unnatural process, such as deforming the contour of a person.
This is described with reference to
Note that if a main subject (person) and the unneeded object (tree) are close to each other like the input image 900, it is difficult to use the second conventional method (it is difficult to set the rectangular region that includes the main subject but does not include the unneeded object).
An image processing apparatus according to the present invention includes a correction process portion which corrects a correction target region in an input image using an image signal in a region for correction, a detecting portion which detects a specified object region in which a specified type of object exists in the input image, and a setting portion which sets the region for correction based on positions of the correction target region and the specified object region.
An image processing method according to the present invention includes a correction process step of correcting a correction target region in an input image using an image signal in a region for correction, a detecting step of detecting a specified object region in which a specified type of object exists in the input image, and a setting step of setting the region for correction based on positions of the correction target region and the specified object region.
Hereinafter, an example of an embodiment of the present invention is described specifically with reference to the attached drawings. In the drawings to be referred to, the same part is denoted by the same numeral or symbol so that overlapping description of the same part is omitted as a rule. Note that in this specification, for simple description, when a numeral or symbol represents information, a signal, a physical quantity, a state quantity, a member, or the like, a name of the information, the signal, the physical quantity, the state quantity, the member, or the like corresponding to the numeral or symbol may be omitted or abbreviated.
The image processing apparatus 10 includes an image correcting portion 30 which corrects the input image.
The face region detecting portion 31 detects and extracts a face region where an image signal of a human face exists from the entire image region of the input image based on an image signal of the input image, so as to generate and output face region information indicating a position, a size, and a shape of the face region on the input image. In
The unneeded region setting portion 32 sets the unneeded region on the input image based on an unneeded region specifying operation performed by the user to the electronic apparatus 1, so as to generate and output unneeded region information indicating a position, a size, and a shape of the unneeded region on the input image. In
The correction region setting portion 33 sets the region for correction to be used for elimination of the unneeded object based on the face region information and the unneeded region information. The region for correction is an image region on the input image different from the unneeded region. For instance, a region adjacent to the unneeded region, which surrounds the unneeded region, may be the region for correction, or an image region that is not adjacent to the unneeded region may be the region for correction. A result of setting by the correction region setting portion 33 is sent to the correction process portion 34 together with the unneeded region information.
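One possible way to obtain such a surrounding region is sketched below; the set-of-coordinates representation, the 4-neighbour dilation, and the band width of one pixel are illustrative assumptions, not choices fixed by the specification:

```python
def surrounding_region(unneeded, width=1):
    # Grow the unneeded region by `width` pixels of 4-neighbour dilation,
    # then subtract the unneeded region itself, leaving the adjacent band
    # that surrounds it.
    region = set(unneeded)
    for _ in range(width):
        grown = set(region)
        for (x, y) in region:
            grown |= {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
        region = grown
    return region - set(unneeded)
```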
The correction process portion 34 performs image processing for correcting the unneeded region using the image signal of the region for correction (hereinafter referred to as an unneeded object elimination process), so as to eliminate the unneeded object from the input image and generate an output image which is the input image after the elimination of the unneeded object. The electronic apparatus 1 can record the output image in the recording medium 14 and can display the same on the display portion 13. Arbitrary image processing for eliminating the unneeded object using an image signal of an image region other than the unneeded region can be used as the unneeded object elimination process. For instance, it is possible to eliminate the unneeded object by replacing an image signal of the unneeded region with the image signal of the region for correction, or the unneeded object may be eliminated by mixing the image signal of the region for correction into the image signal of the unneeded region. Note that the elimination may be complete elimination or may be partial elimination.
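A minimal sketch of the replacing/mixing alternatives above, assuming a dictionary pixel representation and a simple mean of the region for correction as the fill signal (the averaging scheme and the `alpha` mixing ratio are illustrative assumptions):

```python
def correct_unneeded_region(image, unneeded, correction, alpha=1.0):
    # Mix each pixel of the unneeded region with the mean signal of the
    # region for correction.  alpha = 1.0 gives full replacement (complete
    # elimination); 0 < alpha < 1 gives partial elimination.
    fill = sum(image[p] for p in correction) / len(correction)
    for p in unneeded:
        image[p] = (1 - alpha) * image[p] + alpha * fill
    return image
```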
Hereinafter, as examples describing specific actions and the like of the electronic apparatus 1 and the image correcting portion 30, first to third examples are described. Description in one example can be applied to the other examples as long as no contradiction arises. In addition, in the following description, it is supposed that the input image is the input image 300 unless otherwise noted.
With reference to
When the input image 300 is supplied to the image correcting portion 30, first in Step S11, the electronic apparatus 1 accepts the unneeded region specifying operation by the user, and the unneeded region setting portion 32 sets the unneeded region according to the unneeded region specifying operation. After the unneeded region is set, in Step S12, the face region detecting portion 31 performs a process for detecting the face region based on an image signal of the input image 300. In the next Step S13, it is checked whether or not a human face exists in the input image 300.
If a face exists in the input image 300 (namely, a face region is detected from the input image 300), the process goes from Step S13 to Step S14. In Step S14, the correction region setting portion 33 decides whether or not the unneeded region is positioned outside the face region based on the unneeded region information and the face region information (more specifically, based on a positional relationship between the unneeded region and the face region), so as to make an outside decision or a non-outside decision. If the entire unneeded region is positioned outside the face region, the outside decision is made. Otherwise, the non-outside decision is made. For instance, if the region 302 of
If the non-outside decision is made in Step S14, the process goes back to Step S11, and the electronic apparatus 1 accepts again the unneeded region specifying operation performed by the user (urges the user to specify the unneeded region again). On the other hand, if the outside decision is made in Step S14, the correction region setting portion 33 sets a region in which the face region 301 is masked, namely a region remaining after removing the face region 301 from the input image 300, as the reference region in Step S15. In
In Step S17, the correction region setting portion 33 sets the region for correction in the reference region (extracts the region for correction suitable for elimination of the unneeded object from the reference region), and the correction process portion 34 performs the unneeded object elimination process using the image signal of the region for correction.
Note that if it is decided in Step S13 that no face exists in the input image, the entire image region of the input image is set to the reference region, and the process goes to Step S17 via Step S16. In addition, if the non-outside decision is made in Step S14, it is possible to set the entire image region of the input image to the reference region so as to perform the process of Step S17 instead of going back to Step S11.
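The Step S14 decision and the Step S15 masking above can be sketched as follows, modeling each region as a set of pixel coordinates (this representation is an assumption made only for illustration):

```python
def outside_decision(unneeded, face):
    # The outside decision is made only when the entire unneeded region
    # is positioned outside the face region.
    return unneeded.isdisjoint(face)

def reference_region_outside(whole, face):
    # Step S15: mask the face region, i.e. keep the region remaining
    # after removing the face region from the input image.
    return whole - face
```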
If the unneeded region is positioned outside the face region like the unneeded region 302 of
With reference to
In the second example, if a face exists in the input image 300, the process goes from Step S13 to Step S14a. In Step S14a, the correction region setting portion 33 decides whether or not the unneeded region is positioned inside the face region based on the unneeded region information and the face region information (more specifically, based on a positional relationship between the unneeded region and the face region), so as to make an inside decision or a non-inside decision. If the entire unneeded region is positioned inside the face region, the inside decision is made. Otherwise, the non-inside decision is made. For instance, if the region 303 of
If the non-inside decision is made in Step S14a, the process goes back to Step S11, and the electronic apparatus 1 accepts again the unneeded region specifying operation performed by the user (urges the user to specify the unneeded region again). On the other hand, if the inside decision is made in Step S14a, the correction region setting portion 33 sets a region in which the portion other than the face region 301 is masked as the reference region, namely sets the face region 301 as the reference region in Step S15a. In
If the region 303 of
A third example is described. In the first and second examples described above, it can be said that the correction region setting portion 33 decides whether or not to use the image signal in the face region as the image signal of the region for correction based on the positional relationship between the unneeded region and the face region (based on whether or not the unneeded region is outside the face region, or based on whether or not the unneeded region is inside the face region). It is possible to combine the above-mentioned first and second examples.
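The combined decision of the first and second examples can be sketched as follows; the set representation and the `None` return value (signalling that the user is urged to specify the unneeded region again) are illustrative assumptions:

```python
def select_reference_region(whole, face, unneeded):
    if not face:                   # Step S13: no face detected in the input image
        return whole               # the entire image region is the reference region
    if unneeded.isdisjoint(face):  # outside decision: mask the face region
        return whole - face
    if unneeded <= face:           # inside decision: the face region itself
        return face
    return None  # straddles the face boundary: urge the user to respecify
```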
The flowchart of
In the third example corresponding to
<<Variations>>
The embodiment of the present invention can be appropriately changed variously within the technical concept described in the claims. The embodiment described above is merely an example of the embodiment of the present invention, meanings of the present invention and elements thereof are not limited to those described above in the embodiment. As annotations that can be applied to the above-mentioned embodiment, Note 1 and Note 2 are described below. The descriptions in the Notes can be combined arbitrarily as long as no contradiction arises.
[Note 1]
In the embodiment described above, the correction process is performed by regarding the unneeded region as the correction target region. In this case, the face region is detected as the specified object region in which the specified type of object exists, and the region for correction is set by using the position of the face region. The human face is an example of the specified type of object, and the face region detecting portion 31 is an example of the specified object region detecting portion. The specified type of object may be other than the human face. An arbitrary object having a specified contour (for example, a face of an animal as a pet) may be the specified type of object.
[Note 2]
The image correcting portion 30 may be constituted of hardware or a combination of hardware and software. The functions of the image correcting portion 30 may be realized by using image processing software running on the electronic apparatus 1 for general purpose. In other words, it is possible to describe the actions executed by the image correcting portion 30 as a program, and to make the electronic apparatus 1 execute the program so as to realize the function of the image correcting portion 30.
Number | Date | Country | Kind |
---|---|---|---|
2011-183252 | Aug 2011 | JP | national |