This application claims priority to Japanese Patent Application No. 2013-065902 filed on Mar. 27, 2013 and No. 2014-50326 filed on Mar. 13, 2014. The entire disclosure of Japanese Patent Application No. 2013-065902 and No. 2014-50326 is hereby incorporated herein by reference.
1. Technical Field
The present disclosure relates to image processing performed on a selection range designated by a user in an image processing apparatus that allows a user to designate a portion of an image and perform image processing on that portion, for example.
2. Background Art
There are cases where a displayed image includes an unnecessary object, which is an object thought to be unnecessary by a user (such as facial wrinkles or moles, or electric cables in the background). Conventionally, a function for removing such an unnecessary object from the image and performing inpainting processing to prevent unnaturalness has been proposed. Specifically, after the user designates a specific region of an image as an unnecessary region, inpainting processing is performed on the unnecessary region using the surrounding portion of the image or the like, as disclosed in Japanese Laid-open Patent Publication 2013-045316A, for example. As the unnecessary region designation method performed by the user, it is common to designate the unnecessary region in units of pixels, such as the case where the user uses a digitizer or a pointing device such as a mouse to trace the outline portion of the unnecessary object while referencing an image displayed on a display.
The present disclosure provides an image processing apparatus and an image processing method for performing image region designation that is effective for obtaining more natural processing results in, for example, image inpainting processing for removing an unnecessary object.
An image processing apparatus according to one aspect of the present disclosure is an image processing apparatus that performs region designation with respect to a displayed image, including: a display unit configured to display an image constituted by a predetermined number of pixels; an input unit configured to receive a selection operation with respect to the image; and a control unit configured to control the display unit and the input unit. The control unit is further configured to generate a plurality of divided regions by dividing the image in accordance with similarity calculated based on pixel values and pixel locations, and identify a selection range constituted by one or more of the plurality of divided regions in accordance with the selection operation received by the input unit. The control unit is further configured to perform erosion processing with respect to the selection range by reducing a number of pixels constituting the selection range, and perform dilation processing with respect to the selection range resulting from the erosion processing by increasing the number of pixels constituting the selection range resulting from the erosion processing. The number of pixels constituting the selection range resulting from the dilation processing is greater than the number of pixels constituting the selection range before the erosion processing.
An image processing method according to another aspect of the present disclosure is an image processing method for performing region designation with respect to an image displayed on a display apparatus, including: generating a plurality of divided regions by dividing the image constituted by a predetermined number of pixels in accordance with similarity calculated based on pixel values and pixel locations; specifying a selection range constituted by one or more of the plurality of divided regions in accordance with a selection operation by a user; performing erosion processing with respect to the selection range by reducing a number of pixels constituting the selection range; and performing dilation processing with respect to the selection range resulting from the erosion processing by increasing the number of pixels constituting the selection range resulting from the erosion processing. The number of pixels constituting the selection range resulting from the dilation processing is greater than the number of pixels constituting the selection range before the erosion processing.
The image processing apparatus and the image processing method of the present disclosure are effective in performing image region designation for obtaining more natural processing results in, for example, image inpainting processing for removing an unnecessary object.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings as appropriate. Note that there are cases where descriptions in greater detail than necessary will not be given. For example, there are cases where detailed descriptions will not be given for well-known matter, and where redundant descriptions will not be given for configurations that are substantially the same. The purpose of this is to avoid unnecessary redundancy in the following description and to facilitate understanding by a person skilled in the art. It will be apparent to those skilled in the art from this disclosure that the following descriptions of the embodiments are provided for illustration only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
Note that the accompanying drawings and following description are provided for sufficient understanding of the present disclosure by a person skilled in the art, and are not intended to limit the subject matter recited in the claims.
The following describes Embodiment 1 with reference to the drawings.
1. Configuration
The control unit 2 includes a processor such as a CPU, and executes operations of the image processing apparatus 1 by executing processing in a predetermined program. The storage unit 4 may be constituted by a hard disk, a silicon disk, an SD card (semiconductor memory), or the like. The storage unit 4 may be constituted by an element that performs temporary storage, such as a cache or a RAM.
Also, the operation input unit 6 may be a pointing device such as a mouse or a tablet, or may be a keyboard. Alternatively, the operation input unit 6 may be an electronic pen of any of various systems, or the like.
2. Operations
2-1. Operations of Image Processing Apparatus
Operations of the image processing apparatus 1 configured as described above will be described below with reference to a flowchart.
Next, the control unit 2 performs segment division processing for dividing the image into multiple small regions (segments) based on similarity between values of the pixels of the image (step S102).
When the user selects one or more small regions out of the divided small regions (segments), the control unit 2 performs region designation processing for designating a region to be a target of image processing (step S103).
Next, the control unit 2 performs binary image processing for generating a binary image in which all of the pixels included in the small regions (segments) selected by the user are set as the selection range (step S104).
The control unit 2 then performs image inpainting processing that removes the unnecessary region that has gone through the binary image processing (step S105).
The operations in the segment division processing (step S102), the region designation processing (step S103), the binary image processing (step S104), and the image inpainting processing (step S105) will be described in detail.
2-2. Operations in Segment Division Processing
The following describes the segment division processing, which is for dividing an image into multiple small regions (example of divided regions, which will be referred to hereinafter as “segments”) based on similarity between values of the pixels of the image. In the present embodiment, a segment division method based on k-means clustering is used as the segment division processing.
Note that the input image 10 is digital data expressed in the YUV color space, for example, and can be subjected to digital processing by the control unit 2. Specifically, the input image 10 is constituted by M×N (e.g., 640×480) pixels, and each pixel has data indicating three values, namely a luminance Y and color differences U and V (referred to hereinafter as the “pixel value”). Note that since the image data format allows conversion between color spaces such as the RGB color space and the Lab color space, processing may be performed using the results of conversion into another color space as the pixel value.
As an initialization task, the control unit 2 divides the image 10 into k (k being an integer of two or more) initial segments (step S201). The centroids of these k initial segments are arranged evenly both vertically and horizontally in the image 10. The interval between adjacent centroids is S (pixels).
Each segment is then given an individually unique label (step S202). For example, with the top-left segment in the screen serving as the first segment, the segments may be labeled sequentially in raster order, from left to right and top to bottom.
Next, the control unit 2 performs the processing of loop A on all of the pixels in the image 10 (step S203). In the processing of loop A, the processing of steps S204 and S205 is performed on each pixel in the image 10.
For each pixel, the control unit 2 calculates a distance Ds with respect to the centroid of each of the segments (step S204). This distance Ds is a value indicating similarity defined using the pixel value and the pixel location. Here, the smaller the distance Ds is, the higher the similarity of the pixel to the centroid of the segment is determined to be.
For example, in the case of the i-th pixel that is at the pixel location (xi,yi) and has the pixel value (Yi,Ui,Vi), the distance Ds to the k-th segment is calculated using Equation 1 below.
Here, the centroid of the segment is at the pixel location (xk,yk) and has the pixel value (Yk,Uk,Vk). The initial values of this segment centroid may be the location of the corresponding segment centroid after the segments are evenly arranged in the initialization task of step S201.
Also, m is a coefficient for obtaining balance between the influence that a distance D1 based on the pixel value exerts on the distance Ds and the influence that a distance D2 based on the pixel location exerts on the distance Ds. This coefficient m may be determined in advance experimentally or empirically.
Next, the control unit 2 determines the segment that the target pixel i is to belong to using the distances Ds (step S205). Specifically, the segment that has the centroid corresponding to the lowest distance Ds is determined to be the affiliated segment for the target pixel i.
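The assignment performed in steps S204 and S205 can be sketched as follows. Since Equation 1 is not reproduced in this text, the exact combination of the pixel-value distance D1 and the pixel-location distance D2 shown here (Ds = D1 + m × D2, with Euclidean distances for both terms) is an assumption for illustration only; the segment label convention follows the sequential labeling of step S202.

```python
import math

def assign_pixel_to_segment(pixel, centroids, m):
    """Assign one pixel to the segment whose centroid minimizes the
    distance Ds of steps S204-S205.

    pixel: (x, y, Y, U, V); centroids: list of (x, y, Y, U, V).
    The form Ds = D1 + m * D2 is an assumption, where D1 is the
    distance based on the pixel value, D2 the distance based on the
    pixel location, and m the balancing coefficient of the text.
    """
    xi, yi, Yi, Ui, Vi = pixel
    best_label, best_ds = None, float("inf")
    for label, (xk, yk, Yk, Uk, Vk) in enumerate(centroids, start=1):
        d1 = math.sqrt((Yi - Yk) ** 2 + (Ui - Uk) ** 2 + (Vi - Vk) ** 2)
        d2 = math.sqrt((xi - xk) ** 2 + (yi - yk) ** 2)
        ds = d1 + m * d2  # smaller Ds means higher similarity
        if ds < best_ds:
            best_label, best_ds = label, ds
    return best_label
```

A pixel whose value and location both match a centroid yields Ds = 0 for that centroid and is assigned to that segment.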
This processing in steps S204 and S205 is carried out on all of the pixels included in the image 10 (step S203), thus determining an affiliated segment for each of the pixels. Specifically, M×N data pieces given the labels of the segments to which the pixels belong are obtained. These data pieces will collectively be referred to hereinafter as a label image 21.
Next, the control unit 2 updates the centroid of each segment in which a belonging pixel was changed in the processing of loop A (step S206). Updating the centroids of the segments makes it possible to perform more accurate division processing. The control unit 2 then calculates the pixel location (xk,yk) and the pixel value (Yk,Uk,Vk) of the new centroid using Equation 2 below.
Here, Σ in Equation 2 represents the sum for all of the pixels included in the k-th segment, and N represents the total number of pixels included in the k-th segment.
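The centroid update of step S206 can be sketched as below: per Equation 2, the new pixel location and pixel value of the centroid are the averages over the N pixels currently belonging to the segment.

```python
def update_centroid(pixels):
    """Recompute a segment centroid per Equation 2: the new pixel
    location (xk, yk) and pixel value (Yk, Uk, Vk) are the sums over
    all N pixels belonging to the k-th segment, divided by N.

    pixels: list of (x, y, Y, U, V) tuples belonging to one segment.
    """
    n = len(pixels)
    sums = [0.0] * 5
    for p in pixels:
        for i, v in enumerate(p):
            sums[i] += v
    return tuple(s / n for s in sums)  # (xk, yk, Yk, Uk, Vk)
```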
Next, the control unit 2 determines whether or not the division processing is to be ended (step S207). If the division processing is to be continued (No in step S207), the processing of steps S203 to S206 is performed again, and the label image 21 is updated.
This end determination in step S207 may be a determination made by, for example, monitoring the updated state of the centroid in Equation 2. Specifically, if there is little change in the pixel location (xk,yk) and the pixel value (Yk,Uk,Vk) of the centroid between before and after the update, it is determined that the segment division processing is to be ended (Yes in step S207). Alternatively, the end determination may be made based on the number of times that steps S203 to S206 are repeated. For example, the segment division processing may be ended when the processing of steps S203 to S206 has been carried out ten times.
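The end determination of step S207 might be sketched as follows; the change threshold eps and the iteration cap of ten are illustrative values chosen here, not values given in the text (which only says "little change" and gives ten repetitions as one example).

```python
def division_ended(old_centroids, new_centroids, iteration, eps=1.0, max_iter=10):
    """Step S207 end determination, sketched under assumed thresholds:
    stop when every centroid has moved only a little between updates
    (in both pixel location and pixel value), or when the loop of
    steps S203-S206 has repeated max_iter times. Centroids are
    (x, y, Y, U, V) tuples.
    """
    if iteration >= max_iter:
        return True
    return all(
        max(abs(a - b) for a, b in zip(old, new)) < eps
        for old, new in zip(old_centroids, new_centroids)
    )
```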
The control unit 2 repeatedly updates the label image 21 in this way. As a result, the image 10 is divided into the multiple segments 20.
Accordingly, it can be seen that in order for the user to designate the first subject 11, the user need only designate the segments included in the first subject 11. Since there is no need to designate a region in units of pixels by carefully tracing the outline portion of the first subject 11 as in conventional technology, the region designation can be executed through an easier task.
2-3. Operations in Region Designation Processing
Next, the appearance of noise will be described.
There are cases where noise such as dark spots is recorded in the original image 10. Also, there are cases where noise appears during the compression/expansion of digital data during recording to the storage unit 4 or during reproduction. Even when very fine patterns are scattered in the image, there are cases where they appear to be noise in a macroscopic view. Isolated spots such as those described above sometimes appear depending on these kinds of noise or on the pixel values (luminance and color difference) of the pattern.
The following description takes the example of the pixel constituting the noise region 23c. It will be assumed that the luminance and color difference of this pixel completely match the pixel value at the centroid of the segment 23, but are completely different from the pixel values of the surrounding pixels. Accordingly, the following phenomenon occurs when the distance Ds to the centroid of each of the segments is calculated based on Equation 1 in step S204: because the distance D1 based on the pixel value is extremely small, the distance Ds to the centroid of the segment 23 becomes the smallest even though the pixel is located away from that segment, and the pixel is therefore determined to belong to the segment 23.
For this reason, when the user selects the segment 23 inside the first subject 11, the noise regions 23a, 23b, and 23c, which are not originally included in the first subject 11, will also be included in the region targeted for image processing. Similarly, when the user selects the segment 24 inside the first subject 11, the noise regions 24a, 24b, and 24c, which are not included in the first subject 11, will also be included in the region targeted for image processing.
2-4. Operations in Binary Image Processing
The control unit 2 carries out the binary image processing in three steps: binary image generation (step S301), selection range erosion processing (step S302), and selection range dilation processing (step S303).
2-4-1. Binary Image Generation
The following describes the binary image generation method in step S301. In the region designation processing (step S103), the user selects one or more segments; the control unit 2 then generates a binary image 41 in which the pixels included in the selected segments have the value of 1 and all other pixels have the value of 0.
In this binary image 41, the region shown in gray is a selection range 42 made up of the pixels included in all of the segments selected by the user. It can be seen that the selection range 42 is a continuous region that conforms to the shape of the edges of the first subject 11. Note, however, that the selection range 42 also includes six isolated regions 43 to 48 that are isolated spots. These isolated spots correspond to the noise regions 23a to 23c, 24a to 24c, and the like that were described above.
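The binary image generation of step S301 can be sketched as follows. The 1/0 encoding (selected pixels take the value of 1) follows the convention used in the erosion and dilation descriptions; the label image is assumed to be the M×N array of segment labels produced by the segment division processing.

```python
def make_binary_image(label_image, selected_labels):
    """Generate the binary image of step S301: pixels whose segment
    label is in the user-selected set become 1 (forming the selection
    range), all other pixels become 0.

    label_image: 2-D list of segment labels (one label per pixel).
    selected_labels: iterable of labels of the segments selected by
    the user in the region designation processing (step S103).
    """
    selected = set(selected_labels)
    return [[1 if lbl in selected else 0 for lbl in row]
            for row in label_image]
```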
2-4-2. Selection Range Erosion Processing
Next, selection range erosion processing (step S302) will be described.
Next, the control unit 2 performs erosion processing on the entirety of the binary image 41. The erosion processing of the present embodiment is carried out as described below. For each pixel of interest having the value of 1, the values of the eight adjacent pixels surrounding the pixel of interest are examined, and if any of those pixels has the value of 0, the value of the pixel of interest is changed to 0.
It can be seen that as a result of this erosion processing, the selection range 42 is made smaller from the outside toward the inside (reduced in size). In other words, the selection range 42 is reduced by one or more pixels.
Note that rather than performing the above-described erosion processing only once, it may be executed multiple times consecutively. Even if the isolated regions 43, 44, and 45 are each made up of multiple pixels, they can be more reliably removed by carrying out the erosion processing multiple times.
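The erosion processing above can be sketched as follows. The eight-adjacent-pixel rule follows the neighborhood described for this embodiment; treating pixels outside the image as having the value of 0 is an assumption, since the border handling is not specified here.

```python
def erode(binary, iterations=1):
    """Step S302 erosion: a pixel of value 1 is changed to 0 if any of
    its eight adjacent pixels has the value 0 (out-of-image neighbors
    are counted as 0; an assumption). Running multiple iterations
    shrinks the selection range further, so isolated regions made up
    of more than one pixel can also be removed.
    """
    h, w = len(binary), len(binary[0])
    for _ in range(iterations):
        out = [row[:] for row in binary]  # copy; update out-of-place
        for y in range(h):
            for x in range(w):
                if binary[y][x] != 1:
                    continue
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == 0 and dx == 0:
                            continue
                        ny, nx = y + dy, x + dx
                        neighbor = (binary[ny][nx]
                                    if 0 <= ny < h and 0 <= nx < w else 0)
                        if neighbor == 0:
                            out[y][x] = 0
        binary = out
    return binary
```

A single-pixel isolated spot has at least one 0-valued neighbor on every side, so one pass removes it, while only the one-pixel outline of a large region is shaved off.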
2-4-3. Selection Range Dilation Processing
Next, selection range dilation processing (step S303) will be described. In the dilation processing, the values of the eight adjacent pixels surrounding each pixel of interest having the value of 0 are examined, and if any of those pixels has the value of 1, the value of the pixel of interest is changed to 1, thus expanding the selection range outward. In the present embodiment, this dilation processing is executed four times.
Note that although it was described that the dilation processing is executed four times in the present embodiment, this is merely one example, and the number of times it is executed is not limited to this. The dilation processing may be executed any number of times as long as the number of pixels by which the selection range is expanded is greater than the number of pixels by which it was reduced through the erosion processing.
Also, the values of the eight adjacent pixels surrounding the pixel of interest are determined in the above-described dilation processing, but it is possible to determine the values of 24 pixels including the next 16 surrounding pixels. In this case, it is possible to increase the number of pixels by which the selection range is expanded when performing the dilation processing one time, thus increasing the efficiency of the calculation processing performed by the control unit 2.
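The dilation of step S303 can be sketched as follows, using the eight-adjacent-pixel neighborhood described above. The default of four iterations follows the embodiment's example; as the text notes, what matters is that the total expansion exceeds the number of pixels removed by the erosion.

```python
def dilate(binary, iterations=4):
    """Step S303 dilation: a pixel of value 0 is changed to 1 if any
    of its eight adjacent pixels has the value 1, expanding the
    selection range outward by one pixel per pass. Four passes is the
    embodiment's example; any count exceeding the erosion amount works.
    """
    h, w = len(binary), len(binary[0])
    for _ in range(iterations):
        out = [row[:] for row in binary]  # copy; update out-of-place
        for y in range(h):
            for x in range(w):
                if binary[y][x] == 1:
                    continue
                if any(0 <= y + dy < h and 0 <= x + dx < w
                       and binary[y + dy][x + dx] == 1
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0)):
                    out[y][x] = 1
        binary = out
    return binary
```

Because the isolated spots were eliminated by the preceding erosion, the dilation expands only the main selection range, letting it cover the edges of the unnecessary object.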
2-5. Operations in Image Inpainting Processing
2-5-1. Image Inpainting Processing According to Present Embodiment
The control unit 2 performs the image inpainting processing (step S105) using the selection range resulting from the dilation processing as the unnecessary region, and inpaints the unnecessary region using the surrounding portion of the image or the like.
2-5-2. Image Inpainting Processing According to Comparative Example
The following describes problems in cases where the erosion processing of step S302 or the dilation processing of step S303 is not carried out.
First, if the processing of steps S302 and S303 is not carried out, the binary image will remain in the state in which it was generated, with the isolated regions 43 to 48 still included in the selection range 42.
It is common for the user to designate a region surrounded by edges as an unnecessary object. These edges are groups of pixels that have pixel values (luminance and color difference) different from both the inside of the unnecessary region and the outside thereof.
For this reason, in the segment division processing of step S102, the boundaries between segments tend to be formed along these edges, and the selection range obtained by merely selecting segments may not completely cover the edge pixels.
On the other hand, the dilation processing of step S303 is carried out in the image processing apparatus 1 of the present embodiment, and therefore the selection range 42 can completely cover the edges of the first subject 11.
The following describes a problem in the case where the image inpainting processing is performed without having carried out the erosion processing of step S302. In this case, the image inpainting processing is performed using the selection range 42 that has been subjected to only the dilation processing of step S303 as the unnecessary region.
On the other hand, since the erosion processing of step S302 is carried out before the dilation processing in the image processing apparatus 1 of the present embodiment, the isolated regions 43 to 48 can be eliminated in advance.
3. Variations
The erosion processing of step S302 described in the above embodiments can also be replaced with a processing method such as the following. Specifically, the value of the pixel of interest 51 is changed from 1 to 0 only if at least p of the eight adjacent pixels surrounding the pixel of interest 51 have the value of 0.
If the value of p is p=8, the value of the pixel of interest 51 is changed to 0 only if all of the eight adjacent pixels surrounding the pixel of interest 51 have the value of 0. Performing this erosion processing on the binary image 41 removes isolated spots made up of a single pixel while leaving the rest of the selection range substantially unchanged.
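This variation can be sketched as follows; as with the earlier erosion, counting out-of-image neighbors as 0 is an assumption made for illustration.

```python
def erode_with_threshold(binary, p=8):
    """Variation of the step S302 erosion: a pixel of value 1 is
    changed to 0 only if at least p of its eight adjacent pixels have
    the value 0 (out-of-image neighbors counted as 0; an assumption).
    With p = 8, only completely isolated single-pixel spots are
    removed, and the main selection range is left unchanged.
    """
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    for y in range(h):
        for x in range(w):
            if binary[y][x] != 1:
                continue
            zeros = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy, dx) == (0, 0):
                        continue
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or binary[ny][nx] == 0:
                        zeros += 1
            if zeros >= p:
                out[y][x] = 0
    return out
```

Because p = 8 removes the isolated spots without shrinking the selection range itself, fewer subsequent dilation passes are needed to cover the edges than with the standard erosion.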
4. Conclusion
As described above, with the image processing apparatus 1 of the present embodiment, an image is divided into multiple small regions (segments) (step S102), a selection range is specified in units of segments in accordance with a selection operation by the user (step S103), and the selection range is subjected to erosion processing and subsequent dilation processing (step S104). This makes it possible to remove isolated regions caused by noise or the like through an easy designation task, and to obtain a selection range that completely covers the edges of the unnecessary object.
Furthermore, the image processing apparatus 1 of the present embodiment performs image inpainting processing using the selection range resulting from the dilation processing as an unnecessary region (step S105). This makes it possible to obtain more natural inpainting results.
Some or all of the processing in the above-described embodiments may be realized by computer programs. Also, some or all of the processing executed by the image processing apparatus 1 is executed by a processor such as a central processing unit (CPU) in a computer. Also, programs for executing the processing are stored in a storage device such as a hard disk or a ROM, and are either executed in place in the ROM or read out to a RAM and then executed.
Also, the processing executed by the image processing apparatus 1 may be realized by hardware, or may be realized by software (including the case of being realized together with an OS (operating system), middleware, or a predetermined library). Furthermore, such processing may be realized by a combination of software and hardware.
The image processing apparatus 1 of the above-described embodiments may be realized as an image processing method or a computer program for causing a computer to execute image processing. Also, a computer-readable recording medium recording the program is encompassed in the present invention. Here, examples of the computer-readable recording medium include a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray Disc), and a semiconductor memory.
The computer program is not limited to being recorded on the recording medium, and may be transmitted via, for example, an electrical communication line, a wireless or wired communication line, or a network typified by the Internet.
Also, the execution sequence of the image processing in the above-described embodiments is not necessarily limited to the description of the above embodiments, and the steps in the execution sequence can be interchanged without departing from the gist of the invention.
Embodiments have been described above as illustrative examples of techniques of the present invention. The accompanying drawings and detailed description have been provided for this purpose.
Accordingly, the constituent elements included in the accompanying drawings and the detailed description may include not only constituent elements that are essential to solving the problem, but also constituent elements that are not essential to solving the problem, in order to illustrate examples of the techniques. For this reason, these non-essential constituent elements should not be immediately found to be essential constituent elements based on the fact that they are included in the accompanying drawings or detailed description.
Also, the above-described embodiments are for illustrating examples of the techniques of the present invention, and therefore various modifications, substitutions, additions, omissions, and the like can be made within the scope of the claims or a scope equivalent thereto.
The present disclosure is applicable to electronic devices that have an image display function, such as digital cameras, digital video cameras, personal computers, mobile phones, and information terminals.
In understanding the scope of the present disclosure, the term “comprising” and its derivatives, as used herein, are intended to be open ended terms that specify the presence of the stated features, elements, components, groups, integers, and/or steps, but do not exclude the presence of other unstated features, elements, components, groups, integers and/or steps. The foregoing also applies to words having similar meanings such as the terms, “including”, “having” and their derivatives. Also, the terms “part,” “section,” “portion,” “member” or “element” when used in the singular can have the dual meaning of a single part or a plurality of parts. Also as used herein to describe the above embodiment(s), the following directional terms “forward”, “rearward”, “above”, “downward”, “vertical”, “horizontal”, “below” and “transverse” as well as any other similar directional terms refer to those directions of the image processing apparatus and image processing method. Accordingly, these terms, as utilized to describe the technology disclosed herein should be interpreted relative to the image processing apparatus and image processing method.
The term “configured” as used herein to describe a component, section, or part of a device includes hardware and/or software that is constructed and/or programmed to carry out the desired function.
The terms of degree such as “substantially”, “about” and “approximately” as used herein mean a reasonable amount of deviation of the modified term such that the end result is not significantly changed.
While only selected embodiments have been chosen to illustrate the present invention, it will be apparent to those skilled in the art from this disclosure that various changes and modifications can be made herein without departing from the scope of the invention as defined in the appended claims. For example, the size, shape, location or orientation of the various components can be changed as needed and/or desired. Components that are shown directly connected or contacting each other can have intermediate structures disposed between them. The functions of one element can be performed by two, and vice versa. The structures and functions of one embodiment can be adopted in another embodiment. It is not necessary for all advantages to be present in a particular embodiment at the same time. Every feature which is unique from the prior art, alone or in combination with other features, also should be considered a separate description of further inventions by the applicants, including the structural and/or functional concepts embodied by such feature(s). Thus, the foregoing descriptions of the embodiments according to the present invention are provided for illustration only, and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
2013-065902 | Mar 2013 | JP | national |
2014-050326 | Mar 2014 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
5887080 | Tsubusaki et al. | Mar 1999 | A |
6289123 | Xiaomang et al. | Sep 2001 | B1 |
6583823 | Shimada et al. | Jun 2003 | B1 |
7411612 | Yamada et al. | Aug 2008 | B2 |
7961951 | Tsutsumi | Jun 2011 | B2 |
8009909 | Hirai et al. | Aug 2011 | B2 |
8023014 | Kim et al. | Sep 2011 | B2 |
8027535 | Sanami | Sep 2011 | B2 |
8208752 | Ishii | Jun 2012 | B2 |
8229674 | Guittet | Jul 2012 | B2 |
8306324 | Sanami | Nov 2012 | B2 |
8472076 | Ikeda | Jun 2013 | B2 |
9171108 | Chen | Oct 2015 | B2 |
20020015526 | Nomura et al. | Feb 2002 | A1 |
20040208395 | Nomura | Oct 2004 | A1 |
20050057657 | Yamada et al. | Mar 2005 | A1 |
20050089241 | Kawanishi et al. | Apr 2005 | A1 |
20050129326 | Matama | Jun 2005 | A1 |
20060082595 | Liu | Apr 2006 | A1 |
20060256383 | Yamakawa | Nov 2006 | A1 |
20070035790 | Kotani | Feb 2007 | A1 |
20070201744 | Sanami | Aug 2007 | A1 |
20080158386 | Miki | Jul 2008 | A1 |
20080240608 | Ishii | Oct 2008 | A1 |
20090153923 | Lin | Jun 2009 | A1 |
20090245678 | Ming | Oct 2009 | A1 |
20100066761 | Tousch et al. | Mar 2010 | A1 |
20110032581 | Ikeda | Feb 2011 | A1 |
20110304637 | Sanami | Dec 2011 | A1 |
20120230591 | Shibata | Sep 2012 | A1 |
20130016246 | Hatanaka et al. | Jan 2013 | A1 |
20130051679 | Tsuda et al. | Feb 2013 | A1 |
20130081982 | Tanaka | Apr 2013 | A1 |
20130287259 | Ishii | Oct 2013 | A1 |
20130293469 | Hakoda | Nov 2013 | A1 |
20140355831 | Han | Dec 2014 | A1 |
Number | Date | Country |
---|---|---|
H05-216992 | Aug 1993 | JP |
H07-078263 | Mar 1995 | JP |
H07-282219 | Oct 1995 | JP |
H09-016368 | Jan 1997 | JP |
H09-270022 | Oct 1997 | JP |
H11-039480 | Feb 1999 | JP |
H11-103447 | Apr 1999 | JP |
2003-108980 | Apr 2003 | JP |
2003-123066 | Apr 2003 | JP |
2004-318696 | Nov 2004 | JP |
2005-092284 | Apr 2005 | JP |
3824237 | Sep 2006 | JP |
2006-279442 | Oct 2006 | JP |
2006-279443 | Oct 2006 | JP |
2006-311080 | Nov 2006 | JP |
2007-060325 | Mar 2007 | JP |
2007-110329 | Apr 2007 | JP |
2007-128238 | May 2007 | JP |
2007-226671 | Sep 2007 | JP |
2008-092447 | Apr 2008 | JP |
2008-164758 | Jul 2008 | JP |
2008-244867 | Oct 2008 | JP |
2008-299516 | Dec 2008 | JP |
2008-299591 | Dec 2008 | JP |
2008-300990 | Dec 2008 | JP |
2009-181528 | Aug 2009 | JP |
2010-511215 | Apr 2010 | JP |
2011-035567 | Feb 2011 | JP |
2011-055488 | Mar 2011 | JP |
4652978 | Mar 2011 | JP |
2011-170838 | Sep 2011 | JP |
2011-170840 | Sep 2011 | JP |
2011-191860 | Sep 2011 | JP |
2012-088796 | May 2012 | JP |
2013-045316 | Mar 2013 | JP |
2009142333 | Nov 2009 | WO |
2011061943 | May 2011 | WO |
2013073168 | May 2013 | WO |
Entry |
---|
Office Action from the co-pending U.S. Appl. No. 14/243,913 issued on May 21, 2015. |
Office Action from the corresponding Japanese Patent Application No. 2014-050332 issued on Mar. 10, 2015. |
Office Action from the corresponding Japanese Patent Application No. 2014-050324 issued on Mar. 17, 2015. |
Office Action from the corresponding Japanese Patent Application No. 2014-050326 issued on Apr. 14, 2015. |
Office Action from the co-pending U.S. Appl. No. 14/225,426 issued on Aug. 16, 2016. |
Number | Date | Country | |
---|---|---|---|
20140294301 A1 | Oct 2014 | US |