This application claims the benefit of Korean Patent Application No. 10-2005-0131986, filed on Dec. 28, 2005, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a method and apparatus for editing an image using a contour-extracting algorithm, and more particularly to a method and apparatus for editing an image using a contour extracted from an input image.
2. Description of the Related Art
A conventional object contour-extracting method using an energy-based algorithm is described in U.S. Pat. No. 6,912,310, in which an object is extracted from a first image frame and object template matching is then performed for a subsequent image frame. However, such an object contour-extracting method has a problem in that, when applied to a video with a complex background, the edge density increases in the background within the video as well as along the object contour, which makes it difficult to substantially and precisely identify the object contour.
Also, a conventional object contour-extracting method based on color and motion region segmentation is described in U.S. Pat. No. 6,785,329, in which a video is segmented into a Blob format using color information and the object contour is extracted through the segmentation and combination of Blobs. However, such an object contour-extracting method encounters a problem in that, when applied to a video with a complex background, the video is segmented into a huge number of Blobs, which makes it difficult to substantially and precisely identify the contour of the object.
Also, in the case of a conventional contour model-based object contour-extracting method, the contour model is formed using a training sample and contour searching is performed so as to maintain the form of the contour model. However, such a contour model-based object contour-extracting method also has a shortcoming in that it depends on the characteristics of the training data, since control points are detected based only on the contour model, such that even a slight difference from the learned contour models makes it difficult to identify an appropriate object contour.
As such, the conventional object contour-extracting methods make it difficult to substantially and precisely identify an object contour.
Therefore, there is an urgent need for a solution that substantially and precisely detects a contour of an object and edits an image using the detected contour.
Accordingly, the present invention has been made in view of the aforementioned problems occurring in the prior art, and it is an aspect of the present invention to provide a method and apparatus for editing an image using an object contour to extract, from a complex background image, a body in a foreground.
Another aspect of the present invention is to provide an image-editing method and apparatus, in which an object contour extracted from image data is optimized to be synthesized with any other background scene.
Still another aspect of the present invention is to provide an image-editing method and apparatus, in which an object contour extracted from image data is optimized, a clothing region and a facial region of the image object, i.e. a person, are segmented using skin color detection, and the shape of the segmented clothing region and the brightness of the segmented facial region are adjusted.
Yet another aspect of the present invention is to provide an image-editing method and apparatus, in which an object contour extracted from image data is optimized, and the brightness of the background region is then adjusted.
According to one aspect of the present invention, there is provided a method of editing an image using an object contour-extracting algorithm, the method including: inputting image data; extracting an object contour from the input image data; optimizing the extracted contour using the characteristics of the input image data; editing the input image data using the optimized extracted contour; and outputting the edited image data.
According to another aspect of the present invention, there is also provided an apparatus for editing an image using an object contour-extracting algorithm, the apparatus including: an image input section for inputting image data; an object contour-extracting section for extracting an object contour from the input image data applied to the object contour-extracting section from the image input section; an object contour-optimizing section for optimizing the extracted contour applied to the object contour-optimizing section from the object contour-extracting section using the characteristics of the input image data; an image-editing section for editing the image data using the optimized extracted object contour applied to the image-editing section from the object contour-optimizing section; and an image output section for outputting the edited image data applied thereto from the image-editing section.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
The above and/or other aspects and advantages of the present invention will become apparent and more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
Referring to
The image input section 110 receives image data including data of a person that is to be edited.
The object contour-extracting section 120 extracts an object contour from the input image data applied thereto from the image input section 110. That is, the object contour-extracting section 120 can detect at least one of a face, eyes, and a skin color of an image object, i.e. a person, from the input image data, or obtain a position of the person through an entry by a user, and then extract an initial object contour from the data of the person contained in the image data using a specific contour model.
The object contour-optimizing section 130 optimizes the extracted object contour applied thereto from the object contour-extracting section 120 using the characteristics of the input image data. That is, the object contour-optimizing section 130 can optimize the extracted initial contour using characteristics of energy or an edge of the input image data.
The image-editing section 140 edits the input image data using the optimized contour applied thereto from the contour-optimizing section 130.
The image-editing section 140 can edit the image data using the optimized contour to segment a clothing region and a facial region of an image object, i.e. a person, using skin color detection, and adjust the shape of the segmented clothing region and the brightness of the segmented facial region. The image-editing section 140 can also edit the image data to adjust the brightness of a background region for the image data.
The image output section 150 outputs the edited image data applied thereto from the image-editing section 140.
As such, the image-editing apparatus according to the present invention extracts an object contour from the input image data, such as a contour of a person, and optimizes the extracted contour to more precisely detect the contour of the object, for example, the person.
Accordingly, the image-editing apparatus according to the present invention can edit the image in various fashions such as synthesizing an image object, i.e. a person, with any other background image, deforming the clothing shape or the face of the person, adjusting the brightness of the background screen, etc., using the precisely detected object contour.
Referring to
In operation 220, the image-editing apparatus 100 extracts an object contour from the input image data. The process for extracting an initial contour in operation 220 will be described hereinafter in more detail with reference to
Referring to
In operation 320, the image-editing apparatus 100 extracts initial contour data 430 from the input image data 410 using a specific object contour model 420 as shown in
In operation 320, the image-editing apparatus 100 can extract the size of the person based on, for example, the distance between both eyes of the detected person, and then map the specific contour model 420 to the input image data 410 to extract the initial object contour data 430.
The initial contour data 430 allows the contour for the input image data 410 to be represented as control points for main pixels.
In operation 320, the image-editing apparatus 100 extracts the size of the person based on, in this example, the distance between both eyes of the person, and scales the contour model according to the extracted size of the person to represent the object contour as control points.
In operation 320, the image-editing apparatus 100 can represent the contour model by eigenvectors generated through principal component analysis (PCA) of manually labeled training images.
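The model representation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the manually labeled training contours have already been aligned and flattened into (x, y) coordinate vectors, and the function names and component count are hypothetical.

```python
import numpy as np

def build_contour_model(training_contours, n_components=8):
    """Build a PCA contour model from manually labeled training contours.

    training_contours: (N, 2K) array -- N training shapes, each a flattened
    list of K (x, y) control points, assumed pre-aligned.
    """
    X = np.asarray(training_contours, dtype=float)
    mean_shape = X.mean(axis=0)
    # Eigen-decomposition of the covariance of the centered shapes.
    cov = np.cov(X - mean_shape, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # largest variance first
    basis = eigvecs[:, order[:n_components]]     # principal shape modes
    return mean_shape, basis

def reconstruct(mean_shape, basis, coeffs):
    """Generate a contour from model coefficients: mean + basis @ coeffs."""
    return mean_shape + basis @ np.asarray(coeffs)
```

With all coefficients at zero, `reconstruct` returns the mean training shape; varying a coefficient moves the contour along one learned mode of shape variation.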
In operation 330, the image-editing apparatus 100 extracts gradient information included in a gradient vector flow (GVF) image data 440 shown in
Also, in operation 330, the image-editing apparatus 100 can extract the gradient information from the input image data 410 using a gradient vector flow (GVF). A gradient direction of the image in the GVF denotes a direction in which an edge density of a pixel is high. That is, according to the GVF image data 440 as shown in
Subsequently, in operation 340, the image-editing apparatus 100 modifies the extracted initial contour data to conform to the extracted gradient information from the input image data.
In operation 340, the image-editing apparatus 100 can move the control points of the initial contour to a neighboring pixel whose edge density is high.
Namely, in operation 340, the image-editing apparatus 100 can provide the modified object contour image data 450 as shown in
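The control-point movement of operation 340 can be sketched as follows. This is a minimal illustration assuming an edge-density map has already been computed from the GVF (any 2-D array of edge densities will do), and the search-window radius is an illustrative choice:

```python
import numpy as np

def refine_control_points(points, edge_density, radius=1):
    """Move each (x, y) control point to the neighboring pixel with the
    highest edge density within a (2*radius+1)^2 window."""
    h, w = edge_density.shape
    refined = []
    for x, y in points:
        best, best_val = (x, y), edge_density[y, x]
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and edge_density[ny, nx] > best_val:
                    best, best_val = (nx, ny), edge_density[ny, nx]
        refined.append(best)
    return refined
```

A point already sitting on a local edge-density maximum stays put, so repeated application converges once the contour settles onto the edges.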
As such, the image-editing method according to an embodiment of the present invention can extract the initial object contour in a form as close as possible to the form of the person so as to increase precision and efficiency in detection of the contour.
In operation 230, the image-editing apparatus 100 optimizes the extracted object contour using characteristics of the input image data. The process for optimizing the extracted contour in operation 230 will be described hereinafter in more detail with reference to
Referring to
That is, in operation 510, the image-editing apparatus 100 can retrieve control points of the optimum object contour from current image data using the characteristics of the input image data and the contour model.
In operation 510, as shown in
E=α×Econtinuity+β×Esmoothness+γ×EEdge+κ×EShape+λ×EColor [Equation 1]
where α, β, γ, κ and λ denote the weighted values for respective terms of the energy function (E).
Econtinuity denotes a function representing whether or not a curve represented by the control point has continuity and can be represented as a first derivative value. The Econtinuity can be expressed as given by Equation 2.
Econtinuity=∥pi−pi−1∥2 [Equation 2]
where pi denotes information about the ith pixel. Esmoothness denotes a function representing whether or not a curve represented by the control point is smoothly connected, i.e., whether its curvature is continuous, and can be represented as a second derivative value. Esmoothness can be expressed as given by Equation 3.
Esmoothness=∥pi−1−2×pi+pi+1∥2 [Equation 3]
EEdge is a function representing whether or not a curve represented by the control point is similar to an edge of the input image data. EEdge is a distance between the control point and a zero crossing point on the GVF image data and can be used as an edge density.
EShape is a function representing whether or not a shape represented by the control point is similar to that of the object contour model. EShape is a comparison value between the control point and the contour model and can be expressed as given by Equation 4.
EShape=∥Ci−Mi∥2, where Ci=Control Points and Mi=Model Control Points. [Equation 4]
EColor is a function representing whether or not there is a difference in color in the surroundings of the control point and can be expressed as a reciprocal of a dispersion value of a color difference between the control point and the surrounding pixels. In this case, as the dispersion value of the color difference increases, the probability that the control point is within the boundary of the image object, i.e. the person, increases.
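Putting Equations 1 through 4 together, the energy evaluated at a single control point can be sketched as below. This is an illustration only: the edge and color terms are passed in precomputed, since the document defines them only qualitatively (as a distance to a GVF zero crossing and a reciprocal of color dispersion, respectively), and the weights default to 1.

```python
import numpy as np

def control_point_energy(p_prev, p, p_next, model_p, edge_term, color_term,
                         alpha=1.0, beta=1.0, gamma=1.0, kappa=1.0, lam=1.0):
    """Weighted snake energy for one control point (Equation 1):
    E = alpha*E_cont + beta*E_smooth + gamma*E_edge + kappa*E_shape + lambda*E_color
    """
    p_prev, p, p_next, model_p = map(np.asarray, (p_prev, p, p_next, model_p))
    e_cont = np.sum((p - p_prev) ** 2)                 # Equation 2: continuity
    e_smooth = np.sum((p_prev - 2 * p + p_next) ** 2)  # Equation 3: smoothness
    e_shape = np.sum((p - model_p) ** 2)               # Equation 4: shape prior
    return (alpha * e_cont + beta * e_smooth + gamma * edge_term
            + kappa * e_shape + lam * color_term)
```

Contour search then amounts to choosing, for each control point, the candidate position in its neighborhood that minimizes this sum.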
In operation 520, the image-editing apparatus 100 updates the contour model using the retrieved optimum object contour. In other words, in operation 520, the image-editing apparatus 100 can modify the contour model, using the current sample, to conform to the current object contour.
In operation 520, the image-editing apparatus 100 can assume a currently detected control point as an optimum control point and use the currently-detected control point to update the contour model.
Also in operation 520, the image-editing apparatus 100 can add a difference value between the currently-detected control point and the control point of the contour model to the control point of the contour model.
In operation 520, as shown in
Mt+1=Mt+(Ct−Mt) [Equation 5]
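The model update of operation 520, adding the difference between the currently detected control points and the model control points to the model, can be sketched in a few lines. The `rate` parameter is a hypothetical damping knob not mentioned in the document; `rate=1` performs the full update:

```python
def update_model(model_points, detected_points, rate=1.0):
    """M_{t+1} = M_t + rate * (C_t - M_t): move each model control point
    toward its currently detected counterpart; rate=1 snaps the model
    onto the detected control points."""
    return [(mx + rate * (cx - mx), my + rate * (cy - my))
            for (mx, my), (cx, cy) in zip(model_points, detected_points)]
```

A fractional rate would let the model track the detected contour gradually across iterations rather than jumping to it at once.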
In operation 530, the image-editing apparatus 100 determines whether or not the detection of the contour from the input image data is completed. That is, in operation 530, the image-editing apparatus 100 can base the completion test for the object contour detection on, for example, the number of detection iterations, whether or not the retrieval function for the optimum object contour has converged, etc.
If it is determined in operation 530 that the detection of the contour has not been completed, the program returns to operation 510, and in this manner the image-editing apparatus 100 repeatedly performs the operation 510 until the detection of the object contour is completed.
On the other hand, if it is determined in operation 530 that the detection of the contour has been completed, the process proceeds to operation 540, where the image-editing apparatus 100 outputs a result of the automatically-detected contour.
As such, the image-editing method according to the present invention precisely extracts the contour of an object image region to be synthesized so that the editing work using the extracted contour can be more naturally performed.
Referring to
In operation 820, the image-editing apparatus 100 adjusts the position of the control point for the automatically-detected contour in response to the received request for correction of the contour.
In operation 830, the image-editing apparatus 100 optimizes the object contour according to the energy function which has been altered due to the adjusted control point position.
In operation 840, the image-editing apparatus 100 determines whether or not the correction of the contour has been completed.
If it is determined in operation 840 that the correction of the contour has not been completed, the program returns to the previous operation 820, and in this manner the image-editing apparatus 100 repeatedly performs the operation 820 until the correction of the contour is completed.
If, on the other hand, it is determined in operation 840 that the correction of the contour has been completed, the program proceeds to operation 850, where the image-editing apparatus 100 outputs the final object contour in which the correction of the contour has been completed to provide the output contour to the user.
As such, the image-editing method according to the present invention allows the user to adjust the result of the automatically-detected contour to provide a satisfactory contour result to the user.
Referring back to
Referring to
Referring to
As such, the image-editing method according to the present invention may extract a contour from the input image data, and insert the extracted contour into background data, which a user wants to synthesize, in a suitable size.
The process for the image-editing apparatus 100 to insert the image object, i.e. the person, into a background image in operation 910 will be described hereinafter in more detail with reference to
Referring to
In operation 1120, the image-editing apparatus 100 calculates a scaling ratio between the image of the person and the background image. For example, in the case where a resolution of the image of the person is 320*240 and a resolution of the background image is 240*240, the width scaling ratio (Wr) between the image of the person and the background image is 0.75(240/320), and the height scaling ratio (Hr) between the image of the person and the background image is 1(240/240).
In operation 1130, the image-editing apparatus 100 generates a bounding box using the largest width and height in the object region. For example, in the case where the largest width is ‘40’ and the largest height is ‘80’ in the object region, the size of the bounding box is ‘40*80’.
In operation 1140, the image-editing apparatus 100 scales the object region to conform to the smaller one of the calculated width and height scaling ratios between the image of the person and the background image, and the ratio of the bounding box.
The case where the width scaling ratio (Wr) is ‘0.75’, the height scaling ratio (Hr) is ‘1’, and the size of the bounding box is ‘40*80’ will be described hereinafter as an example.
In operation 1140, the image-editing apparatus 100 sub-samples the width of the bounding box so that it becomes ‘40*0.75=30’, and sub-samples the height of the bounding box so that the width-to-height ratio of the bounding box maintains the relationship ‘40:80=1:2’, i.e., the height becomes ‘60’.
In operation 1150, the image-editing apparatus 100 synthesizes the scaled object region with the background image. That is, in operation 1150, the image-editing apparatus 100 replaces a pixel at a position defined within the background image with a pixel of the object region so that the scaled object region can be synthesized with the background image.
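Operations 1120 through 1150 can be condensed into two small helpers; a minimal sketch using the example figures above (function names are illustrative, and the object is assumed to come with a binary mask obtained from the extracted contour):

```python
import numpy as np

def scaled_object_size(person_res, bg_res, bbox):
    """Scale the object's bounding box by the smaller of the width and
    height scaling ratios (Wr, Hr), preserving the box's aspect ratio."""
    (pw, ph), (bw, bh) = person_res, bg_res
    s = min(bw / pw, bh / ph)          # smaller of Wr and Hr
    bbw, bbh = bbox
    return round(bbw * s), round(bbh * s)

def paste_object(background, obj_pixels, mask, top_left):
    """Operation 1150: replace background pixels with object pixels
    wherever the (already scaled) object mask is set."""
    out = background.copy()
    y0, x0 = top_left
    h, w = mask.shape
    out[y0:y0 + h, x0:x0 + w][mask] = obj_pixels[mask]
    return out
```

With the example figures (person 320*240, background 240*240, bounding box 40*80), the helper yields a 30*60 box, matching the sub-sampling described above.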
In operation 920, the image-editing apparatus 100 performs an image matting for the inserted contour. That is, in operation 920, the image-editing apparatus 100 can employ a Bayesian or Poisson matting method or the like to perform the image matting, which adjusts pixel values at the boundary (or edge) portion of the image object, i.e. the person, inserted into the background image so that the boundary portion of the person can be smoothly synthesized.
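Bayesian and Poisson matting are substantial algorithms in their own right; as a stand-in for the boundary adjustment they provide, a feathered alpha composite can be sketched in plain NumPy (an approximation for illustration, not the matting method the document names):

```python
import numpy as np

def feather_alpha(mask, passes=2):
    """Soften a binary object mask into an alpha matte by repeated 3x3
    box averaging, approximating a matte's smooth boundary."""
    a = mask.astype(float)
    for _ in range(passes):
        p = np.pad(a, 1, mode='edge')
        a = sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    return a

def matte_composite(background, foreground, alpha):
    """Standard compositing equation: C = alpha*F + (1 - alpha)*B."""
    if foreground.ndim == 3:
        alpha = alpha[..., None]
    return alpha * foreground + (1.0 - alpha) * background
```

The soft alpha ramp mixes foreground and background pixels near the contour, so the pasted person no longer shows a hard cut-out edge.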
In operation 240, the image-editing apparatus 100 may edit the image for clothing/facial regions of the person using the optimized object contour. The process for the image-editing apparatus 100 to edit images for the clothing and facial regions of the person in operation 240 will be described hereinafter in more detail with reference to
Referring to
In operation 1220, the image-editing apparatus 100 adjusts the shape of the segmented clothing region and the brightness of the segmented facial region.
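The skin-color detection used to separate the facial region from the clothing region can be sketched as follows. The normalized-RGB thresholds are illustrative values of the kind used in the skin-detection literature, not the document's (which does not specify any):

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin-color detector in normalized-RGB space: pixels whose
    chromaticity falls inside an illustrative skin range are marked True."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-6            # avoid division by zero
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    return (0.35 < r) & (r < 0.55) & (0.25 < g) & (g < 0.40)
```

Within the optimized contour, pixels flagged as skin would be attributed to the facial region and the remainder to the clothing region, after which each region can be adjusted independently.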
As such, in the image-editing method according to an embodiment of the present invention, the object contour is optimized from the input image data, a skin color of the person in the input image data is detected based on the optimized contour to segment the clothing region and the facial region, and the shape of the segmented clothing region and the brightness of the segmented facial region are adjusted so that a user can edit the input image data in the form of various images.
In operation 240, the image-editing apparatus 100 may edit the image data to adjust the brightness of the background region for the image data using the optimized contour.
As such, in the image-editing method according to an embodiment of the present invention, the object contour is optimized from the input image data, the background region and the object region in the input image data are segmented based on the optimized contour, and the brightness of the segmented background region can be adjusted so that a user can adjust the background region.
Therefore, the present invention can provide a more discriminating image-editing service in a variety of devices (for example, personal video recorders, home servers, smart mobile devices, etc.) which allow a user to store and view photographs and videos using an automated contour-extracting algorithm. In addition, since the contour can be extracted precisely, the present invention can be applied to a photo-browsing service.
The image-editing apparatus according to the present invention may include a computer-readable medium including a program instruction for executing various operations realized by a computer. The computer-readable medium may include a program instruction, a data file, and a data structure, separately or cooperatively. The program instructions and the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those skilled in the computer software arts. Examples of the computer-readable media include magnetic media (e.g., hard disks, floppy disks, and magnetic tapes), optical media (e.g., CD-ROMs or DVD), magneto-optical media (e.g., floptical disks), and hardware devices (e.g., ROMs, RAMs, or flash memories, etc.) that are specially configured to store and perform program instructions. The media may also be transmission media such as optical or metallic lines, wave guides, etc. including a carrier wave transmitting signals specifying the program instructions, data structures, etc. Examples of the program instructions include both machine code, such as that produced by a compiler, and files containing high-level language codes that may be executed by the computer using an interpreter.
According to the present invention, there is provided a method and apparatus for editing an image using an object contour to extract, from a complex background image, a body in a foreground.
Also, according to an embodiment of the present invention, there is provided an image-editing method and apparatus, in which a contour extracted from image data is optimized to be synthesized with any other background scene.
Further, according to an embodiment of the present invention, there is provided an image-editing method and apparatus, in which an object contour extracted from image data is optimized, a clothing region and a facial region of an image object, i.e. a person, are segmented using skin color detection, and the shape of the segmented clothing region and the brightness of the segmented facial region are adjusted.
Further still, according to an embodiment of the present invention, there is provided an image-editing method and apparatus, in which a personal contour extracted from image data is optimized, and then the brightness of the background region is adjusted.
In addition, the present invention can provide various image-editing services which a user desires through the automated extraction of the contour.
Furthermore, the present invention can provide a more discriminating image-editing service in a variety of devices which allow a user to store and view photographs and videos using an automated contour-extracting algorithm since it can be applied to a photo-browsing system.
Although a few embodiments of the present invention have been shown and described, the present invention is not limited to the described embodiments. Instead, it would be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.