Claims
- 1. A batch processing method for enhancing an appearance of a face located in a digital image, where the image is one of a large number of images that are being processed through a batch process, said method comprising the steps of:
(a) providing a script file that identifies one or more original digital images that have been selected for enhancement, wherein the script file includes an instruction for the location of each original digital image; (b) using the instructions in the script file, acquiring an original digital image containing one or more faces; (c) detecting a location of facial feature points in the one or more faces, said facial feature points including points identifying salient features including one or more of skin, eyes, eyebrows, nose, mouth, and hair; (d) using the location of the facial feature points to segment the face into different regions, said different regions including one or more of skin, eyes, eyebrows, nose, mouth, neck and hair regions; (e) determining one or more facially relevant characteristics of the different regions; (f) based on the facially relevant characteristics of the different regions, selecting one or more enhancement filters each customized especially for a particular region and selecting the default parameters for the enhancement filters; (g) executing the enhancement filters on the particular regions, thereby producing an enhanced digital image from the original digital image; (h) storing the enhanced digital image; and (i) generating an output script file having instructions that indicate one or more operations in one or more of the steps (c)-(f) that have been performed on the enhanced digital image.
- 2. The method as claimed in claim 1 wherein the script file includes additional instructions for controlling the operation of the batch process with regard to each image.
- 3. The method as claimed in claim 2 wherein the additional instructions identify the enhancement filters that are available for the process.
- 4. The method as claimed in claim 1 wherein one or more of the steps (c) through (e) produce an intermediate image which is saved, and the script file includes instructions for the location of the saved intermediate image.
- 5. The method as claimed in claim 4 wherein the script file includes instructions for the deletion of one or more of the original image, the intermediate image, and the enhanced image.
- 6. The method as claimed in claim 1 wherein the original digital images are supplied to the batch process with metadata and the script file includes instructions for locating the images.
- 7. The method as claimed in claim 1 further adapted for interactive retouching in a user-assisted batch process, said method further comprising the steps of:
acquiring the output script file and instructions for locating the original and enhanced digital images; acquiring the enhanced and original digital images; using data in the output script file to comparatively display the enhanced digital image and the original digital image; providing an interactive mode for reviewing the images and determining whether to accept or reject the enhanced image; if the decision from the preceding step is to reject the enhanced image, providing an interactive retouching mode to modify the image by interactively adjusting one or more of the steps (c) through (f) in claim 1 to generate a retouched image; and storing the retouched digital image.
- 8. The method as claimed in claim 7 further including in claim 1 the step of generating a flag to represent a probability of acceptance of the enhanced image, wherein the flag is accessible for determining which enhanced images are acquired for the interactive retouching in claim 7.
- 9. The method as claimed in claim 8 wherein the probability of acceptance is related to the amount of enhancement that is provided to the digital image.
- 10. The method as claimed in claim 7 further including in claim 1 the step of generating a flag to represent a textual description of the state of the enhanced image, wherein the flag is accessible for determining which enhanced images are acquired for the interactive retouching in claim 7.
- 11. The method as claimed in claim 10 wherein the textual description includes one or more descriptions selected from the group comprising no face found, facial hair present and glasses present.
- 12. The method as claimed in claim 1 further comprising the step of applying an image utilization process to the enhanced digital image.
- 13. The method as claimed in claim 1 wherein the enhancement filters include at least two of texture, skin tone, eye, teeth and shape filters.
- 14. The method as claimed in claim 1 wherein a gender classification algorithm is applied to determine the gender of the faces prior to the selection of the enhancement filters in step (f).
- 15. The method as claimed in claim 1 wherein an age classification algorithm is applied to determine the age of the faces prior to the selection of the enhancement filters in step (f).
- 16. The method as claimed in claim 1 wherein a distance between the eyes of the face is used to determine the default parameters for one or more of the enhancement filters selected in step (f).
- 17. The method as claimed in claim 12 wherein the utilization process includes printing the image on a local device.
- 18. The method as claimed in claim 12 wherein the utilization process includes archiving the image on a local device.
- 19. The method as claimed in claim 12 wherein the utilization process includes printing the image on a remote device.
- 20. The method as claimed in claim 12 wherein the utilization process includes archiving the image on a remote device.
- 21. The method as claimed in claim 1 wherein the facial characteristics determined in step (e) include at least one characteristic selected from the group including a size of the face, a distance between facial features in a particular region, a gender of a subject in the image, and an age of the subject.
- 22. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 1.
- 23. A method for enhancing an appearance of one or more faces appearing in a plurality of digital images that can be assigned to one or more classes based on subject matter, said method comprising the steps of:
(a) detecting a location of facial feature points in the one or more faces, said facial feature points including points identifying salient features including one or more of skin, eyes, eyebrows, nose, mouth, and hair; (b) using the location of the facial feature points to segment the face into different regions, said different regions including one or more of skin, eyes, eyebrows, nose, mouth, neck and hair regions; (c) determining one or more facially relevant characteristics of the different regions; (d) based on the facially relevant characteristics of the different regions, selecting (1) one or more enhancement filters each customized especially for a particular region and (2) the default parameters for the enhancement filters, wherein the default parameters are dependent on the class of images, wherein the class is based on a category of substantially similar subject matter; and (e) executing the enhancement filters on the particular regions, thereby producing an enhanced digital image from the digital image.
- 24. The method as claimed in claim 23 wherein operation according to one or more of the steps (a) through (c) is also dependent on the class of images.
- 25. The method as claimed in claim 24 wherein the class is based on a portrait category such as school portrait images, family portrait images or baby portrait images.
- 26. The method as claimed in claim 24 wherein the enhancement filters employed in step (d) include at least one of texture, skin tone, eye, teeth and shape filters.
- 27. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 23.
- 28. An automatic retouching method for enhancing an appearance of a face located in a digital image, wherein said face has hair in a skin region thereof, said method comprising the steps of:
(a) acquiring a digital image containing one or more faces; (b) detecting a location of facial feature points in the one or more faces, including special feature points that are useful for identifying the location of a facial hair region; (c) defining a bounding box for the facial hair region by utilizing the feature points; (d) generating a plurality of feature probability maps within the bounding box; (e) combining the feature probability maps to generate a combined feature probability map for the facial hair region; and (f) based on the combined feature probability map for the facial hair region, using an enhancement filter to enhance at least the texture or the skin tone of the facial hair region while masking off the facial hair, thereby producing an enhanced digital image from the digital image.
- 29. The method as claimed in claim 28 wherein the special feature points are eye locations and hairline and the hair is in a forehead area.
- 30. The method as claimed in claim 28 wherein the hair in a skin region comprises at least one of hair overlapping a forehead region and facial hair such as a moustache or beard.
- 31. The method as claimed in claim 28 wherein the feature probability maps include texture information and color information.
- 32. The method as claimed in claim 28 wherein the step of generating a plurality of feature probability maps comprises using a plurality of directional edge detectors to generate the maps.
- 33. The method as claimed in claim 28 wherein a feature probability map for hair texture is generated by the steps of:
calculating a vertical edge gradient; generating a connected component map for all vertical edge gradients greater than a specified threshold; calculating a normalized connected component density map from the connected component map; and thresholding the normalized connected component density map to obtain a hair texture probability map.
- 34. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 28.
- 35. A method for enhancing an appearance of a face located in a digital image, wherein the appearance includes one or more symmetrical features, said method comprising the steps of:
(a) acquiring an original digital image containing one or more faces; (b) detecting a location of facial feature points in the one or more faces, said facial feature points including points identifying salient symmetrical features including one or more of skin, eyes, eyebrows, nose, mouth, and hair; (c) using the location of the facial feature points to segment the face into different regions, said different regions including one or more of skin, eyes, eyebrows, nose, mouth, neck and hair regions; (d) determining one or more facially relevant characteristics of the different regions; (e) based on the facially relevant characteristics of the different regions, selecting (1) one or more enhancement filters each customized especially for a particular region and (2) the default parameters for the enhancement filters; and (f) executing the enhancement filters on the particular regions in a proportional manner to account for symmetry between the facially relevant characteristics of the different regions, thereby producing an enhanced digital image from the original digital image.
- 36. The method as claimed in claim 35 wherein the enhancement filters employed in step (f) either preserve symmetry or improve symmetry between facially relevant characteristics of the different regions.
- 37. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 35.
- 38. A method for enhancing the skin texture of a face appearing in a digital image, said method comprising the steps of:
(a) generating a luminance image from the digital image; (b) using a valley/peak detector to detect skin features in the luminance image; (c) classifying the skin features according to their feature-based characteristics, wherein the skin features are classified according to location in a skin region; (d) selecting relevant skin features for modification; (e) modifying the relevant skin features using an adaptive interpolation procedure, thereby producing a modified image; and (f) blending the digital image and the modified image to produce an enhanced image, wherein the amount of blending is dependent upon a characteristic of a relevant skin feature being between minimum and maximum size thresholds, wherein the thresholds are varied based on the location of the skin feature.
- 39. The method as claimed in claim 38 wherein the characteristic of a relevant skin feature in step (f) is a size of the relevant skin feature.
- 40. The method as claimed in claim 38 wherein the selection of relevant skin features for modification in step (d) further includes selecting relevant skin features based on their size being between additional minimum and maximum size thresholds, in order to preserve skin texture, and wherein these additional thresholds are varied in step (f) to control the amount of blending.
- 41. The method as claimed in claim 38 wherein the valley/peak detector used in step (b) has a spatial size that is dependent upon a size of the face.
- 42. The method as claimed in claim 38 further including a user interface having a slider adjustment and the method further comprises the step of adjusting the slider to vary the thresholds used in step (f).
- 43. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 38.
- 44. A method for enhancing the skin texture of a face appearing in a digital image, said method comprising the steps of:
(a) generating a luminance image from the digital image; (b) using a valley/peak detector to detect skin features in the luminance image; (c) classifying the skin features according to their feature-based characteristics, wherein the skin features are classified according to a type of skin feature; (d) selecting relevant skin features for modification; (e) modifying the relevant skin features using an adaptive interpolation procedure, thereby producing a modified image; and (f) blending the digital image and the modified image to produce an enhanced image based on the type of the skin feature, wherein the degree of blending is varied based on the type of feature.
- 45. The method as claimed in claim 44 wherein the valley/peak detector used in step (b) has a spatial size that is dependent upon a size of the face.
- 46. The method as claimed in claim 44 wherein the skin features are further classified in step (c) according to at least one of their size, shape, color and location in the skin region, thereby providing classification information relating to the skin features.
- 47. The method as claimed in claim 46 wherein, using the classification information provided in claim 46, the skin features are further classified according to at least one of blemish, wrinkle, beauty mark, mole, shadow and highlight.
- 48. The method as claimed in claim 46 wherein relevant skin features are selected in step (d) based on their size being between a minimum and maximum size, in order to preserve skin texture.
- 49. The method as claimed in claim 48 wherein the minimum and maximum sizes are scaled to the size of the face.
- 50. The method as claimed in claim 44 wherein relevant skin features are selected in step (d) based on their size being larger than a specified minimum size, in order to preserve skin texture.
- 51. The method as claimed in claim 44 wherein relevant skin features selected in step (d) are symmetrical features and the blending in step (f) accounts for a proportionality between the features, in order to preserve the symmetrical appearance of the features.
- 52. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 44.
- 53. A method for enhancing the skin texture of a face appearing in a digital image by re-texturizing skin regions of the face, said method comprising the steps of:
(a) detecting skin features in the digital image; (b) classifying the skin features according to their feature-based characteristics; (c) selecting relevant skin features for modification; (d) enhancing the relevant skin features; (e) generating a spatially scaled idealized texture; and (f) adding the spatially scaled idealized texture into the relevant skin features, thereby producing a re-texturized digital image.
- 54. The method as claimed in claim 53 wherein the amount of spatially scaled idealized texture added in step (f) is proportional to the amount of enhancement performed in step (d).
- 55. The method as claimed in claim 53 wherein the amount of spatially scaled idealized texture added in step (f) is proportional to local smoothness of a skin region.
- 56. The method as claimed in claim 53 wherein the spatially scaled idealized texture is spatially scaled based upon the size of the face.
- 57. The method as claimed in claim 53 wherein the spatially scaled idealized texture is spatially scaled based upon a separation between eyes.
- 58. The method as claimed in claim 53 further including a user interface having a slider adjustment and the method further comprises the step of adjusting the slider to vary the amount of the spatially scaled idealized texture added into the relevant skin features.
- 59. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 53.
- 60. A method for enhancing the shape of symmetrical features appearing in a face located in a digital image, said method comprising the steps of:
(a) acquiring a digital image containing one or more faces; (b) detecting a location of facial feature points in the one or more faces, said facial feature points including points identifying a pair of salient symmetrical features amenable to shape adjustment; (c) using the facial feature points to determine two or more source points that are positioned relative to the salient features; (d) using the facial feature points to determine two or more destination points which determine the desired change of the salient features; and (e) warping the shape of the salient features by using the source points and the destination points in a manner that substantially equalizes changes in shape for the pair of symmetrical features.
- 61. The method as claimed in claim 60 wherein the pair of symmetrical features is a pair of eyes.
- 62. The method as claimed in claim 60 further including a user interface having a slider adjustment and the method further comprises the step of adjusting the slider to vary the amount of shape enhancement.
- 63. A computer storage medium having instructions stored therein for causing a computer to perform the method of claim 60.
- 64. A processing method for enhancing an appearance of a face located in a digital image that is processed in a user-interactive kiosk environment, the method comprising the steps of:
acquiring an original digital image containing one or more faces; detecting a face in the original digital image; processing the original digital image to determine if the face is suitable for enhancement based on at least one of size and state of focus of the face in the image; if the face is found to be acceptable for enhancement relative to a predetermined threshold related to size and state of focus, then enhancing the image according to one or more enhancement algorithms and displaying the image for review by a user of the kiosk; and if the face is found not to be acceptable by the user, then providing an interactive retouching mode for the user to interactively adjust the one or more enhancement algorithms until the image is acceptable.
- 65. A user-interactive photographic kiosk for processing a digital image for enhancing an appearance of a face located in the image, the kiosk comprising:
an input interface for acquiring an original digital image containing one or more faces; a processor for (a) detecting a face in the original digital image; (b) processing the original digital image to determine if the face is suitable for enhancement based on at least one of size and state of focus of the face in the image; and (c) if the face is found to be acceptable for enhancement relative to a predetermined threshold related to size and state of focus, then enhancing the image according to one or more enhancement algorithms; a display for displaying the enhanced image for review by a user of the kiosk; and an interactive retouching mode for enabling the user, if the face is found not to be acceptable, to interactively adjust the one or more enhancement algorithms until the image is acceptable.
- 66. A method for enhancing the appearance of teeth and eyes in a face appearing in a digital image, while balancing their color and uniformity, said method comprising the steps of:
acquiring a digital image having pixel values representative of teeth and eye regions of a face contained in the image; determining a preferred color for the teeth and eye regions; calculating a difference between each pixel value and a mean color value determined from the pixel values; and providing an enhanced image of the teeth and eyes from a scaled combination of said difference and the preferred color for the teeth and eye regions, whereby the mean and variance of the color distributions of the pixels of the teeth and eyes are shifted such that the teeth and eye regions are given a more pleasing appearance.
- 67. The method as claimed in claim 66 wherein the pixels are provided in CIELAB space comprising in part a luminance color vector, and the luminance color vector of the preferred color is chosen such that it is greater than the luminance color vector of the mean color.
- 68. The method as claimed in claim 66 wherein a color saturation of the preferred color is greater than the color saturation of the mean color.
- 69. The method as claimed in claim 66 wherein, to equalize the whites of the two eyes, either a same preferred color is used for both eyes or a color difference between the preferred colors of each eye is smaller than the color difference between the mean color for each eye.
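The script-driven batch flow of claim 1 can be pictured with a minimal sketch. The one-path-per-line script format, the file names, and the stubbed enhance_image function are assumptions made only for illustration; the real steps (c)-(g) would run the region-specific enhancement filters.
```python
import os

def enhance_image(path):
    """Placeholder for steps (c)-(g) of claim 1; returns the enhanced file path."""
    enhanced_path = os.path.splitext(path)[0] + "_enhanced.jpg"
    # ... detect facial feature points, segment regions, run the selected filters ...
    return enhanced_path

def run_batch(input_script, output_script):
    # The hypothetical input script lists one original image path per line.
    with open(input_script) as fin, open(output_script, "w") as fout:
        for line in fin:
            original = line.strip()
            if not original:
                continue
            enhanced = enhance_image(original)  # steps (b)-(h)
            # Step (i): record which image was produced and what was applied.
            fout.write(f"{original}\t{enhanced}\tfilters=default\n")
```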
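For the class-dependent defaults of claim 23, a simple lookup keyed by image class is one possible arrangement. The class names and parameter values below are invented for the sketch.
```python
# Hypothetical class-to-defaults table; a neutral fallback covers unknown classes.
DEFAULTS_BY_CLASS = {
    "school_portrait": {"texture_strength": 0.5, "teeth_whitening": 0.7},
    "family_portrait": {"texture_strength": 0.4, "teeth_whitening": 0.5},
    "baby_portrait":   {"texture_strength": 0.2, "teeth_whitening": 0.0},
}

def default_parameters(image_class):
    # Step (d)(2): default filter parameters depend on the class of images.
    return DEFAULTS_BY_CLASS.get(image_class,
                                 {"texture_strength": 0.3, "teeth_whitening": 0.3})
```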
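The hair-texture probability map of claim 33 maps naturally onto standard image-processing primitives. This sketch assumes a Sobel filter for the edge gradient, a box average for the normalized connected-component density, and illustrative threshold values; none of these specifics are dictated by the claim.
```python
import numpy as np
from scipy import ndimage

def hair_texture_probability(luma, grad_thresh=20.0, density_thresh=0.3, win=15):
    # Edge gradient taken along the vertical (row) axis of the luminance image.
    grad = np.abs(ndimage.sobel(luma.astype(float), axis=0))
    # Connected components of pixels whose gradient exceeds the threshold.
    labels, _ = ndimage.label(grad > grad_thresh)
    component_mask = labels > 0
    # Normalized local density (0..1) of connected-component pixels.
    density = ndimage.uniform_filter(component_mask.astype(float), size=win)
    # Threshold the density map to obtain the hair-texture probability map.
    return (density > density_thresh).astype(float)
```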
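The size-dependent blending recited in claims 38-42 can be sketched as a linear ramp between the minimum and maximum size thresholds. The ramp shape and the per-feature masking are assumptions; detection and modification of the skin features are taken as already done.
```python
import numpy as np

def blend_feature(original, modified, feature_mask, feature_size, min_size, max_size):
    # Blend weight ramps from 0 at min_size to 1 at max_size.
    alpha = np.clip((feature_size - min_size) / float(max_size - min_size), 0.0, 1.0)
    out = original.astype(float).copy()
    # Mix the modified pixels back into the original only inside this feature.
    out[feature_mask] = ((1.0 - alpha) * original[feature_mask]
                         + alpha * modified[feature_mask])
    return out
```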
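Claims 53-58 add a spatially scaled idealized texture back into enhanced skin. In this sketch the idealized texture is approximated by band-limited noise scaled by the eye separation, and the amount added is tied to how much detail the smoothing removed; both choices are assumptions for illustration.
```python
import numpy as np
from scipy import ndimage

def add_idealized_texture(smoothed, original, skin_mask, eye_separation, gain=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Grain scale tied to the face size via the eye separation (assumption).
    grain_sigma = max(1.0, eye_separation / 100.0)
    noise = ndimage.gaussian_filter(rng.standard_normal(smoothed.shape), grain_sigma)
    noise /= noise.std() + 1e-8
    # Local amount of detail that the smoothing step removed.
    removed = np.abs(original.astype(float) - smoothed.astype(float))
    out = smoothed.astype(float).copy()
    out[skin_mask] += gain * removed[skin_mask] * noise[skin_mask]
    return np.clip(out, 0, 255)
```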
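Claims 60-62 warp a pair of symmetric features while keeping the change substantially equal on both sides. The sketch below only shows one way to equalize the control-point displacements before they are handed to a warping routine; the warp itself is omitted.
```python
import numpy as np

def equalized_destinations(src_left, dst_left, src_right, dst_right):
    """src_*/dst_* are (N, 2) arrays of control points for one feature each."""
    src_left, src_right = np.asarray(src_left, float), np.asarray(src_right, float)
    d_left = np.asarray(dst_left, float) - src_left
    d_right = np.asarray(dst_right, float) - src_right
    # Common displacement magnitude shared by both features of the pair.
    mag = 0.5 * (np.linalg.norm(d_left, axis=1).mean()
                 + np.linalg.norm(d_right, axis=1).mean())
    def rescale(d):
        norms = np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-8)
        return d * (mag / norms)
    # Return adjusted destination points with equalized changes per side.
    return src_left + rescale(d_left), src_right + rescale(d_right)
```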
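The suitability test in claims 64-65 gates enhancement on face size and state of focus. The variance-of-Laplacian focus measure and both thresholds below are assumptions, not values given in the claims.
```python
from scipy import ndimage

def face_suitable(luma, face_box, min_face_pixels=80, min_focus=100.0):
    # face_box is (x0, y0, x1, y1) from the face detector.
    x0, y0, x1, y1 = face_box
    face = luma[y0:y1, x0:x1].astype(float)
    size_ok = min(x1 - x0, y1 - y0) >= min_face_pixels
    focus_ok = ndimage.laplace(face).var() >= min_focus  # simple sharpness proxy
    return size_ok and focus_ok
```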
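Claim 66 recombines each pixel's deviation from the region mean with a preferred color so that both the mean and the variance of the tooth or eye colors shift. The scale factors in this sketch are illustrative stand-ins for the filter's default parameters.
```python
import numpy as np

def shift_region_color(pixels, preferred, variance_scale=0.6, blend=0.8):
    """pixels: (N, 3) colors of a tooth/eye region; preferred: length-3 target color."""
    pixels = np.asarray(pixels, float)
    mean = pixels.mean(axis=0)
    diff = pixels - mean                       # deviation of each pixel from the mean
    new_mean = (1.0 - blend) * mean + blend * np.asarray(preferred, float)
    # Shrinking the deviations reduces the variance; moving the mean toward the
    # preferred color lightens/whitens the region.
    return new_mean + variance_scale * diff
```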
CROSS-REFERENCE TO RELATED APPLICATION
[0001] Reference is made to commonly assigned copending application Ser. No. 10/160,421, entitled “Method and System for Enhancing Portrait Images” and filed 31 May 2002 in the names of R. A. Simon, T. Matraszek, M. R. Bolin, and H. Nicponski, which is assigned to the assignee of this application.