Claims
- 1. A digital image processing method for detecting facial features in a digital image, comprising the steps of:
detecting iris pixels; clustering the iris pixels; selecting at least one of the following methods to identify eye positions in an image:
i) applying geometric reasoning to detect eye positions using the iris pixel clusters; ii) applying a summation of squared difference method to detect eye positions based upon the iris pixel clusters; and, iii) applying a summation of squared difference method to detect eye positions from the pixels in the image; wherein the method applied is selected on the basis of the number of iris pixel clusters; and locating facial features using identified eye positions.
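The selection logic of claim 1 can be sketched as a simple dispatcher. This is an illustrative reading, not the patented implementation: the three detector functions are passed in as assumed callables, and methods ii) and iii) are shown as fallbacks in the order the dependent claims (3–5) describe.

```python
def select_and_run(image, clusters, geometric, ssd_clusters, ssd_image):
    """Multi-mode eye detection dispatch (hypothetical sketch of claim 1).

    With at least two iris pixel clusters, try method i) geometric
    reasoning, fall back to method ii) cluster-window SSD, then to
    method iii) whole-image SSD; with fewer than two clusters, go
    straight to method iii).
    """
    if len(clusters) >= 2:
        eyes = geometric(clusters)            # method i)
        if eyes is None:
            eyes = ssd_clusters(image, clusters)  # method ii)
        if eyes is None:
            eyes = ssd_image(image)           # method iii)
    else:
        eyes = ssd_image(image)               # method iii)
    return eyes
```

The detector callables here are placeholders; the claims below spell out what each one does.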
- 2. The method of claim 1, wherein less than two iris pixel clusters are detected and wherein detection method iii) is applied.
- 3. The method of claim 1, wherein at least two iris pixel clusters are detected and wherein method i) is applied.
- 4. The method of claim 3, wherein method i) does not detect eye positions and wherein method ii) is then applied to detect eye positions.
- 5. The method of claim 4, wherein method ii) does not detect eye positions and wherein method iii) is then applied.
- 6. The method of claim 1, wherein the step of applying geometric reasoning using the detected iris color pixels comprises the steps of:
finding the center of each iris pixel cluster; dividing the iris pixel clusters into left-half pixel clusters and right-half pixel clusters; and detecting a pair of eyes based on the geometric relationship between the iris pixel clusters.
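A minimal sketch of the geometric reasoning steps in claim 6, under assumed names and thresholds: cluster centers are computed, clusters are split at the image midline, and left/right pairs are accepted only when they satisfy a simple eye-geometry constraint (a near-horizontal line between the two centers). The `max_tilt` threshold is an illustrative assumption, not from the patent.

```python
def cluster_center(cluster):
    """Centroid of a cluster given as a list of (x, y) pixel coordinates."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def pair_eyes(clusters, image_width, max_tilt=0.1):
    """Pick the left/right cluster pair whose centers are most nearly level."""
    centers = [cluster_center(c) for c in clusters]
    mid = image_width / 2
    left = [c for c in centers if c[0] < mid]
    right = [c for c in centers if c[0] >= mid]
    best, best_tilt = None, float("inf")
    for l in left:
        for r in right:
            # Tilt of the line joining the two candidate eye centers.
            tilt = abs(l[1] - r[1]) / max(r[0] - l[0], 1e-6)
            if tilt < max_tilt and tilt < best_tilt:
                best, best_tilt = (l, r), tilt
    return best
```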
- 7. The method of claim 3, wherein the step of applying geometric reasoning using the detected iris color pixels comprises the steps of:
finding the center of each iris pixel cluster; dividing the iris pixel clusters into left-half pixel clusters and right-half pixel clusters; and detecting a pair of eyes based on the geometric relationship between the iris pixel clusters.
- 8. The method of claim 1, wherein the step of applying the summation of squared difference method to detect eye positions based upon the iris pixel clusters comprises the steps of:
finding the center of each iris pixel cluster; defining a window of pixels surrounding each of the centers of the iris pixel clusters in the image; dividing the iris pixel clusters into left-half iris pixel clusters and right-half iris pixel clusters; locating the most likely left eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a left-half iris pixel cluster; and locating the most likely right eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a right-half iris pixel cluster.
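The window search of claim 8 can be sketched as follows; names, the window radius, and the template are illustrative assumptions. Around a cluster center, every pixel in a small window is scored by the sum of squared differences between the image patch centered there and an average-eye template, and the lowest-scoring pixel wins.

```python
import numpy as np

def best_eye_in_window(image, center, template, win=3):
    """Lowest-SSD match against `template` within a (2*win+1)^2 window
    of pixels around `center` (given as (row, col))."""
    th, tw = template.shape
    best_pos, best_score = None, float("inf")
    cy, cx = center
    for y in range(cy - win, cy + win + 1):
        for x in range(cx - win, cx + win + 1):
            y0, x0 = y - th // 2, x - tw // 2
            patch = image[y0:y0 + th, x0:x0 + tw]
            if patch.shape != template.shape:
                continue  # candidate too close to the image border
            score = float(np.sum((patch - template) ** 2))
            if score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos
```

Run once per left-half cluster and once per right-half cluster to get the most likely left and right eye positions.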
- 9. The method of claim 4, wherein the step of applying the summation of squared difference method to detect eye positions based upon the iris pixel clusters comprises the steps of:
finding the center of each iris pixel cluster; defining a window of pixels surrounding each of the centers of the iris pixel clusters in the image; dividing the iris pixel clusters into left-half iris pixel clusters and right-half iris pixel clusters; locating the most likely left eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a left-half iris pixel cluster; and locating the most likely right eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a right-half iris pixel cluster.
- 10. The method of claim 1, wherein the step of applying a summation of squared difference method to detect eye positions from the pixels in the image comprises the steps of:
dividing the image pixels into left-half pixels and right-half pixels; locating the most likely left eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the left-half pixels; and locating the most likely right eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the right-half pixels.
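The whole-image variant of claim 10 can be sketched the same way, again with assumed names: pixels are split at the vertical midline, and the lowest-SSD match against the average-eye template is found independently in each half.

```python
import numpy as np

def half_image_search(image, template):
    """Return ((left_y, left_x), (right_y, right_x)): the lowest-SSD
    template match among left-half pixels and among right-half pixels."""
    th, tw = template.shape
    h, w = image.shape
    mid = w // 2
    best = {"left": (None, float("inf")), "right": (None, float("inf"))}
    for y in range(th // 2, h - th // 2):
        for x in range(tw // 2, w - tw // 2):
            patch = image[y - th // 2:y - th // 2 + th,
                          x - tw // 2:x - tw // 2 + tw]
            score = float(np.sum((patch - template) ** 2))
            side = "left" if x < mid else "right"
            if score < best[side][1]:
                best[side] = ((y, x), score)
    return best["left"][0], best["right"][0]
```

Per claim 11, the same scan can be restricted to pixels inside a detected skin color region to cut the search cost.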
- 11. The method of claim 10, further comprising detecting a skin color region in the image, wherein the summation of the squared difference method is only applied to pixels within the skin color region.
- 12. The method claimed in claim 2, wherein the step of applying a summation of squared difference method to detect eye positions from the pixels in the image comprises the steps of:
dividing the image pixels into left-half pixels and right-half pixels; locating the most likely left eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the left-half pixels; and locating the most likely right eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the right-half pixels.
- 13. The method of claim 12 further comprising detecting a skin color region in the image, wherein the summation of the squared difference method is only applied to pixels within the skin color region.
- 14. The method claimed in claim 5, wherein the step of applying a summation of squared difference method to detect eye positions from the pixels in the image comprises the steps of:
dividing the image pixels into left-half pixels and right-half pixels; locating the most likely left eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the left-half pixels; and locating the most likely right eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the right-half pixels.
- 15. The method of claim 14 further comprising detecting a skin color region in the image, wherein the summation of the squared difference method is only applied to pixels within the skin color region.
- 16. The method of claim 8 further comprising the steps of detecting a skin color region in the image, and dividing the skin color region into a left-half region and right-half region wherein the iris pixel clusters are divided into left-half iris pixel clusters and right-half iris pixel clusters based upon the region in which they are located.
- 17. The method of claim 9 further comprising the steps of detecting a skin color region in the image, and dividing the skin color region into a left-half region and right-half region wherein the iris pixel clusters are divided into left-half iris pixel clusters and right-half iris pixel clusters based upon the region in which they are located.
- 18. The method of claim 1 further comprising the step of validating iris pixel clusters, wherein the selection of the method to be applied is made based upon the number of valid clusters.
- 19. The method of claim 1, wherein the estimated locations to search for the facial features are based on the automatically identified eye positions.
- 20. The method of claim 19, wherein the estimated locations to search for the facial features are found by aligning the eye positions within a model of the shape of the facial features with the automatically identified eye positions.
- 21. The method of claim 19, wherein estimated locations to search for the facial features are based on the average position of these features within a set of example faces.
- 22. The method of claim 1, wherein the facial feature positions are identified using an active shape model technique.
- 23. The method of claim 22, wherein the shape model technique uses texture windows and the size of the texture windows is automatically scaled based on a current estimate of the size of the face.
- 24. The method of claim 22, wherein the spacing of search locations is automatically scaled based on a current estimate of the size of the face.
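The scaling in claims 23 and 24 amounts to making both quantities proportional to the current face-size estimate. A minimal sketch, with assumed reference size and base values:

```python
def scaled_search_params(face_size, ref_face_size=100.0,
                         base_window=11, base_spacing=2.0):
    """Scale the texture window size and the search-location spacing
    by the current face-size estimate (all constants are illustrative)."""
    s = face_size / ref_face_size
    window = max(3, int(round(base_window * s)) | 1)  # keep the window odd
    spacing = base_spacing * s
    return window, spacing
```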
- 25. The method of claim 23 wherein an estimate of the size of the face is found by determining the scale that best aligns a current estimate of the feature positions with a model of the average positions of the facial features using a least squares process.
- 26. The method of claim 24 wherein an estimate of the size of the face is found by determining the scale that best aligns a current estimate of the feature positions with a model of the average positions of the facial features using a least squares process.
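The least-squares scale estimate of claims 25 and 26 has a closed form: after removing each shape's centroid, the scale s minimizing the sum of ||x_i - s*m_i||^2 between the current feature estimate x and the model of average positions m is s = ⟨x, m⟩ / ⟨m, m⟩. A sketch under those assumptions:

```python
import numpy as np

def face_scale(current_pts, mean_pts):
    """Least-squares scale aligning the mean (model) shape to the
    current feature-position estimate, after centroid removal."""
    x = np.asarray(current_pts, float)
    m = np.asarray(mean_pts, float)
    x -= x.mean(axis=0)
    m -= m.mean(axis=0)
    return float(np.sum(x * m) / np.sum(m * m))
```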
- 27. The method of claim 22, wherein the positions of the facial features that are outside a shape space boundary are constrained to locations of the shape found at the nearest point on a hyper-elliptical boundary of the shape space.
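In active shape models the shape space is typically parameterized by PCA coefficients b, valid when the Mahalanobis distance sqrt(sum b_i^2 / lam_i) stays within some bound D. The constraint in claims 27 and 53 can be sketched as follows; note that rescaling b toward the origin is a common approximation to the nearest point on the hyper-elliptical boundary, not an exact projection.

```python
import numpy as np

def constrain_shape(b, eigenvalues, d_max=3.0):
    """Pull shape parameters b back onto the hyper-ellipse
    sum(b_i^2 / lam_i) = d_max^2 when they fall outside it."""
    b = np.asarray(b, float)
    lam = np.asarray(eigenvalues, float)
    m = np.sqrt(np.sum(b ** 2 / lam))  # Mahalanobis distance in shape space
    if m > d_max:
        b = b * (d_max / m)            # approximate nearest boundary point
    return b
```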
- 28. A computer program product for detecting facial features in a digital image, the computer program product comprising a computer readable storage medium having a computer program stored thereon for performing the steps of:
detecting iris pixels; clustering the iris pixels; selecting at least one of the following methods to identify eye positions in the image:
i) applying geometric reasoning to detect eye positions using the iris pixel clusters; ii) applying a summation of squared difference method to detect eye positions based upon the iris pixel clusters; and iii) applying a summation of squared difference method to detect eye positions from the pixels in the image; wherein the method applied is selected on the basis of the number of iris pixel clusters; and locating facial features using identified eye positions.
- 29. The computer program product of claim 28, wherein less than two valid iris pixel clusters are detected and wherein detection method iii) is applied.
- 30. The computer program product of claim 28, wherein at least two valid iris pixel clusters are detected and wherein method i) is applied.
- 31. The computer program product of claim 30, wherein method i) does not detect eyes and wherein method ii) is then applied to detect eyes.
- 32. The computer program product of claim 31, wherein method ii) does not detect eyes and wherein method iii) is then applied.
- 33. The computer program product of claim 28, wherein the step of applying geometric reasoning using the detected iris color pixels comprises the steps of:
finding the center of each iris pixel cluster; dividing the iris pixel clusters into left-half iris pixel clusters and right-half iris pixel clusters; and detecting a pair of eyes based on the geometric relationship between the left-half iris pixel clusters and the right-half iris pixel clusters.
- 34. The computer program product of claim 30, wherein the step of applying geometric reasoning using the detected iris color pixels comprises the steps of:
finding the center of each iris pixel cluster; dividing the iris pixel clusters into left-half iris pixel clusters and right-half iris pixel clusters; and detecting a pair of eyes based on the geometric relationship between the left-half iris pixel clusters and the right-half iris pixel clusters.
- 35. The computer program product of claim 28, wherein the step of applying the summation of squared difference method to detect eye positions based upon the iris pixel clusters comprises the steps of:
finding the center of each iris pixel cluster; defining a window of pixels surrounding each of the centers of the iris pixel clusters in the image; dividing the iris pixel clusters into left-half iris pixel clusters and right-half iris pixel clusters; locating the most likely left eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a left-half iris pixel cluster; and locating the most likely right eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a right-half iris pixel cluster.
- 36. The computer program product of claim 31, wherein the step of applying the summation of squared difference method to detect eye positions based upon the iris pixel clusters comprises the steps of:
finding the center of each iris pixel cluster; defining a window of pixels surrounding each of the centers of the iris pixel clusters in the image; dividing the iris pixel clusters into left-half iris pixel clusters and right-half iris pixel clusters; locating the most likely left eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a left-half iris pixel cluster; and locating the most likely right eye position based on the summation of squared difference between an average eye and patches of the image centered at each of the pixels in each of the windows surrounding a right-half iris pixel cluster.
- 37. The computer program product claimed in claim 28 wherein the step of applying a summation of squared difference method using image pixels to detect eye positions comprises the steps of:
dividing the pixels in the image into left-half pixels and right-half pixels; locating the most likely left eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the left-half pixels; and locating the most likely right eye position based on the summation of squared difference between an average eye and patch of the image centered at each of the right-half pixels.
- 38. The computer program product of claim 37, further comprising detecting a skin color region in the image, wherein the summation of the squared difference method is only applied to pixels within the skin color region.
- 39. The computer program product of claim 28 wherein the step of detecting iris color pixels comprises using a Bayes model and:
measuring the red intensity of the pixels in the skin color region; determining the probability that each pixel is an iris based upon the red intensity of the pixel; determining the probability that each pixel is not an iris based upon the red intensity of the pixel; and applying the Bayes model to the probability that the pixel is an iris, the probability that the pixel is not an iris, the probability of the occurrence of an iris in the skin colored region and probability of the occurrence of a non-iris pixel in the skin colored region.
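The Bayes model of claim 39 combines, for each pixel, the likelihood of its red intensity under an "iris" model and a "not iris" model with the prior frequency of iris pixels in the skin color region. A hedged sketch follows; the toy likelihood values and the prior are illustrative assumptions (irises are modeled as dark, i.e. low red intensity), not figures from the patent.

```python
def iris_posterior(p_red_given_iris, p_red_given_noniris, p_iris):
    """Bayes' rule: P(iris | red) from the two likelihoods and the prior."""
    p_noniris = 1.0 - p_iris
    num = p_red_given_iris * p_iris
    den = num + p_red_given_noniris * p_noniris
    return num / den if den > 0 else 0.0

def classify_pixel(red, p_iris=0.3, threshold=0.5):
    """Label a pixel as iris when its posterior clears the threshold.
    The step-function likelihoods below are purely illustrative."""
    p_r_iris = 0.9 if red < 60 else 0.05
    p_r_non = 0.1 if red < 60 else 0.95
    return iris_posterior(p_r_iris, p_r_non, p_iris) >= threshold
```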
- 40. The computer program product of claim 39, further comprising detecting a skin color region in the image, wherein the summation of the squared difference method is only applied to pixels within the skin color region.
- 41. The computer program product of claim 33 wherein the step of detecting iris color pixels comprises using a Bayes model and:
measuring the red intensity of the pixels in the skin color region; determining the probability that each pixel is an iris based upon the red intensity of the pixel; determining the probability that each pixel is not an iris based upon the red intensity of the pixel; and applying the Bayes model to the probability that the pixel is an iris, the probability that the pixel is not an iris, the probability of the occurrence of an iris in the skin colored region and probability of the occurrence of a non-iris pixel in the skin colored region.
- 42. The computer program product of claim 41, further comprising detecting a skin color region in the image, wherein the summation of the squared difference method is only applied to pixels within the skin color region.
- 43. The computer program product of claim 35 further comprising the steps of detecting a skin color region in the image, and dividing the skin color region into a left-half region and a right-half region wherein the iris pixel clusters are divided into left-half iris pixel clusters and right-half iris pixel clusters based upon the region in which they are located.
- 44. The computer program product of claim 36 further comprising the steps of detecting a skin color region in the image, and dividing the skin color region into a left-half region and a right-half region wherein the iris pixel clusters are divided into left-half iris pixel clusters and right-half iris pixel clusters based upon the region in which they are located.
- 45. The computer program product of claim 28, wherein the estimated locations to search for the facial features are based on the automatically identified eye positions.
- 46. The computer program product of claim 45, wherein the estimated locations to search for the facial features are found by aligning the eye positions within a model of the shape of the facial features with the automatically identified eye positions.
- 47. The computer program product of claim 45, wherein estimated locations to search for the facial features are based on the average position of these features within a set of example faces.
- 48. The computer program product of claim 28, wherein the facial feature positions are identified using an active shape model technique.
- 49. The computer program product of claim 48, wherein the shape model technique uses texture windows and the size of the texture windows is automatically scaled based on a current estimate of the size of the face.
- 50. The computer program product of claim 48, wherein the spacing of search locations is automatically scaled based on a current estimate of the size of the face.
- 51. The computer program product of claim 49 wherein an estimate of the size of the face is found by determining the scale that best aligns a current estimate of the feature positions with a model of the average positions of the facial features using a least squares process.
- 52. The computer program product of claim 50 wherein an estimate of the size of the face is found by determining the scale that best aligns a current estimate of the feature positions with a model of the average positions of the facial features using a least squares process.
- 53. The computer program product of claim 48, wherein the positions of the facial features that are outside a shape space boundary are constrained to locations of the shape found at the nearest point on a hyper-elliptical boundary of the shape space.
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] Reference is made to commonly assigned copending application Ser. No. 09/740,562, filed Dec. 19, 2000 and entitled “Multi-Mode Digital Image Processing Method for Detecting Eyes”, in the names of Shoupu Chen and Lawrence A. Ray.
Provisional Applications (1)
| Number   | Date     | Country |
| 60323579 | Sep 2001 | US      |