Claims
- 1. A method of processing two or more images to distinguish a head in the images from other portions of the images, comprising the following actions:
predicting portions of the images that form facial portions of the images; calculating one or more skin colors by sampling the images at the predicted portions; computing a first mask image that marks any image pixels whose colors are different in the two or more images; creating a second mask image that marks any image pixels whose colors correspond to the calculated one or more skin colors; combining the first and second mask images to create a final mask image.
- 2. A method as recited in claim 1, wherein the combining comprises intersecting.
- 3. A method as recited in claim 1, wherein the combining comprises joining.
- 4. A method as recited in claim 1, wherein the predicting is based on points supplied by a human user.
- 5. A method as recited in claim 1, wherein the predicting is based on points supplied by a human user, the points corresponding to a plurality of distinct facial features.
- 6. One or more computer-readable media containing a program that is executable by a computer to process two or more images for distinguishing a head in the images from other portions of the images, the program comprising the following actions:
identifying locations of a plurality of distinct facial features in the images; predicting an outer area that corresponds to the head, based on the identified locations of facial features; predicting an inner area within the outer area that corresponds to a face portion of the head, based on the identified locations of facial features; calculating one or more skin colors by sampling the images at locations that are specified relative to the identified locations of facial features; creating a first mask image that marks any image pixels whose colors are different in the two or more images; creating a second mask image that marks any image pixels whose colors correspond to the calculated one or more skin colors; within the inner area, noting all of the marked pixels on the first mask image and also noting any unmarked pixels of the first mask image that correspond in location to marked pixels in the second mask image; forming a final mask image that marks the noted pixels as being part of the head.
- 7. One or more computer-readable media as recited in claim 6, the actions further comprising:
predicting a lower area of the image that corresponds to a chin portion of the head; within the lower area, noting any marked pixels in the first mask image that correspond in location to marked pixels in the second mask image.
- 8. One or more computer-readable media as recited in claim 6, wherein the identifying comprises accepting input from a human user.
- 9. One or more computer-readable media as recited in claim 6, wherein the identified locations correspond to eyes, nose and mouth.
- 10. One or more computer-readable media as recited in claim 6, wherein the identified locations comprise eye corners, mouth ends, and nose tip.
- 11. One or more computer-readable media as recited in claim 6, wherein the inner and outer areas are defined by inner and outer ellipses, and the outer ellipse is approximately 25% larger than the inner ellipse.
- 12. One or more computer-readable media as recited in claim 6, wherein:
the sizes of the ellipses are calculated based on spatial relationships of the identified locations.
- 13. One or more computer-readable media as recited in claim 6, wherein:
the identified locations comprise the inner eye corners and the mouth ends; the inner and outer areas are defined by inner and outer ellipses; and the sizes of the ellipses are calculated based on spatial relationships of the identified locations.
- 14. One or more computer-readable media as recited in claim 6, wherein:
the identified locations comprise the inner eye corners and the mouth ends; the inner and outer areas are defined by inner and outer ellipses; the width of the inner ellipse is five times the distance between the inner eye corners; and the height of the inner ellipse is three times the vertical distance between the eye corners and the mouth ends.
- 15. One or more computer-readable media as recited in claim 6, wherein:
the identified locations comprise the inner eye corners and the mouth ends; the inner and outer areas are defined by inner and outer ellipses; the width of the inner ellipse is five times the distance between the inner eye corners; the height of the inner ellipse is three times the vertical distance between the eye corners and the mouth ends; and the outer ellipse is 25% larger than the inner ellipse.
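The two-mask procedure recited in claims 1 through 3 and 6 can be summarized in code. The following is a minimal sketch, assuming NumPy RGB arrays and a set of skin colors already sampled at the predicted facial portions; the function name, the mean-color distance test, and both threshold values are illustrative assumptions, not values taken from the claims.

```python
import numpy as np

def head_masks(img_a, img_b, skin_samples, diff_thresh=30, skin_thresh=40):
    """Sketch of the two-mask combination of claims 1-3.

    img_a, img_b -- H x W x 3 uint8 images of the same subject
    skin_samples -- N x 3 array of colors sampled at the predicted facial portions
    Threshold values are illustrative placeholders only.
    """
    a = img_a.astype(np.int16)
    b = img_b.astype(np.int16)

    # First mask: pixels whose colors differ between the two images.
    diff_mask = np.abs(a - b).sum(axis=2) > diff_thresh

    # Second mask: pixels whose colors are close to the sampled skin colors
    # (here reduced to a single mean skin color for simplicity).
    mean_skin = skin_samples.astype(np.float64).mean(axis=0)
    skin_mask = np.linalg.norm(a - mean_skin, axis=2) < skin_thresh

    # Combining: claim 2 intersects the masks, claim 3 joins (unions) them.
    return diff_mask & skin_mask, diff_mask | skin_mask
```

Claims 2 and 3 differ only in the combining operator, so the sketch returns both the intersection and the union; claim 6 further restricts the union rule to the predicted inner (face) area.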
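Claims 13 through 15 fix the ellipse sizes by simple ratios of landmark distances. The sketch below derives the inner and outer ellipses from the inner eye corners and mouth ends; the claims do not say where the ellipses are centered, so the centering used here (midway between the eye line and the mouth line) is an assumption, as are the function and parameter names.

```python
import numpy as np

def face_ellipses(left_eye_inner, right_eye_inner, mouth_left, mouth_right,
                  outer_scale=1.25):
    """Sketch of the ellipse sizing of claims 13-15 from (x, y) landmarks."""
    left_eye_inner, right_eye_inner, mouth_left, mouth_right = map(
        np.asarray, (left_eye_inner, right_eye_inner, mouth_left, mouth_right))

    eye_center = (left_eye_inner + right_eye_inner) / 2.0
    mouth_center = (mouth_left + mouth_right) / 2.0

    eye_gap = np.linalg.norm(right_eye_inner - left_eye_inner)
    eye_to_mouth = abs(mouth_center[1] - eye_center[1])

    # Claim 14: inner ellipse is five eye-corner distances wide and
    # three eye-to-mouth vertical distances tall.
    inner = {
        "center": (eye_center + mouth_center) / 2.0,  # assumed centering
        "width": 5.0 * eye_gap,
        "height": 3.0 * eye_to_mouth,
    }
    # Claim 15: outer ellipse is 25% larger than the inner ellipse.
    outer = {
        "center": inner["center"],
        "width": outer_scale * inner["width"],
        "height": outer_scale * inner["height"],
    }
    return inner, outer
```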
RELATED APPLICATIONS
[0001] This application is a divisional of copending U.S. application Ser. No. 09/754,938, filed Jan. 4, 2001, which claims the benefit of U.S. Provisional Application No. 60/188,603, filed Mar. 9, 2000.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60188603 | Mar 2000 | US |

Divisions (1)

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09754938 | Jan 2001 | US |
| Child | 10846086 | May 2004 | US |