Not Applicable
Not Applicable
A portion of the material in this patent document is subject to copyright protection under the copyright laws of the United States and of other countries. The owner of the copyright rights has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the United States Patent and Trademark Office publicly available file or records, but otherwise reserves all copyright rights whatsoever. The copyright owner does not hereby waive any of its rights to have this patent document maintained in secrecy, including without limitation its rights pursuant to 37 C.F.R. § 1.14.
A portion of the material in this patent document is also subject to protection under the maskwork registration laws of the United States and of other countries. The owner of the maskwork rights has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the United States Patent and Trademark Office publicly available file or records, but otherwise reserves all maskwork rights whatsoever. The maskwork owner does not hereby waive any of its rights to have this patent document maintained in secrecy, including without limitation its rights pursuant to 37 C.F.R. § 1.14.
1. Field of the Invention
This invention pertains generally to skin tone detection systems and methods, and more particularly to skin tone detection systems and methods for digital video images in YCbCr space.
2. Description of Related Art
In order to minimize bandwidth for transmission and the amount of storage space in video applications, compression techniques are utilized to reduce the size of the video. These compression techniques generally have an adverse effect on the quality of the video image, such as texture loss and other artifacts.
Because facial regions receive a higher degree of attention than other objects in the image, one way to increase the quality of the image is to concentrate processing procedures on face regions.
There are various known approaches for detecting face regions in images. These include feature-based, motion-based and color-based approaches. Feature-based approaches try to identify a face region by detecting certain facial features, such as the eyes, the nose and the mouth. Motion-based approaches operate on the principle that a moving region in an image is likely to be a face. A color-based approach looks for skin-colored regions in an image.
Many of the known face detection approaches are computationally expensive, and are thus not ideal for real time applications such as digital video coding. The preferred approach for such applications is a color-based approach.
Color-based, or skin tone, face detection involves extracting the regions of an image whose color corresponds to skin color. The skin tone detection system should be able to detect a range of skin tones, such as African, Asian and European, and should also be able to detect skin color irrespective of the lighting conditions under which the image is captured.
Accordingly, many known color-based face detection methods involve complex parametric modeling of the human skin tone. Such modeling requires a heavy computation cost, and while such computation costs may be acceptable in still image editing of JPEG or other still image files, they are prohibitive in current video standards (e.g., MPEG or H.263).
Accordingly, it is an object of the present invention to provide an improved skin tone detection algorithm for video images.
It is a further object of the present invention to provide a skin tone detection algorithm that has minimal computational costs and that improves the visual quality of video images by identifying human skin tone regions for further processing by a video encoder.
An aspect of the invention is a method for detecting human skin tone in a video signal, the video signal comprising a plurality of frames each having image data. The method comprises separating the image data for each frame into sets of data, the image data comprising a plurality of pixels each having a plurality of color components, averaging the image data in each data set to generate mean values for each color component in the data set, comparing the mean values to a stored color profile, the color profile correlating to human skin tone, and identifying data sets falling within the stored color profile.
In a preferred mode, generating a mean value for each component comprises generating Y, Cb, and Cr components in YCbCr color space. However, the method may be performed in a variety of color spaces known in the art.
Generally, the image data is further subdivided into subsets, wherein each subset is averaged to generate mean values for each color component in the data subset. Preferably, the image data is divided into four subsets. In YCbCr space, each subset for the Cb and Cr components preferably comprises a block of 4×4 pixels, and each subset for the Y component preferably comprises a block of 8×8 pixels.
According to a preferred embodiment, the mean values of each subset are compared to the stored color profile. Each subset may then be assigned a voting number identifying whether the subset falls within the stored color profile. The subsets may then be summed to form a data set voting number. The data set voting number is compared to a threshold number, and the data set is assigned to a skin tone candidate list when the data set voting number is greater than the threshold number.
In a preferred mode, the stored color profile comprises one or more color component ranges, wherein the color component ranges are acquired from a plurality of training sequences. Generally, each color component range comprises a maximum and minimum value for each of the Y, Cb, and Cr components.
In another aspect of the invention, the identified data sets are assigned to a skin tone candidate list. The candidate list may then be subjected to a number of additional refinement or processing steps, such as pixel-based refinement, removal of data sets in the candidate list that do not border any other data sets in the list, and supplementing the candidate list with data sets surrounded by data sets assigned to the candidate list. The identified data sets are then typically subjected to further processing by a video encoder.
In another aspect of the invention, a method for detecting human skin tone in a video signal comprises acquiring image data from a plurality of training sequences, generating a color profile from the plurality of training sequences, the color profile comprising one or more sets of component ranges indicative of human skin tone, comparing image data from each frame to the color profile, and identifying data sets from each frame that fall within the skin tone component ranges.
The comparing step generally comprises separating the image data for each frame into sets of data, averaging the image data in each data set to generate mean values for each color component in the data set, and comparing the mean values of the data set against the color profile.
In yet another aspect of the invention, an apparatus for detecting human skin tone in a video signal comprises means for partitioning the image data from the video signal into a plurality of macroblocks, means for averaging at least a portion of the data in each macroblock to generate mean values for each color component, and means for comparing the mean values to a stored color profile to identify macroblocks falling within the color profile, the color profile correlating to human skin tone.
Further aspects of the invention will be brought out in the following portions of the specification, wherein the detailed description is for the purpose of fully disclosing preferred embodiments of the invention without placing limitations thereon.
The invention will be more fully understood by reference to the following drawings which are for illustrative purposes only:
Referring more specifically to the drawings, for illustrative purposes the present invention is embodied in the systems and methods generally shown in
With the advent of computer graphics and video signal transmission standards, a number of “color spaces” have evolved to represent the color spectrum. For example, RGB, YCbCr, HSI, YIQ, YES, YUV, etc. are all different standards developed to model the color spectrum. Because YCbCr has been widely adopted for use in digital video, the following description will be directed to skin tone recognition techniques in YCbCr space. However, it is appreciated that the following description may be applied for any color space commonly known in the art.
Referring now to
To allow for skin tone detection in real-time applications without sacrificing detection quality, the present invention uses a simple skin-tone profile table to represent the Y, Cb and Cr values for human skin tone. As illustrated in
Referring now to
In each entry of the skin tone profile table, six elements are included to specify the range of pixel values in YCbCr space. Each entry has a Max and Min for each of the Y, Cb and Cr components. Once the entries are compiled, the profile table may be stored for later lookup to compare actual video component values to the profile table. The following is an exemplary skin tone profile table having 8 entries:
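The range check against such a profile table can be sketched as follows. This is a minimal illustration only: the range values below are hypothetical placeholders, not the entries of the actual table, and the function name `skin_tone_range` mirrors the SkinToneRange function referenced later in this description.

```python
# Hypothetical skin tone profile table: each entry holds (Min, Max)
# ranges for the Y, Cb and Cr components. The values below are
# illustrative placeholders, not the actual table entries.
SKIN_TONE_PROFILE = [
    {"Y": (80, 230), "Cb": (100, 125), "Cr": (135, 165)},
    {"Y": (40, 80),  "Cb": (110, 130), "Cr": (130, 155)},
    # ... further entries covering additional skin tone ranges
]

def skin_tone_range(y, cb, cr, profile=SKIN_TONE_PROFILE):
    """Return True if (y, cb, cr) falls within any profile table entry."""
    for entry in profile:
        if (entry["Y"][0] <= y <= entry["Y"][1]
                and entry["Cb"][0] <= cb <= entry["Cb"][1]
                and entry["Cr"][0] <= cr <= entry["Cr"][1]):
            return True
    return False
```

A pixel matches the profile if it falls within any one of the entries, so the table can cover several disjoint skin tone ranges at once.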
Referring again to
Referring now
Referring again to
If Vote≧2 then MB0 is classified as skin tone, else non-skin tone.
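The subset-averaging and voting scheme above can be sketched as follows, assuming the subdivision described earlier: the 16×16 Y block of a macroblock splits into four 8×8 subsets, and each 8×8 chroma block splits into four 4×4 subsets. The predicate passed as `in_range` stands in for the profile table lookup; all names here are illustrative.

```python
def block_mean(block):
    """Mean of a 2-D list of pixel values."""
    flat = [p for row in block for p in row]
    return sum(flat) / len(flat)

def quadrants(block):
    """Split a 2-D block into its four equal quadrants."""
    h = len(block) // 2
    w = len(block[0]) // 2
    return [
        [row[c:c + w] for row in block[r:r + h]]
        for r in (0, h) for c in (0, w)
    ]

def classify_macroblock(y_mb, cb_mb, cr_mb, in_range):
    """Vote over the four subsets; Vote >= 2 classifies as skin tone.

    y_mb is a 16x16 luma block; cb_mb and cr_mb are 8x8 chroma blocks.
    in_range is a predicate taking (mean_y, mean_cb, mean_cr).
    """
    vote = 0
    for y_q, cb_q, cr_q in zip(quadrants(y_mb),
                               quadrants(cb_mb), quadrants(cr_mb)):
        if in_range(block_mean(y_q), block_mean(cb_q), block_mean(cr_q)):
            vote += 1
    return vote >= 2
```

Because only four means per macroblock are compared against the table, the per-macroblock cost is far lower than a per-pixel classification.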
All of the macroblocks identified as skin tone are placed into the candidate list shown as block 40 in
The skin tone detection algorithm may also be refined to apply extra scrutiny to certain regions of the video frame. Generally, the center area of a picture attracts more attention from the human vision system. To reduce computational cost, a region of interest (ROI) may be considered for extra skin tone detection. For example, the ROI may be shrunk by n/m of the frame height and width, in units of a macroblock, in the horizontal and vertical directions, where n&lt;m, or simply shrunk by one macroblock on all four sides.
Now referring to
One process that may be employed to improve the accuracy of the skin tone detection is pixel-based refinement of the candidate list, shown as block 96. For each skin tone macroblock in the candidate list, every pixel (16×16 pixels from Y, 8×8 pixels from Cb, and 8×8 pixels from Cr) is checked by calling the function SkinToneRange(Y_pixel, Cb_pixel, Cr_pixel).
If a pixel is in a range defined in the profile table, the pixel is called a SkinTonePixel. The number of SkinTonePixels in the macroblock is counted, and if the percentage of SkinTonePixels is larger than the threshold value (e.g., one third of the 256 total pixels in a macroblock), then the macroblock remains in the skin tone candidate list. If the percentage is lower than the threshold, the macroblock is removed from the list.
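This pixel-based refinement can be sketched as follows, assuming 4:2:0 chroma subsampling so that each 2×2 group of luma pixels shares one Cb and one Cr sample. The names and the default threshold of one third are taken from the description above; `in_range` stands in for the SkinToneRange function.

```python
def refine_candidate(y_mb, cb_mb, cr_mb, in_range, threshold=1 / 3):
    """Keep a candidate macroblock only if enough pixels are skin tone.

    y_mb is a 16x16 luma block; cb_mb and cr_mb are 8x8 chroma blocks
    (4:2:0 subsampling). Returns True if the fraction of SkinTonePixels
    exceeds the threshold, i.e. the macroblock stays in the list.
    """
    skin_pixels = 0
    total = 16 * 16
    for i in range(16):
        for j in range(16):
            # Each luma pixel shares the co-located subsampled chroma pair.
            if in_range(y_mb[i][j], cb_mb[i // 2][j // 2], cr_mb[i // 2][j // 2]):
                skin_pixels += 1
    return skin_pixels / total > threshold
```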
One further improvement is to consider the shape (e.g., oval, rectangle) of continuous skin tone macroblocks, shown as block 98. For example, a thin, straight region of skin tone macroblocks is unlikely to be a human face, and thus is removed from the candidate list.
The candidate list may further be refined based on the interrelationships between the macroblocks. For example, a single, isolated skin tone macroblock may be removed from the candidate list, shown as block 99. Because isolated macroblocks are unlikely to be part of a facial feature, the possibility of a false alert is reduced. Generally, the four neighboring macroblocks (top, bottom, left and right) are examined. If none of the neighboring macroblocks are skin tone, the macroblock is removed from the candidate list.
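The isolated-macroblock removal can be sketched as follows, representing the candidate list as a 2-D boolean map indexed by macroblock row and column; this representation is an assumption for illustration.

```python
def remove_isolated(candidates):
    """Remove skin tone macroblocks with no skin tone 4-neighbor.

    candidates is a 2-D list of booleans (True = in the candidate list),
    indexed by macroblock row and column. Returns a new map; the input
    is not modified.
    """
    rows, cols = len(candidates), len(candidates[0])
    result = [row[:] for row in candidates]
    for r in range(rows):
        for c in range(cols):
            if not candidates[r][c]:
                continue
            # Examine the top, bottom, left and right neighbors.
            neighbors = [
                candidates[nr][nc]
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols
            ]
            if not any(neighbors):
                result[r][c] = False
    return result
```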
The final step of detection is to group the skin tone macroblocks together to form a contour, shown as block 100. As shown in the following figure, the center macroblock marked “X” will be classified as a skin tone macroblock even though it is not in the skin tone macroblock candidate list.
The grouping operation is performed for each non-skin tone macroblock. Considering a macroblock X in
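The grouping operation can be sketched as follows on the same 2-D boolean map. The exact neighbor condition is an assumption here, since the patent's figure defines the precise rule; the sketch fills in a non-skin-tone macroblock whose four neighbors are all in the candidate list, matching the “X” example above.

```python
def group_contour(candidates):
    """Fill in non-skin macroblocks enclosed by skin tone neighbors.

    candidates is a 2-D list of booleans indexed by macroblock row and
    column. The all-four-neighbors condition is an illustrative
    assumption about the grouping rule. Returns a new map.
    """
    rows, cols = len(candidates), len(candidates[0])
    result = [row[:] for row in candidates]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if candidates[r][c]:
                continue
            # Classify as skin tone if all four neighbors are skin tone.
            if (candidates[r - 1][c] and candidates[r + 1][c]
                    and candidates[r][c - 1] and candidates[r][c + 1]):
                result[r][c] = True
    return result
```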
Once human skin tone regions have been identified, the encoder can more precisely improve these human skin areas by, for example, decreasing the Qp (quantization) parameter. Because a region with human skin tone attracts more HVS (human vision system) attention in evaluating picture quality, the Qp parameter may be adjusted accordingly. In video compression applications such as home video, human skin tone regions such as faces are the major focus for viewers. This observation led to the development of the proposed algorithm, which improves the viewing experience.
The skin detection algorithm may also be integrated with an MPEG/AVC encoder to improve overall video quality, especially the texture loss problem found in current video encoders. The skin tone detection may be combined with variance analysis for quality improvement. The basic idea is to properly decrease the Qp value in the skin tone macroblocks, with the amount of the Qp decrease depending on the variance of the macroblock. A larger variance calls for a smaller Qp decrease, while a smaller variance calls for a larger Qp decrease, since variance in some sense represents coding complexity.
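The variance-dependent Qp adjustment can be sketched as follows. The variance thresholds and decrease amounts below are illustrative assumptions; the patent states only that a larger variance implies a smaller Qp decrease.

```python
def adjust_qp(base_qp, variance, low_var=64, high_var=1024,
              max_decrease=4, min_decrease=1):
    """Decrease Qp for a skin tone macroblock, inversely with variance.

    All thresholds and decrease amounts are hypothetical tuning values.
    Low-variance (smooth) skin regions get the largest Qp decrease;
    high-variance (complex) regions get the smallest.
    """
    if variance <= low_var:
        decrease = max_decrease
    elif variance >= high_var:
        decrease = min_decrease
    else:
        # Interpolate linearly between the two decrease amounts.
        frac = (variance - low_var) / (high_var - low_var)
        decrease = round(max_decrease - frac * (max_decrease - min_decrease))
    return max(base_qp - decrease, 1)
```

Spending the extra bits on smooth skin areas counters the texture loss that coarse quantization would otherwise cause in exactly the regions viewers watch most closely.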
Experimental Results
The results of the Skin Tone Detection Algorithm are shown in the following two tests. In the first test sequence illustrated in
In the second test sequence illustrated in
Although the description above contains many details, these should not be construed as limiting the scope of the invention but as merely providing illustrations of some of the presently preferred embodiments of this invention. Therefore, it will be appreciated that the scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural, chemical, and functional equivalents to the elements of the above-described preferred embodiment that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present invention, for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for.”
This application claims priority from U.S. provisional application Ser. No. 60/554,532 filed on Mar. 18, 2004, incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
60554532 | Mar 2004 | US