Automatic Red-Eye Object Classification In Digital Photographic Images

Information

  • Patent Application
  • Publication Number
    20110080616
  • Date Filed
    October 07, 2009
  • Date Published
    April 07, 2011
Abstract
Automatic red-eye object classification in digital photographic images. In a first example embodiment, a method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, a candidate red-eye object in a digital photographic image is selected. Next, RGB pixels of the candidate red-eye object are converted into YUV pixels. Then, the YUV pixels satisfying a constraint that is a function of the YUV pixels are summed. Next, the sum is determined to be greater than or equal to a scaled version of the total number of YUV pixels in the candidate red-eye object. Finally, the candidate red-eye object is transformed into a true red-eye object.
Description
THE FIELD OF THE INVENTION

Embodiments of the present invention relate to digital image processing and pattern recognition. More specifically, example embodiments of the present invention relate to methods for automatic red-eye object classification in digital photographic images.


BACKGROUND

Red-eye detection and correction technologies are used in printers, digital cameras, photo viewers, image editing software and related imaging devices and applications to localize and correct the red-eye effects in digital photographic images that are often captured when using a flash. Though there has been a great deal of progress in red-eye detection and correction in the last few years, many problems remain unsolved. For example, red-eye detection and correction must deal with varying illumination, low image quality and resolution, eye size and face orientation variations, and background changes in complex real-life scenes.


Typically, early stages of a red-eye detection pipeline have to distinguish between true red-eye objects and a number of incorrectly detected non-red-eye objects, also known as false red-eye objects. False red-eye objects are particularly prevalent in complex visual scenes. The number of false red-eye objects can be reduced based on the evaluation of the objects' color, structural, and geometric characteristics. Unfortunately, many real-world patterns exhibit color and structural characteristics similar to those of true red-eye objects, thus resulting in a high number of false red-eye objects even at later stages of the detection pipeline.


SUMMARY

In general, example embodiments relate to methods for automatic red-eye object classification in digital photographic images. Some example embodiments employ various object and image characteristics such as luminance, chrominance, contrast, smoothness, binary patterns, and feature spatial distributions to target different types of false red-eye objects, thus resulting in improved performance and efficiency of the detection pipeline.


In a first example embodiment, a method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, a candidate red-eye object in a digital photographic image is selected. Next, RGB pixels of the candidate red-eye object are converted into YUV pixels. Then, the YUV pixels satisfying a constraint that is a function of the YUV pixels are summed. Next, the sum is determined to be greater than or equal to a scaled version of the total number of pixels in the candidate red-eye object. Finally, the candidate red-eye object is transformed into a true red-eye object.


In a second example embodiment, another method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, a candidate red-eye object in a digital photographic image is selected. Next, standard deviation values of pixels of the candidate red-eye object for a particular feature signal are calculated. Then, the pixels satisfying a constraint that is a function of the standard deviation are summed. Next, the sum is determined to be less than or equal to a tunable minimum number of signals. Finally, the candidate red-eye object is transformed into a true red-eye object.


In a third example embodiment, yet another method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, an area of a candidate red-eye object is mirrored in order to create a set of blocks neighboring with the candidate red-eye object. Each block has dimensions identical with dimensions of the area of the candidate red-eye object. Next, a set of binarization thresholds is adaptively obtained. Then, the candidate red-eye object and its mirror regions are binarized using one of the thresholds. Next, a ratio is calculated between the number of white pixels and the number of black pixels for each of the blocks. Then, the block ratios are compared with predetermined values in order to determine that an eye binary pattern constraint is satisfied. Finally, the candidate red-eye object is transformed into a true red-eye object.


In a fourth example embodiment, yet another method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, a candidate red-eye object in a digital photographic image is selected. Next, an area of the candidate red-eye object is redefined by placing a new area in the center of the candidate red-eye object or in any other point within a bounding box of the candidate red-eye object. Then, pixels satisfying a feature constraint are labeled in order to determine the number of disjoint objects and the object with maximum dimensions created from pixels satisfying the feature constraint. Next, the number of disjoint objects and the maximum object dimensions are determined to be less than or equal to predetermined values. Finally, the candidate red-eye object is transformed into a true red-eye object.


In a fifth example embodiment, another method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, a candidate red-eye object in a digital photographic image is selected. Next, RGB data of the digital photographic image is converted into a color-ratio-like redness. Then, pixels of the candidate that satisfy a redness constraint are summed. Next, the sum is determined to be less than a threshold. Finally, the candidate red-eye object is transformed into a true red-eye object.


In sixth, seventh, eighth, ninth, and tenth example embodiments, one or more computer-readable media have computer-readable instructions thereon which, when executed by a processor, implement the methods discussed above in connection with the first, second, third, fourth, and fifth example embodiments, respectively.


This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential characteristics of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


Additional features will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS

To further develop the above and other aspects of example embodiments of the invention, a more particular description of these examples will be rendered by reference to specific embodiments thereof which are disclosed in the appended drawings. It is appreciated that these drawings depict only example embodiments of the invention and are therefore not to be considered limiting of its scope. It is also appreciated that the drawings are diagrammatic and schematic representations of example embodiments of the invention, and are not limiting of the present invention. Example embodiments of the invention will be disclosed and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 is a schematic representation of an example printer;



FIG. 2 is a flowchart of an example method for automatic red-eye detection and correction;



FIG. 3 is a flowchart of a first example red-eye object classification method;



FIG. 4 is a flowchart of a second example red-eye object classification method;



FIG. 5 is a flowchart of a third example red-eye object classification method;



FIG. 6 is a flowchart of a fourth example red-eye object classification method;



FIG. 7 is a schematic illustration of a candidate red-eye object and four neighboring blocks surrounding the candidate under consideration;



FIG. 8 is a flowchart of a fifth example red-eye object classification method; and



FIG. 9 is a flowchart of a sixth example red-eye object classification method.





DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS

In the following detailed description of the embodiments, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments of the invention. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical and electrical changes may be made without departing from the scope of the present invention. Moreover, it is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described in one embodiment may be included within other embodiments. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.


In general, example embodiments relate to systems and methods for automatic red-eye object classification in digital photographic images. Example embodiments can be used to automatically identify and remove false red-eye objects from a list of candidate red-eye objects.


I. Example Environment

The example methods and variations thereof disclosed herein can be implemented using computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a processor of a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of computer-executable instructions or data structures and which can be accessed by a processor of a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.


Computer-executable instructions comprise, for example, instructions and data which cause a processor of a general purpose computer or a special purpose computer to perform a certain function or group of functions. Although the subject matter is described herein in language specific to methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific acts described herein. Rather, the specific acts described herein are disclosed as example forms of implementing the claims.


Examples of special purpose computers include image processing apparatuses such as digital cameras (an example of which includes, but is not limited to, the Epson R-D1 digital camera manufactured by Seiko Epson Corporation headquartered in Owa, Suwa, Nagano, Japan), digital document cameras (an example of which includes, but is not limited to, the Epson DC-10s document camera manufactured by Seiko Epson Corporation), digital camcorders, projectors, printers (examples of which include, but are not limited to, the Epson Artisan® 50 Ink Jet Printer, Epson WorkForce 30 and 40 Ink Jet Printers, the Epson Stylus C88+, Photo R280, Photo 1400, Photo R1900, and Photo R2880 Ink Jet Printers, and Epson B-300 and B-500DN Color Business Ink Jet Printers, all manufactured by Seiko Epson Corporation), scanners (examples of which include, but are not limited to, the Epson Perfection® V30, V200, V300, V500, V700, 4490, and V750-M Pro, the Epson Expression® 10000XL, and the Epson GT-S80, GT-1500, GT-2500, GT-15000, GT-20000, and GT-30000, all manufactured by Seiko Epson Corporation), copiers, portable photo viewers (examples of which include, but are not limited to, the Epson P-3000, P-5000, P-6000, and P-7000 portable photo viewers manufactured by Seiko Epson Corporation), or portable movie players, or some combination thereof, such as printer/scanner/copier combinations or “All-in-Ones” (examples of which include, but are not limited to, the Epson Stylus Photo RX580, RX595, or RX680, the Epson Stylus CX4400, CX7400, CX8400, or CX9400Fax, the Epson AcuLaser® CX11NF, and the Epson Artisan® 500, 600, 700, and 800, all manufactured by Seiko Epson Corporation) or digital camera/camcorder combinations.


An image processing apparatus may include automatic red-eye detection and correction capability, which includes automatic red-eye object classification capabilities, for example, to automatically detect and correct red-eye objects in a digital photographic image. For example, a printer with this automatic red-eye detection and correction capability may include one or more computer-readable media that implement the example methods disclosed herein, or a computer connected to the printer may include one or more computer-readable media that implement the example methods disclosed herein.


While any imaging apparatus could be used, for purposes of illustration an example embodiment will be described in connection with an example printer, a schematic representation of which is denoted at 100 in FIG. 1. Example embodiments of the printer 100 include, but are not limited to, the printer models or printer/scanner/copier “All-in-One” models disclosed herein.


The example printer 100 exchanges data with a host computer 150 by way of an intervening interface 102. Application programs and a printer driver may also be stored for access on the host computer 150 or on the printer 100. When an image retrieve command is received from the application program, for example, the printer driver controls conversion of the command data to a format suitable for the printer 100 and sends the converted command data to the printer 100. The driver also receives and interprets various signals and data from the printer 100, and provides necessary information to the user by way of the host computer 150.


When data is sent by the host computer 150, the interface 102 receives the data and stores it in a receive buffer forming part of a RAM 104. The RAM 104 can be divided into a number of sections, through addressing for example, and allocated as different buffers, such as a receive buffer or a send buffer. For example, digital photographic image data can be sent to the printer 100 from the host computer 150. Digital photographic image data can also be obtained by the printer 100 from the flash EEPROM 106 or the ROM 108. For example, a portable flash EEPROM card can be inserted into the printer 100. This digital photographic image data can then be stored in the receive buffer or the send buffer of the RAM 104.


A processor 110 uses computer-executable instructions stored on the ROM 108 or on the flash EEPROM 106, for example, to perform a certain function or group of functions, such as the example methods for automatic red-eye detection and correction, or for automatic red-eye object classification disclosed herein. Where the data in the receive buffer of the RAM 104 is a digital photographic image, for example, the processor 110 can implement the methodological acts on the digital photographic image of the example methods for automatic red-eye detection and correction disclosed herein to automatically detect and then correct red-eye objects in the digital photographic image. The corrected digital photographic image can then be sent to a display 112 for a preview display thereon, to the printing mechanism(s) 114 for printing thereon, or to the host computer 150 for storage or display thereon, for example. The processor 110 is in electronic communication with the display 112, which can be any type of an electronic display including, but not limited to a visual display such as a liquid crystal display (LCD). The processor 110 is also in electronic communication with the printing mechanism(s) 114, which can be any type of printing mechanism(s) including, but not limited to, ink-jet, laser, LED/LCD, impact, solid ink, and dye sublimation printing mechanism(s).


II. Example Method for Automatic Red-Eye Detection and Correction


FIG. 2 is a flowchart of an example method 200 for automatic red-eye detection and correction. The example method 200 detects and corrects red-eye objects in a digital photographic image. Accordingly, the example method 200 results in the transformation of a digital photographic image with one or more red-eye objects into a corrected digital photographic image with fewer or no red-eye objects. The various acts of the method 200 will now be discussed in turn.


First, at 202, a digital photographic image is selected for red-eye detection and correction. For example, a digital color photograph or a digitized version of a traditional color photograph can be selected for red-eye detection and correction. The digital photographic image may constitute a red-green-blue (RGB) color image x with K1×K2 pixels x(r,s)=[x(r,s)1,x(r,s)2,x(r,s)3] where x(r,s)1, x(r,s)2, and x(r,s)3 denote the R, G, and B component, respectively. The term (r,s) denotes the pixel location with r=1, 2, . . . , K1 and s=1, 2, . . . , K2 indicating the image row and column, respectively.


Next, at 204, candidate red-eye pixels are identified in the digital photographic image. For example, this identification of candidate red-eye pixels can be accomplished by transforming the image x into a binary map d with a resolution of K1×K2 pixels d(r,s) where the value d(r,s)=1 indicates that x(r,s) is a candidate red-eye pixel and d(r,s)=0 denotes that x(r,s) is not a candidate red-eye pixel. The candidate red-eye pixels can be localized using any known method of red-eye detection.


Then, at 206, candidate red-eye pixels are grouped into candidate red-eye objects. For example, this grouping can be accomplished by performing a procedure whereby the map d undergoes object segmentation which groups adjacent pixels with d(r,s)=1. This procedure partitions the map d into N disjoint candidate red-eye objects Oi={(r,s)∈Φi;d(r,s)i=1}, for i=1, 2, . . . , N. Each candidate red-eye object Oi is characterized by Φi, the set of pixel locations (r,s) with d(r,s)i=1, bounded by a Φiy×Φix bounding box with height Φiy and width Φix. Thus, the object Oi can be seen as an image of dimensions Φiy×Φix and can be handled separately from all other objects in {Oi;i=1, 2, . . . , N}.
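For illustration, a minimal sketch of this grouping act in Python follows. It uses scipy.ndimage.label as one possible segmentation routine (the patent does not prescribe a particular one); the helper name group_candidates and the returned bounding-box format are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def group_candidates(d):
    """Partition a K1 x K2 binary map d into N disjoint candidate objects.

    Returns a list of ((rs, ss), bbox) pairs, where (rs, ss) are the index
    arrays playing the role of Phi_i and bbox = (r_min, r_max, s_min, s_max).
    """
    labels, n = ndimage.label(d)            # groups adjacent pixels with d(r,s) = 1
    objects = []
    for i in range(1, n + 1):
        rs, ss = np.nonzero(labels == i)    # pixel locations of object O_i
        bbox = (rs.min(), rs.max(), ss.min(), ss.max())
        objects.append(((rs, ss), bbox))
    return objects
```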


Next, at 208, one or more classification methods are performed on each candidate red-eye object in order to eliminate candidate red-eye objects that are false red-eye objects, confirm the remaining red-eye objects as true red-eye objects, and potentially refine the true red-eye objects. For example, one or more of the classification methods 300, 400, 500, 600, 800, and 900 disclosed herein can be performed on each of the N disjoint candidate red-eye objects Oi={(r,s)∈Φi; d(r,s)i=1}. The performance of the methods 300, 400, 500, 600, 800, and 900 results in the elimination of false red-eye objects.


Finally, at 210, the original digital photographic image is transformed into a red-eye corrected digital photographic image by removing the red-eye effect from each confirmed true red-eye object. The red-eye effect can be removed using any known method of red-eye correction.


It is noted that the example method 200 for automatic red-eye detection and correction transforms electronic data that represents a physical and tangible object. In particular, the example method 200 transforms an electronic data representation of a photographic image that represents a real-world visual scene, such as a photograph of a person or a landscape, for example. During the example method 200, the data is transformed from a first state into a second state. In the first state, the data represents the real-world visual scene with red-eye objects present in the scene. In the second state, the data represents the real-world visual scene with the red-eye object removed from the scene.


Several example methods for classifying red-eye objects will now be disclosed. It is noted that each of these example methods transforms electronic data that represents a physical and tangible object. In particular, each of these example methods transforms an electronic data representation of a list of candidate red-eye objects in a photographic image that represents a real-world visual scene, such as a photograph of one or more people, for example. During these example methods, the data is transformed from a first state into a second state. In the first state, the data represents a list of candidate red-eye objects in the real-world visual scene. In the second state, the data represents a list of true red-eye objects in the real-world visual scene with false red-eye objects removed from the list.


III. First Example Method for Classifying a Candidate Red-Eye Object

With reference now to FIG. 3, a first example method 300 for classifying a candidate red-eye object in a digital photographic image is disclosed. The example method 300 employs luminance and chrominance characteristics of a candidate red-eye object to classify the object. The various acts of the method 300 will now be discussed in turn.


First, at 302, RGB pixels of the candidate red-eye object are converted into YUV pixels. For example, the act 302 can be accomplished using an RGB to YUV conversion defined as follows:










$$
\begin{bmatrix} x'_{(r,s)1} \\ x'_{(r,s)2} \\ x'_{(r,s)3} \end{bmatrix}
=
\begin{bmatrix}
0.299 & 0.587 & 0.114 \\
-0.14713 & -0.28886 & 0.436 \\
0.615 & -0.51499 & -0.10001
\end{bmatrix}
\begin{bmatrix} x_{(r,s)1} \\ x_{(r,s)2} \\ x_{(r,s)3} \end{bmatrix}
\tag{1}
$$







Equation (1) decomposes each RGB pixel x(r,s)=[x(r,s)1,x(r,s)2,x(r,s)3] into one luminance component x′(r,s)1 and two chrominance components x′(r,s)2 and x′(r,s)3. In the resulting vector x′(r,s)=[x′(r,s)1,x′(r,s)2,x′(r,s)3], which is a YUV version of the original RGB pixel x(r,s)=[x(r,s)1,x(r,s)2,x(r,s)3], the component x′(r,s)1 determines the brightness of the color (referred to as luminance), while the components x′(r,s)2 and x′(r,s)3 determine the color itself (referred to as chrominance).
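As a concrete illustration, the conversion of Equation (1) can be sketched in Python/NumPy as below; the helper name rgb_to_yuv is an assumption, and the matrix is the standard BT.601 transform reproduced from Equation (1).

```python
import numpy as np

# Standard BT.601 RGB-to-YUV matrix from Equation (1).
YUV_MATRIX = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(x):
    """Convert an (..., 3) array of RGB pixels x(r,s) into YUV pixels x'(r,s)."""
    return x @ YUV_MATRIX.T   # row vector [R, G, B] -> [Y, U, V]
```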


Next, at 304, the number of pixels satisfying a first constraint that is a function of the YUV pixels is summed. Then, at 306, the first sum is compared to a first scaled version of the total number of pixels in the candidate red-eye object. If the first sum is less than the first scaled version of the total number of pixels in the candidate red-eye object, then, at 308, the candidate red-eye object is classified as a false red-eye object. Alternatively, if the first sum is greater than or equal to the first scaled version of the total number of pixels in the candidate red-eye object, then, at 310, the number of pixels satisfying a second constraint that is a function of the YUV pixels is summed. Then, at 312, the second sum is compared to a second scaled version of the total number of pixels in the candidate red-eye object. If the second sum is greater than the second scaled version of the total number of pixels in the candidate red-eye object, then, at 308, the candidate red-eye object is classified as a false red-eye object. Alternatively, if the second sum is less than or equal to the second scaled version of the total number of pixels in the candidate red-eye object, then, at 314, the candidate red-eye object is classified as a true red-eye object.


The acts 304, 306, 308, 310, 312, and 314 can be performed as follows. To precisely distinguish red-eye effects from other parts of the eye, background colors, and skin tones, contributions of blue and green to the chrominance components should be reduced. One possible way to accomplish this reduction is to subtract x′(r,s)2 from x′(r,s)3. Since the value of x′(r,s)3−x′(r,s)2 becomes high in areas which appear to be red, this value is comparable with the luminance component x′(r,s)1. This comparison can be formally written as x′(r,s)3−x′(r,s)2>δx′(r,s)1, where δ is a tunable parameter. Since most red-eye effects are typically characterized by low values of the green and blue components, another solution may be achieved by comparing x′(r,s)3−x′(r,s)2 with the minimum of x(r,s)2 and x(r,s)3, which forms x′(r,s)3−x′(r,s)2>δ′min(x(r,s)2,x(r,s)3). Similar to the previous solution, δ′ is a tunable parameter. Evaluation of all pixels of the object Oi using either or both of the above solutions can be used to form the following classification rule:










$$
O_i =
\begin{cases}
\{d_{(r,s)i}=0;\ (r,s)\in\Phi_i\} & \text{if } \displaystyle\sum_{(r,s)\in\Phi_i}\Bigl(x'_{(r,s)3}-x'_{(r,s)2}>\delta\, x'_{(r,s)1}\Bigr)<A\,|\Phi_i| \\[6pt]
 & \text{or } \displaystyle\sum_{(r,s)\in\Phi_i}\Bigl(x'_{(r,s)3}-x'_{(r,s)2}>\delta'\min\bigl(x_{(r,s)2},x_{(r,s)3}\bigr)\Bigr)>A'\,|\Phi_i| \\[6pt]
O_i & \text{otherwise}
\end{cases}
\tag{2}
$$







where {d(r,s)i=0;(r,s)∈Φi} denotes a false red-eye object being removed from the label map, |Φi| denotes the total number of pixels of Oi, and A and A′ are tunable parameters. The purpose of the first criterion in Equation (2) is to identify candidate red-eye objects with a low percentage of pixels which exhibit significant redness while having low luminance; the rationale behind this criterion is that red-eyes should have a fairly high number of pixels with the above characteristics. The second criterion is used to identify candidate red-eye objects with a very high percentage of pixels which have significant contributions of red and low contributions of green or blue. This criterion is quite useful for detection pipelines which perform morphological dilation on the map d prior to the labeling operation; the rationale is that dilated red-eye object areas should span over non-red eye portions, meaning that true red-eye objects should not have all their pixels red, as opposed to skin or some background objects. Through its tunable parameters, the classifier in Equation (2) achieves processing flexibility. In some example embodiments, Equation (2) can be performed by setting the tunable parameters to δ=0.2, δ′=0.6, A=0.05, and A′=0.9.
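A minimal sketch of the Equation (2) classifier follows, assuming the rgb_to_yuv helper above and index arrays (rs, ss) for Φi; the function name classify_eq2 is an assumption, while the default parameter values are the ones given in the text.

```python
import numpy as np

def classify_eq2(x_rgb, phi, delta=0.2, delta_p=0.6, A=0.05, A_p=0.9):
    """Apply the Equation (2) rule to one candidate; True keeps the object."""
    rs, ss = phi
    x = x_rgb[rs, ss, :].astype(float)      # RGB pixels of O_i, shape (|Phi_i|, 3)
    xp = rgb_to_yuv(x)                      # YUV pixels per Equation (1)
    n = x.shape[0]                          # |Phi_i|
    sum1 = np.sum(xp[:, 2] - xp[:, 1] > delta * xp[:, 0])
    sum2 = np.sum(xp[:, 2] - xp[:, 1] > delta_p * np.minimum(x[:, 1], x[:, 2]))
    # Removed if too few strongly red pixels, or if nearly all pixels are red.
    return not (sum1 < A * n or sum2 > A_p * n)
```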


IV. Second Example Method for Classifying a Candidate Red-Eye Object

With reference now to FIG. 4, a second example method 400 for classifying a candidate red-eye object in a digital photographic image is disclosed. The example method 400 employs contrast characteristics of a candidate red-eye object to classify the object. Red-eyes are usually characterized by high contrast values of pixels on the boundary of defective red areas. Thus, the contrast characteristics of candidate red-eye objects can be used to reduce false detection. The various acts of the method 400 will now be discussed in turn.


First, at act 402, local contrast values are calculated from a grayscale version of the original color pixels using a 3×3 window ζ={(r+i,s+j); −1≤i≤1, −1≤j≤1} moved over the image lattice. Such grayscale values can be obtained as x′(•,•)1 in Equation (1) and can be used to calculate a 3×3 matrix of contrast values as x′(p,q)1−x′(r,s)1, for (p,q)∈ζ.


Next, at 404, the local contrast value of each pixel is compared to the local contrast constraint Σ(x′(p,q)1−x′(r,s)1>κ)≥C. Tunable parameters κ and C denote, respectively, the local contrast threshold and the minimum number of x′(•,•)1 values in ζ which are readily distinguishable from the x′(r,s)1 located in the center of ζ. Next, at 406, it is determined whether any of the pixel locations (r,s)∈Φi satisfy the local contrast constraint Σ(x′(p,q)1−x′(r,s)1>κ)≥C. If none of the pixel locations (r,s)∈Φi satisfy the local contrast constraint, then, at 408, the candidate red-eye object is classified as a false red-eye object. Alternatively, if one or more of the pixel locations (r,s)∈Φi satisfy the local contrast constraint, then, at 410, the candidate red-eye object is classified as a true red-eye object.


The acts 402, 404, 406, and 408 can be performed according to the following classification rule:










$$
O_i =
\begin{cases}
O_i & \text{if } \displaystyle\sum_{(p,q)\in\zeta}\Bigl(x'_{(p,q)1}-x'_{(r,s)1}>\kappa\Bigr)\ge C \text{ for } d_{(r,s)}=1 \text{ with } (r,s)\in\Phi_i \\[6pt]
\{d_{(r,s)i}=0;\ (r,s)\in\Phi_i\} & \text{otherwise}
\end{cases}
\tag{3}
$$







This classifier removes candidate red-eye objects which consist of pixels with insufficient local contrast. In some example embodiments, Equation (3) can be performed by setting κ=40 and C=1.
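The local-contrast rule of Equation (3) can be sketched as follows, assuming a precomputed float luminance plane gray = x′(•,•)1 and simple border clipping of the 3×3 window (a boundary-handling assumption the patent does not specify).

```python
import numpy as np

def classify_eq3(gray, phi, kappa=40, C=1):
    """Apply the Equation (3) rule; True keeps the candidate red-eye object."""
    K1, K2 = gray.shape
    for r, s in zip(*phi):
        # 3x3 window zeta around (r, s), clipped at the image borders
        win = gray[max(r - 1, 0):min(r + 2, K1), max(s - 1, 0):min(s + 2, K2)]
        # count window pixels brighter than the center by more than kappa
        if np.sum(win - gray[r, s] > kappa) >= C:
            return True                     # sufficient local contrast found
    return False                            # all pixels lack local contrast
```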


V. Third Example Method for Classifying a Candidate Red-Eye Object

With reference now to FIG. 5, a third example method 500 for classifying a candidate red-eye object in a digital photographic image is disclosed. The example method 500 employs smoothness characteristics of a candidate red-eye object to classify the object. It is quite common that a number of candidate red-eye objects emerging at early stages of the red-eye detection pipeline are background or human skin objects which exhibit color characteristics similar to those of true red-eye objects. However, most of those falsely detected objects are fairly smooth, unlike eyes, which typically have rich high-frequency content. Smoothness of candidate objects is thus another feature applicable to red-eye candidate classification. The various acts of the method 500 will now be discussed in turn.


First, at act 502, a standard deviation value of the pixels of a candidate red-eye object is calculated for a particular feature signal. For example, operating on color data x(•,•) of Oi, the standard deviation values are obtainable in a component-wise manner as follows:












$$
\sigma_k^i=\sqrt{\frac{1}{|\Phi_i|}\sum_{(r,s)\in\Phi_i}\bigl(x_{(r,s)k}-\mu_k^i\bigr)^2},\quad \text{where } \mu_k^i=\frac{1}{|\Phi_i|}\sum_{(r,s)\in\Phi_i}x_{(r,s)k}
\tag{4}
$$







Next, at 504, feature signals satisfying a constraint that is a function of the standard deviation are summed. Then, at 506, the sum is compared to a minimum number of feature signals for which an object should exhibit certain smoothness in order to be considered a false red-eye object. If the sum is greater than the minimum number of signals, then, at 508, the candidate red-eye object is classified as a false red-eye object. Alternatively, if the sum is less than or equal to the minimum number of signals, then, at 510, the candidate red-eye object is classified as a true red-eye object.


The acts 504, 506, 508, and 510 can be performed according to the following classification rule:










$$
O_i =
\begin{cases}
\{d_{(r,s)i}=0;\ (r,s)\in\Phi_i\} & \text{if } \Bigl(\displaystyle\sum_k\bigl(\sigma_k^i<T_1^\sigma\bigr)\Bigr)>T_2^\sigma \\[6pt]
O_i & \text{otherwise}
\end{cases}
\tag{5}
$$







where T1σ and T2σ are tunable parameters denoting respectively the maximum standard deviation value for which Oi is considered as a smooth region and the minimum number of signals for which the standard deviation constraint should be satisfied in order to consider Oi as a false red-eye object.


Since color edges may not be present in all color channels, evaluating the amount of smoothness in each color channel separately can increase the processing accuracy. Improved performance characteristics of Equation (5) can be obtained by using a number of feature signals. For example, in addition to the red, green, and blue components x(r,s)1, x(r,s)2, and x(r,s)3, other suitable feature signals may include, but are not limited to, the luminance x(r,s)4=x′(r,s)1, the red-green color difference x(r,s)5=x(r,s)1−x(r,s)2, and the red-blue color difference x(r,s)6=x(r,s)1−x(r,s)3. In some example embodiments, Equation (5) can be performed by setting k=6, T1σ=10, and T2σ=2.
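Putting Equations (4) and (5) together, a sketch with the six feature signals named above might look like this; np.std implements the population standard deviation of Equation (4), the function name is an assumption, and the defaults T1σ=10 and T2σ=2 follow the text.

```python
import numpy as np

def classify_eq5(x_rgb, gray, phi, T1_sigma=10.0, T2_sigma=2):
    """Apply the Equations (4)-(5) smoothness rule; True keeps the object."""
    rs, ss = phi
    R, G, B = (x_rgb[rs, ss, c].astype(float) for c in range(3))
    # six feature signals: R, G, B, luminance, R-G and R-B color differences
    signals = [R, G, B, gray[rs, ss].astype(float), R - G, R - B]
    smooth = sum(np.std(f) < T1_sigma for f in signals)
    return smooth <= T2_sigma    # too many smooth signals -> false red-eye object
```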


VI. Fourth Example Method for Classifying a Candidate Red-Eye Object

With reference now to FIG. 6, a fourth example method 600 for classifying a candidate red-eye object in a digital photographic image is disclosed. Since images are highly nonstationary due to the presence of edges and details, simplifying the image contents can reveal valuable information which is not apparent when inspecting the original image data, which can often have a noisy appearance. The level of simplification can vary depending on application constraints and user needs. The example method 600 therefore aims to simplify the original data by transforming the full-color image into its binary version. Other image simplification solutions are also applicable within the proposed framework.


First, at 602, an area of a candidate red-eye object is mirrored in order to create a set of blocks neighboring with the candidate red-eye object, where these blocks have dimensions identical with dimensions of the candidate red-eye object's area. Then, at 604, a set of binarization thresholds is adaptively obtained. Next, at 606, one of the set of thresholds is chosen to binarize the red-eye object and its mirror regions, and then the chosen threshold is used to calculate a ratio between the number of white pixels and the number of black pixels for each of the generated blocks. Next, at 608, these block ratios are compared with predetermined values in order to decide whether the eye binary pattern constraint is satisfied. If the candidate red-eye object does not satisfy the eye binary pattern constraint for the chosen threshold value, then, at 610, it is determined whether all of the thresholds in the set have been chosen. If not, the method 600 returns to acts 606 and 608 where a new threshold is chosen. If so, at 612, the candidate red-eye object is classified as a false red-eye object. Alternatively, if the candidate red-eye object does satisfy the eye binary pattern constraint for the chosen threshold value, then, at 614, the candidate red-eye object is classified as a true red-eye object.


An example embodiment of the acts 602, 604, 606, 608, and 610 of the method 600 will now be disclosed. The method 600 takes a Φiy×Φix bounding box of the object Oi to adaptively calculate the threshold used to produce a binary image b with pixels b(•,•) from a 3Φiy×3Φix rectangle 700 centered in the center of Φiy×Φix, as disclosed in FIG. 7. One possible way is to calculate an average μΦi of grayscale versions x′(•,•)1 of pixels x(•,•) in Φiy×Φix and use this average value to limit the range of the binarization threshold Tb, for example, as Tb∈[μΦi−40, μΦi+40]. However, other operators and signals can also be employed. This threshold Tb is used to determine the values of pixels in b as b(•,•)=1 for x′(•,•)1>Tb and b(•,•)=0 for x′(•,•)1≤Tb. The binary image b is further divided into nine non-overlapping blocks, each of size Φiy×Φix pixels, as disclosed in FIG. 7. Since b is centered in the center of the original bounding box Φiy×Φix of Oi, the statistics of the eight blocks surrounding Φiy×Φix can be compared with the statistics of the center block BC in order to determine whether Oi matches a red-eye binary pattern.


To simplify calculations, only four neighboring blocks are considered in this embodiment, although other configurations are also possible. For each of BL, BR, BU, BD, and BC shown in FIG. 7, the binary block statistics are determined as follows:










$$
\beta_g=\frac{\displaystyle\sum_{(r,s)\in B_g} b_{(r,s)}}{\Phi_i^y\,\Phi_i^x-\displaystyle\sum_{(r,s)\in B_g} b_{(r,s)}}\quad \text{for } g\in\{L,R,U,D,C\}
\tag{6}
$$







where βg denotes the ratio between the number of white pixels (b(•,•)=1) and the number of black pixels (b(•,•)=0) in the block Bg. Since pupil regions are much darker than surrounding regions, BC is expected to contain a fairly high percentage of black pixels (i.e., a low βC value) while BL, BR, BU, and BD are expected to contain a fairly high percentage of white pixels (i.e., high βL, βR, βU, βD values). Based on these specific eye pattern characteristics, the method 600 can be performed according to the following classification rule:










$$
O_i =
\begin{cases}
O_i & \text{if } \bigl(\beta_C>T_C \text{ and } \beta_L<T_S \text{ and } \beta_R<T_S\bigr) \\[2pt]
 & \text{or } \bigl(\beta_C>T_C \text{ and } \beta_U<T_S \text{ and } \beta_D<T_S\bigr) \\[6pt]
\{d_{(r,s)i}=0;\ (r,s)\in\Phi_i\} & \text{otherwise}
\end{cases}
\tag{7}
$$







where TC and TS are tunable thresholds for the central block and its neighbors, respectively. In some example embodiments, Equation (7) can be performed by setting TC=0.002 and TS=0.3.


It should be noted that the performance of this embodiment of the method 600 also greatly depends on the employed binarization threshold Tb. Since any error in the location and size of Φiy×Φix will affect the reliability of Equation (7), a number of Tb values can be chosen prior to calculating the binary statistics in Equation (6) and making the decision in Equation (7) on whether Oi is a true or false red-eye object. These extra calculations have a fairly small impact on the overall computational efficiency of the classification cascade. A candidate red-eye object is considered a true red-eye object if the classification rule in Equation (7) is satisfied for any allowed Tb value.
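For illustration, the binarization, the block statistics of Equation (6), and the rule of Equation (7) as reproduced above can be sketched together as below; the nine-value sampling of the Tb range and the assumption that the 3Φiy×3Φix rectangle lies fully inside the image are simplifications not fixed by the patent.

```python
import numpy as np

def classify_eq7(gray, r0, s0, h, w, TC=0.002, TS=0.3):
    """Apply the Equations (6)-(7) rule; the object box is gray[r0:r0+h, s0:s0+w]."""
    region = gray[r0 - h:r0 + 2 * h, s0 - w:s0 + 2 * w]   # 3h x 3w rectangle 700
    mu = gray[r0:r0 + h, s0:s0 + w].mean()                # average of the object box

    def beta(block):
        """White/black pixel ratio of Equation (6), guarded against division by zero."""
        white = np.sum(block)
        return white / max(block.size - white, 1)

    for Tb in np.linspace(mu - 40, mu + 40, 9):           # allowed threshold range
        b = (region > Tb).astype(int)                     # binarize: white = 1
        bC = beta(b[h:2 * h, w:2 * w])
        bL, bR = beta(b[h:2 * h, :w]), beta(b[h:2 * h, 2 * w:])
        bU, bD = beta(b[:h, w:2 * w]), beta(b[2 * h:, w:2 * w])
        if (bC > TC and bL < TS and bR < TS) or (bC > TC and bU < TS and bD < TS):
            return True                    # eye binary pattern matched for this Tb
    return False                           # no allowed Tb satisfies Equation (7)
```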


VII. Fifth Example Method for Classifying a Candidate Red-Eye Object

With reference now to FIG. 8, a fifth example method 800 for classifying a candidate red-eye object in a digital photographic image is disclosed. In the design of a cascade of classifiers, it is desirable to minimize overlaps in the feature space among individual cascaded modules in order to build a system that can effectively remove false red-eye objects while preserving desired features. To complement other classification methods presented herein, the classification method 800 performs pixel-based feature extraction and object segmentation in cropped image regions of a unified size.


First, at 802, the area of the candidate red-eye object is redefined by placing the new area in the center of the candidate or in any other point within the bounding box of the candidate or its close neighborhood. Next, at 804, pixels satisfying a feature constraint are labeled in order to determine the number of disjoint objects and the object with maximum dimensions created from pixels satisfying the feature constraint. Then, at 806, the number of disjoint objects and the maximum object dimensions are compared to predetermined values. If the number of disjoint objects and the maximum object dimensions are greater than the predetermined values, then, at 808, the candidate red-eye object is classified as a false red-eye object. Alternatively, at 810, the candidate red-eye object is classified as a true red-eye object.


An example embodiment of the acts 802, 804, 806, 808, and 810 of the method 800 will now be disclosed. The method 800 employs pixel-based feature extraction and object segmentation in cropped image regions of a unified size. These cropped image regions can be centered in the center of Oi. However, alternative solutions can use any point in Φiy×Φix or its close neighborhood as the center of the cropped region. The cropped region can be seen as a Φy×Φx rectangle of RGB pixels x(r,s), for (r,s)∈Φ′i, where Φ′i is the set of all spatial locations within the rectangle. The dimensions of the cropped region can be fixed for all candidate objects; in this case, Φy and Φx are constant for all Oi, i=1, 2, . . . , N. Alternatively, the dimensions of the cropped region can be a function of the dimensions of the original object, for example, a scaled version of the original dimensions. Each pixel in Φ′i undergoes pixel-based color feature extraction in order to produce a Φy×Φx binary map q with pixels q(r,s)=1 if the desired feature has been extracted in (r,s)∈Φ′i or q(r,s)=0 otherwise. For example, one possible implementation of a color feature extraction process is as follows: q(r,s)=(x′(r,s)3−x′(r,s)2>Qx′(r,s)1²/x(r,s)1), where x′(r,s)=[x′(r,s)1,x′(r,s)2,x′(r,s)3] is a YUV version of the original RGB pixel x(r,s)=[x(r,s)1,x(r,s)2,x(r,s)3] and Q is a tunable parameter. The map q is then labeled in order to segment objects O′ji, for j=1, 2, . . . , N′. Of particular interest are N′, denoting the number of objects in q, and O′maxi, denoting the object with the maximum width or height among {O′ji; j=1, 2, . . . , N′}, which can be used to define the following classification rule:










$$
O_i =
\begin{cases}
\{d_{(r,s)i}=0;\ (r,s)\in\Phi_i\} & \text{if } N'>\varepsilon' \text{ or } \bigl(\max(\Phi_j^y,\Phi_j^x)>T_{\max}^\Phi \text{ and } R_j>R_{\max}^\Phi\bigr) \\[6pt]
O_i & \text{otherwise}
\end{cases}
\tag{8}
$$







where Φjy and Φjx are the dimensions of O′maxi, and Rj=max(Φjy,Φjx)/min(Φjy,Φjx) denotes the ratio between the maximum and the minimum dimension of O′maxi. The above classifier rejects all Oi objects whose corresponding binary map q has a noise-like character, that is, N′ exceeding the value of the tunable parameter ε′; this is a property typical of complex background areas. The classifier also rejects those Oi objects which exhibit line-like objects, identifiable using the dimension constraint TmaxΦ and the dimension ratio RmaxΦ; this is a property characteristic of mouth regions.


Experimentation showed that Equation (8) can produce good results for a number of configurations of its parameters. In some example embodiments, Equation (8) can be performed with the cropped region set to a square with width and height Φy=Φx=19, and with parameters ε′=10, TmaxΦ=0.5 max(Φy,Φx), RmaxΦ=2.5, and Q=0.75. In addition, the feature signal can include thresholded edge and/or color detection responses.
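A sketch of the Equation (8) rule follows, operating on the binary feature map q of the cropped region and using scipy.ndimage.label for the object labeling; the function name and default arguments mirror the parameter values given above but are otherwise illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def classify_eq8(q, eps=10, R_max=2.5):
    """Apply the Equation (8) rule to a binary feature map q; True keeps O_i."""
    T_max = 0.5 * max(q.shape)                    # dimension constraint T_max^Phi
    labels, n_prime = ndimage.label(q)            # N' disjoint objects in q
    if n_prime == 0:
        return True                               # nothing to reject on
    dims = []
    for j in range(1, n_prime + 1):
        rs, ss = np.nonzero(labels == j)
        dims.append((np.ptp(rs) + 1, np.ptp(ss) + 1))
    hj, wj = max(dims, key=lambda d: max(d))      # O'_max: largest width or height
    Rj = max(hj, wj) / min(hj, wj)                # max/min dimension ratio
    # reject noise-like maps (N' > eps) and line-like objects (mouth regions)
    return not (n_prime > eps or (max(hj, wj) > T_max and Rj > R_max))
```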


VIII. Sixth Example Method for Classifying a Candidate Red-Eye Object

With reference now to FIG. 9, a sixth example method 900 for classifying a candidate red-eye object in a digital photographic image is disclosed. It is not unusual that some false red-eye objects present in the face area are actually parts of a pair of eyeglasses. This situation often occurs when the color and/or intensity in the glasses frame area is similar to that of an eye. The example method 900 employs color-ratio based contrast characteristics of a candidate red-eye object to classify the candidate red-eye object.


First, at 902, RGB data of a candidate red-eye object is converted into a color-ratio-like redness. For example, the color-ratio based contrast between the red component x(r,s)1 and the green and blue components x(r,s)2 and x(r,s)3 of the RGB color pixel x(r,s) can serve as the basis for the following image data conversion:






$$
y_{(r,s)}=\frac{100\,x_{(r,s)1}^2}{x_{(r,s)2}^2+x_{(r,s)3}^2+1}
\tag{9}
$$


where y(r,s) is a scalar value bounded between [0,255]. Equation (9) allows for the discrimination between eye regions and glasses frame regions, because glasses frame regions are usually characterized by a number of large y(r,s) values.


Next, at 904, the number of pixels of the candidate that satisfy a redness constraint are summed. Then, at 906, the sum is compared with a threshold. If the sum is greater than or equal to the threshold, then, at 908, the candidate red-eye object is classified as a false red-eye object. Alternatively, if the sum is less than the threshold, then, at 910, the candidate red-eye object is classified as a true red-eye object.


The acts 904, 906, 908, and 910 can be performed according to the following classification rule:










$$
O_i =
\begin{cases}
\{d_{(r,s)i}=0;\ (r,s)\in\Phi_i\} & \text{if } \Bigl(\displaystyle\sum_{(r,s)\in\Phi_i}\bigl(y_{(r,s)}>T_1^y\bigr)\Bigr)<T_2^y \\[6pt]
O_i & \text{otherwise}
\end{cases}
\tag{10}
$$







where T1y and T2y are, respectively, tunable parameters used to identify large y(r,s) values and to determine the minimum number of these large y(r,s) values needed for a candidate red-eye object to be considered as an eye. In some example embodiments, Equation (10) can be performed by setting T1y=150 and T2y=min(8,TmaxA/2), where TmaxA is the maximum value of Σ(r,s)(y(r,s)>T1y) for objects in the close neighborhood of the candidate red-eye object Oi.


In order to avoid the rejection of true red-eye objects by Equation (10) in situations with no glasses frames or in complex image scenarios, the candidate objects can be searched for an object satisfying (Σ(r,s)∈Φi(y(r,s)>T1y))>T3y, that is, an object for which the number of large y(r,s) values exceeds the value of another predetermined parameter T3y. If such an object is found, then all its surrounding objects satisfying the constraint in Equation (10) can be considered to be false red-eye objects and can be removed from the list of candidate red-eye objects. In some example embodiments, Equation (10) can be performed by setting T3y=30.
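Finally, Equations (9) and (10) as reproduced above can be sketched together as below. T2y is simplified to the constant 8 here (the text sets it as min(8, TmaxA/2) from neighboring objects), and the clipping of y(r,s) to [0,255] is an assumption made to honor the stated bound; the function name is likewise illustrative.

```python
import numpy as np

def classify_eq10(x_rgb, phi, T1_y=150, T2_y=8):
    """Apply the Equations (9)-(10) redness rule; True keeps the candidate."""
    rs, ss = phi
    R, G, B = (x_rgb[rs, ss, c].astype(float) for c in range(3))
    y = 100.0 * R ** 2 / (G ** 2 + B ** 2 + 1.0)  # color-ratio-like redness (9)
    y = np.minimum(y, 255.0)                      # keep y within the stated [0, 255]
    large = np.sum(y > T1_y)                      # number of large redness values
    return large >= T2_y                          # removed when fewer than T2_y
```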


The example embodiments disclosed herein may be embodied in other specific forms. The example embodiments disclosed herein are to be considered in all respects only as illustrative and not restrictive.

Claims
  • 1. A method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) converting RGB pixels of the candidate red-eye object into YUV pixels;iii) summing the YUV pixels satisfying a constraint that is a function of the YUV pixels;iv) determining that the sum is greater than or equal to a scaled version of the total number of YUV pixels in the candidate red-eye object; andv) transforming the candidate red-eye object into a true red-eye object.
  • 2. The method as recited in claim 1, wherein the act iii) comprises summing the number of pixels that satisfy the following constraint: x′(r,s)3−x′(r,s)2>δx′(r,s)1 where: x is the digital photographic image with RGB pixels x(r,s)=[x(r,s)1,x(r,s)2,x(r,s)3];x′ is a YUV version of x with YUV pixels x′(r,s)=[x′(r,s)1,x′(r,s)2,x′(r,s)3]; andδ is a tunable parameter.
  • 3. The method as recited in claim 2, further comprising the acts of: iii.a) summing the number of pixels satisfying a second constraint that is a function of the YUV pixels; andiv.a) determining that the sum is less than or equal to a second scaled version of the total number of YUV pixels in the candidate red-eye object.
  • 4. The method as recited in claim 3, wherein the act iii.a) comprises summing the number of pixels that satisfy the following constraint: x′(r,s)3−x′(r,s)2>δ′ min(x(r,s)2,x(r,s)3).
  • 5. The method as recited in claim 4, wherein the acts iii), iii.a), iv), iv.a), and v) are accomplished according to the following equation:
  • 6. One or more computer-readable media having computer-readable instructions thereon which, when executed by a processor, implement a method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) converting RGB pixels of the candidate red-eye object into YUV pixels;iii) summing the YUV pixels satisfying a constraint that is a function of the YUV pixels;iv) determining that the sum is greater than or equal to a scaled version of the total number of YUV pixels in the candidate red-eye object; andv) transforming the candidate red-eye object into a true red-eye object.
  • 7. An image processing apparatus comprising: an electronic display; anda processor in electronic communication with the electronic display; andthe one or more computer-readable media as recited in claim 6.
  • 8. The image processing apparatus as recited in claim 7, wherein: the image processing apparatus comprises a printer;the one or more computer-readable media comprises one or more of a RAM, a ROM, and a flash EEPROM; andthe electronic visual display comprises a liquid crystal display.
  • 9. A method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) calculating standard deviation value of pixels of the candidate red-eye object for a particular feature signal;iii) summing the feature signals satisfying a constraint that is a function of the standard deviation;iv) determining that the sum is less than or equal to a tunable minimum number of signals; andv) transforming the candidate red-eye object into a true red-eye object.
  • 10. The method as recited in claim 9, wherein the act ii) comprises calculating the standard deviation value σki of the pixels of the candidate red-eye object for a particular feature signal according to the following equation:
  • 11. The method as recited in claim 10, wherein the feature signal x(r,s)k comprises one of: a red feature signal x(r,s)1;a green feature signal x(r,s)2;a blue feature signal x(r,s)3;a luminance feature signal x(r,s)4=x′(r,s)1;a red-green color difference feature signal x(r,s)5=x(r,s)1−x(r,s)2; ora red-blue color difference feature signal x(r,s)6=x(r,s)1−x(r,s)3.
  • 12. The method as recited in claim 10, wherein the acts iii) and iv) are accomplished according to the following equation:
  • 13. The method as recited in claim 12, wherein: k=6;T1σ=10; andT2σ=2.
  • 14. One or more computer-readable media having computer-readable instructions thereon which, when executed by a processor, implement a method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) calculating standard deviation value of pixels of the candidate red-eye object for a particular feature signal;iii) summing the feature signals satisfying a constraint that is a function of the standard deviation;iv) determining that the sum is less than or equal to a tunable minimum number of signals; andv) transforming the candidate red-eye object into a true red-eye object.
  • 15. An image processing apparatus comprising: an electronic display; anda processor in electronic communication with the electronic display; andthe one or more computer-readable media as recited in claim 14.
  • 16. The image processing apparatus as recited in claim 15, wherein: the image processing apparatus comprises a printer;the one or more computer-readable media comprises one or more of a RAM, a ROM, and a flash EEPROM; andthe electronic visual display comprises a liquid crystal display.
  • 17. A method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) mirroring an area of a candidate red-eye object in order to create a set of blocks neighboring with the candidate red-eye object, where each block has dimensions identical with dimensions of the area of the candidate red-eye object;ii) adaptively obtaining a set of binarization thresholds;iii) binarizing the candidate red-eye object and its mirror regions using one of the thresholds iv) calculating a ratio between the number of white pixels and the number of black pixels for each of the blocks;v) comparing the block ratios with predetermined values in order to determine that an eye binary pattern constraint is satisfied; andvi) transforming the candidate red-eye object into a true red-eye object.
  • 18. The method as recited in claim 17, wherein the act iv) is accomplished according to the following equation:
  • 19. The method as recited in claim 18, wherein the acts v) and vi) are accomplished according to the following equation:
  • 20. The method as recited in claim 19, wherein TC=0.002 and TS=0.3.
  • 21. One or more computer-readable media having computer-readable instructions thereon which, when executed by a processor, implement a method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) mirroring an area of a candidate red-eye object in order to create a set of blocks neighboring with the candidate red-eye object, where each block has dimensions identical with dimensions of the area of the candidate red-eye object;ii) adaptively obtaining a set of binarization thresholds;iii) binarizing the candidate red-eye object and its mirror regions using one of the thresholds iv) calculating a ratio between the number of white pixels and the number of black pixels for each of the blocks;v) comparing the block ratios with predetermined values in order to determine that an eye binary pattern constraint is satisfied; andvi) transforming the candidate red-eye object into a true red-eye object.
  • 22. An image processing apparatus comprising: an electronic display; anda processor in electronic communication with the electronic display; andthe one or more computer-readable media as recited in claim 21.
  • 23. The image processing apparatus as recited in claim 22, wherein: the image processing apparatus comprises a printer;the one or more computer-readable media comprises one or more of a RAM, a ROM, and a flash EEPROM; andthe electronic visual display comprises a liquid crystal display.
  • 24. A method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) redefining an area of the candidate red-eye object by placing a new area in the center of the candidate red-eye object or in any other point within a bounding box of the candidate red-eye object;iii) labeling pixels satisfying a feature constraint in order to determine the number of disjoint objects and the object with maximum dimensions created from pixels satisfying the feature constraint;iv) determining that the number of disjoint objects and the maximum object dimensions are less than or equal to predetermined values; andv) transforming the candidate red-eye object into a true red-eye object.
  • 25. The method as recited in claim 24, wherein the acts ii), iii), iv), and v) are accomplished according to the following equation:
  • 26. The method as recited in claim 25, wherein: N′=10;TmaxΦ=0.5 max(Φy,Φx); andRmaxΦ=2.5.
  • 27. One or more computer-readable media having computer-readable instructions thereon which, when executed by a processor, implement a method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) redefining an area of the candidate red-eye object by placing a new area in the center of the candidate red-eye object or in any other point within a bounding box of the candidate red-eye object;iii) labeling pixels satisfying a feature constraint in order to determine the number of disjoint objects and the object with maximum dimensions created from pixels satisfying the feature constraint;iv) determining that the number of disjoint objects and the maximum object dimensions are less than or equal to predetermined values; andv) transforming the candidate red-eye object into a true red-eye object.
  • 28. An image processing apparatus comprising: an electronic display; anda processor in electronic communication with the electronic display; andthe one or more computer-readable media as recited in claim 27.
  • 29. The image processing apparatus as recited in claim 28, wherein: the image processing apparatus comprises a printer;the one or more computer-readable media comprises one or more of a RAM, a ROM, and a flash EEPROM; andthe electronic visual display comprises a liquid crystal display.
  • 30. A method for classifying and correcting a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) converting RGB data of the digital photographic image into a color-ratio-like redness;iii) summing pixels of the candidate that satisfy a redness constraint;iv) determining that the sum is greater than or equal to a threshold; andv) transforming the candidate red-eye object into a true red-eye object.
  • 31. The method as recited in claim 30, wherein the act ii) is accomplished according to the following equation: y(r,s)=100x(r,s)12/(x(r,s)22+x(r,s)32+1)where: x is the digital image with RGB pixels x(r,s)=[x(r,s)1,x(r,s)2,x(r,s)3]; andy(r,s) is color-ratio-like redness with a scalar value bounded between [0,255].
  • 32. The method as recited in claim 31, wherein the acts iii), iv), and v) are accomplished according to the following equation:
  • 33. The method as recited in claim 32, wherein: T1y=150;T2y=min(8,TmaxA/2), where TmaxA is the maximum value of Σ(r,s)(y(r,s)>T1y) for objects in the close neighborhood of the candidate red-eye object Oi.
  • 34. One or more computer-readable media having computer-readable instructions thereon which, when executed by a processor, implement a method for classifying a candidate red-eye object in a digital image, the method comprising the acts of: i) selecting a candidate red-eye object in a digital image;ii) converting RGB data of the digital image into a color-ratio-like redness;iii) summing pixels of the candidate that satisfy a redness constraint;iv) determining that the sum is greater than or equal to a threshold; andv) transforming the candidate red-eye object into a true red-eye object.
  • 35. An image processing apparatus comprising: an electronic display; anda processor in electronic communication with the electronic display; andthe one or more computer-readable media as recited in claim 34.
  • 36. The image processing apparatus as recited in claim 35, wherein: the image processing apparatus comprises a printer;the one or more computer-readable media comprises one or more of a RAM, a ROM, and a flash EEPROM; andthe electronic visual display comprises a liquid crystal display.