IMAGE PROCESSING APPARATUS FOR DETECTING COORDINATE POSITION OF CHARACTERISTIC PORTION OF FACE

Information

  • Patent Application
    20100202696
  • Publication Number
    20100202696
  • Date Filed
    February 03, 2010
  • Date Published
    August 12, 2010
Abstract
Image processing apparatus and methods are provided for detecting coordinate positions of characteristic portions of a face image. A method includes identifying a face area of a target image, determining initial coordinate positions for characteristic portions, selecting at least one characteristic amount used for correcting the initial coordinate positions or previously generated corrected coordinate positions, determining corrected coordinate positions by using the selected at least one characteristic amount, and detecting the corrected coordinate positions as the coordinate positions of the characteristic portions.
Description
BACKGROUND

1. Technical Field


The present invention relates to image processing apparatus and methods for detecting the coordinate positions of characteristic portions of a face image that is included in a target image.


2. Related Art


An active appearance model technique (abbreviated as “AAM”) has been used to model visual events. In the AAM technique, a face image is modeled, for example, by using a shape model that represents the face shape by the positions of characteristic portions of the face and a texture model that represents the “appearance” in an average face shape. The shape model and the texture model can be created, for example, by performing statistical analysis on the positions (e.g., coordinates) and pixel values (e.g., luminance values) of predetermined characteristic portions (for example, an eye area, a nose tip, and a face line) of a plurality of sample face images. Using the AAM technique, any arbitrary face image can be modeled (synthesized). In addition, the positions of the characteristic portions of faces that are included in an image can be detected (for example, see JP-A-2007-141107).


In the AAM technique, however, it is desirable to improve the efficiency and the processing speed of detecting the positions of the characteristic portions of a face image within a target image.


In addition, it is also desirable to improve efficiency and processing speed whenever image processing is used to detect the positions of the characteristic portions of a face image within a target image.


SUMMARY

The following presents a simplified summary of some embodiments of the invention in order to provide a basic understanding of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some embodiments of the invention in a simplified form as a prelude to the more detailed description located below.


The present invention provides image processing apparatus and methods for detecting coordinate positions of characteristic portions of a face image. Such image processing apparatus and methods may improve the efficiency and the processing speed of detecting coordinate positions of characteristic portions of a face image in an image.


Thus, in a first aspect, an image processing apparatus is provided for detecting coordinate positions of characteristic portions of a face image in a target image. The image processing apparatus includes a face area detecting unit that detects, as a face area, an image area of the target image that includes at least a part of a face image; a setting unit that sets a characteristic point used for detecting a coordinate position of the characteristic portion in the target image based on the face area; a selection unit that selects a characteristic amount used for correcting a setting position of the characteristic point out of a plurality of characteristic amounts that are calculated based on a plurality of sample images including face images of which the coordinate positions of the characteristic portions are known; and a characteristic position detecting unit that corrects the setting position of the characteristic point so as to approach the coordinate position of the characteristic portion in the target image by using the selected characteristic amount and detects the corrected setting position as the coordinate position. Since the setting position of the characteristic point set in the target image is corrected so as to approach the coordinate position of the characteristic portion in the target image by using the characteristic amount selected by the selection unit, correction of the setting position can be performed accurately. Accordingly, the efficiency and the processing speed of a process for detecting the position of the characteristic portion of a face image included in the target image may be improved.


In many embodiments, the selection unit is configured to select the characteristic amount based on detection mode information that includes information on a use or purpose of the detection. In such a case, the setting position of the characteristic point is corrected by using the characteristic amount selected based on the detection mode information. Accordingly, the position of the characteristic portion of a face image included in the target image may be efficiently detected at a high speed.


In many embodiments, an input unit is used for inputting the detection mode information. In such a case, the characteristic amount is selected by using the detection mode information that is input via the input unit. Accordingly, the position of the characteristic portion of a face included in the target image may be efficiently detected at a high speed.


In many embodiments, the characteristic amount is a coefficient of a shape vector that is acquired by performing principal component analysis for a coordinate vector of the characteristic portion included in the plurality of sample images, and the selection unit selects the characteristic amount that is used for correcting the setting position of the characteristic point from among a plurality of the coefficients acquired by performing the principal component analysis. In such a case, the setting position of the characteristic point is corrected by using the coefficient of the selected shape vector. Accordingly, the position of the characteristic portion of a face included in the target image may be accurately detected.


In many embodiments, the characteristic position detecting unit is configured to correct the setting position of the characteristic point by using at least the characteristic amount that represents a face-turn of a face image in the horizontal direction. In such a case, since the setting position of the characteristic point is corrected by using the characteristic amount representing the face-turn of the face image in the horizontal direction, the position of the characteristic portion of a face included in the target image may be efficiently detected at a high speed.


In many embodiments, the characteristic position detecting unit is configured to correct the setting position of the characteristic point by using at least the characteristic amount that represents a face-turn of a face image in the vertical direction. In such a case, since the setting position of the characteristic point is corrected by using the characteristic amount representing the face-turn of the face image in the vertical direction, the position of the characteristic portion of a face included in the target image may be efficiently detected at a high speed.


In many embodiments, the setting unit is configured to set the characteristic point by using at least one parameter relating to a size, an angle, or a position of a face image for the face area. In such a case, the characteristic point may be set accurately by using at least one parameter relating to the size, the angle, or the position of a face image for the face area. Accordingly, the position of the characteristic portion of a face included in the target image may be accurately detected.


In many embodiments, the characteristic position detecting unit is configured to include a generation section that generates an average shape image acquired by transforming a part of the target image based on the characteristic point set in the target image, a calculation section that calculates a differential value between the average shape image and an average face image that is an image generated based on the plurality of sample images, and a correction section that corrects the setting position so as to decrease the differential value based on the calculated differential value. In such a case, the characteristic position detecting unit detects, as the coordinate position, the setting position for which the differential value is less than a predetermined value. The setting position is thus corrected based on the differential value between the average shape image and the average face image, and the coordinate position of the characteristic portion is detected. Accordingly, the position of the characteristic portion of a face included in the target image may be accurately detected.


In many embodiments, the characteristic portions include an eyebrow, an eye, a nose, a mouth, and a face line. In such a case, the coordinate positions of the eyebrow, the eye, the nose, the mouth, and the face line can be accurately detected.


In another aspect, an image processing apparatus is provided that detects coordinate positions of characteristic portions of a face image in a target image. The image processing apparatus includes a processor and a machine readable memory coupled with the processor and comprising instructions that when executed cause the processor to identify a face area of the target image that includes at least a portion of the face image, generate initial coordinate positions for the characteristic portions in the target image based on the face area, select at least one characteristic amount used for correcting the initial coordinate positions or previously generated corrected coordinate positions for the characteristic portions, generate corrected coordinate positions so as to approach the characteristic portions in the target image by using the selected at least one characteristic amount, and detect the corrected coordinate positions as the coordinate positions of the characteristic portions of the face image. The at least one characteristic amount is selected from a plurality of characteristic amounts that are calculated based on a plurality of sample face images having known coordinate positions of the characteristic portions.


In many embodiments, the at least one characteristic amount is selected based on detection mode information that includes information on at least one of a use or a purpose of the detection.


In many embodiments, the memory further comprises instructions that when executed cause the processor to receive input corresponding to the detection mode information.


In many embodiments, the plurality of characteristic amounts comprise coefficients of shape vectors that were generated by performing principal component analysis of coordinate positions of characteristic portions in the plurality of sample images.


In many embodiments, the selected at least one characteristic amount comprises a characteristic amount representing a face-turn of a face image in the horizontal direction.


In many embodiments, the selected at least one characteristic amount comprises a characteristic amount representing a face-turn of a face image in the vertical direction.


In many embodiments, the initial coordinate positions are generated by using at least one parameter relating to a size, an angle, or a position of a face image for the face area.


In many embodiments, the memory further comprises instructions that when executed cause the processor to generate an average shape image from the target image by transforming a part of the target image into a reference average face image shape based on the initial coordinate positions or previously generated corrected coordinate positions, generate a differential image between the average shape image and a reference average face image having the reference average face image shape, generate the corrected coordinate positions so as to decrease a norm of the differential image as compared to a previously generated differential image between a previously generated average shape image and the reference average face image, and detect, as the coordinate positions, the corrected coordinate positions for which the norm of the differential image is less than a predetermined value. In many embodiments, the coordinate positions of the characteristic portions of the reference average face image are based on the plurality of sample images.


In many embodiments, the characteristic portions include an eyebrow, an eye, a nose, a mouth, and a face line.


In many embodiments, the image processing apparatus includes at least one of a printer, a personal computer, a digital camera, or a digital video camera.


In addition, the invention can be implemented in various forms and, for example, can be implemented as a printer, a digital still camera, a personal computer, a digital video camera, and the like. In addition, the invention can be implemented in the forms of an image processing method, an image processing apparatus, a method of detecting the positions of characteristic portions, an apparatus for detecting the positions of characteristic portions, a facial expression determining method, a facial expression determining apparatus, a computer program for implementing the functions of the above-described apparatus or methods, a recording medium having the computer program recorded thereon, a data signal implemented in a carrier wave including the computer program, and the like.


For a fuller understanding of the nature and advantages of the present invention, reference should be made to the ensuing detailed description and accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is described below with reference to the accompanying drawings, wherein like numbers reference like elements.



FIG. 1 is an explanatory diagram schematically showing the configuration of a printer as an image processing apparatus in accordance with many embodiments.



FIG. 2 is a flowchart showing steps of an active appearance model (AAM) setting process, in accordance with many embodiments.



FIG. 3 is an explanatory diagram showing exemplary sample images, in accordance with many embodiments.



FIG. 4 is an explanatory diagram illustrating the setting of characteristic points for a sample image, in accordance with many embodiments.



FIG. 5 is an explanatory diagram showing exemplary coordinates of the characteristic points set in the sample image of FIG. 4, in accordance with many embodiments.



FIGS. 6A and 6B are explanatory diagrams showing an exemplary average shape, in accordance with many embodiments.



FIGS. 7A, 7B, and 7C are explanatory diagrams illustrating relationships between shape vectors, shape parameters, and a face shape, in accordance with many embodiments.



FIG. 8 is an explanatory diagram illustrating a warp method for transforming part of a sample image into an average shape image having the shape of a reference average face image, in accordance with many embodiments.



FIG. 9 is an explanatory diagram showing an example of an average face image, in accordance with many embodiments.



FIG. 10 is a flowchart showing steps of a face characteristic position detecting process in accordance with many embodiments.



FIG. 11 is an explanatory diagram illustrating the detection of a face area in a target image, in accordance with many embodiments.



FIG. 12 is a flowchart showing steps of an initial position setting process for characteristic points, in accordance with many embodiments.



FIGS. 13A and 13B are explanatory diagrams showing an example of temporary setting positions for characteristic points generated by changing the values of global parameters, in accordance with many embodiments.



FIG. 14 is an explanatory diagram showing exemplary average shape images, in accordance with many embodiments.



FIG. 15 is a flowchart showing steps of a process for correcting set positions of characteristic points, in accordance with many embodiments.



FIG. 16 is an explanatory diagram illustrating the selection of a characteristic amount, in accordance with many embodiments.



FIG. 17 is an explanatory diagram showing an exemplary result of a face characteristic position detecting process, in accordance with many embodiments.



FIG. 18 is a flowchart showing steps of an initial disposition determining process for characteristic points, in accordance with many embodiments.



FIG. 19 is an explanatory diagram showing exemplary temporary initial positions of characteristic points generated by changing the values of characteristic amounts, in accordance with many embodiments.





DETAILED DESCRIPTION

In the following description, various embodiments of the present invention are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described.


Image Processing Apparatus

Referring now to the drawings, in which like reference numerals represent like parts throughout the several views, FIG. 1 is an explanatory diagram schematically showing the configuration of a printer 100 as an image processing apparatus, in accordance with many embodiments. The printer 100 can be a color ink jet printer that supports so-called direct printing, in which an image is printed based on image data acquired from a memory card MC or the like. The printer 100 includes a CPU 110 that controls each unit of the printer 100, an internal memory 120 that includes a read-only memory (ROM) and a random-access memory (RAM), an operation unit 140 that can include buttons and/or a touch panel, a display unit 150 that includes a display (e.g., a liquid crystal display), a printing mechanism 160, and a card interface (card I/F) 170. In addition, the printer 100 can include an interface that is used for performing data communication with other devices (for example, a digital still camera or a personal computer). The constituent elements of the printer 100 are interconnected through a communication bus.


The printing mechanism 160 performs a printing operation based on print data. The card interface 170 is an interface that is used for exchanging data with a memory card MC inserted into a card slot 172. In many embodiments, an image file that includes target image data is stored in the memory card MC.


In the internal memory 120, an image processing unit 200, a display processing unit 310, and a print processing unit 320 are stored. The image processing unit 200 is a computer program that, when executed by the CPU 110 under a predetermined operating system, performs a face characteristic position detecting process. The face characteristic position detecting process detects the positions of predetermined characteristic portions (for example, an eye area, a nose tip, and a face line) in a face image and is described below in detail. In addition, various functions are implemented as the CPU 110 also executes the display processing unit 310 and the print processing unit 320.


The image processing unit 200 includes a setting section 210, a characteristic position detecting section 220, a face area detecting section 230, and a selection section 240 as program modules. The characteristic position detecting section 220 includes a generation portion 222, a calculation portion 224, and a correction portion 226. The functions of these units, sections, and portions are described in detail below in the description of the face characteristic position detecting process.


The display processing unit 310 can be a display driver that displays a process menu, a message, an image, or the like on the display unit 150 by controlling the display unit 150. The print processing unit 320 is a computer program that generates print data based on the image data and prints an image based on the print data by controlling the printing mechanism 160. The CPU 110 implements the functions of these units by reading out the above-described programs (the image processing unit 200, the display processing unit 310, and the print processing unit 320) from the internal memory 120 and executing the programs.


In addition, AAM information AMI is stored in the internal memory 120. The AAM information AMI is information that is set in advance in an AAM setting process described below and is referred to in the face characteristic position detecting process described below. The content of the AAM information AMI is described in detail in a description of the AAM setting process provided below.


AAM Setting Process


FIG. 2 is a flowchart showing steps of an AAM setting process in accordance with many embodiments. The AAM setting process creates shape and texture models that are used in an image modeling technique called an active appearance model (AAM). In many embodiments, the AAM setting process involves user input.


First, a plurality of images that include people's faces are prepared as sample images SI (Step S110). FIG. 3 is an explanatory diagram showing exemplary sample images SI. As illustrated in FIG. 3, the sample images SI can be prepared so that the face images vary in attributes such as identity, race, gender, facial expression (anger, laughter, trouble, surprise, or the like), and face direction (front-side turn, upward turn, downward turn, right-side turn, left-side turn, or the like). When the sample images SI are prepared in such a manner, a wide variety of face images can be modeled with high accuracy by using the AAM technique. Accordingly, the face characteristic position detecting process (described below) can be performed with high accuracy for a wide variety of face images. The sample images SI are also referred to herein as face images for learning.


Next, the characteristic points CP are set for each sample face image SI (Step S120). FIG. 4 is an explanatory diagram illustrating the setting of characteristic points CP for a sample image SI. The characteristic points CP are points that represent the positions of predetermined characteristic portions of a face image. In many embodiments, 68 characteristic points CP are located on portions of a face image that include predetermined positions on the eyebrows (for example, end points, four-division points, or the like), predetermined positions on the contour of the eyes, predetermined positions on contours of the bridge of the nose and the wings of the nose, predetermined positions on the contours of upper and lower lips, and predetermined positions on the contour (face line) of the face. In other words, predetermined positions of facial organs (eyebrows, eyes, a nose, and a mouth) and the face line are set as the characteristic portions. As shown in FIG. 4, the characteristic points CP can be set (disposed) to the illustrated 68 positions to represent the characteristic portions of each sample face image SI. The characteristic points can be designated, for example, by an operator of the image processing apparatus. The characteristic points CP correspond to the characteristic portions. Accordingly, the disposition of the characteristic points CP in a face image specifies the shape of the face.


The position of each characteristic point CP in a sample image SI can be specified by coordinates. FIG. 5 is an explanatory diagram showing exemplary coordinates of the characteristic points CP set in the sample image SI. In FIG. 5, SI(j) (j=1, 2, 3 . . . ) represents each sample image SI, and CP(k) (k=0, 1, . . . 67) represents each characteristic point CP. In addition, CP(k)-X represents the X coordinate of the characteristic point CP(k), and CP(k)-Y represents the Y coordinate of the characteristic point CP(k). The coordinates of the characteristic points CP can be set by using a predetermined reference point (for example, a lower left point in an image) in a sample image SI that has been normalized for face size, face tilt (a tilt within the image surface), and position of the face in the X direction and in the Y direction. In addition, one sample image SI may include the images of a plurality of people (for example, two face images are included in the sample image SI(2)), and the persons included in one sample image SI are specified by personal IDs.
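
For illustration only, the coordinate table of FIG. 5 can be held as a simple array so that each sample image contributes one coordinate vector; the array sizes and variable names in the following Python sketch are assumptions, not part of the patent.

    import numpy as np

    # cp[j, k] holds (CP(k)-X, CP(k)-Y) for sample image SI(j); dummy sizes.
    n_samples, n_points = 3, 68
    cp = np.zeros((n_samples, n_points, 2))
    # One 136-dimensional coordinate vector per sample image, the input to
    # the principal component analysis of the shape model setting step.
    coordinate_vectors = cp.reshape(n_samples, -1)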


Subsequently, a shape model of the AAM is set (Step S130). In particular, the face shape s that is specified by the positions of the characteristic points CP is modeled by the following Equation (1) by performing a principal component analysis for a coordinate vector (see FIG. 5) that is configured by the coordinates (X coordinates and Y coordinates) of 68 characteristic points CP in each sample image SI. In addition, the shape model is also called a disposition model of characteristic points CP.









Equation (1)

s = s0 + Σ_{i=1}^{n} p_i s_i  (1)







In the above-described Equation (1), s0 is an average shape. FIGS. 6A and 6B are explanatory diagrams showing an example of the average shape s0. In FIGS. 6A and 6B, the average shape s0 is a model that represents an average face shape that is specified by the average positions (average coordinates) of each characteristic point of the sample images SI. An area (hatched in FIG. 6B) surrounded by straight lines enclosing the characteristic points CP (characteristic points CP corresponding to the face line, the eyebrows, and a region between the eyebrows; see FIG. 4) located on the outer periphery of the average shape s0 is referred to herein as an “average shape area BSA”. The average shape s0 is set such that, as shown in FIG. 6A, a plurality of triangle areas TA having the characteristic points CP as their vertexes divide the average shape area BSA into mesh shapes.


In the above-described Equation (1) representing a shape model, si is a shape vector, and pi is a shape parameter that represents the weight of the shape vector si. The shape vector si is a vector that represents characteristics of the face shape s and can be an eigenvector corresponding to the i-th principal component acquired by performing principal component analysis. As shown in the above-described Equation (1), a face shape s that represents the disposition of the characteristic points CP can be modeled as a sum of an average shape s0 and a linear combination of n shape vectors si. By appropriately setting the shape parameters pi of the shape model, the face shapes s in a wide variety of images can be reproduced.
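
A minimal sketch of the shape model setting step (Step S130), assuming the coordinate vectors of the sample images are stacked in an (N, 136) array as above; the helper names and the contribution-rate cutoff are illustrative, not taken from the patent.

    import numpy as np

    def fit_linear_model(vectors, cum_contribution=0.95):
        mean = vectors.mean(axis=0)                 # average shape s0
        centered = vectors - mean
        # Principal component analysis via SVD; rows of vt are eigenvectors.
        _, sing, vt = np.linalg.svd(centered, full_matrices=False)
        rates = np.cumsum(sing ** 2) / np.sum(sing ** 2)
        n = int(np.searchsorted(rates, cum_contribution)) + 1
        return mean, vt[:n]                         # s0 and n shape vectors s_i

    def synthesize_shape(s0, shape_vectors, p):
        # Equation (1): s = s0 + sum_i p_i * s_i
        return s0 + np.asarray(p) @ shape_vectors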



FIGS. 7A, 7B, and 7C are explanatory diagrams illustrating relationships between the shape vectors si, the shape parameters pi, and the face shape s. As shown in FIG. 7A, to specify a face shape s, n eigenvectors (n=4 in FIG. 7A), chosen based on the accumulated contribution rates in decreasing order of the contribution rates of their principal components, are used as the shape vectors si. Each of the shape vectors si, as denoted by the arrows shown in FIG. 7A, corresponds to the moving direction and the amount of movement of each characteristic point CP. In many embodiments, a first shape vector s1 that corresponds to a first principal component having the highest contribution rate is a vector that is approximately correlated with the horizontal orientation of a face. Accordingly, by changing the value of the shape parameter p1, as illustrated in FIG. 7B, the turn of the face shape s in the horizontal direction is changed. In many embodiments, a second shape vector s2 corresponding to a second principal component that has the second highest contribution rate is a vector that is approximately correlated with the vertical orientation of a face. Accordingly, by changing the value of the shape parameter p2, as illustrated in FIG. 7C, the turn of the face shape s in the vertical direction is changed. In many embodiments, a third shape vector s3 corresponding to a third principal component having the third highest contribution rate is a vector that is approximately correlated with the aspect ratio of a face shape, and a fourth shape vector s4 corresponding to a fourth principal component having the fourth highest contribution rate is a vector that is approximately correlated with the degree of opening of a mouth. As described above, the values of the shape parameters represent characteristics of a face image such as a facial expression and the turn of the face. The shape parameters are also referred to herein as characteristic amounts.


In many embodiments, the average shape s0 and the shape vector si that are set in the shape model setting step (Step S130) are stored in the internal memory 120 as the AAM information AMI (FIG. 1).


Subsequently, a texture model of the AAM is set (Step S140). In many embodiments, the process of setting the texture model begins by applying an image transformation (also referred to herein as “warp W”) to each sample image SI, so that set positions of the characteristic points CP in each of the transformed sample images SI match the characteristic points CP in the average shape s0.



FIG. 8 is an explanatory diagram illustrating a warp W method for transforming a part of a sample image SI into an average shape image having the shape of a reference average face image, in accordance with many embodiments. For each sample image SI, similar to the average shape s0, a plurality of triangle areas TA that divide the area surrounded by the characteristic points CP located on the outer periphery into mesh shapes is set. The warp W is a set of affine transformations, one for each of the plurality of triangle areas TA. In other words, in the warp W, the image of each triangle area TA in a sample image SI is transformed into the image of the corresponding triangle area TA in the average shape s0 by using the affine transformation method. By using the warp W, a transformed sample image SI (also referred to herein as a “sample image SIw”) having the same set positions as those of the characteristic points CP of the average shape s0 is generated.


In addition, each sample image SIw is generated as an image in which the area (hereinafter, also referred to as a “mask area MA”) other than the average shape area BSA is masked, by using the rectangular range including the average shape area BSA (hatched in FIG. 8) as the outer periphery. An image area acquired by adding the average shape area BSA and the mask area MA is referred to herein as a reference area BA. In many embodiments, each sample image SIw is normalized, for example, as an image having the size of 56 pixels×56 pixels.
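
A minimal sketch of the warp W using scikit-image's piecewise affine transform, which, like the warp described above, applies one affine map per triangle; the point arrays and the 56-pixel output size follow the text, while the function name is an assumption.

    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def warp_to_average_shape(image, image_cp, mean_cp, size=56):
        # image_cp: (68, 2) characteristic points CP in the input image;
        # mean_cp: (68, 2) characteristic points CP of the average shape s0.
        tform = PiecewiseAffineTransform()
        # warp() treats the transform as an output-to-input coordinate map,
        # so the map is estimated from average-shape to input coordinates.
        tform.estimate(mean_cp, image_cp)
        # Pixels outside the triangulated area correspond to the mask area MA.
        return warp(image, tform, output_shape=(size, size))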


Next, the texture (also referred to herein as an “appearance”) A(x) of a face is modeled by using the following Equation (2) by performing principal component analysis for a luminance value vector that is configured by luminance values for each pixel group x of each sample image SIw. In many embodiments, the pixel group x is a set of pixels that are located in the average shape area BSA.









Equation (2)

A(x) = A0(x) + Σ_{i=1}^{m} λ_i A_i(x)  (2)







In the above-described Equation (2), A0(x) is an average face image. FIG. 9 is an explanatory diagram showing an example of the average face image A0(x). The average face image A0(x) is an average face of sample images SIw (see FIG. 8) after the warp W. In other words, the average face image A0(x) is an image that is calculated by taking an average of pixel values (luminance values) of pixel groups x located within the average shape area BSA of the sample images SIw. Accordingly, the average face image A0(x) is a model that represents the texture (appearance) of an average face in the average face shape. In addition, the average face image A0(x), similarly to the sample images SIw, includes an average shape area BSA and a mask area MA and, for example, is calculated as an image having the size of 56 pixels×56 pixels.


In the above-described Equation (2) representing a texture model, Ai(x) is a texture vector, and λi is a texture parameter that represents the weight of the texture vector Ai(x). The texture vector Ai(x) is a vector that represents the characteristics of the texture A(x) of a face. In many embodiments, the texture vector Ai(x) is an eigenvector corresponding to the i-th principal component that is acquired by performing principal component analysis. In many embodiments, m eigenvectors, chosen based on the accumulated contribution rates in decreasing order of the contribution rates of their principal components, are used as the texture vectors Ai(x). In many embodiments, the first texture vector A1(x) corresponding to the first principal component having the highest contribution rate is a vector that is approximately correlated with a change in the color of a face (which may be perceived as a difference in gender).


As shown in the above-described Equation (2), the face texture A(x) representing the outer appearance of a face can be modeled as a sum of the average face image A0(x) and a linear combination of m texture vectors Ai(x). By appropriately setting the texture parameter λi in the texture model, the face textures A(x) for a wide variety of images can be reproduced. In addition, in many embodiments, the average face image A0(x) and the texture vector Ai(x) that are set in the texture model setting step (Step S140 in FIG. 2) are stored in the internal memory 120 as the AAM information AMI (FIG. 1).
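
The texture model can be set with the same principal component analysis helper sketched for the shape model, applied to the luminance value vectors of the warped sample images SIw; the short sketch below, with assumed variable names, then evaluates Equation (2) for given texture parameters.

    import numpy as np

    def synthesize_texture(A0, texture_vectors, lambdas):
        # Equation (2): A(x) = A0(x) + sum_i lambda_i * A_i(x), where A0 is
        # the flattened average face image and each row of texture_vectors
        # is a texture vector A_i(x).
        return A0 + np.asarray(lambdas) @ texture_vectors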


By performing the above-described AAM setting process (FIG. 2), a shape model that models a face shape and a texture model that models a face texture are set. By combining the shape model and the texture model that have been set, that is, by transforming the synthesized texture A(x) from the average shape s0 into a target shape (an inverse transformation of the warp W shown in FIG. 8), the shapes and the textures of a wide variety of face images can be reproduced.


Face Characteristic Position Detecting Process


FIG. 10 is a flowchart showing the steps of a face characteristic position detecting process, in accordance with many embodiments. The face characteristic position detecting process detects the positions of characteristic portions of a face image by determining the disposition of the characteristic points CP in the face image included in the target image by using the AAM technique. As described above, in many embodiments, a total of 68 predetermined positions of a person's facial organs (the eyebrows, the eyes, the nose, and the mouth) and the contour of the face are set as the characteristic portions (see FIG. 4) in the AAM setting process (FIG. 2). Accordingly, the disposition of 68 characteristic points CP that represent predetermined positions of the person's facial organs and the contour of the face is determined.


When the disposition of the characteristic points CP in the face image is determined by performing the face characteristic position detecting process, the values of the shape parameter pi and the texture parameter λi for the face image are determined. Accordingly, the result of the face characteristic position detecting process can be used in an expression determination process for detecting a face image having a specific facial expression (for example, a smiling face or a face with closed eyes), a face-turn direction determining process for detecting a face image positioned in a specific direction (for example, a right-side direction or a lower-side direction), a face transformation process for transforming the shape of a face, a correction process for the shade of a face, or the like.


First, the image processing unit 200 (FIG. 1) acquires image data that represents a target image that is a target for the face characteristic position detecting process (Step S210). For example, when the memory card MC is inserted into the card slot 172 of the printer 100, thumbnail images of the image files that are stored in the memory card MC can be displayed on the display unit 150. One or more images to be processed can be selected by a user via the operation unit 140. The image processing unit 200 acquires, from the memory card MC, the image file that includes the image data corresponding to the selected image or images and stores the image file in a predetermined area of the internal memory 120. The acquired image data is referred to herein as target image data, and an image represented by the target image data is referred to herein as a target image OI.


The image processing unit 200 (FIG. 1) acquires detection mode information (Step S220). The detection mode information is information for changing the accuracy or the characteristics of the detection depending on the use or purpose of the detection. In many embodiments, the detection mode information includes information on whether the processing speed or the detection accuracy is more important when the detection is performed and information on whether determination of a facial expression of a face image, face-turn determination for a face image, or transformation of a face image is to be performed based on the result of the detection. The detection mode information can be input via the operation unit 140 by a user based on the display content of the display unit 150. The operation unit 140 is an example of an “input unit.”


The face area detecting section 230 (FIG. 1) detects, as a face area FA, an image area that includes at least a part of a face image included in the target image OI (Step S230). The detection of the face area FA can be performed by using a known face detecting technique, for example, a technique using pattern matching, a technique using extraction of a skin-color area, or a technique using learning data that is set through learning based on sample images (for example, learning using a neural network, learning using boosting, or learning using a support vector machine).
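
A minimal sketch of Step S230 using OpenCV's Haar cascade detector, one of the known pattern-based face detecting techniques mentioned above; the patent does not prescribe this particular library or these parameters.

    import cv2

    def detect_face_areas(image_bgr):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        # Each detected face area FA is an (x, y, width, height) rectangle.
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)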



FIG. 11 is an explanatory diagram illustrating the detection of a face area FA in the target image OI. In FIG. 11, a face area FA detected from the target image OI is shown. In many embodiments, a face detecting technique is used that detects, as the face area FA, a rectangular area that extends approximately from the forehead to the chin in the vertical direction of the face and approximately to the outer sides of both ears in the horizontal direction.


The setting section 210 (FIG. 1) sets the initial positions of the characteristic points CP of the target image OI (Step S240). FIG. 12 is a flowchart showing the steps of an initial position setting process for characteristic points CP, in accordance with many embodiments. In many embodiments, the setting section 210 sets the characteristic points CP to temporary setting positions on the target image OI by variously changing the values of global parameters that represent the size, the tilt, and the positions (the position in the vertical direction and the position in the horizontal direction) of the face image with respect to the face area FA (Step S310).



FIGS. 13A and 13B are explanatory diagrams showing exemplary temporary setting positions of the characteristic points CP generated by changing the values of the global parameters. In FIGS. 13A and 13B, meshes formed by joining the characteristic points CP are shown relative to the target image OI for each of the temporary setting positions of the characteristic points CP. The setting section 210, as shown in FIGS. 13A and 13B, sets the temporary setting position (hereinafter, also referred to as the “reference temporary setting position” or the “initial coordinate positions”) of the characteristic points CP such that the average shape s0 is formed in the center portion of the face area FA (the center image in each of FIGS. 13A and 13B).


The setting section 210 sets a plurality of the temporary setting positions by variously changing the values of the global parameters from the reference temporary setting position. Changing the global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction) corresponds to performing enlargement or reduction, a change in the tilt, and parallel movement of the meshes formed by the characteristic points CP with respect to the target image OI. Accordingly, the setting section 210, as shown in FIG. 13A, sets the temporary setting positions (shown above and below the reference temporary setting position) that form meshes acquired by enlarging or reducing the meshes of the reference temporary setting position by a predetermined scaling factor and the temporary setting positions (shown on the right side and the left side of the reference temporary setting position) that form meshes whose tilt is changed by rotating the meshes of the reference temporary setting position by a predetermined angle in the clockwise or counterclockwise direction. In addition, the setting section 210 also sets the temporary setting positions (shown on the upper left side, the lower left side, the upper right side, and the lower right side of the reference temporary setting position) that form meshes acquired by performing a transformation combining enlargement or reduction and a change in the tilt on the meshes of the reference temporary setting position.


In addition, as shown in FIG. 13B, the setting section 210 sets the temporary setting positions (shown above and below the reference temporary setting position) that form meshes acquired by parallel movement of the meshes of the reference temporary setting position upward or downward by a predetermined amount and the temporary setting positions (shown on the left side and the right side of the reference temporary setting position) that form meshes acquired by parallel movement of the meshes of the reference temporary setting position to the left or right. In addition, the setting section 210 sets the temporary setting positions (shown on the upper left side, the lower left side, the upper right side, and the lower right side of the reference temporary setting position) that form meshes acquired by a transformation combining upward or downward parallel movement and leftward or rightward parallel movement of the meshes of the reference temporary setting position.


In addition, the setting section 210 also sets temporary setting positions acquired by performing the parallel movements shown in FIG. 13B for the meshes of the eight temporary setting positions, other than the reference temporary setting position, shown in FIG. 13A. Accordingly, in many embodiments, a total of 81 types of temporary setting positions are used: the reference temporary setting position and 80 (=3×3×3×3−1) types of temporary setting positions that are set by using combinations of three predetermined levels for each of the four global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction).
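
A minimal sketch of enumerating the 81 temporary setting positions of Step S310: three levels for each of the four global parameters applied to the mesh at the reference temporary setting position. The level values themselves are illustrative assumptions; the patent leaves them unspecified.

    import itertools
    import numpy as np

    def temporary_positions(reference_cp, center):
        scales = (0.95, 1.0, 1.05)      # enlargement / reduction
        tilts = (-10.0, 0.0, 10.0)      # tilt in degrees
        dxs = (-4.0, 0.0, 4.0)          # horizontal parallel movement (pixels)
        dys = (-4.0, 0.0, 4.0)          # vertical parallel movement (pixels)
        positions = []
        for s, t, dx, dy in itertools.product(scales, tilts, dxs, dys):
            rad = np.deg2rad(t)
            rot = np.array([[np.cos(rad), -np.sin(rad)],
                            [np.sin(rad),  np.cos(rad)]])
            cp = (reference_cp - center) @ (s * rot).T + center + np.array([dx, dy])
            positions.append(cp)
        # 3*3*3*3 = 81 meshes, including the all-neutral combination, which
        # is the reference temporary setting position itself.
        return positions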


The generation portion 222 (FIG. 1) generates an average shape image I(W(x;p)) corresponding to each temporary setting position that has been set (Step S320). FIG. 14 is an explanatory diagram showing exemplary average shape images I(W(x;p)). The average shape image I(W(x;p)) is calculated by transforming a portion of the input image so that the disposition of the characteristic points CP in the transformed image is identical to that of the characteristic points CP in the average shape s0.


The transformation for calculating the average shape image I(W(x;p)), like the transformation for calculating the sample images SIw (see FIG. 8), is performed by the warp W, which is a set of affine transformations, one for each triangle area TA. In particular, an average shape area BSA (an area surrounded by the characteristic points CP that are located on the outer periphery) is specified by the characteristic points CP (see FIGS. 13A and 13B) disposed in the target image OI. Then, by performing the affine transformation for each triangle area TA of the average shape area BSA in the target image OI, the average shape image I(W(x;p)) is calculated. In many embodiments, the average shape image I(W(x;p)) has the same shape as the average face image A0(x), includes the average shape area BSA and a mask area MA, and is calculated as an image having the same size as the average face image A0(x).


In addition, as described above, the pixel group x is a set of pixels that are located in the average shape area BSA of the average shape s0. The pixel group in the image (the average shape area BSA of the target image OI) before performing the warp W that corresponds to the pixel group x in the image (the face image having the average shape s0) after performing the warp W is denoted by W(x;p). Since the average shape image is an image that is configured by the luminance values for each pixel group W(x;p) in the average shape area BSA of the target image OI, the average shape image is denoted by I(W(x;p)). In FIG. 14, nine average shape images I(W(x;p)) corresponding to nine temporary setting positions shown in FIG. 13A are shown.


The calculation portion 224 (FIG. 1) calculates a differential image Ie between the average shape image I(W(x;p)) corresponding to each temporary setting position and the average face image A0(x) (Step S330). Since 81 types of the temporary setting positions of the characteristic points CP are set, the calculation portion 224 (FIG. 1) calculates 81 differential images Ie.


The setting section 210 calculates a norm from the pixel values of each differential image Ie and sets the temporary setting position (hereinafter, also referred to as the minimal-norm temporary setting position) corresponding to the differential image Ie having the smallest norm as the initial position of the characteristic points CP of the target image OI (Step S340). The pixel values used for calculating the norm may be either luminance values or RGB values. As described above, the initial position setting process for the characteristic points CP is completed.
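
A minimal sketch of Steps S320 through S340, reusing the warp sketch given earlier for the warp W: each temporary setting position is warped to the average shape, differenced against the average face image A0(x), and the position with the smallest norm is kept.

    import numpy as np

    def choose_initial_position(target, positions, mean_cp, A0):
        norms = []
        for cp in positions:
            avg_shape_img = warp_to_average_shape(target, cp, mean_cp)  # I(W(x;p))
            Ie = avg_shape_img - A0                                     # differential image
            norms.append(np.linalg.norm(Ie))
        # The minimal-norm temporary setting position becomes the initial position.
        return positions[int(np.argmin(norms))]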


When the initial position setting process for the characteristic points CP is completed, the characteristic position detecting section 220 (FIG. 1) corrects the set position of the characteristic points CP in the target image OI (Step S250). FIG. 15 is a flowchart showing the steps of a correction process for the set positions of characteristic points CP, in accordance with many embodiments.


The generation portion 222 (FIG. 1) calculates an average shape image I(W(x;p)) from the target image OI (Step S410). The method of calculating the average shape image I(W(x;p)) is the same as that in Step S320 of the characteristic points CP initial position setting process.


The characteristic position detecting section 220 calculates a differential image Ie between the average shape image I(W(x;p)) and the average face image A0(x) (Step S420). The characteristic position detecting section 220 determines whether the process for correcting the characteristic point CP setting position converges based on the differential image Ie (Step S430). The characteristic position detecting section 220 calculates the norm of the differential image Ie. When the value of the norm is smaller than a threshold value set in advance, the characteristic position detecting section 220 can determine convergence. On the other hand, when the value of the norm is equal to or larger than the threshold value set in advance, the characteristic position detecting section 220 can determine no convergence. Alternatively, the characteristic position detecting section 220 can be configured to determine convergence for a case where the value of the norm of the calculated differential image Ie is smaller than the value calculated in Step S430 at a previous time and to determine no convergence for a case where the value of the norm is equal to or larger than the previous value. Furthermore, the characteristic position detecting section 220 can be configured to determine convergence by combining the determination based on the threshold value and the determination based on the comparison with a previous value. For example, the characteristic position detecting section 220 can be configured to determine convergence only for a case where the value of the calculated norm is smaller than the threshold value and smaller than the previous value and to determine no convergence for other cases.
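
A minimal sketch of the combined convergence test described for Step S430, determining convergence only when the norm of the differential image Ie is both below a threshold and smaller than the previous norm; the threshold value is an illustrative assumption.

    import numpy as np

    def has_converged(Ie, prev_norm, threshold=1.0e3):
        norm = float(np.linalg.norm(Ie))
        converged = norm < threshold and (prev_norm is not None and norm < prev_norm)
        return converged, norm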


When no convergence is determined in the above-described convergence determination in Step S430, the selection section 240 (FIG. 1) selects characteristic amounts based on the detection mode information (Step S440). FIG. 16 is an explanatory diagram illustrating the selection of characteristic amounts. As described below, the selection section 240 (FIG. 1) selects the shape parameters, which are used by the correction portion 226 for correcting the setting positions of the characteristic points CP, based on the detection mode information acquired in Step S220.


In particular, when information indicating that processing speed is more important than detection accuracy is included in the detection mode information, the selection section 240 (FIG. 1), as illustrated in FIG. 16, selects two characteristic amounts: the shape parameter p1 of the first principal component having the highest contribution rate and the shape parameter p2 of the second principal component having the second highest contribution rate. By decreasing the number of shape parameters that are used for correcting the setting positions of the characteristic points CP, the processing speed can be increased. In addition, by using the shape parameters of the first and second principal components, which have high contribution rates, a decrease in the detection accuracy can be suppressed. On the other hand, when information indicating that detection accuracy is more important than processing speed is included in the detection mode information, the selection section 240 (FIG. 1) can select all n shape parameters pi that are set based on the accumulated contribution rate. Accordingly, detection can be performed with high accuracy by using all n shape parameters pi. The characteristic amounts selected by the selection section 240 (FIG. 1) can include the shape parameters of the first and second principal components.


In addition, when a facial expression determination for a face image or a face-turn determination for a face image is included in the detection mode information, the selection section 240 (FIG. 1) can select characteristic amounts having high contributions to a difference in the facial expression or the face-turn. For example, when a facial expression determination for a smiling face is performed, the selection section 240 (FIG. 1) selects the fourth shape parameter p4, which is the coefficient of the fourth shape vector s4 that is approximately correlated with the degree of opening of a mouth, or any other shape parameter relating to the degree of a smiling face, in addition to the shape parameters p1 and p2 having high contribution rates. Accordingly, as a result of the face characteristic position detecting process described below, the degree of a smiling face can be determined based on the values of the selected characteristic amounts. When a facial expression determination for closed eyes is performed, the selection section 240 (FIG. 1) selects a shape parameter relating to the shape of the eye in addition to the shape parameters p1 and p2. Accordingly, it can be determined whether a face image has closed eyes.


In addition, when the face-turn determination is performed, the selection section 240 (FIG. 1) selects at least the two characteristic amounts of the shape parameter p1, which changes the face-turn in the horizontal direction, and the shape parameter p2, which changes the face-turn in the vertical direction. Accordingly, the degree of the face-turn in the horizontal direction and the vertical direction can be determined based on the values of these shape parameters. When transformation of a face image is performed, the selection section 240 (FIG. 1) can transform the face shape sufficiently by selecting the shape parameter p3, which is the coefficient of the third shape vector s3 that is approximately correlated with the aspect ratio of a face shape, or a shape parameter that contributes to the transformation of a face, in addition to the shape parameters p1 and p2 that have high contribution rates.
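
A minimal sketch of the selection of Step S440. The parameter choices follow the examples in the text (p1 and p2 always; p4 added for smiling-face determination; p3 added for face transformation); the mode labels are assumptions, and indices are zero-based, so index 0 denotes p1.

    def select_shape_parameters(mode, n_total):
        if mode == "speed":
            return [0, 1]                  # p1, p2: highest contribution rates
        if mode == "accuracy":
            return list(range(n_total))    # all n shape parameters
        if mode == "smile":
            return [0, 1, 3]               # add p4 (degree of mouth opening)
        if mode == "transform":
            return [0, 1, 2]               # add p3 (aspect ratio of the face)
        if mode == "face_turn":
            return [0, 1]                  # p1 (horizontal), p2 (vertical)
        return [0, 1]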


The correction portion 226 (FIG. 1) calculates an update amount ΔP of the parameters (Step S450). In many embodiments, the update amount ΔP of the parameters represents the amount of change in the values of the four global parameters (the overall size, the tilt, the position in the X direction, and the position in the Y direction) and the m shape parameters that are the characteristic amounts selected by the selection section 240 (FIG. 1) in Step S440. In addition, in many embodiments, right after the characteristic points CP are set to the initial position, the global parameters are set to the values determined in the initial position setting process (FIG. 12) for the characteristic points CP. In such embodiments, since the difference between the initial position of the characteristic points CP and the set position of the characteristic points CP in the average shape s0 at this moment is limited to a difference in the overall size, the tilt, and the positions, all the values of the shape parameters pi of the shape model are zero.


In many embodiments, the update amount ΔP of the parameters is calculated by using the following Equation (3). In other words, the update amount ΔP of the parameters is the product of an update matrix R and the differential image Ie.





Equation (3)





ΔP=R×Ie  (3)


The update matrix R represented in Equation (3) is a matrix of M rows×N columns that is set by learning in advance for calculating the update amount ΔP of the parameters based on the differential image Ie and is stored in the internal memory 120 as the AAM information AMI (FIG. 1). In many embodiments, the number M of rows of the update matrix R is identical to the sum (4+m) of the number (4) of the global parameters and the number (m) of the shape parameters pi selected by the selection section 240 (FIG. 1), and the number N of columns is identical to the number of pixels (56 pixels×56 pixels−the number of pixels included in the mask area MA) within the average shape area BSA of the average face image A0(x) (FIGS. 6A and 6B). In many embodiments, the update matrix R is calculated by using the following Equations (4) and (5).









Equation (4)

R = H^{-1} [∇A0 (∂W/∂p)]^T  (4)


Equation (5)

H = [∇A0 (∂W/∂p)]^T [∇A0 (∂W/∂p)]  (5)







Equations (4) and (5), as well as active appearance models in general, are described in Matthews and Baker, “Active Appearance Models Revisited,” tech. report CMU-RI-TR-03-02, Robotics Institute, Carnegie Mellon University, April 2003, the full disclosure of which is hereby incorporated by reference. In Equations (4) and (5), ∇A0 is the gradient of the average face image A0(x), and ∂W/∂p is the Jacobian of the warp W with respect to the parameters.
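
A minimal sketch of Equations (3) through (5) in the precomputed-update-matrix style of Matthews and Baker. SD is an assumed (N, M) matrix whose columns are the steepest-descent images ∇A0·(∂W/∂p) for the 4+m selected parameters; computing SD itself is outside this sketch.

    import numpy as np

    def update_matrix(SD):
        H = SD.T @ SD                    # Equation (5)
        return np.linalg.inv(H) @ SD.T   # Equation (4): R = H^-1 [grad(A0) dW/dp]^T

    def parameter_update(R, Ie):
        return R @ Ie.ravel()            # Equation (3): delta P = R x Ie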


The correction portion 226 (FIG. 1) updates the parameters (the four global parameters and the selected m shape parameters) based on the calculated update amount ΔP of the parameters (Step S460). Accordingly, the set position of the characteristic points CP in the target image OI is corrected. The correction portion 226 performs the correction such that the norm of the differential image Ie decreases. At this moment, the values of the shape parameters other than the selected m shape parameters are maintained at zero. After the parameters are updated, the average shape image I(W(x;p)) is again calculated from the target image OI for which the set position of the characteristic points CP has been corrected (Step S410), the differential image Ie is calculated (Step S420), and a convergence determination is made based on the differential image Ie (Step S430). In a case where no convergence is determined in the convergence determination performed again, the update amount ΔP of the parameters is calculated based on the differential image Ie (Step S450), and correction of the set position of the characteristic points CP by updating the parameters is performed again (Step S460).


When the process from Step S410 to Step S460 shown in FIG. 15 is repeatedly performed, the positions of the characteristic points CP corresponding to the characteristic portions of the target image OI approach the positions of the actual characteristic portions as a whole, and convergence is eventually determined in the convergence determination (Step S430). When convergence is determined, the face characteristic position detecting process is completed (Step S470). The setting position of the characteristic points CP specified by the values of the global parameters and the shape parameters pi that are set at that moment is determined to be the final setting position of the characteristic points CP in the target image OI.



FIG. 17 is an explanatory diagram showing an exemplary result of the face characteristic position detecting process. In FIG. 17, the setting position of the characteristic points CP that is finally determined for the target image OI is shown. In accordance with the setting position of the characteristic points CP, the positions of the characteristic portions (a person's facial organs (the eyebrows, the eyes, the nose, and the mouth) and predetermined positions in the contour of a face) of the target image OI are specified. Accordingly, the shapes and the positions of the person's facial organs and the contour and the shape of the face of the target image OI can be detected. In addition, when a facial expression determination or a face-turn determination is performed, the facial expression or the face-turn can be determined by comparing the values of the m shape parameters, which are selected at the time when the face characteristic position detecting process is completed, with a threshold value. On the other hand, when a face image is transformed, the shape of the face can be transformed as desired by changing the values of the m shape parameters selected at the time when the face characteristic position detecting process is completed.
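By way of illustration only, a face-turn determination by comparing a selected shape parameter with a threshold value can be sketched as follows; the parameter index, the threshold value, and the labels are illustrative assumptions.

    def determine_face_turn(shape_params, threshold=0.5):
        # shape_params[0] corresponds to p1 (horizontal face-turn);
        # the index and the threshold are placeholders for illustration.
        p1 = shape_params[0]
        if p1 > threshold:
            return "turned right"
        if p1 < -threshold:
            return "turned left"
        return "frontal"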


The print processing unit 320 generates print data for the target image OI for which the shapes and the positions of the facial organs and the contour and the shape of the face have been detected. In particular, the print processing unit 320 generates the print data by performing, for the target image OI, a color conversion process for adjusting pixel values to the ink used by the printer 100, a halftone process for representing the gray scales of pixels after the color conversion process by a distribution of dots, a rasterization process for rearranging the image data for which the halftone process has been performed into the order in which it is to be transmitted to the printer 100, and the like. The printing mechanism 160 prints the target image OI, for which the shapes and positions of the facial organs and the contour and the shape of the face have been detected, based on the print data generated by the print processing unit 320. The print processing unit 320 is not limited to generating print data of the target image OI and can also generate print data of an image for which a predetermined process such as a face transformation or a correction of the shade of a face has been performed based on the detected shapes and positions of the facial organs or the contour and the shape of the face. Likewise, the printing mechanism 160 can print such a processed image based on the print data generated by the print processing unit 320.


As described above, the setting position of the characteristic points CP in the target image OI can be corrected by using the characteristic amount selected by the selection section 240 out of a plurality of characteristic amounts set in advance. Accordingly, the efficiency and the processing speed of the process for detecting the positions of characteristic portions of a face included in the target image OI may be improved.


In particular, the correction portion 226 can calculate the update amount ΔP of the parameters by using the four global parameters (the overall size, the tilt, the position in the X-direction, and the position in the Y-direction) and the m shape parameters that are the characteristic amounts selected by the selection section 240. Accordingly, compared to a case where the update amount ΔP is calculated by using the four global parameters and all n (n>m) shape parameters set based on the accumulated contribution rates, the amount of calculation can be suppressed, and the processing speed of the detection process may thereby be improved. In addition, by using the shape parameters having high contribution rates for calculating the update amount ΔP, a decrease in the detection accuracy is suppressed, whereby the positions of the characteristic portions may be detected efficiently.


In many embodiments, the setting position of the characteristic points CP is corrected by using characteristic amounts that are selected based on the detection mode information input by a user through the operation unit 140. Accordingly, the positions of the characteristic portions of a face included in the target image may be efficiently detected at a high speed. In particular, the characteristic amounts can be selected, based on the detection mode information, depending on the use or purpose of detection requested by the user. Thus, for example, when the processing speed is an important consideration, the processing speed can be improved by decreasing the number of characteristic amounts to be selected. In addition, when a facial expression determination, a face-turn determination, or a transformation is performed for a face image, the determination and/or transformation can be accomplished efficiently by using the characteristic amounts contributing to that use or purpose.
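By way of illustration only, the dependence of the selection on the detection mode information can be sketched as the following Python function; the mode labels and the specific choices mirror the embodiments described above but are otherwise illustrative assumptions.

    def select_shape_parameters(mode, n):
        # Return the indices of the shape parameters to select.
        if mode == "speed":
            return [1, 2]                 # p1 (horizontal turn), p2 (vertical turn)
        if mode == "accuracy":
            return list(range(1, n + 1))  # all n shape parameters
        raise ValueError("unknown detection mode: %s" % mode)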


In many embodiments, the characteristic position detecting section 220 corrects the setting position of the characteristic points CP by using the shape parameter p1 that changes the face-turn of a face in the horizontal direction. Accordingly, the positions of the characteristic portions of a face may be efficiently detected at a high speed. In particular, since the shape parameter p1 is the coefficient of the first shape vector s1 of the first principal component, which has the highest contribution rate, the setting position of the characteristic points CP can be effectively corrected toward the positions of the characteristic portions by changing the value of the shape parameter p1. Accordingly, the number of shape parameters used for correction can be suppressed, whereby the efficiency and the processing speed of the process for detecting the positions of the characteristic portions of a face can be improved. Similarly, since the shape parameter p2 that changes the face-turn of a face in the vertical direction is the coefficient of the second shape vector s2 of the second principal component, which has the second highest contribution rate, the efficiency and the processing speed of the process may likewise be improved.


In many embodiments, the setting section 210 sets the characteristic points CP by using the global parameters. Accordingly, the positions of the characteristic portions of a face included in the target image OI can be efficiently detected at a high speed. In particular, a plurality of temporary setting positions of the characteristic points CP that form various meshes can be prepared in advance by changing the values of the four global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction), and the temporary setting position corresponding to the differential image Ie having the smallest norm is set as the initial position. Accordingly, the initial position of the characteristic points CP in the target image OI can approach the positions of the characteristic portions of the face. Therefore, correction can be easily performed by the correction portion 226 in the correction process for the setting position of the characteristic points CP, whereby the efficiency and the processing speed of the process for detecting the positions of the characteristic portions of a face may be improved.


In many embodiments, the target image OI for which the shapes and the positions of the facial organs and the contour and the shape of the face have been detected can be printed. Accordingly, after a facial expression determination for detecting a face image having a specific facial expression (for example, a smiling face or a face with closed eyes) or a face-turn determination for detecting a face image turned in a specific direction (for example, to the right side or the lower side) is performed, an arbitrary image can be selected and printed based on the result of the determination. In addition, an image for which a predetermined process such as a face transformation or a face-shade correction has been performed, based on the detected shapes and positions of the facial organs and the contour and the shape of the face, can be printed. Accordingly, after a face transformation, a face-shade correction, or the like is performed for a specific face image, the face can be printed.


Alternate Initial Disposition Process


FIG. 18 is a flowchart showing the flow of an initial disposition determining process for the characteristic points CP, in accordance with many embodiments. In the above-described initial position setting process for the characteristic points CP, the setting section 210 determines the initial position from among the temporary setting positions of the characteristic points CP that are set by changing the values of the global parameters. Alternatively, the initial position can be determined by also using the shape parameters selected by the selection section 240. Steps S510 to S540 shown in FIG. 18 are the same as Steps S310 to S340 shown in FIG. 12 of the above-described process, and thus a description thereof is omitted here. In the process shown in FIG. 18, the minimal-norm temporary setting position specified in Step S540 is called a "reference temporary initial position".


The selection section 240 (FIG. 1) selects the characteristic amount based on the detection mode information (Step S550). Since the selection of the characteristic amount is the same as that of the above-described process, a description thereof is omitted here. In the process shown in FIG. 18, the selection section 240 (FIG. 1) selects the shape parameters p1 and p2.


The setting section 210 sets a plurality of temporary initial positions that are acquired by variously changing the values of the shape parameters p1 and p2 with respect to the reference temporary initial position (Step S560). FIG. 19 is an explanatory diagram showing exemplary temporary initial positions of the characteristic points CP generated by changing the values of the characteristic amounts. Changing the values of the shape parameters p1 and p2 corresponds to setting a temporary initial position acquired by horizontally turning the meshes formed by the characteristic points CP as shown in FIG. 7B or a temporary initial position acquired by vertically turning the meshes formed by the characteristic points CP as shown in FIG. 7C. Accordingly, the setting section 210, as shown in FIG. 19, sets the temporary initial positions (shown on the right side and the left side of the reference temporary initial position in the diagram) for forming meshes horizontally turned by a predetermined angle and the temporary initial positions (shown on the upper side and the lower side of the reference temporary initial position in the diagram) for forming meshes vertically turned by a predetermined angle. In addition, the setting section 210 also sets the temporary initial positions (shown on the upper left side, the lower left side, the upper right side, and the lower right side of the reference temporary initial position in the diagram) for forming meshes acquired by combining horizontal and vertical turns of the meshes with respect to the reference temporary initial position.


The setting section 210 sets eight temporary initial positions in addition to the reference temporary initial position shown in FIG. 19. In other words, a total of nine types of temporary initial positions are set: eight types (=3×3−1) set by combinations of three-level values for each of the two characteristic amounts (the vertical turn and the horizontal turn), plus the reference temporary initial position.


The generation portion 222 (FIG. 1) generates an average shape image I(W(x;p)) corresponding to each temporary initial position that has been set. In addition, the calculation portion 214 (FIG. 1) calculates a differential image Ie between the average shape image I(W(x;p)) corresponding to each temporary initial position and the average face image A0(x). The setting section 210 calculates the norm of each differential image Ie and sets the temporary initial position corresponding to the differential image Ie having the smallest norm as the initial position of the characteristic points CP of the target image OI (Step S570). The alternate initial position setting process for the characteristic points CP is thereby completed.
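By way of illustration only, the setting of the nine temporary initial positions and the selection of the minimal-norm candidate (Steps S560 and S570) can be sketched as follows; the three-level offset values and the positions of p1 and p2 in the parameter vector are illustrative assumptions.

    import numpy as np
    from itertools import product

    def choose_initial_position(reference_params, warp, target, A0,
                                levels=(-1.0, 0.0, 1.0)):
        best_params, best_norm = None, np.inf
        for dp1, dp2 in product(levels, repeat=2):  # 3 x 3 = 9 candidates,
            params = reference_params.copy()        # including the reference
            params[4] += dp1   # p1: horizontal turn (follows the 4 globals)
            params[5] += dp2   # p2: vertical turn
            Ie = warp(target, params) - A0          # differential image
            norm = np.linalg.norm(Ie)
            if norm < best_norm:
                best_params, best_norm = params, norm
        return best_params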


In the alternate initial position setting process, the initial position of the characteristic points CP is set by using the global parameters and the characteristic amounts selected by the selection section 240. Accordingly, the efficiency and the processing speed of the process for detecting the positions of characteristic portions of a face included in the target image can be improved. In particular, a plurality of temporary setting positions of the characteristic points CP that form various meshes is prepared in advance by changing the values of the four global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction) and the two characteristic amounts (the vertical turn and the horizontal turn), and the temporary setting position corresponding to the differential image Ie having the smallest norm is set as the initial position. Accordingly, the initial position of the characteristic points CP in the target image OI can be set close to the positions of the characteristic portions of a face. Therefore, correction can be easily performed by the correction portion 226 in the process for correcting the setting position of the characteristic points CP, whereby the efficiency and the processing speed of the process for detecting the positions of the characteristic portions of a face may be improved.


Exemplary Variations

Furthermore, the present invention is not limited to the above-described embodiments or examples. Thus, various modifications can be made without departing from the scope of the basic idea of the present invention. For example, the modifications described below can be made.


In many embodiments, the selection of the characteristic amounts by the selection section 240 is performed after the convergence determination for the differential image Ie (Step S430) performed by the characteristic position detecting section 220. The timing of the selection of the characteristic amounts by the selection section 240 is, however, not particularly limited. For example, the characteristic amounts can be selected before the convergence determination. Likewise, in the process illustrated in FIG. 18, the timing of the selection of the characteristic amounts is not limited to after the setting of the reference temporary initial position by the setting section 210 (Step S540), and the selection section 240 can be configured to select the characteristic amounts at any suitable time.


In many embodiments, the detection mode information includes information on whether processing speed or detection accuracy is more important and information on whether to perform a facial expression determination for a face image, a face-turn determination for a face image, and/or a transformation of a face image based on the result of the detection. The detection mode information, however, can include information other than the above-described information and/or can omit at least a part of it. In many embodiments, the selection section 240 has been described as selecting the two characteristic amounts of the shape parameters p1 and p2 in the case where the detection mode information indicates that processing speed is important. The selection section 240 can also be configured to select other shape parameters. In many embodiments, the selection section 240 has been described as selecting all n shape parameters pi that are set based on the accumulated contribution rates in the case where the detection mode information indicates that detection accuracy is important. The selection section 240 can also be configured not to select some of the shape parameters. In addition, the shape parameters selected by the selection section 240 for the case where the facial expression determination, the face-turn determination, or the transformation of a face image is performed are not limited to the above-described shape parameters and can be set to any suitable shape parameters.


In many embodiments, a total of 80 types (=3×3×3×3−1) of temporary setting positions corresponding to combinations of three-level values for each of the four global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction) are set in advance for the initial position setting process for the characteristic points CP. The types and the number of the parameters and the number of levels of the parameter values used for setting the temporary setting positions can be changed. For example, only some of the four global parameters can be used for setting the temporary setting positions, and the temporary setting positions can be set in accordance with combinations of five-level values for each parameter used.
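By way of illustration only, the enumeration of the 80 temporary setting positions (all 3×3×3×3 combinations of three-level values minus the unchanged base) can be sketched as follows; the offset values are placeholders.

    from itertools import product

    def temporary_setting_positions(base, deltas=(-1, 0, 1)):
        # base = (size, tilt, y_pos, x_pos); yields the 3**4 - 1 = 80
        # offset combinations, excluding the unchanged base itself.
        size, tilt, y, x = base
        for ds, dt, dy, dx in product(deltas, repeat=4):
            if (ds, dt, dy, dx) == (0, 0, 0, 0):
                continue
            yield (size + ds, tilt + dt, y + dy, x + dx)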


In the correction process for the positions of the characteristic points CP, calculating the average shape image I(W(x;p)) from the target image OI matches the setting positions of the characteristic points CP of the transformed target image OI to the setting positions of the characteristic points CP of the average face image A0(x). The dispositions of the characteristic points CP can also be matched to each other by instead performing an image transformation on the average face image A0(x) so as to match the target image OI.


The sample images SI (FIG. 3) are only an example, and the number and the types of images used as the sample images SI can be set to any suitable number and type of images. In addition, the predetermined characteristic portions (see FIG. 4) of a face that are represented in the positions of the characteristic points CP as described above are only an example. Thus, some of the above-described characteristic portions can be omitted, and/or other portions can be used as the characteristic portions.


In many embodiments, the texture model is set by performing principal component analysis on the luminance value vector that is configured by the luminance values of each pixel group x of the sample images SIw. The texture model can also be set by performing principal component analysis on index values (for example, RGB values) other than luminance values.


In addition, the size of the average face image A0(x) is not limited to 56 pixels×56 pixels and can be configured to be different. In addition, the average face image A0(x) need not include the mask area MA (FIG. 8) and can include, for example, only the average shape area BSA. Furthermore, instead of the average face image A0(x), a different reference face image that is set based on statistical analysis of the sample images SI can be used.


In many embodiments, the shape model and the texture model are created using the AAM technique. The shape model and the texture model can also be created by using any other suitable modeling technique (for example, a technique called a Morphable Model or a technique called an Active Blob).


In many embodiments, the target image OI is stored in the memory card MC. The target image OI can also be stored and/or acquired elsewhere; for example, it can be acquired through a network. Likewise, the detection mode information can be acquired elsewhere, for example, through a network.


In addition, the image processing disclosed herein has been described as being performed by the printer 100 as an image processing apparatus. However, a part of or all of the disclosed image processing can be performed by an image processing apparatus of any other suitable type, such as a personal computer, a digital still camera, or a digital video camera. In addition, the printer 100 is not limited to an ink jet printer and can be a printer of any other suitable type, such as a laser printer or a sublimation printer.


A part of an image processing apparatus that is implemented by hardware can be replaced by software. Likewise, a part of an image processing apparatus implemented by software can be replaced by hardware.


In addition, in a case where a part of or the entire function according to an embodiment of the invention is implemented by software (a computer program), the software can be provided in a form stored on a computer-readable recording medium. The "computer-readable recording medium" in an embodiment of the invention is not limited to a portable recording medium such as a flexible disk or a CD-ROM and includes various types of internal memory devices such as a RAM and a ROM as well as external memory devices of a computer such as a hard disk that is fixed to the computer.


Other variations are within the spirit of the present invention. Thus, while the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims.

Claims
  • 1. An image processing apparatus that detects coordinate positions of characteristic portions of a face image in a target image, the image processing apparatus comprising: a processor; and a machine readable memory coupled with the processor and comprising instructions that when executed cause the processor to identify a face area of the target image that includes at least a portion of the face image, generate initial coordinate positions for the characteristic portions in the target image based on the face area, select at least one characteristic amount used for correcting the initial coordinate positions or previously generated corrected coordinate positions for the characteristic portions, the at least one characteristic amount selected from a plurality of characteristic amounts that are calculated based on a plurality of sample face images having known coordinate positions of the characteristic portions, generate corrected coordinate positions so as to approach the characteristic portions in the target image by using the selected at least one characteristic amount, and detect the corrected coordinate positions as the coordinate positions of the characteristic portions of the face image.
  • 2. The image processing apparatus according to claim 1, wherein the at least one characteristic amount is selected based on detection mode information that includes information on at least one of a use or a purpose of the detection.
  • 3. The image processing apparatus according to claim 2, wherein the memory further comprises instructions that when executed cause the processor to receive input corresponding to the detection mode information.
  • 4. The image processing apparatus according to claim 3, wherein the plurality of characteristic amounts comprise coefficients of shape vectors that were generated by performing principal component analysis of coordinate positions of characteristic portions in the plurality of sample images.
  • 5. The image processing apparatus according to claim 4, wherein the selected at least one characteristic amount comprises a characteristic amount representing a face-turn of a face image in the horizontal direction.
  • 6. The image processing apparatus according to claim 5, wherein the selected at least one characteristic amount comprises a characteristic amount representing a face-turn of a face image in the vertical direction.
  • 7. The image processing apparatus according to claim 6, wherein the initial coordinate positions are generated by using at least one of a parameter relating to a size, an angle, or a position of a face image for the face area.
  • 8. The image processing apparatus according to claim 7, wherein the memory further comprises instructions that when executed cause the processor to generate an average shape image from the target image by transforming a part of the target image into a reference average face image shape based on the initial coordinate positions or previously generated corrected coordinate positions; generate a differential image between the average shape image and a reference average face image having the reference average face image shape, the reference average face image having coordinate positions of its characteristic portions based on the plurality of sample images; generate the corrected coordinate positions so as to decrease a norm of the differential image as compared to a previously generated differential image between a previously generated average shape image and the reference average face image; and detect the corrected coordinate positions for which the norm of the differential image is less than a predetermined value as the coordinate positions.
  • 9. The image processing apparatus according to claim 8, wherein the characteristic portions comprise at least one of an eyebrow, an eye, a nose, a mouth, or a face line.
  • 10. The image processing apparatus according to claim 1, wherein the image processing apparatus comprises at least one of a printer, a personal computer, a digital camera, or a digital video camera.
  • 11. An image processing method for detecting coordinate positions of characteristic portions of a face image in a target image, the image processing method using a computer comprising: identifying a face area of the target image that includes at least a portion of the face image; determining initial coordinate positions for characteristic portions in the target image based on the face area; selecting at least one characteristic amount used for correcting the initial coordinate positions or previously generated corrected coordinate positions for the characteristic portions, the at least one characteristic amount selected from a plurality of characteristic amounts that is calculated based on a plurality of sample images having known coordinate positions of the characteristic portions; determining corrected coordinate positions so as to approach the characteristic portions in the target image by using the selected at least one characteristic amount; and detecting the corrected coordinate positions as the coordinate positions of the characteristic portions of the face image.
  • 12. The method according to claim 11, wherein the at least one characteristic amount is selected based on detection mode information that includes information on at least one of a use or a purpose of the detection.
  • 13. The method according to claim 12, further comprising receiving input corresponding to the detection mode information.
  • 14. The method according to claim 11, wherein the plurality of characteristic amounts comprise coefficients of shape vectors that were generated by performing principal component analysis of coordinate positions of characteristic portions in the plurality of sample images.
  • 15. The method according to claim 11, wherein the selected at least one characteristic amount comprises a characteristic amount representing a face-turn of a face image in the horizontal direction.
  • 16. The method according to claim 11, wherein the selected at least one characteristic amount comprises a characteristic amount representing a face-turn of a face image in the vertical direction.
  • 17. The method according to claim 11, wherein the initial coordinate positions are generated by using at least one of a parameter relating to a size, an angle, or a position of a face image for the face area.
  • 18. The image processing method according to claim 11, further comprising: determining an average shape image from the target image by transforming a part of the target image into a reference average face image shape based on the initial coordinate positions or previously generated corrected coordinate positions; determining a differential image between the average shape image and a reference average face image having the reference average face image shape, the reference average face image having coordinate positions of its characteristic portions based on the plurality of sample images; determining the corrected coordinate positions so as to decrease a norm of the differential image as compared to a previously generated differential image between a previously generated average shape image and the reference average face image; and detecting the corrected coordinate positions for which the norm of the differential image is less than a predetermined value as the coordinate positions.
  • 19. A tangible medium containing a computer program implementing the method according to any one of claims 11, 12, 13, 14, 15, 16, 17, or 18.
Priority Claims (1)
Number Date Country Kind
2009-025900 Feb 2009 JP national
CROSS-REFERENCES TO RELATED APPLICATIONS

Priority is claimed under 35 U.S.C. §119 to Japanese Application No. 2009-025900 filed on Feb. 6, 2009 which is hereby incorporated by reference in its entirety. The present application is related to U.S. application Ser. No. ______, entitled “Specifying Position of Characteristic Portion of Face Image,” filed on ______, (Attorney Docket No. 21654P-026100US); U.S. application Ser. No. ______, entitled “Image Processing Apparatus For Detecting Coordinate Positions of Characteristic Portions of Face,” filed on ______, (Attorney Docket No. 21654P-026800US); and U.S. application Ser. No. ______, entitled “Image Processing For Changing Predetermined Texture Characteristic Amount of Face Image,” filed on ______, (Attorney Docket No. 21654P-027000US); the full disclosures of which are incorporated herein by reference.