1. Technical Field
The present invention relates to an image processing apparatus for detecting the coordinate positions of characteristic portions of a face that are included in a target image.
2. Related Art
An active appearance model technique (also abbreviated as “AAM”) has been used to model a visual event. In the AAM technique, a face image is, for example, modeled by using a shape model that represents the face shape by using positions of characteristic portions of the face and a texture model that represents the “appearance” in an average face shape. The shape model and the texture model can be created, for example, by performing statistical analysis on the positions (coordinates) and pixel values (for example, luminance values) of predetermined characteristic portions (for example, an eye area, a nose tip, and a face line) of a plurality of sample face images. Using the AAM technique, any arbitrary face image can be modeled (synthesized). In addition, the positions of the characteristic portions of faces that are included in an image can be detected (for example, see JP-A-2007-141107).
In the AAM technique, however, it is desirable to improve the efficiency and the processing speed of detecting the position of the characteristic portions of faces that are included in an image.
In addition, it may also be desirable to improve efficiency and processing speed whenever image processing is used to detect the position of the characteristic portions of faces included in images.
The following presents a simplified summary of some embodiments of the invention in order to provide a basic understanding of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some embodiments of the invention in a simplified form as a prelude to the more detailed description located below.
The present invention provides image processing apparatus and methods for detecting coordinate positions of characteristic portions of a face image. Such image processing apparatus and methods may improve the efficiency and the processing speed of detecting coordinate positions of characteristic portions of a face image included in an image.
Thus, in a first aspect, an image processing apparatus is provided for detecting coordinate positions of characteristic portions of a face image in a target image. The image processing apparatus includes a face area detecting unit that detects an image area that includes at least a portion of a face image as a face area from the target image, an initial position setting unit that sets an initial position of a characteristic point set in the target image, and a characteristic position detecting unit that corrects a setting position of the characteristic point set to the initial position so as to approach the coordinate position of the characteristic portion and detects the corrected setting position as the coordinate position of the characteristic portion. The initial position setting unit sets the initial position of a characteristic point set by using one or more parameters relating to a size, an angle, and a position of the face image in the face area and one or more characteristic amounts representing characteristics of the face image that are prepared in advance. Accordingly, the initial position can be set to an appropriate position. Therefore, the efficiency and the processing speed of a process for detecting coordinate positions of characteristic portions of a face image included in the target image may be improved.
In many embodiments, the initial position setting unit is configured to set the initial positions of a characteristic point set based on one of a plurality of characteristic point sets that are created in advance by using the predetermined values of the parameters and the characteristic amounts. In such a case, since one of the preexisting characteristic point sets is used to set the initial positions of the characteristic points, the coordinate positions of characteristic portions of the face image in the target image can be efficiently detected at a high speed.
In many embodiments, the initial position setting unit is configured to include a generation section that generates an average shape image by transforming a part of the target image corresponding to a characteristic point set for the characteristic portions, and a calculation section that generates a differential image between the average shape image and a reference average face image. The reference average face image has coordinate positions of its characteristic portions based on a plurality of sample images that include known face images. A plurality of candidate point sets for the characteristic portions can be evaluated by generating the above-described differential image for each of the candidate point sets and calculating the norm of each of the resulting differential images. The initial positions of the characteristic point set can be set based on the average shape image corresponding to the differential image having the smallest value of its norm. In such a case, the coordinate positions of characteristic portions of the face image in the target image can be efficiently detected at a high speed.
In many embodiments, the one or more characteristic amounts include one or more coefficients of a corresponding one or more shape vectors that are acquired by performing principal component analysis on coordinate positions of characteristic portions included in the plurality of sample images. In many embodiments, the initial position setting unit sets the initial coordinate positions by using coefficients of one or more shape vectors out of the shape vectors in descending order of contribution rate for the characteristics of the face image. In such a case, by using the coefficients of one or more shape vectors having high contribution rates for setting the initial positions of the characteristic points, the coordinate positions of characteristic portions of the face image in the target image may be efficiently detected at a high speed.
In many embodiments, the initial position setting unit is configured to set the initial coordinate positions of the characteristic portions by using the characteristic amount that represents face-turn of the face image in the horizontal direction. In such a case, the coordinate positions of characteristic portions of the face image in the target image may be efficiently detected at a high speed.
In many embodiments, the initial position setting unit is configured to set the initial coordinate positions by using the characteristic amount that represents face-turn of the face image in the vertical direction. In such a case, the coordinate positions of characteristic portions of the face image in the target image may be efficiently detected at a high speed.
In many embodiments, the characteristic position detecting unit further includes a correction portion that corrects the coordinate positions of the characteristic portions so as to decrease the norm of a differential image between the average shape image corresponding to the initial position and the reference average face image. In many embodiments, the characteristic position detecting unit detects the coordinate positions for which the norm of the differential image becomes equal to or less than a predetermined value so as to identify the corrected coordinate positions as the coordinate positions of the characteristic portions of the face image. Accordingly, the coordinate positions of characteristic portions of the face image in the target image may be efficiently detected at a high speed.
In many embodiments, the characteristic portions include an eyebrow, an eye, a nose, a mouth, and a face line. In such a case, the coordinate positions can be detected accurately.
In another aspect, an image processing apparatus is provided that detects coordinate positions of characteristic portions of a face image in a target image. The image processing apparatus includes a processor and a machine readable memory coupled with the processor that includes instructions that when executed cause the processor to identify a face area of the target image that includes at least a portion of the face image, generate initial coordinate positions for the characteristic portions, generate corrected coordinate positions for the characteristic portions by modifying the initial coordinate positions or previously generated corrected coordinate positions so as to approach the characteristic portions in the target image, and detect the corrected coordinate positions as the coordinate positions of the characteristic portions of the face image. The initial coordinate positions are generated by using one or more predetermined parameters and one or more predetermined characteristic amounts representing characteristics of the face image. The predetermined parameters relate to a size, an angle, or a position of the face image in the face area.
In many embodiments, the initial coordinate positions are generated from one of a plurality of candidate sets of initial coordinate positions that are set in advance by using predetermined values of the parameters and the characteristic amounts.
In many embodiments, the memory further comprises instructions that when executed cause the processor to generate each of a plurality of average shape images from the target image by transforming a part of the target image into a reference average face image shape based on a corresponding set of the candidate sets of initial coordinate positions. In many embodiments, the memory further comprises instructions that when executed cause the processor to generate a differential image between each of the average shape images and a reference average face image having the reference average face image shape, the reference average face image having coordinate positions of its characteristic portions based on a plurality of sample face images. In many embodiments, the memory further comprises instructions that when executed cause the processor to generate the initial coordinate positions from the candidate set of initial coordinate positions corresponding to the differential image having the smallest value for its norm.
In many embodiments, the one or more characteristic amounts include one or more coefficients of a corresponding one or more shape vectors that are acquired by performing principal component analysis on coordinate positions of characteristic portions included in the plurality of sample images. In many embodiments, the initial coordinate positions are generated by using coefficients of one or more shape vectors out of the shape vectors in descending order of contribution rate for the characteristics of the face image.
In many embodiments, the initial coordinate positions are generated by using the characteristic amount that represents face-turn of the face image in the horizontal direction. In many embodiments, the initial coordinate positions are generated by using the characteristic amount that represents face-turn of the face image in the vertical direction.
In many embodiments, the corrected coordinate positions are generated so as to decrease the norm of a differential image between the reference average face image and an average shape image corresponding to the initial coordinate positions or previously generated corrected coordinate positions. In many embodiments, the corrected coordinate positions are detected as the coordinate positions of the characteristic portions of the face image when the norm of the differential image is equal to or less than a predetermined value.
In many embodiments, the characteristic portions include an eyebrow, an eye, a nose, a mouth, and a face line.
In another aspect, an image processing method is provided for detecting coordinate positions of characteristic portions of a face image in a target image. The image processing method includes identifying a face area of the target image that includes at least a portion of the face image, determining initial coordinate positions for the characteristic portions, determining corrected coordinate positions for the characteristic portions by modifying the initial coordinate positions or previously generated corrected coordinate positions so as to approach the characteristic portions in the target image, and identifying the corrected coordinate positions as the coordinate positions of the characteristic portions of the face image. The initial coordinate positions are determined by using one or more predetermined parameters and one or more predetermined characteristic amounts representing characteristics of the face image. The predetermined parameters relate to a size, an angle, or a position of the face image in the face area.
In addition, the invention can be implemented in various forms and, for example, may be implemented as a printer, a digital still camera, a personal computer, a digital video camera, and the like. In addition, the invention can be implemented in the forms of an image processing method, an image processing apparatus, a method of detecting the positions of characteristic portions, an apparatus for detecting the positions of characteristic portions, a facial expression determining method, a facial expression determining apparatus, a computer program for implementing the functions of the above-described methods or apparatuses, a recording medium having the computer program recorded thereon, a data signal implemented in a carrier wave including the computer program, and the like.
For a fuller understanding of the nature and advantages of the present invention, reference should be made to the ensuing detailed description and accompanying drawings.
The invention is described below with reference to the accompanying drawings, wherein like numbers reference like elements.
In the following description, various embodiments of the present invention are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described.
Referring now to the drawings, in which like reference numerals represent like parts throughout the several views, an embodiment is described in which a printer 100 serves as an image processing apparatus.
The printing mechanism 160 performs a printing operation based on print data. The card interface 170 is an interface that is used for exchanging data with a memory card MC inserted into a card slot 172. In many embodiments, an image file that includes the image data is stored in the memory card MC.
In the internal memory 120, an image processing unit 200, a display processing unit 310, and a print processing unit 320 are stored. The image processing unit 200 can be a computer program that performs a face characteristic position detecting process by being executed by a CPU 110 under a predetermined operating system. The face characteristic position detecting process detects the positions of predetermined characteristic portions (for example, an eye area, a nose tip, and a face line) in a face image and is described below in detail. In addition, various functions are implemented as the CPU 110 also executes the display processing unit 310 and the print processing unit 320.
The image processing unit 200 includes an initial position setting section 210, a characteristic position detecting section 220, and a face area detecting section 230 as program modules. The initial position setting section 210 includes a generation portion 212 and a calculation portion 214. In addition, the characteristic position detecting section 220 includes a correction portion 222. The functions of these units, sections, and portions will be described in detail in a description of the face characteristic position detecting process described below.
The display processing unit 310 can be a display driver that displays a process menu, a message, an image, and/or the like on the display unit 150 by controlling the display unit 150. The print processing unit 320 can be a computer program that generates print data based on the image data and prints an image based on the print data by controlling the printing mechanism 160. The CPU 110 implements the functions of these units by reading out the above-described programs (the image processing unit 200, the display processing unit 310, and the print processing unit 320) from the internal memory 120 and executing the programs.
In addition, AAM information AMI is stored in the internal memory 120. The AAM information AMI is information that is set in advance in an AAM setting process described below and is referred to in the face characteristic position detecting process described below. The content of the AAM information AMI is described in detail in a description of the AAM setting process provided below.
First, a plurality of images that include people's faces are prepared as sample images SI (Step S110).
Then, the characteristic points CP are set for each sample face image SI (Step S120).
The position of each characteristic point CP in a sample image SI can be specified by coordinates.
Subsequently, a shape model of the AAM is set (Step S130). In particular, the face shape s that is specified by the positions of the characteristic points CP is modeled by the following Equation (1), which is obtained by performing a principal component analysis on the coordinate vectors formed by the positions of the characteristic points CP of the sample images SI.

Equation (1)

s = s0 + Σi pi si (1)
In the above-described Equation (1), s0 is an average shape.
In the above-described Equation (1) representing a shape model, si is a shape vector, and pi is a shape parameter that represents the weight of the shape vector si. The shape vector si can be a vector that represents the characteristics of the face shape s. The shape vector si can be an eigenvector corresponding to an i-th principal component acquired by performing principal component analysis. As shown in the above-described Equation (1), a face shape s that represents the disposition of the characteristic points CP can be modeled as a sum of an average shape s0 and a linear combination of n shape vectors si. By appropriately setting the shape parameter pi for the shape model, the face shapes s in a wide variety of images can be reproduced.
In many embodiments, the average shape s0 and the shape vectors si that are set in the shape model setting step (Step S130) are stored in the internal memory 120 as the AAM information AMI.
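As an illustration, the following is a minimal sketch of how such a shape model could be built, assuming the characteristic-point coordinates of the sample images SI are available as a NumPy array; the function names and array layout are illustrative assumptions rather than part of the embodiment.

```python
# A minimal sketch of the shape model of Equation (1). Assumption:
# coords holds one flattened (x1, y1, x2, y2, ...) row per sample image SI.
import numpy as np

def build_shape_model(coords, n_components):
    """coords: (num_samples, 2 * num_points) characteristic point CP
    coordinates of the sample images SI."""
    s0 = coords.mean(axis=0)                        # average shape s0
    centered = coords - s0
    # Principal component analysis via SVD of the centered coordinates.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    shape_vectors = vt[:n_components]               # shape vectors si (eigenvectors)
    variance = singular_values ** 2
    contribution_rates = variance / variance.sum()  # contribution rate per component
    return s0, shape_vectors, contribution_rates[:n_components]

def synthesize_shape(s0, shape_vectors, p):
    """Equation (1): s = s0 + sum over i of pi * si."""
    return s0 + p @ shape_vectors
```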
Subsequently, a texture model of the AAM is set (Step S140). In many embodiments, the process of setting the texture model begins by applying an image transformation (also referred to herein as “warp W”) to each sample image SI, so that set positions of the characteristic points CP in each of the transformed sample images SI are identical to those of the characteristic points CP in the average shape s0.
In addition, each sample image SIw is generated as an image in which the area other than the average shape area BSA (hereinafter, also referred to as a "mask area MA") is masked by using the rectangular range that includes the average shape area BSA.
Next, the texture (also referred to herein as an "appearance") A(x) of a face is modeled by the following Equation (2), which is obtained by performing principal component analysis for a luminance value vector that is configured by the luminance values of each pixel group x of each sample image SIw. In many embodiments, the pixel group x is a set of pixels that are located in the average shape area BSA.

Equation (2)

A(x) = A0(x) + Σi λi Ai(x) (2)
In the above-described Equation (2), A0(x) is an average face image.
In the above-described Equation (2) representing a texture model, Ai(x) is a texture vector, and λi is a texture parameter that represents the weight of the texture vector Ai(x). The texture vector Ai(x) is a vector that represents the characteristics of the texture A(x) of a face. In many embodiments, the texture vector Ai(x) is an eigenvector corresponding to an i-th principal component that is acquired by performing principal component analysis. In many embodiments, m eigenvectors, selected in descending order of the contribution rates of their corresponding principal components based on the accumulated contribution rate, are used as the texture vectors Ai(x). In many embodiments, the first texture vector A1(x) corresponding to the first principal component having the highest contribution rate is a vector that is approximately correlated with a change in the color of a face (which may be perceived as a difference in gender).
As shown in the above-described Equation (2), the face texture A(x) representing the outer appearance of a face can be modeled as a sum of the average face image A0(x) and a linear combination of m texture vectors Ai(x). By appropriately setting the texture parameter λi in the texture model, the face textures A(x) for a wide variety of images can be reproduced. In addition, in many embodiments, the average face image A0(x) and the texture vectors Ai(x) that are set in the texture model setting step (Step S140) are stored in the internal memory 120 as the AAM information AMI.
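The texture side of the model admits the same kind of sketch; the following hedged example assumes A0 and the texture vectors are stored as arrays flattened over the pixel group x, mirroring the shape-model sketch above.

```python
# A sketch of texture synthesis per Equation (2). Assumptions: A0 is the
# average face image A0(x) and texture_vectors holds the texture vectors
# Ai(x), both flattened over the pixel group x of the average shape area BSA.
import numpy as np

def synthesize_texture(A0, texture_vectors, lam):
    """Equation (2): A(x) = A0(x) + sum over i of lambda_i * Ai(x)."""
    return A0 + lam @ texture_vectors
```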
By performing the above-described AAM setting process, the shape model that models a face shape and the texture model that models a face texture are set.
When the disposition of the characteristic points CP in the face image is determined by performing the face characteristic position detecting process, the values of the shape parameter pi and the texture parameter λi for the face image are determined. Accordingly, the result of the face characteristic position detecting process can be used in an expression determination process for detecting a face image having a specific facial expression (for example, a smiling face or a face with closed eyes), a face-turn direction determining process for detecting a face image positioned in a specific direction (for example, a right-side direction or a lower-side direction), a face transformation process for transforming the shape of a face, a correction process for the shade of a face, or the like.
First, the image processing unit 200 acquires image data representing a target image OI that is the target of the face characteristic position detecting process.
The face area detecting section 230 detects an image area that includes at least a portion of a face image as a face area from the target image OI.
The initial position setting section 210 then sets a reference temporary setting position of the characteristic points CP based on the detected face area.
The initial position setting section 210 sets a plurality of the temporary setting positions by variously changing the values of the global parameters for the reference temporary setting position. The changing of the global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction) corresponds to performing enlargement or reduction, a change in the tilt, and parallel movement of the meshes formed by the characteristic points CP with respect to the target image OI. Accordingly, the initial position setting section 210 sets temporary setting positions that are acquired by enlarging or reducing the meshes and by changing the tilt of the meshes. In addition, the initial position setting section 210 also sets temporary setting positions that are acquired by performing parallel movement of the meshes to the upper or lower side and to the left or right side; one way to enumerate such meshes is sketched below.
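The following is a minimal sketch of such an enumeration, assuming the mesh is held as an array of (x, y) points; the concrete three-level values (0.9/1.0/1.1 and so on) are illustrative assumptions, since the embodiment does not fix particular magnitudes.

```python
# A sketch of enumerating temporary setting positions by varying the four
# global parameters around the reference temporary setting position.
# Yields 3**4 = 81 meshes (the reference itself plus the 80 variants).
import itertools
import numpy as np

def temporary_setting_positions(reference_cps, center,
                                scales=(0.9, 1.0, 1.1),
                                tilts_deg=(-10.0, 0.0, 10.0),
                                dx_levels=(-5.0, 0.0, 5.0),
                                dy_levels=(-5.0, 0.0, 5.0)):
    """reference_cps: (num_points, 2) mesh of characteristic points CP;
    center: (2,) point about which the mesh is scaled and tilted."""
    for scale, tilt, dx, dy in itertools.product(scales, tilts_deg,
                                                 dx_levels, dy_levels):
        theta = np.deg2rad(tilt)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        # Enlargement/reduction and change of tilt about the mesh center,
        # followed by parallel movement in the horizontal/vertical direction.
        mesh = (reference_cps - center) @ rot.T * scale + center
        yield mesh + np.array([dx, dy])
```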
The generation portion 212 generates an average shape image I(W(x;p)) from the target image OI for each of the temporary setting positions of the characteristic points CP.
The transformation for calculating the average shape image I(W(x;p)) is the same as the warp W used for generating the sample images SIw in the texture model setting step.
In addition, as described above, the pixel group x is a set of pixels that are located in the average shape area BSA of the average shape s0. The pixel group in the image (the average shape area BSA of the target image OI) before performing the warp W that corresponds to the pixel group x in the image (the face image having the average shape s0) after performing the warp W is denoted by W(x;p). Since the average shape image is an image that is configured by the luminance values for each pixel group W(x;p) in the average shape area BSA of the target image OI, the average shape image is denoted by I(W(x;p)).
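As one possible concrete rendering of the warp W, the following sketch uses scikit-image's piecewise affine transform as a stand-in; the helper signature and coordinate conventions are assumptions for illustration, not the embodiment's own implementation.

```python
# A hedged sketch of computing the average shape image I(W(x;p)) for a
# grayscale (luminance) target image, using scikit-image's piecewise
# affine transform as a stand-in for the warp W of the embodiment.
from skimage.transform import PiecewiseAffineTransform, warp

def average_shape_image(target_image, cps, mean_shape_cps, output_shape):
    """cps: characteristic points CP in the target image OI;
    mean_shape_cps: characteristic points CP of the average shape s0.
    Both are (num_points, 2) arrays in (x, y) order."""
    tform = PiecewiseAffineTransform()
    # estimate(src, dst): src are coordinates in the output (average
    # shape) image and dst the corresponding coordinates in the target
    # image, so that warp() pulls pixel values from the target image.
    tform.estimate(mean_shape_cps, cps)
    return warp(target_image, tform, output_shape=output_shape)
```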
The calculation portion 214 calculates a differential image Ie between each of the average shape images I(W(x;p)) and the average face image A0(x).
The initial position setting section 210 calculates the norm from the pixel values of each differential image Ie and sets a temporary setting position (hereinafter, also referred to as a minimal-norm temporary setting position) corresponding to the differential image Ie having the smallest value of the norm as the reference temporary position of the characteristic points CP of the target image OI (Step S340). The pixel value used for calculating the norm may be either a luminance value or an RGB value.
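Putting the two previous sketches together, the minimal-norm selection could look as follows; the function name and arguments are illustrative assumptions, and average_shape_image is the earlier sketch.

```python
# A sketch of selecting the minimal-norm temporary setting position:
# generate the average shape image for each candidate mesh, form the
# differential image Ie against A0(x), and keep the smallest norm.
import numpy as np

def best_temporary_position(target_image, candidate_meshes, A0, mean_shape_cps):
    best_cps, best_norm = None, np.inf
    for cps in candidate_meshes:
        I_w = average_shape_image(target_image, cps, mean_shape_cps, A0.shape)
        Ie = I_w - A0                    # differential image Ie
        norm = np.linalg.norm(Ie)        # here: L2 norm of the pixel values
        if norm < best_norm:
            best_cps, best_norm = cps, norm
    return best_cps
```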
The initial position setting section 210 sets a plurality of temporary initial positions by variously changing the values of the shape parameters p1 and p2 as characteristic amounts for the reference temporary position (Step S350).
The initial position setting section 210 sets eight temporary initial positions in addition to the reference temporary initial position, in accordance with the combinations of three-level values for each of the shape parameters p1 and p2 (3×3−1=8).
The generation portion 212 generates an average shape image I(W(x;p)) for each of the temporary initial positions, and the calculation portion 214 calculates the corresponding differential images Ie. The initial position setting section 210 then sets the temporary initial position corresponding to the differential image Ie having the smallest value of the norm as the initial position of the characteristic points CP in the target image OI.
When the initial position setting process for the characteristic points CP is completed, the characteristic position detecting section 220 corrects the setting positions of the characteristic points CP in the target image OI.
The generation portion 212 generates an average shape image I(W(x;p)) from the target image OI based on the current setting positions of the characteristic points CP (Step S410).
The characteristic position detecting section 220 calculates a differential image Ie between the average shape image I(W(x;p)) and the average face image A0(x) (Step S420). The characteristic position detecting section 220 determines whether the process for correcting the characteristic point CP setting position converges based on the differential image Ie (Step S430). The characteristic position detecting section 220 calculates the norm of the differential image Ie. When the value of the norm is smaller than a threshold value set in advance, the characteristic position detecting section 220 determines convergence. On the other hand, when the value of the norm is equal to or larger than the threshold value set in advance, the characteristic position detecting section 220 determines no convergence. Alternatively, the characteristic position detecting section 220 can be configured to determine convergence for a case where the value of the norm of the calculated differential image Ie is smaller than that calculated in Step S430 at the previous time and to determine no convergence for a case where the value of the norm is equal to or larger than the previous value. Furthermore, the characteristic position detecting section 220 can be configured to determine convergence by combining the determination on the basis of the threshold value and the determination on the basis of the comparison with the previous value. For example, the characteristic position detecting section 220 can be configured to determine convergence only for a case where the value of the calculated norm is smaller than the threshold value and is smaller than the previous value and to determine no convergence for other cases.
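The combined test from the last example reduces to a one-line predicate; a minimal sketch:

```python
# A sketch of the combined convergence test described above: converge only
# when the norm is below the preset threshold and has decreased since the
# previous iteration of Step S430.
def converged(norm, previous_norm, threshold):
    return norm < threshold and norm < previous_norm
```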
When no convergence is determined in the above-described convergence determination in Step S430, the correction portion 222 calculates an update amount ΔP of the parameters (Step S440).
In many embodiments, the update amount ΔP of the parameters is calculated by using the following Equation (3). In other words, the update amount ΔP of the parameters is the product of an update matrix R and the differential image Ie.
Equation (3)
ΔP=R×Ie (3)
The update matrix R represented in Equation (3) is a matrix of M rows×N columns that is set by learning in advance for calculating the update amount ΔP of the parameters based on the differential image Ie and is stored in the internal memory 120 as the AAM information AMI.
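To make the flow of Steps S410 through S450 concrete, here is a hedged sketch of the correction loop; positions_from_params is a hypothetical helper (not from the embodiment) that maps the stacked global and shape parameters P to characteristic point CP positions, and average_shape_image is the earlier sketch.

```python
# A sketch of the correction loop (Steps S410-S450), assuming a learned
# update matrix R (M rows x N columns) from the AAM information AMI.
# positions_from_params() is a hypothetical helper, not part of the
# embodiment; sign conventions for the update are folded into R.
import numpy as np

def detect_characteristic_positions(target_image, P, R, A0, mean_shape_cps,
                                    threshold, max_iterations=100):
    previous_norm = np.inf
    for _ in range(max_iterations):
        cps = positions_from_params(P)                           # Step S410
        I_w = average_shape_image(target_image, cps, mean_shape_cps, A0.shape)
        Ie = (I_w - A0).ravel()   # differential image Ie as an N-vector (Step S420)
        norm = np.linalg.norm(Ie)
        if norm < threshold and norm < previous_norm:            # Step S430
            break
        P = P + R @ Ie            # Equation (3): delta-P = R x Ie (Steps S440, S450)
        previous_norm = norm
    return positions_from_params(P)
```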
Equations (4) and (5), as well as active appearance models in general, are described in Matthews and Baker, "Active Appearance Models Revisited," tech. report CMU-RI-TR-03-02, Robotics Institute, Carnegie Mellon University, April 2003, the full disclosure of which is hereby incorporated by reference.
The correction portion 222 corrects the setting positions of the characteristic points CP of the target image OI based on the calculated update amount ΔP of the parameters (Step S450).
When the process from Step S410 to Step S450 is completed and convergence is determined, the characteristic position detecting section 220 detects the setting positions of the characteristic points CP at that time as the coordinate positions of the characteristic portions of the face image in the target image OI.
The print processing unit 320 generates print data for the target image OI for which the shapes and the positions of the facial organs and the contour and the shape of a face have been detected. In particular, the print processing unit 320 generates the print data by performing, for the target image OI, a color conversion process for adjusting pixel values of pixels to the ink used by the printer 100, a halftone process for representing the gray scales of pixels after the color conversion process by a distribution of dots, a rasterization process for changing the data sequence of the image data, for which the halftone process has been performed, into the order in which the data is to be transmitted to the printer 100, and the like. The printing mechanism 160 prints the target image OI, for which the shapes and positions of the facial organs and the contour and the shape of the face have been detected, based on the print data that is generated by the print processing unit 320. In addition, the print processing unit 320 is not limited to the generation of the print data of the target image OI and can generate print data of an image for which a predetermined process such as face transformation or correction for the shade of a face has been performed based on the shapes and the positions of the detected facial organs or the contour and the shape of a face. In addition, the printing mechanism 160 can print the image, for which a process such as face transformation or correction for the shade of a face has been performed, based on the print data that is generated by the print processing unit 320.
As described above, the initial position of the characteristic points CP can be set in the initial position setting process for the characteristic points CP by using the global parameters and the characteristic amounts. Accordingly, the efficiency and the processing speed of the process for detecting the positions of characteristic portions of a face included in the target image may be improved.
In particular, a plurality of temporary setting positions of the characteristic points CP that form various meshes can be prepared in advance by changing the values of four global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction) and two characteristic amounts (the vertical turn and the horizontal turn), and a temporary setting position corresponding to the differential image Ie having the smallest value of the norm is set as the initial position. Accordingly, the initial position of the characteristic points CP in the target image OI can be set to be close to the positions of the characteristic portions of a face. Therefore, correction can be made in an easy manner by the correction portion 222 in the process for correcting the set positions of the characteristic points CP, whereby the efficiency and the processing speed of the process for detecting the positions of the characteristic portions of a face may be improved.
In many embodiments, in the initial position setting process for the characteristic points CP, the initial position setting section 210 sets the initial position of the characteristic points CP by variously changing the shape parameters p1 and p2. Accordingly, the positions of characteristic portions of a face that is included in the target image can be efficiently detected at a high speed. In particular, the shape parameter p1 can be a coefficient of the first shape vector s1 that is approximately correlated with the horizontal turn of a face as the first principal component having the highest contribution rate for the characteristics of a face. In addition, the shape parameter p2 can be a coefficient of the second shape vector s2 that is approximately correlated with the vertical turn of a face as the second principal component having the second highest contribution rate for the characteristics of a face. Accordingly, by variously changing the shape parameters p1 and p2, a setting position of the characteristic points CP corresponding to various characteristics of a face can be set. Therefore, the initial position of the characteristic points CP in the target image OI can be set to be close to the positions of the characteristic portions of a face.
In many embodiments, the target image OI for which the shapes and the positions of facial organs or the contour and the shape of a face have been detected can be printed. Accordingly, after an expression determination process for detecting a face image having a specific facial expression (for example, a smiling face or a face with closed eyes) or a face-turn direction determining process for detecting a face image positioned in a specific direction (for example, a right-side direction or a lower-side direction) is performed, any arbitrary image can be selected and printed based on the result of the determination. In addition, an image for which a predetermined process such as face transformation or shade correction for a face has been performed based on the shapes and the positions of facial organs or the contour and the shape of a face that have been detected can be printed. Accordingly, after face transformation or face-shade correction, or the like is performed for a specific face image, the face can be printed.
Furthermore, the present invention is not limited to the above-described embodiments or examples. Thus, various embodiments can be implemented without departing from the basic concept of the present invention. For example, the modifications described below can be made.
The initial position setting section 210 can determine a reference temporary initial position having the smallest norm of the differential image Ie out of the temporary setting positions that are set by variously changing the values of the global parameters, and temporary initial positions can then be set by variously changing the characteristic amounts for the reference temporary initial position.
The initial position can be set from among a total of 729 types (=3×3×3×3×3×3) of temporary setting positions that are acquired by combining three-level values for each of the four global parameters (including the parallel movements in the vertical and the horizontal directions) and the two shape parameters p1 and p2.
In many embodiments, a total of 80 types (=3×3×3×3−1) of the temporary setting positions corresponding to combinations of three-level values for each of four global parameters (the size, the tilt, the position in the vertical direction, and the position in the horizontal direction) are set in advance for the initial position setting process for the characteristic points CP. However, the types and the number of the parameters and the number of levels of parameter values that are used for setting the temporary setting positions can be changed. For example, only some of the four global parameters can be configured to be used for setting the temporary setting positions, and the temporary setting positions can be set in accordance with combinations of five-level values for each used parameter.
In many embodiments, the temporary initial positions are set in accordance with combinations of three-level values of the shape parameters p1 and p2 corresponding to two principal components in the descending order of the contribution rates for the initial position setting process for the characteristic points CP. However, the number of the shape parameters pi or the number of the levels of the parameter values can be changed. For example, only the shape parameter pi corresponding to one principal component having the highest contribution rate can be configured to be used. Alternatively, the shape parameters pi corresponding to three principal components or more selected from the highest contribution rate side can be configured to be used. In addition, for example, the number of levels of the parameter values can be set to five.
In many embodiments, the correction process for the positions of the characteristic points CP calculates the average shape image I(W(x;p)) based on the target image OI, so that the setting positions of the characteristic points CP of the target image OI are matched to the setting positions of the characteristic points CP of the average face image A0(x). However, both dispositions of the characteristic points CP can also be configured to be matched to each other by performing an image transformation for the average face image A0(x).
The sample images SI are not limited to those described above; the number and the types of the sample images SI that are used for setting the shape model and the texture model can be changed.
In addition, the texture model can be set by performing principal component analysis for the luminance value vector that is configured by luminance values for each pixel group x of the sample image SIw. The texture model can also be set by performing principal component analysis for index values (for example, RGB values) other than the luminance values that represent the texture (appearance) of the face image.
In addition, the size of the average face image A0(x) is not limited to 56 pixels×56 pixels and can be configured to be different. In addition, the average face image A0(x) need not include the mask area MA and can be configured by the average shape area BSA alone.
In addition, in many embodiments, the shape model and the texture model that are used are set by using the AAM technique. The shape model and the texture model can also be set by using any other suitable modeling technique (for example, a technique called a Morphable Model or a technique called an Active Blob).
In addition, in many embodiments, the image stored in the memory card MC is configured as the target image OI. However, for example, the target image OI can also be an image that is acquired through a network.
In addition, in many embodiments, the image processing is performed by using a printer (e.g., the printer 100) as an image processing apparatus as described above. However, a part of or all of the processing can be performed by an image processing apparatus of any other suitable type such as a personal computer, a digital still camera, or a digital video camera. In addition, the printer 100 is not limited to an ink jet printer and can be a printer of any other suitable type such as a laser printer or a sublimation printer.
In addition, a part of the configuration that is implemented by hardware can be replaced by software. Likewise, a part of the configuration implemented by software can be replaced by hardware.
In addition, in a case where a part of or the entire function according to an embodiment of the invention is implemented by software (computer program), the software can be provided in a form being stored on a computer-readable recording medium. The "computer-readable recording medium" in an embodiment of the invention is not limited to a portable recording medium such as a flexible disk or a CD-ROM and includes various types of internal memory devices such as a RAM and a ROM as well as an external memory device such as a hard disk that is fixed to a computer.
Other variations are within the spirit of the present invention. Thus, while the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims.
Priority is claimed under 35 U.S.C. §119 to Japanese Application No. 2009-017056, filed on Jan. 28, 2009, which is hereby incorporated by reference in its entirety. The present application is related to U.S. application Ser. No. ______, entitled "Specifying Position of Characteristic Portion of Face Image," filed on ______, (Attorney Docket No. 21654P-026100US); U.S. application Ser. No. ______, entitled "Image Processing Apparatus For Detecting Coordinate Position of Characteristic Portion of Face," filed on ______, (Attorney Docket No. 21654P-026900US); and U.S. application Ser. No. ______, entitled "Image Processing For Changing Predetermined Texture Characteristic Amount of Face Image," filed on ______, (Attorney Docket No. 21654P-027000US); each of which is incorporated herein by reference.