The disclosure of Japanese Patent Application No. 2010-138423, which was filed on Jun. 17, 2010, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus. More particularly, the present invention relates to an image processing apparatus which estimates an age of a person appearing in an image.
2. Description of the Related Art
According to one example of this type of apparatus, in an age estimating device which performs an image process on an image of a face of a measurement-target person photographed by an image input device and estimates an age of the measurement-target person, the image of the face of the measurement-target person is photographed, and then, a plurality of mutually different characteristic amounts are extracted from the acquired image of the face. Based on the extracted plurality of characteristic amounts, a plurality of ages are estimated by a plurality of age estimators. Based on a distribution of the plurality of ages estimated by the plurality of age estimators, an estimated age is determined. The determined estimated age is displayed by a displayer.
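The related-art decision from a distribution of estimates can be sketched as follows. This is a minimal illustration only: the median is an assumed decision rule, since the related art does not specify how the estimated age is determined from the distribution.

```python
from statistics import median

# Minimal sketch of the related-art idea: several age estimators each
# produce an estimate from a different characteristic amount, and a
# single estimated age is decided from the distribution of those
# estimates. The median is an assumption for illustration.

def decide_estimated_age(estimates):
    """Decide one estimated age from a distribution of estimates."""
    if not estimates:
        raise ValueError("no estimates available")
    return median(estimates)
```

For example, estimates of 24, 26, and 31 from three estimators would yield a decided age of 26 under this assumed rule.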
However, in the above-described apparatus, upon estimating the age based on the characteristic amount extracted from the face image, a facial expression is never referred to. Thus, when a quality of the image is adjusted with reference to the estimated age, depending on a facial expression of the face detected from a scene, the quality of the image may deteriorate.
An image processing apparatus according to the present invention, comprises: a taker which takes an image; a searcher which searches for one or at least two face images from the image taken by the taker; a first designator which designates each of the one or at least two face images discovered by the searcher; a holder which holds a plurality of face characteristics respectively corresponding to a plurality of ages; a detector which detects a facial expression of the face image designated by the first designator; an estimator which estimates an age of a person equivalent to the face image designated by the first designator, based on the plurality of face characteristics held by the holder and the facial expression detected by the detector; and an adjuster which adjusts a quality of the image taken by the taker with reference to an estimated result of the estimator.
According to the present invention, a computer program embodied in a tangible medium and executed by a processor of an image processing apparatus comprises: a taking instruction to take an image; a searching instruction to search for one or at least two face images from the image taken based on the taking instruction; a first designating instruction to designate each of the one or at least two face images discovered based on the searching instruction; a holding instruction to hold a plurality of face characteristics respectively corresponding to a plurality of ages; a detecting instruction to detect a facial expression of the face image designated based on the first designating instruction; an estimating instruction to estimate an age of a person equivalent to the face image designated based on the first designating instruction, based on the plurality of face characteristics held based on the holding instruction and the facial expression detected based on the detecting instruction; and an adjusting instruction to adjust a quality of the image taken based on the taking instruction with reference to an estimated result based on the estimating instruction.
According to the present invention, an imaging control method executed by an image processing apparatus comprises: a taking step of taking an image; a searching step of searching for one or at least two face images from the image taken by the taking step; a first designating step of designating each of the one or at least two face images discovered by the searching step; a holding step of holding a plurality of face characteristics respectively corresponding to a plurality of ages; a detecting step of detecting a facial expression of the face image designated by the first designating step; an estimating step of estimating an age of a person equivalent to the face image designated by the first designating step, based on the plurality of face characteristics held by the holding step and the facial expression detected by the detecting step; and an adjusting step of adjusting a quality of the image taken by the taking step with reference to an estimated result of the estimating step.
The above-described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to
Upon estimating an age of the face image, in addition to the plurality of face characteristics respectively corresponding to the plurality of ages, a facial expression of a face image which is a target of the estimation is referred to. By adjusting a quality of the image with reference to the age thus estimated, the quality of the image is improved.
With reference to
When a power source is applied, under a main task, a CPU 26 determines a setting (i.e., an operation mode at a current time point) of a mode selector switch 28md arranged in a key input device 28. If the operation mode at the current time point is an imaging mode, an imaging task and an age-group designating task are started up. If the operation mode at the current time point is a reproducing mode, a reproducing task is started up.
When the imaging mode is selected, in order to execute a moving-image taking process, the CPU 26 commands a driver 18c to repeat an exposure procedure and an electric-charge reading-out procedure under the imaging task. In response to a vertical synchronization signal Vsync periodically generated from an SG (Signal Generator) not shown, the driver 18c exposes the imaging surface and reads out the electric charges produced on the imaging surface in a raster scanning manner. From the imager 16, raw image data that is based on the read-out electric charges is cyclically outputted.
A pre-processing circuit 20 performs processes, such as digital clamp, pixel defect correction, gain control, etc., on the raw image data outputted from the imager 16. The raw image data on which these processes are performed is written into a raw image area 32a of an SDRAM 32 through a memory control circuit 30.
A post-processing circuit 34 reads out the raw image data accommodated in the raw image area 32a through the memory control circuit 30, and performs processes such as a color separation process, a white balance adjusting process, a YUV converting process, etc., on the read-out raw image data. Moreover, the post-processing circuit 34 executes a zoom process for displaying and a zoom process for searching on the image data that comply with a YUV format, in a parallel manner. As a result, display image data and search image data that comply with the YUV format are individually created. The display image data is written into a display image area 32b of the SDRAM 32 by the memory control circuit 30. The search image data is written into a search image area 32c of the SDRAM 32 by the memory control circuit 30.
An LCD driver 36 repeatedly reads out the display image data accommodated in the display image area 32b through the memory control circuit 30, and drives an LCD monitor 38 based on the read-out image data. As a result, a real-time moving image (a live view image) of the scene is displayed on a monitor screen.
With reference to
An AE evaluating circuit 22 integrates, out of the RGB data produced by the pre-processing circuit 20, RGB data belonging to the evaluation area EVA every time the vertical synchronization signal Vsync is generated. Thereby, 256 integral values (256 AE evaluation values) are outputted from the AE evaluating circuit 22 in response to the vertical synchronization signal Vsync.
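The AE evaluation can be sketched as follows, assuming the evaluation area EVA is divided into a 16×16 grid of blocks (16×16 = 256, matching the 256 integral values); the grid size and the use of a plain per-block sum are assumptions for illustration:

```python
# Hypothetical sketch of the AE evaluating circuit: the evaluation
# area is divided into 16x16 = 256 blocks, and the pixel data
# belonging to each block is integrated, yielding 256 AE evaluation
# values per frame.

def ae_evaluation_values(pixels, width, height, blocks=16):
    """pixels: row-major list of per-pixel values inside the area."""
    bw, bh = width // blocks, height // blocks
    values = []
    for by in range(blocks):
        for bx in range(blocks):
            total = 0
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    total += pixels[y * width + x]
            values.append(total)  # one integral value per block
    return values
```

A 32×32 area of uniform pixels would thus yield 256 equal integral values, one per 2×2 block.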
An AF evaluating circuit 24 integrates, out of the RGB data produced by the pre-processing circuit 20, a high-frequency component of the RGB data belonging to the evaluation area EVA every time the vertical synchronization signal Vsync is generated. Thereby, 256 integral values (256 AF evaluation values) are outputted from the AF evaluating circuit 24 in response to the vertical synchronization signal Vsync.
When a shutter button 28sh is in a non-operated state, under the imaging task, the CPU 26 executes a simple AE process that is based on output from the AE evaluating circuit 22 so as to calculate an appropriate EV value. An aperture amount and an exposure time period that define the calculated appropriate EV value are set to the drivers 18b and 18c, respectively, and as a result, a brightness of the live view image is adjusted approximately.
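As an illustration of the relation the AE process relies on, an EV value, an aperture amount (F-number N), and an exposure time period t are linked by the standard APEX relation EV = log2(N²/t). The sketch below assumes this standard relation; how the apparatus actually splits one EV value into the (N, t) pair set to the drivers 18b and 18c is a design choice not detailed here.

```python
import math

# Standard APEX relation, assumed for illustration: EV = log2(N^2 / t),
# where N is the aperture amount (F-number) and t is the exposure time
# period in seconds. The appropriate EV value calculated by the AE
# process constrains the (N, t) pair set to the drivers.

def ev_value(f_number, exposure_time):
    """Exposure value defined by an aperture amount and exposure time."""
    return math.log2(f_number ** 2 / exposure_time)
```

For example, F2.0 at 1/2 second corresponds to EV 3 under this relation.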
When the shutter button 28sh is half depressed, under the imaging task, the CPU 26 executes a strict AE process that is based on the output from the AE evaluating circuit 22 so as to calculate an optimal EV value. An aperture amount and an exposure time period that define the optimal EV value are set to the drivers 18b and 18c, respectively, and as a result, the brightness of the live view image is adjusted to an optimal value. Upon completion of the strict AE process, as long as nothing is registered in an age-group designation register RGST1 described later, the CPU 26 executes a normal AF process under the imaging task. The AF process is executed by using a hill-climbing system referring to output of the AF evaluating circuit 24, and the focus lens 12 is set to a focal point. Thereby, a sharpness of the live view image is improved.
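The hill-climbing system can be sketched as follows, assuming a callable that returns the AF evaluation value (the high-frequency integral) at a given focus lens position; the step size and iteration limit are illustrative:

```python
# Sketch of a hill-climbing AF search: step the focus lens while the
# AF evaluation value keeps rising, and stop at the peak (the focal
# point). `evaluate` stands in for reading the AF evaluating circuit
# at a given lens position.

def hill_climb_af(evaluate, start=0, step=1, limit=100):
    pos = start
    best = evaluate(pos)
    for _ in range(limit):
        nxt = evaluate(pos + step)
        if nxt <= best:  # evaluation value stopped rising: peak passed
            break
        pos, best = pos + step, nxt
    return pos  # lens position with the maximum evaluation value
```

With a sharpness curve peaking at position 7, the search settles on position 7.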
When the shutter button 28sh is fully depressed, a still-image taking process and a recording process are executed. One frame of the display image data at a time point at which the shutter button 28sh is fully depressed is taken by the still-image taking process into a still-image area 32d. The taken one frame of the image data is read out from the still-image area 32d by an I/F 40 which is started up in association with the recording process, and is recorded on a recording medium 42 in a file format.
When the reproducing mode is selected, the CPU 26 designates the latest image file recorded on the recording medium 42 and commands the I/F 40 and the LCD driver 36 to execute a reproducing process in which the designated image file is noticed.
The I/F 40 reads out the image data of the designated image file from the recording medium 42, and writes the read-out image data into the display image area 32b of the SDRAM 32 through the memory control circuit 30. The LCD driver 36 reads out the image data accommodated in the display image area 32b through the memory control circuit 30, and an optical image corresponding to the read-out image data is generated. As a result, the generated optical image is displayed on the LCD monitor 38.
By an operator operating the key input device 28, the CPU 26 designates a succeeding image file or a preceding image file as a reproduced-image file. The designated-image file is subjected to a reproducing process similar to that described above, and as a result, a display of the LCD monitor 38 is updated.
Moreover, when an age-group designating operation is performed by the operator through the key input device 28 while displaying the live view image by the simple AE process, under the imaging task, the CPU 26 registers the designated age group on the age-group designation register RGST1 shown in
Under the age-group designating task executed in parallel with the imaging task, the CPU 26 executes an age estimating process in order to estimate an age of a person by searching for a face image of the person from the search image data accommodated in the search image area 32c. Upon the age estimating process, the CPU 26 converts the search image data accommodated in the search image area 32c into QVGA data which has horizontal 320 pixels×vertical 240 pixels (resolution: QVGA). Thereafter, the face image of the person is searched for from the QVGA data. For the age estimating process, a standard face dictionary STDC shown in
The face-detection frame structure FD is moved in a raster scanning manner corresponding to the evaluation area EVA on an image of the QVGA data (see
The CPU 26 reads out image data belonging to the face-detection frame structure FD from the QVGA data so as to calculate a characteristic amount of the read-out image data. The calculated characteristic amount is compared with a characteristic amount of a face image registered in the standard face dictionary STDC. When a matching degree exceeds a reference value REF1, it is regarded that the face image is discovered from the face-detection frame structure FD, and a variable CNT is incremented. Furthermore, a position and a size of the face-detection frame structure FD at a current time point are registered as a position and a size of the face-detection frame structure surrounding the discovered face image, on the face-detection frame structure register RGST2.
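This search loop can be sketched as follows, with `matching_degree` standing in for the characteristic-amount comparison against the standard face dictionary STDC; the scan step and the shrink amount applied to the frame are illustrative assumptions:

```python
# Hypothetical sketch of the face search: the face-detection frame FD
# raster-scans the search area, a matching degree is computed at each
# position, and hits above REF1 are registered (playing the role of
# register RGST2); the frame is then shrunk and the scan repeats
# until a minimum size SZmin is reached.

def search_faces(matching_degree, area_w, area_h, fd_size, sz_min,
                 shrink, step, ref1):
    hits = []  # registered (x, y, size) of discovered face images
    size = fd_size
    while size >= sz_min:
        for y in range(0, area_h - size + 1, step):
            for x in range(0, area_w - size + 1, step):
                if matching_degree(x, y, size) > ref1:
                    hits.append((x, y, size))
        size -= shrink  # reduce the frame and rescan
    return hits
```

The number of hits corresponds to the variable CNT incremented for each discovered face image.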
Thus, when a scene shown in
Subsequently, the CPU 26 designates CNT face-detection frame structures registered in the face-detection frame structure register RGST2 in order. Image data belonging to the designated face-detection frame structure is subjected to a following face recognition process.
Prior to the face recognition process, the CPU 26 converts the search image data accommodated in the search image area 32c into VGA data which has horizontal 640 pixels×vertical 480 pixels (resolution: VGA) in order to improve a processing speed. Subsequently, the CPU 26 converts the position and size of the face-detection frame structure registered in the face-detection frame structure register RGST2 into those on the VGA data and rewrites the face-detection frame structure register RGST2. Moreover, for the face recognition process, a recognized face register RGST3 shown in
In the age and gender dictionary ASDC, for example, a characteristic amount of a face image of an average person in each of ages from less than a year old to 80 years old is contained with each gender. It is noted that, in
In the face recognition process, firstly, image data belonging to a designated-face-detection frame structure is read out from the VGA data so as to calculate a smile degree of the read-out image data. A characteristic amount of the image data belonging to the designated-face-detection frame structure is corrected so that a difference between the calculated smile degree and the smile degree of each characteristic amount contained in the age and gender dictionary ASDC is inhibited or resolved. Since the smile degree of each characteristic amount contained in the age and gender dictionary ASDC is zero, the characteristic amount of the image data belonging to the designated-face-detection frame structure is corrected so that the smile degree becomes zero.
For example, in a case where a characteristic amount of the face image of the person H1 shown in
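This smile-degree correction can be sketched as a toy model, assuming a linear per-feature smile sensitivity; the actual correction model is not specified in this description, so the model below is an assumption for illustration only:

```python
# Toy sketch of the correction described above: the dictionary ASDC
# holds characteristic amounts whose smile degree is zero, so the
# characteristic amount extracted from the designated frame is
# corrected so that its smile degree becomes zero before matching.
# The linear model (per-feature smile sensitivity) is an assumption.

def correct_for_smile(features, smile_degree, sensitivity):
    """Remove the smile component from each feature value."""
    return [f - smile_degree * s for f, s in zip(features, sensitivity)]
```

After the correction, the characteristic amount is comparable with the zero-smile characteristic amounts contained in the dictionary.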
Subsequently, a variable K is set to each of “1” to “Kmax”, and the corrected characteristic amount is compared with a characteristic amount described in a K-th column of the age and gender dictionary ASDC. It is noted that “Kmax” is equivalent to the total number of the characteristic amounts contained in the age and gender dictionary ASDC. When a matching degree exceeds a reference value REF2, a column number (=K) of a characteristic amount in a matching destination and the matching degree are registered on the recognized face register RGST3 shown in
When at least one column number is registered in the recognized face register RGST3, a position and a size of the face-detection frame structure corresponding to a maximum matching degree, and an age and a gender described in a column of the age and gender dictionary ASDC indicated by a column number corresponding to the maximum matching degree are registered on the finalization register RGST4 shown in
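The matching against the age and gender dictionary ASDC and the selection by maximum matching degree can be sketched as follows; `match` stands in for the characteristic-amount comparison, and the registers are modeled as plain Python structures for illustration:

```python
# Sketch of the recognition loop: the corrected characteristic amount
# is compared with every column of the dictionary ASDC, columns whose
# matching degree exceeds REF2 are registered (playing the role of
# register RGST3), and the entry with the maximum matching degree
# decides the finalized result (register RGST4).

def recognize(corrected, dictionary, match, ref2):
    rgst3 = [(k, match(corrected, column))          # (column number, degree)
             for k, column in enumerate(dictionary, start=1)]
    rgst3 = [(k, d) for k, d in rgst3 if d > ref2]  # keep degrees above REF2
    if not rgst3:
        return None                                 # no column matched
    k, _ = max(rgst3, key=lambda e: e[1])           # maximum matching degree
    return dictionary[k - 1]                        # finalized dictionary entry
```

If no matching degree exceeds REF2, nothing is finalized for the designated frame.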
Upon completion of the face recognition process for the image data belonging to the CNT face-detection frame structures, the CPU 26 determines whether or not there is any registration in the finalization register RGST4. When there is the registration, the CPU 26 sets a flag FLG_RCG to “1” and converts the position and size of the face-detection frame structure registered in the finalization register RGST4 into the one on the search image data so as to rewrite the finalization register RGST4. On the other hand, when nothing is registered in the finalization register RGST4, the flag FLG_RCG is set to “0”.
Thus, upon completion of the age estimating process, in a case where the flag FLG_RCG is set to “1” and the designated age group is registered in the age-group designation register RGST1, under the age-group designating task, the CPU 26 compares the age-group designation register RGST1 with the finalization register RGST4. A variable M is set to each of “1” to “Mmax”, and an estimated age described in an M-th column of the finalization register RGST4 is compared with the designated age group registered in the age-group designation register RGST1. When the estimated age is included in the designated age group, a position and a size of the face-detection frame structure described in the M-th column of the finalization register RGST4 are registered on a focus register RGST5 shown in
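The comparison of the finalization register RGST4 with the designated age group can be sketched as follows; the register entries are modeled as dicts and the age group as an inclusive range, both assumptions for illustration:

```python
# Sketch of the age-group filtering: each estimated age in the
# finalization register is checked against the designated age group,
# and the frame of every person whose estimated age falls inside the
# group is copied to the focus register (RGST5).

def build_focus_register(finalization, age_group):
    low, high = age_group  # designated age group as an inclusive range
    return [entry["frame"] for entry in finalization
            if low <= entry["age"] <= high]
```

An empty result corresponds to the case where no estimated age is included in the designated age group, so that no face-frame structure is displayed.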
When the designated age group is registered in the age-group designation register RGST1 at a time point of completion of the strict AE process by half-depressing the shutter button, under the imaging task, the CPU 26 applies a designated-age-group display command to a graphic generator 46. In the designated-age-group display command, the designated age group registered in the age-group designation register RGST1 is described. When the designated-age-group display command is applied, the graphic generator 46 creates graphic data representing the designated age group so as to apply the created graphic data to the LCD driver 36. As a result, as shown in
After the designated-age-group display command is issued, the CPU 26 waits for a completion of the age-group designating task executed in parallel with the imaging task. When the position and size of the face-detection frame structure are registered on the focus register RGST5 by the process of the age-group designating task, the CPU 26 issues a face-frame-structure display command toward the graphic generator 46. In the face-frame-structure display command, the position and size of the face-detection frame structure registered in the focus register RGST5 are described. When there is no registration in the focus register RGST5, i.e., when the age estimated in the age estimating process is not included in any of the age groups designated by the age-group designating operation, the face-frame-structure display command is not issued.
When the face-frame-structure display command is applied, the graphic generator 46 creates graphic data representing a face-frame structure KF so as to apply the created graphic data to the LCD driver 36. The graphic data is created with reference to the position and size described in the face-frame-structure display command. As a result, the face-frame structure KF is displayed in a manner to surround a face image of a person having the estimated age belonging to the age group designated by the age-group designating operation.
In an example shown in
When a beautiful skin process operation is performed via the key input device 28 in a case where the reproducing mode is selected, image data of an image file under reproduction is written into the search image area 32c of the SDRAM 32. Subsequently, under the reproducing task, the CPU 26 executes the age estimating process. It is noted that, the process executed here is similar to the age estimating process executed under the imaging task.
Thus, when the beautiful skin process operation is performed on a reproduced image P_bfr shown in
For example, as to the face image of the person H5 shown in
The corrected characteristic amount is compared with each characteristic amount contained in the age and gender dictionary ASDC, and every time a matching degree exceeds the reference value REF2, the column number of the characteristic amount in the matching destination and the matching degree are registered on the recognized face register RGST3. Upon completion of the comparison, a position and a size of the face-detection frame structure corresponding to a maximum matching degree, and an age and a gender described in the column of the age and gender dictionary ASDC indicated by a column number corresponding to the maximum matching degree are registered on the finalization register RGST4.
Upon completion of the age estimating process, after the position and size of the face-detection frame structure registered in the finalization register RGST4 are converted into those on the search image data, the CPU 26 executes a beautiful skin process. The beautiful skin process is executed for the image belonging to the face-detection frame structure registered in the finalization register RGST4, based on the age and the gender registered in the finalization register RGST4. In the beautiful skin process, for example, the higher the estimated age is, the more a correction degree is strengthened. Moreover, as to a face image of a female, the correction degree is strengthened more than that for a male, and a skin whitening process of brightening the skin color is also performed.
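A hypothetical sketch of how the correction degree of the beautiful skin process could be derived from the registered age and gender, per the description above; the concrete scale factors are invented for illustration and are not the apparatus's actual parameters:

```python
# Hypothetical parameter derivation for the beautiful skin process:
# the higher the registered age, the stronger the correction degree;
# a female face gets a strengthened correction relative to a male
# face, plus a skin whitening (brightening) step. Scale factors are
# invented for illustration.

def beautiful_skin_params(age, gender):
    degree = age / 100.0           # stronger correction for higher ages
    whitening = False
    if gender == "female":
        degree *= 1.5              # strengthened relative to a male face
        whitening = True           # brighten the skin color as well
    return {"degree": min(degree, 1.0), "whitening": whitening}
```

Under these invented factors, a 50-year-old female face would receive a correction degree of 0.75 with whitening, versus 0.5 without whitening for a male face of the same age.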
In an example of
The CPU 26 executes a plurality of tasks including the main task shown in
With reference to
With reference to
In a step S29, it is determined whether or not the shutter button 28sh is half depressed. When a determined result is NO, in a step S31, the simple AE process is executed. The brightness of the live view image is adjusted approximately by the simple AE process.
Under the started-up age-group designating task, a flag FLG_FIN is set to “0” as an initial setting, and is updated to “1” when the process of the age-group designating task is completed. In a step S33, it is repeatedly determined whether or not the flag FLG_FIN is updated to “1”, and as long as a determined result is NO, the simple AE process is repeatedly executed in the step S31. When the determined result of the step S33 is updated from NO to YES, the process returns to the step S23.
When the determined result of the step S29 is YES, in a step S35, the strict AE process is executed. The brightness of the live view image is adjusted to an optimal value by the strict AE process. In a step S37, it is determined whether or not the designated age group is registered in the age-group designation register RGST1, and when a determined result is YES, the process advances to a step S39 so as to issue the designated-age-group display command toward the graphic generator 46. In the designated-age-group display command, the designated age group registered in the age-group designation register RGST1 is described. As a result, the designated age group registered in the age-group designation register RGST1 is displayed at the lower right of the monitor screen with the live view image.
In a step S41, it is repeatedly determined whether or not the flag FLG_FIN indicates "1", and when a determined result is updated from NO to YES, in a step S43, it is determined whether or not the position and size of the face-detection frame structure are registered in the focus register RGST5. When a determined result of the step S43 is YES, in a step S45, the face-frame-structure display command is issued toward the graphic generator 46. In the face-frame-structure display command, the position and size of the face-detection frame structure registered in the focus register RGST5 are described. As a result, the face-frame structure KF is displayed in a manner to surround the face image of the person having the estimated age belonging to the age group designated in the age-group designating operation.
In a step S47, the AF process giving priority to the face position surrounded by the face-frame structure KF is executed. As a result, a sharpness of the face image surrounded by the face-frame structure KF is improved. When the determined result of the step S37 or S43 is NO, the process advances to a step S49 so as to execute the normal AF process. The focus lens 12 is placed at the focal point by the AF process.
Upon completion of the process in the step S47 or S49, in a step S51, it is determined whether or not the shutter button 28sh is fully depressed, and in a step S53, it is determined whether or not the operation of the shutter button 28sh is cancelled. When YES is determined in the step S51, in a step S55, the still-image taking process is executed, and in a step S57, the recording process is executed. When the determined result of the step S53 is YES, the process advances to a step S59. As a result of the process in the step S55, one frame of the image data representing the scene at the time point at which the shutter button 28sh is fully depressed is taken into the still-image area 32d. Moreover, as a result of the process in the step S57, the image data taken into the still-image area 32d is recorded on the recording medium 42 in the file format.
When the designated age group or the face-frame structure KF is displayed, in the step S59, a designated-age-group non-display command or a face-frame-structure non-display command is applied to the graphic generator 46, and as a result, displaying the designated age group or the face-frame structure KF is cancelled. Thereafter, the process returns to the step S23.
With reference to
In a step S75, it is determined whether or not the variable M exceeds a maximum value Mmax (=the total number of the registered ages in the finalization register RGST4), and when a determined result is YES, the process advances to the step S69 while when the determined result is NO, the process advances to a step S77. In the step S77, the estimated age described in the M-th column of the finalization register RGST4 is compared with the designated age group registered in the age-group designation register RGST1. In a step S79, as a result of comparing in the step S77, it is determined whether or not the estimated age described in the M-th column of the finalization register RGST4 is included in the designated age group registered in the age-group designation register RGST1. When a determined result is NO, the process advances to a step S83 while when the determined result is YES, in a step S81, the position and size of the face-detection frame structure described in the M-th column of the finalization register RGST4 are registered on the focus register RGST5. In the step S83, the variable M is incremented, and thereafter, the process returns to the step S75.
The age estimating process in the step S65 shown in
In a step S101, it is determined whether or not the vertical synchronization signal Vsync is generated. When a determined result is updated from NO to YES, in a step S103, the face-detection frame structure FD is placed at an upper left position of the search area. In a step S105, a part of the QVGA data belonging to the face-detection frame structure FD is read out so as to calculate the characteristic amount of the read-out QVGA data.
In a step S107, the calculated characteristic amount is compared with the characteristic amount of the face image contained in the standard face dictionary STDC, and in a step S109, it is determined whether or not the matching degree exceeds the reference value REF1. When a determined result is NO, the process directly advances to a step S115 while when the determined result is YES, the process advances to the step S115 via steps S111 and S113. In the step S111, the variable CNT is incremented. In the step S113, the position and size of the face-detection frame structure FD at the current time point are registered on the face-detection frame structure register RGST2.
In the step S115, it is determined whether or not the face-detection frame structure FD reaches a lower right position of the search area. When a determined result is NO, in a step S117, the face-detection frame structure FD is moved by a predetermined amount in a raster direction, and thereafter, the process returns to the step S105. When the determined result is YES, in a step S119, the size of the face-detection frame structure FD is reduced by a scale of “5”, and in a step S121, it is determined whether or not the size of the face-detection frame structure FD is less than “SZmin”. When a determined result of the step S121 is NO, in a step S123, the face-detection frame structure FD is placed at the upper left position of the search area, and thereafter, the process returns to the step S105. When the determined result of the step S121 is YES, the process advances to a step S125.
In the step S125, it is determined whether or not the variable CNT is set to "0", and when a determined result is NO, in a step S127, the search image data accommodated in the search image area 32c is converted into VGA data. When the determined result is YES, the process returns to the routine in an upper hierarchy. In a step S129, the position and size of the face-detection frame structure registered in the face-detection frame structure register RGST2 are converted into those on the VGA data so as to rewrite the face-detection frame structure register RGST2.
In a step S131, the registered contents in the finalization register RGST4 are cleared, and in a step S133, the variable N is set to “1”. In a step S135, it is determined whether or not the variable N exceeds the variable CNT, and when a determined result is NO, the process advances to a step S137 so as to designate a face-detection frame structure set in an N-th column of the face-detection frame structure register RGST2. In a step S139, the face recognition process in which the image data belonging to the designated-face detection frame structure is noticed is executed. Upon completion of the face recognition process, in a step S141, the variable N is incremented, and thereafter, the process returns to the step S135.
When the determined result of the step S135 is YES, in a step S143, it is determined whether or not there is any registration in the finalization register RGST4. When a determined result of the step S143 is YES, in a step S145, the flag FLG_RCG is set to "1", and in a step S147, the position and size of the face-detection frame structure registered in the finalization register RGST4 are converted into those on the search image data so as to rewrite the finalization register RGST4. When the determined result of the step S143 is NO, in a step S149, the flag FLG_RCG is set to "0". Upon completion of the process in the step S147 or S149, the process returns to the routine in an upper hierarchy.
The face recognition process in the step S139 is executed according to a subroutine shown in
In a step S155, the recognized face register RGST3 is cleared, and in a step S157, the variable K is set to “1”. In a step S159, it is determined whether or not the variable K exceeds a maximum value Kmax (=the total number of the characteristic amounts contained in the age and gender dictionary ASDC). When a determined result is NO, the process advances to a step S165 so as to compare the corrected characteristic amount with the characteristic amount described in the K-th column of the age and gender dictionary ASDC.
In a step S167, it is determined whether or not the matching degree exceeds the reference value REF2. When a determined result is YES, the process advances to a step S169 so as to register the column number (=K) of the characteristic amount in the matching destination and the matching degree on the recognized face register RGST3. Upon completion of the registration, in a step S171, the variable K is incremented, and thereafter, the process returns to the step S159. When the determined result of the step S167 is NO, the process returns to the step S159 via the process in the step S171.
When the determined result of the step S159 is YES, it is determined in a step S161 whether or not at least one column number is set in the recognized face register RGST3. When a determined result of the step S161 is YES, in a step S163, the position and size of the face-detection frame structure corresponding to the maximum matching degree, and the age and gender described in the column of the age and gender dictionary ASDC indicated by the column number corresponding to the maximum matching degree are registered on the finalization register RGST4. When a determined result of the step S161 is NO, or upon completion of the process in the step S163, the process returns to the routine in an upper hierarchy.
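The matching loop of the steps S155 to S171 and the finalization in the steps S161 to S163 can be sketched as follows. This is a minimal sketch: the dictionary layout, the matching-degree function, and all names are hypothetical stand-ins for the actual characteristic-amount comparison.

```python
def recognize_face(corrected_feature, asdc, ref2, matching_degree):
    """Compare a corrected characteristic amount against every column of the
    age and gender dictionary and return the best match, if any.

    asdc: list of dictionary entries, each with 'feature', 'age', 'gender'.
    ref2: the matching-degree threshold (the reference value REF2).
    matching_degree: callable returning a similarity score for two features.
    """
    recognized = []  # plays the role of the recognized face register RGST3
    for k, entry in enumerate(asdc, start=1):                 # S157 to S171
        degree = matching_degree(corrected_feature, entry['feature'])  # S165
        if degree > ref2:                                     # S167
            recognized.append((k, degree))                    # S169
    if not recognized:                                        # S161, NO branch
        return None
    best_k, best_degree = max(recognized, key=lambda t: t[1])  # S163
    best = asdc[best_k - 1]
    return {'column': best_k, 'degree': best_degree,
            'age': best['age'], 'gender': best['gender']}
```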
With reference to
In a step S185, it is determined whether or not an operation for updating a reproduced file is performed by the operator. When a determined result is YES, in a step S187, the variable P is incremented or decremented, and thereafter, the process returns to the step S183. When the determined result is NO, in a step S189, it is determined whether or not the beautiful skin process operation is performed by the operator, and when a determined result is NO, the process returns to the step S185 while when the determined result is YES, the process advances to a step S191.
In the step S191, the image data of the image file under reproduction is written into the search image area 32c of the SDRAM 32, and in a step S193, the age estimating process is executed. In a step S195, it is determined whether or not the flag FLG_RCG is “1”, and when a determined result is NO, the process returns to the step S185 while when the determined result is YES, the process advances to a step S197. In the step S197, the beautiful skin process is executed for the image belonging to the face-detection frame structure registered in the finalization register RGST4, based on the age or the gender registered in the finalization register RGST4. Upon completion of the beautiful skin process, the process returns to the step S185.
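The beautiful-skin branch of the steps S191 to S197 can be sketched as follows. All names here are hypothetical; the operator event loop of the steps S185 to S189 is omitted, and only one pass of the branch is shown.

```python
def handle_beautiful_skin_request(load_image, estimate_ages, beautiful_skin):
    """One pass of the beautiful-skin branch (cf. steps S191 to S197).

    load_image: copies the reproduced file's image data to the search area.
    estimate_ages: the age estimating process; returns (flg_rcg, register),
        the register holding (frame, age, gender) tuples, as in RGST4.
    beautiful_skin: applies the skin process to one registered face.
    Returns True when the skin process was applied.
    """
    image = load_image()                          # S191
    flg_rcg, finalization = estimate_ages(image)  # S193
    if flg_rcg != 1:                              # S195
        return False
    for frame, age, gender in finalization:       # S197
        beautiful_skin(image, frame, age, gender)
    return True
```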
As can be seen from the above-described explanation, the CPU 26 takes the image (16, S181 to S187), searches for the one or at least two face images from the taken image (S93 to S123), and designates each of the one or at least two discovered face images (S137). Moreover, the CPU 26 holds the plurality of face characteristics respectively corresponding to the plurality of ages (44) and detects the facial expression of the designated face image. Moreover, the CPU 26 estimates the age of the person equivalent to the designated face image based on the held plurality of face characteristics and the detected facial expression (S153 to S163, S165 to S171), and adjusts the quality of the taken image with reference to the estimated result (S47, S197).
Upon estimating the age of the face image, the facial expression of the face image that is the target of the estimation is referred to in addition to the plurality of face characteristics respectively corresponding to the plurality of ages. By adjusting the quality of the image with reference to the age thus estimated, the quality of the image is improved.
It is noted that, in this embodiment, the control programs equivalent to the multi-task operating system and the plurality of tasks executed thereby are stored in the flash memory 44 in advance. However, a communication I/F 50 for connecting to an external server may be arranged in the digital camera 10 as shown in
Moreover, in this embodiment, the processes executed by the CPU 26 are divided into the main task shown in
Moreover, in this embodiment, the characteristic amount of the face image belonging to the designated face-detection frame structure is corrected in order to inhibit or resolve the difference between the smile degree calculated in the step S151 and the smile degree of each characteristic amount contained in the age and gender dictionary ASDC. However, instead of, or together with, the characteristic amount of the face image belonging to the designated face-detection frame structure, the K-th characteristic amount contained in the age and gender dictionary ASDC may be corrected. Furthermore, the face image itself belonging to the designated face-detection frame structure, rather than its characteristic amount, may be regarded as the target of correction, and the characteristic amount may then be detected from the corrected face image.
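The correction targets discussed above can be illustrated as follows. The actual correction performed in the embodiment is not specified here; a purely illustrative linear shift by the smile-degree difference stands in for it, and all names are hypothetical.

```python
def apply_correction(face_feature, dict_feature, smile_diff, target="face"):
    """Illustrative sketch of the correction variants discussed above.

    'face' corrects the extracted characteristic amount (the embodiment);
    'dict' corrects the K-th dictionary characteristic amount instead;
    'both' corrects the two toward each other.
    smile_diff: difference between the calculated smile degree and the
        smile degree assumed by the dictionary entry.
    """
    if target == "face":
        return face_feature - smile_diff, dict_feature
    if target == "dict":
        return face_feature, dict_feature + smile_diff
    if target == "both":
        return face_feature - smile_diff / 2, dict_feature + smile_diff / 2
    raise ValueError(target)
```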
Moreover, in this embodiment, the age and gender dictionary ASDC, which contains a characteristic amount of an average face image of each age of the male together with a characteristic amount of an average face image of each age of the female, is used. However, based on a gender dictionary which contains a characteristic amount of an average face image of the male and a characteristic amount of an average face image of the female, the gender may be determined by comparing the corrected characteristic amount with the characteristic amounts in the gender dictionary before estimating the age. In this case, only the characteristic amounts of the determined gender in the age and gender dictionary ASDC may be compared.
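The two-stage variation described above can be sketched as follows, assuming a hypothetical data layout and matching function: the gender is determined first from a gender dictionary, and only the entries of that gender in the age and gender dictionary are then compared.

```python
def two_stage_lookup(feature, gender_dict, asdc, matching_degree):
    """Determine gender first, then estimate age within that gender only.

    gender_dict: maps gender -> characteristic amount of an average face.
    asdc: the age and gender dictionary entries ('feature', 'age', 'gender').
    matching_degree: callable returning a similarity score for two features.
    """
    # Stage 1: pick the gender whose average face matches best.
    gender = max(gender_dict,
                 key=lambda g: matching_degree(feature, gender_dict[g]))
    # Stage 2: compare only against dictionary entries of that gender.
    candidates = [e for e in asdc if e['gender'] == gender]
    best = max(candidates, key=lambda e: matching_degree(feature, e['feature']))
    return gender, best['age']
```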
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
2010-138423 | Jun 2010 | JP | national |