The disclosure of Japanese Patent Application No. 2009-284736, which was filed on Dec. 16, 2009, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an electronic camera. More particularly, the present invention relates to an electronic camera which searches for an image coincident with a designated image from a scene image outputted from an imaging device.
2. Description of the Related Art
According to one example of this type of camera, when a shutter button is half-depressed, a focus lens is set to a pan-focus state, and thereafter a face-recognition process is executed. When a human face is thereby recognized, the position of the recognized face is determined as an AF area, and a contrast AF process is executed with attention to the determined AF area. A still-image photographing process is executed in response to full depression of the shutter button.
However, in the above-described camera, the contrast AF process is executed after the face-recognition process, so a time lag attributable to the contrast AF process arises before the still-image photographing process is performed. As a result, the face recognized by the face-recognition process may be oriented in another direction at the time point of the still-image photographing process, and the quality of the still image may be deteriorated.
An electronic camera according to the present invention, comprises: an imager, having an imaging surface capturing a scene through a focus lens, which repeatedly outputs a scene image; an extractor which checks the scene image outputted from the imager with each of a plurality of characteristic patterns so as to extract a specific characteristic pattern satisfying a predetermined condition; a restrictor which restricts behavior of adjusting a distance from the focus lens to the imaging surface in association with an extraction process of the extractor; a creator which creates a reference image based on the scene image outputted from the imager corresponding to the extraction of the specific characteristic pattern; and a register which registers the specific characteristic pattern as a characteristic pattern used for searching for an image coincident with the reference image created by the creator.
An imaging control program product executed by a processor of an electronic camera provided with an imager, having an imaging surface capturing a scene through a focus lens, which repeatedly outputs a scene image, the imaging control program product comprises: an extracting step which checks the scene image outputted from the imager with each of a plurality of characteristic patterns so as to extract a specific characteristic pattern satisfying a predetermined condition; a restricting step which restricts behavior of adjusting a distance from the focus lens to the imaging surface in association with an extraction process of the extracting step; a creating step which creates a reference image based on the scene image outputted from the imager corresponding to the extraction of the specific characteristic pattern; and a registering step which registers the specific characteristic pattern as a characteristic pattern used for searching for an image coincident with the reference image created by the creating step.
An imaging control method executed by an electronic camera provided with an imager, having an imaging surface capturing a scene through a focus lens, which repeatedly outputs a scene image, the imaging control method comprises: an extracting step which checks the scene image outputted from the imager with each of a plurality of characteristic patterns so as to extract a specific characteristic pattern satisfying a predetermined condition; a restricting step which restricts behavior of adjusting a distance from the focus lens to the imaging surface in association with an extraction process of the extracting step; a creating step which creates a reference image based on the scene image outputted from the imager corresponding to the extraction of the specific characteristic pattern; and a registering step which registers the specific characteristic pattern as a characteristic pattern used for searching for an image coincident with the reference image created by the creating step.
The above described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to
The behavior which adjusts the distance from the focus lens 6 to the imaging surface is restricted in association with the extraction of the specific characteristic pattern, and the reference image is created based on the scene image outputted from the imager 1 corresponding to the extraction of the specific characteristic pattern. Thereby, a time period from a timing of extracting the specific characteristic pattern to a timing of creating the reference image is shortened, and the quality of the reference image is improved.
With reference to
When a power source is applied, a CPU 26 determines a setting (i.e., an operation mode at a current time point) of a mode selector switch 28md arranged in a key input device 28, under a main task. If the operation mode at the current time point is a pet registration mode, a pet registering task and a registration-use face detecting task are started up. Moreover, if the operation mode at the current time point is a pet imaging mode, a pet imaging task and an imaging-use face detecting task are started up on the condition that a pet image is already registered.
When the pet registration mode is selected, the CPU 26 enables a pan-focus setting under the pet registering task. The drivers 18a and 18b respectively adjust a position of the focus lens 12 and an aperture amount of the aperture unit 14 so that a depth of field becomes deep. Subsequently, the CPU 26 commands a driver 18c to repeat an exposure procedure and an electric-charge reading-out procedure in order to start a moving-image taking process. In response to a vertical synchronization signal Vsync periodically generated from an SG (Signal Generator) not shown, the driver 18c exposes the imaging surface and reads out the electric charges produced on the imaging surface in a raster scanning manner. From the imager 16, raw image data based on the read-out electric charges is outputted periodically.
A pre-processing circuit 20 performs processes, such as digital clamp, pixel defect correction, and gain control, on the raw image data which is outputted from the imager 16. The raw image data on which such pre-processes are performed is written into a raw image area 32a of an SDRAM 32 through a memory control circuit 30.
A post-processing circuit 34 reads out the raw image data accommodated in the raw image area 32a through the memory control circuit 30, performs processes such as a color separation process, a white balance adjusting process, a YUV converting process and etc., on the read-out raw image data, and individually creates display image data and search image data that comply with a YUV format. The display image data is written into a display image area 32b of the SDRAM 32 by the memory control circuit 30. The search image data is written into a search image area 32c of the SDRAM 32 by the memory control circuit 30.
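The text does not detail the YUV converting process performed by the post-processing circuit 34. As an illustration only, a per-pixel RGB-to-YUV conversion following the common BT.601 formulas might look as follows; the function name and the assumption that BT.601 coefficients are used are the author's of this sketch, not the patent's:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to YUV using the
    common BT.601 full-range formulas. The actual coefficients used
    by the post-processing circuit 34 are not given in the text;
    this is an assumed, illustrative conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v
```

In such a scheme, one pass over the raw image data would produce the display image data and a second, typically lower-resolution pass would produce the search image data.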
An LCD driver 36 repeatedly reads out the display image data accommodated in the display image area 32b through the memory control circuit 30, and drives an LCD monitor 38 based on the read-out image data. As a result, a real-time moving image (through image) of the scene is displayed on a monitor screen.
Moreover, under the registration-use face detecting task executed in parallel with the pet registering task, the CPU 26 searches for a face image of an animal from the search image data accommodated in the search image area 32c. For the registration-use face detecting task, a general dictionary GLDC shown in
Under the registration-use face detecting task, firstly, a graphic generator 46 is requested to display a registration frame structure RF1. The graphic generator 46 outputs graphic data representing the registration frame structure RF1 toward the LCD driver 36. The registration frame structure RF1 is displayed at a center of the LCD monitor 38 as shown in
Subsequently, a flag FLG_A is set to “0”, and a flag FLG_B is set to “0”. Herein, the flag FLG_A is a flag for identifying whether or not a face pattern in which a checking degree exceeds a reference value REF is discovered, and “0” indicates being undiscovered while “1” indicates being discovered. Moreover, the flag FLG_B is a flag for identifying whether or not a reference-face-pattern number is determined, and “0” indicates being undetermined while “1” indicates being determined. It is noted that the reference-face-pattern number is a face pattern number which is referred to in image searching under the imaging-use face detecting task.
When the vertical synchronization signal Vsync is generated, partial image data belonging to the registration frame structure RF1 is read out from the search image area 32c so as to calculate a characteristic amount of the read-out image data. Thus, in a case where a cat CT1 is captured as shown in
Subsequently, a variable K is set to each of “1” to “70”, and the calculated characteristic amount is checked with a characteristic amount of a face pattern FP_K. When the checking degree exceeds the reference value REF, the current face pattern number (=FP_K) and the checking degree are registered in the register RGST1, and the flag FLG_A is updated to “1”.
Regarding the cat CT1 shown in
Regarding the dog DG1 shown in
When the flag FLG_A indicates “1” at a time point at which the above-described process corresponding to K=70 is completed, out of face pattern numbers registered in the register RGST1, a face pattern number corresponding to a maximum checking degree is determined as the reference-face-pattern number. In an example of
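The flow above (looping K over the 70 face patterns, registering hits in RGST1, and selecting the maximum checking degree) can be sketched as follows. The `checking_degree` function stands in for the unspecified characteristic-amount comparison, and all names and the threshold value are illustrative, not the actual firmware:

```python
REF = 0.5  # reference value REF; the actual threshold is not given in the text

def checking_degree(feature, pattern_feature):
    # Placeholder for the unspecified characteristic-amount check:
    # here, a simple normalized similarity in [0, 1].
    diff = sum(abs(a - b) for a, b in zip(feature, pattern_feature))
    return 1.0 - diff / len(feature)

def determine_reference_pattern(feature, face_patterns):
    """face_patterns maps each face pattern number K (1..70) to its
    characteristic amount. Returns the reference-face-pattern number,
    or None when FLG_A would remain '0' (no pattern exceeded REF)."""
    rgst1 = {}  # register RGST1: face pattern number -> checking degree
    for k, pat in face_patterns.items():
        deg = checking_degree(feature, pat)
        if deg > REF:          # checking degree exceeds REF: register it
            rgst1[k] = deg
    if not rgst1:              # FLG_A stays "0": nothing discovered
        return None
    # The number with the maximum checking degree becomes the
    # reference-face-pattern number (FLG_B is then updated to "1").
    return max(rgst1, key=rgst1.get)
```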
With reference to
An AE evaluating circuit 22 integrates RGB data belonging to the evaluation area EVA, out of the RGB data produced by the pre-processing circuit 20, at each generation of the vertical synchronization signal Vsync. Thereby, 256 integral values, i.e., 256 AE evaluation values, are outputted from the AE evaluating circuit 22 in response to the vertical synchronization signal Vsync.
Moreover, an AF evaluating circuit 24 extracts a high-frequency component of G data belonging to the same evaluation area EVA, out of the RGB data outputted from the pre-processing circuit 20, and integrates the extracted high-frequency component at each generation of the vertical synchronization signal Vsync. Thereby, 256 integral values, i.e., 256 AF evaluation values, are outputted from the AF evaluating circuit 24 in response to the vertical synchronization signal Vsync.
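The 256 AE and AF evaluation values correspond to a 16×16 division of the evaluation area EVA (16 × 16 = 256, matching the counts stated above, though the exact block geometry is not spelled out). A sketch of this per-block integration, with illustrative names:

```python
def block_integrals(values, width, height, blocks=16):
    """Integrate a per-pixel quantity (e.g. summed RGB for AE, or the
    high-frequency component of G for AF) over a blocks x blocks grid,
    yielding blocks*blocks evaluation values per frame.
    `values` is a row-major list of width*height numbers."""
    bw, bh = width // blocks, height // blocks
    out = []
    for by in range(blocks):
        for bx in range(blocks):
            s = 0
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    s += values[y * width + x]
            out.append(s)
    return out
```

Each vertical synchronization signal Vsync would trigger one such integration pass, producing a fresh set of 256 values.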
When the flag FLG_B indicates “0”, under the pet registering task, the CPU 26 executes a simple AE process that is based on the output from the AE evaluating circuit 22, so as to calculate an appropriate EV value. The simple AE process is executed in parallel with the moving-image taking process, and an exposure time period that defines the appropriate EV value in cooperation with an aperture amount corresponding to the pan-focus setting is set to the driver 18c. As a result, a brightness of the through image is adjusted moderately.
When the flag FLG_B is updated to “1”, the CPU 26 executes a strict AE process under the pet registering task. The strict AE process is also executed based on the output of the AE evaluating circuit 22, and thereby, an optimal EV value is calculated. To the driver 18c, the exposure time period that defines the optimal EV value in cooperation with the aperture amount corresponding to the pan-focus setting is set. As a result, the brightness of the through image is adjusted strictly. Upon completion of the strict AE process, the CPU 26 executes a still-image taking process. One frame of image data immediately after the strict AE process is completed is taken by the still-image taking process into a still-image area 32d.
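Because the pan-focus setting fixes the aperture amount, the exposure time that realizes a calculated EV value follows directly from the standard exposure-value relation EV = log2(F² / T) at ISO 100. The text does not give the camera's actual exposure program; this is a sketch of that standard relation only:

```python
def exposure_time_for_ev(ev, f_number):
    """Given a target EV (at ISO 100) and a fixed aperture F-number
    (fixed here by the pan-focus setting), return the exposure time
    T in seconds from the relation EV = log2(F^2 / T)."""
    return f_number ** 2 / (2.0 ** ev)
```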
Thereafter, the CPU 26 cuts out partial image data belonging to the registration frame structure RF1 from the image data taken into the still-image area 32d, and reduces the cut-out image data. Thereby, registered pet image data is obtained. The registered pet image data is allocated to the reference-face-pattern number which is determined under the registration-use face detecting task. The registered pet image data and the reference-face-pattern number, associated with each other, are stored in a flash memory 44 as an extraction dictionary EXDC.
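The cut-out and reduction step can be sketched as below; the reduction factor and the frame representation are assumptions for illustration, as the text specifies neither:

```python
def create_registered_pet_image(frame, rf1, scale=2):
    """Cut out the pixels inside the registration frame structure RF1
    from one captured frame, then reduce the cut-out by simple
    decimation. `frame` is a 2-D row-major list of pixel values and
    rf1 = (x, y, w, h); the reduction factor `scale` is illustrative,
    since the text does not state one."""
    x, y, w, h = rf1
    cut = [row[x:x + w] for row in frame[y:y + h]]   # cut out RF1
    return [row[::scale] for row in cut[::scale]]    # reduce by decimation
```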
In the example of
With reference to
When the pet imaging mode is selected, the CPU 26 reads out the registered pet image data contained in the extraction dictionary EXDC from the flash memory 44 under the pet imaging task, and develops the read-out registered pet image data to the display image area 32b of the SDRAM 32. The LCD driver 36 reads out the developed registered pet image data through the memory control circuit 30, and drives the LCD monitor 38 based on the read-out registered pet image data.
Thus, when the extraction dictionary EXDC is created as shown in
When a selection operation which selects any one of the displayed registered pet images is performed, the CPU 26 reads out a characteristic amount of a reference face pattern corresponding to the selected registered pet image from the general dictionary GLDC. In a case where the registered pet image representing the cat CT1 is selected in the example of
Moreover, under the imaging-use face detecting task executed in parallel with the pet imaging task, the CPU 26 searches for the face image of the animal from the search image data accommodated in the search image area 32c. The face image to be searched is the image coincident with the registered pet image which is selected by the selection operation. For the imaging-use face detecting task, a plurality of face-detection frame structures FD, FD, FD, . . . shown in
The face-detection frame structure FD is moved in a raster scanning manner corresponding to the evaluation area EVA on the search image area 32c (see
The CPU 26 reads out image data belonging to the face-detection frame structure FD from the search image area 32c through the memory control circuit 30 so as to calculate a characteristic amount of the read-out image data. The calculated characteristic amount is checked with the characteristic amount of the reference face pattern. When the checking degree exceeds the reference value REF, the position and size of the face-detection frame structure FD at the current time point are determined as the position and size of the face image, and a flag FLGpet is updated from “0” to “1”.
Under the pet imaging task, the CPU 26 repeatedly executes the simple AE process corresponding to FLGpet=0. The brightness of the through image is moderately adjusted by the simple AE process. When the flag FLGpet is updated to “1”, the CPU 26 requests the graphic generator 46 to display a face frame structure KF1. The graphic generator 46 outputs graphic data representing the face frame structure KF1 toward the LCD driver 36. The face frame structure KF1 is displayed on the LCD monitor 38 in a manner adapted to the position and size of the face image which are determined under the imaging-use face detecting task.
Thus, when the cat CT1 is captured in a state where the registered pet image of the cat CT1 is selected, the face frame structure KF1 is displayed on the LCD monitor 38 as shown in
Thereafter, the CPU 26 executes the strict AE process and the AF process under the pet imaging task. The AF process is executed based on the output of the AF evaluating circuit 24, and the focus lens 12 is set to a focal point discovered by the AF process. Thereby, the sharpness of the through image is improved.
Upon completion of the AF process, the still-image taking process and a recording process are executed. One frame of the image data immediately after the AF process is completed is taken into the still-image area 32d by the still-image taking process. The taken frame of image data is read out from the still-image area 32d by an I/F 40, which is started up in association with the recording process, and is recorded on a recording medium 42 in a file format. The face frame structure KF1 is no longer displayed after the recording process is completed.
The CPU 26 executes a plurality of tasks including the main task shown in
With reference to
When the determined result is YES, the pet imaging task is started up in a step S9, while when the determined result is NO, the CPU 26 notifies an error in a step S11. When NO is determined in both the steps S1 and S5, another process is executed in a step S13. Upon completion of the process in the step S3, S9, S11 or S13, it is repeatedly determined in a step S15 whether or not a mode switching operation is performed. When the determined result is updated from NO to YES, the task currently started up is stopped in a step S17. Thereafter, the process returns to the step S1.
With reference to
The flag FLG_B is set to “0” as an initial setting under the registration-use face detecting task, and is updated to “1” when the reference-face-pattern number is determined. In a step S27, it is determined whether or not the flag FLG_B indicates “1”, and when the determined result is NO, the simple AE process is executed in a step S29. Thereby, the brightness of the through image is adjusted moderately.
When the flag FLG_B is updated from “0” to “1”, the strict AE process is executed in a step S31, and the still-image taking process is executed in a step S33. As a result of the process in the step S31, the brightness of the through image is adjusted strictly. Moreover, as a result of the process in the step S33, one frame of the image data immediately after the strict AE process is completed is taken into the still-image area 32d.
In a step S35, the registered pet image data is created based on the image data taken into the still-image area 32d. In a step S37, the registered pet image data created in the step S35 is allocated to the reference-face-pattern number which is determined under the registration-use face detecting task. Thereby, the extraction dictionary EXDC is newly or additionally created. Upon creation of the extraction dictionary EXDC, the process returns to the step S25.
With reference to
In a step S51, the variable K is set to “1”, and in a step S53, the characteristic amount calculated in the step S49 is checked with the characteristic amount of the face pattern FP_K contained in the general dictionary GLDC. In a step S55, it is determined whether or not the checking degree exceeds the reference value REF, and when the determined result is NO, the process directly advances to a step S61 while when the determined result is YES, the process advances to the step S61 via steps S57 to S59. In the step S57, the current face pattern number (=FP_K) and the checking degree are registered in the register RGST1. In the step S59, the flag FLG_A is updated to “1” in order to declare that the face pattern in which the checking degree exceeds the reference value REF is discovered.
In the step S61, it is determined whether or not the variable K reaches “70”. When the determined result is NO, the variable K is incremented in a step S63, and thereafter, the process returns to the step S53 while when the determined result is YES, in a step S65, it is determined whether or not the flag FLG_A indicates “1”. When the flag FLG_A indicates “0”, the process returns to the step S47, and when the flag FLG_A indicates “1”, the reference-face-pattern number is determined in a step S67. The reference-face-pattern number is equivalent to the face pattern number corresponding to the maximum checking degree out of the face pattern numbers registered in the register RGST1. Upon completion of the process in the step S67, the flag FLG_B is updated to “1” in a step S69 in order to declare the determination of the reference-face-pattern number, and thereafter, the process is ended.
With reference to
In a step S77, the moving-image taking process is executed, and in a step S79, whole of the evaluation area EVA is set as a search area. In a step S81, in order to define a variable range of the size of the face-detection frame structure FD, a maximum size SZmax is set to “200”, and the minimum size SZmin is set to “20”. Upon completion of the process in the step S81, the imaging-use face detecting task is started up in a step S83.
The flag FLGpet is set to “0” as an initial setting under the imaging-use face detecting task, and is updated to “1” when a face image coincident with the reference-face pattern is discovered. In a step S85, it is determined whether or not the flag FLGpet indicates “1”, and as long as the determined result is NO, the simple AE process is repeatedly executed in a step S87. The brightness of the through image is moderately adjusted by the simple AE process.
When the determined result is updated from NO to YES, the process advances to a step S89, so as to request the graphic generator 46 to display the face frame structure KF1. The graphic generator 46 outputs the graphic data representing the face frame structure KF1 toward the LCD driver 36. The face frame structure KF1 is displayed on the LCD monitor 38 in a manner to surround the detected face image.
In a step S91, the strict AE process is executed, and in a step S93, the AF process is executed. As a result of the strict AE process and the AF process, the brightness and focus of the through image are adjusted strictly. In a step S95, the still-image taking process is executed, and in a step S97, the recording process is executed. One frame of the image data immediately after the AF process is completed is taken by the still-image taking process into the still-image area 32d. The taken one frame of the image data is recorded by the recording process on the recording medium 42. Upon completion of the recording process, in a step S99, the graphic generator 46 is requested not to display the face frame structure KF1, and thereafter, the process returns to the step S79.
With reference to
In a step S111, the calculated characteristic amount is checked with the characteristic amount of the reference face pattern which is read out from the general dictionary GLDC, and in a step S113, it is determined whether or not the checking degree exceeds the reference value REF. When the determined result is YES, the process advances to a step S115, and when the determined result is NO, the process advances to a step S119.
In the step S115, the position and size of the face-detection frame structure FD at the current time point are determined as the position and size of the face image. The determining process is reflected in the face-frame-structure display process in the above-described step S89.
The face frame structure KF1 is displayed on the LCD monitor 38 in a manner which adapts to the position and size of the face-detection frame structure FD at the current time point. Upon completion of the process in the step S115, the flag FLGpet is set to “1” in a step S117, and thereafter, the process is ended.
In the step S119, it is determined whether or not the face-detection frame structure FD reaches a lower right position of the search area. When the determined result is NO, in a step S121, the face-detection frame structure FD is moved in a raster direction by a predetermined amount, and thereafter, the process returns to the step S109. When the determined result is YES, in a step S123, it is determined whether or not the size of the face-detection frame structure FD is equal to or less than “SZmin”. When the determined result is NO, in a step S125, the size of the face-detection frame structure FD is reduced by “5”, and in a step S127, the face-detection frame structure FD is placed at the upper left position of the search area. Thereafter, the process returns to the step S109. When the determined result in the step S123 is YES, the process directly returns to the step S103.
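The scanning loop of steps S109 to S127 amounts to a multi-scale raster search: for each frame size from SZmax down to SZmin in steps of "5", the frame is slid across the search area in raster order until a match is found. A sketch, with the checking logic abstracted into a callback; the raster step size (the "predetermined amount" of the step S121) is an assumed value:

```python
def raster_search(search_w, search_h, matches,
                  sz_max=200, sz_min=20, step=8, shrink=5):
    """Slide a square face-detection frame over the search area in
    raster order, shrinking it by `shrink` after each full pass, and
    return the (x, y, size) of the first match or None.
    `matches(x, y, size)` stands in for the characteristic-amount
    check against the reference face pattern; `step` is an assumed
    raster movement amount, not a value given in the text."""
    size = sz_max
    while size >= sz_min:
        for y in range(0, search_h - size + 1, step):
            for x in range(0, search_w - size + 1, step):
                if matches(x, y, size):
                    return (x, y, size)   # FLGpet would become "1"
        size -= shrink                    # steps S123/S125
    return None  # frame shrank past SZmin without any match
```

Note that frame sizes larger than the search area simply produce empty scan ranges and are skipped, which mirrors starting the frame at the upper left position of the search area only when it fits.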
As can be seen from the above-described explanation, the imager 16, having the imaging surface capturing the scene through the focus lens 12, repeatedly outputs the raw image data. The CPU 26 enables the pan-focus setting in order to restrict a focus adjusting operation (S21), and extracts the reference face pattern by checking the YUV-formatted image data based on the raw image data outputted in this state from the imager 16 with each of a plurality of face patterns contained in the general dictionary GLDC (S47 to S67). Moreover, the CPU 26 creates the registered pet image data based on the raw image data which is outputted from the imager 16 corresponding to the extraction of the reference face pattern (S33 to S35), and registers the reference face pattern as the face pattern used for searching for the image coincident with the created registered pet image data (S37).
Thus, the focus adjusting operation is restricted in association with the extraction of the reference face pattern, and the registered pet image data is created based on the raw image data which is outputted from the imager 16 corresponding to the extraction of the reference face pattern. Thereby, a time period from a timing of extracting the reference face pattern to a timing of creating the registered pet image data is shortened, and the quality of registered pet image data is improved.
It is noted that, in this embodiment, the pan-focus setting is enabled under the pet registering task to save the time period required for creating the pet image data.
However, in the pet registration mode, the face image of the animal needs to fall within the registration frame structure RF1 as shown in
Moreover, when the face of the animal is contained in the registration frame structure RF1 under the pet registering task, the operator continuously captures the face of the animal while keeping the distance to the animal approximately constant. Therefore, instead of enabling the pan-focus setting, a center-priority continuous AF setting may be enabled at the same time the moving-image taking process is started. In this case, instead of the process in the step S21 shown in
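A center-priority continuous AF of the kind mentioned here can be sketched as a hill-climbing loop that keeps the lens near the peak of the AF evaluation values for the central blocks of the evaluation area EVA. All names are illustrative, and `af_value_at` stands in for driving the lens and reading the AF evaluating circuit 24:

```python
def continuous_af_step(position, af_value_at, step=1):
    """One hill-climbing step of a center-priority continuous AF:
    probe the AF evaluation value (integrated over the central blocks
    of EVA) one step on either side of the current focus position and
    move toward the larger value. `af_value_at` is a stand-in for
    moving the focus lens and reading the AF evaluating circuit 24."""
    here = af_value_at(position)
    fwd = af_value_at(position + step)
    back = af_value_at(position - step)
    if fwd > here and fwd >= back:
        return position + step
    if back > here:
        return position - step
    return position  # already at a local peak of the AF evaluation value
```

Repeating this step each Vsync would keep the central subject approximately in focus while the operator maintains the distance to the animal.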
According to
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Number | Date | Country | Kind
---|---|---|---
2009-284736 | Dec 2009 | JP | national