The disclosure of Japanese Patent Application No. 2009-254595, which was filed on Nov. 6, 2009, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an electronic camera. More particularly, the present invention relates to an electronic camera which searches for a specific object image from a scene image.
2. Description of the Related Art
According to one example of this type of apparatus, a scene image is repeatedly outputted from an image sensor. Prior to a half-depression of a shutter button, a CPU repeatedly determines whether or not a face image facing an imaging surface appears in the scene image outputted from the image sensor. The CPU writes a detection history of the face, including the determined result, into a face-detecting history table. When the shutter button is half-depressed, the CPU determines a face image position based on the detection history of the face described in the face-detecting history table. An imaging condition such as focus is adjusted by noticing the determined face image position. Thereby, it becomes possible to appropriately adjust the imaging condition by noticing the face image.
However, in the above-described apparatus, a face appearing in a recorded image does not always face the front, and therefore, the imaging performance is limited in this regard.
An electronic camera according to the present invention, comprises: an imager, having an imaging surface capturing a scene, which repeatedly generates a scene image; a first searcher which searches for a partial image having a specific pattern from the scene image generated by the imager; an adjuster which adjusts an imaging condition by noticing an object which is equivalent to the partial image discovered by the first searcher; a second searcher which searches for the partial image having the specific pattern from the scene image generated by the imager after an adjusting process of the adjuster is completed; and a first recorder which records the scene image corresponding to the partial image discovered by the second searcher out of the scene image generated by the imager.
An imaging control program product according to the present invention is an imaging control program product executed by a processor of the electronic camera provided with the imager, having the imaging surface capturing the scene, which repeatedly generates the scene image, and comprises: a first searching step which searches for the partial image having the specific pattern from the scene image generated by the imager; an adjusting step which adjusts the imaging condition by noticing the object which is equivalent to the partial image discovered by the first searching step; a second searching step which searches for the partial image having the specific pattern from the scene image generated by the imager after the adjusting process of the adjusting step is completed; and a recording step which records the scene image corresponding to the partial image discovered by the second searching step out of the scene image generated by the imager.
An imaging control method according to the present invention is an imaging control method executed by the electronic camera provided with the imager, having the imaging surface capturing the scene, which repeatedly generates the scene image, and comprises: a first searching step which searches for the partial image having the specific pattern from the scene image generated by the imager; an adjusting step which adjusts the imaging condition by noticing the object which is equivalent to the partial image discovered by the first searching step; a second searching step which searches for the partial image having the specific pattern from the scene image generated by the imager after the adjusting process of the adjusting step is completed; and a recording step which records the scene image corresponding to the partial image discovered by the second searching step out of the scene image generated by the imager.
The above described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to
The imaging condition is adjusted by noticing the object which is equivalent to a specific object image. Moreover, a searching process of the specific object image is executed again after adjusting the imaging condition. Furthermore, the scene image is recorded corresponding to a discovery of the specific object image by the searching process executed again. Thereby, a frequency of the specific object image appearing in a recorded scene image and an image quality of the specific object image appearing in the recorded scene image are improved. Thus, an imaging performance is improved.
With reference to
When a normal imaging mode or a pet imaging mode is selected by a mode key 28md arranged in a key input device 28, a CPU 26 commands a driver 18c to repeat exposure behavior and electric-charge reading-out behavior in order to start a moving-image fetching process under the normal imaging task or the pet imaging task. In response to a vertical synchronization signal Vsync periodically generated from an SG (Signal Generator) not shown, the driver 18c exposes the imaging surface and reads out the electric charges produced on the imaging surface in a raster scanning manner. From the imager 16, raw image data that is based on the read-out electric charges is cyclically outputted.
A pre-processing circuit 20 performs processes such as digital clamp, pixel defect correction, and gain control on the raw image data outputted from the imager 16. The raw image data on which these processes are performed is written into a raw image area 32a of an SDRAM 32 through a memory control circuit 30.
A post-processing circuit 34 reads out the raw image data accommodated in the raw image area 32a through the memory control circuit 30, performs processes such as a color separation process, a white balance adjusting process, and a YUV converting process on the read-out raw image data, and individually creates display image data and search image data that comply with a YUV format.
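For illustration only, the following is a minimal sketch of such a pixel-level RGB-to-YUV conversion, assuming the common ITU-R BT.601 coefficients; the text does not specify which conversion the post-processing circuit 34 actually implements.

```c
#include <stdint.h>

/* A minimal RGB-to-YUV mapping using ITU-R BT.601 coefficients.  The actual
   conversion performed by the post-processing circuit 34 is not specified in
   the text, so this is an illustrative stand-in. */
typedef struct { uint8_t y, u, v; } yuv_t;

static uint8_t clamp_u8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

yuv_t rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b)
{
    yuv_t p;
    p.y = clamp_u8((int)( 0.299 * r + 0.587 * g + 0.114 * b));         /* luma */
    p.u = clamp_u8((int)(-0.169 * r - 0.331 * g + 0.500 * b + 128.0)); /* Cb   */
    p.v = clamp_u8((int)( 0.500 * r - 0.419 * g - 0.081 * b + 128.0)); /* Cr   */
    return p;
}
```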
The display image data is written into a display image area 32b of the SDRAM 32 by the memory control circuit 30. The search image data is written into a search image area 32c of the SDRAM 32 by the memory control circuit 30.
An LCD driver 36 repeatedly reads out the display image data accommodated in the display image area 32b through the memory control circuit 30, and drives an LCD monitor 38 based on the read-out image data. As a result, a real-time moving image (through image) of the scene is displayed on a monitor screen. It is noted that a process on the search image data will be described later.
With reference to
An AE evaluating circuit 22 integrates RGB data belonging to the evaluation area EVA, out of the RGB data produced by the pre-processing circuit 20, at each generation of the vertical synchronization signal Vsync. Thereby, 256 integral values, i.e., 256 AE evaluation values, are outputted from the AE evaluating circuit 22 in response to the vertical synchronization signal Vsync.
Moreover, an AF evaluating circuit 24 extracts a high-frequency component of G data belonging to the same evaluation area EVA, out of the RGB data outputted from the pre-processing circuit 20, and integrates the extracted high-frequency component at each generation of the vertical synchronization signal Vsync. Thereby, 256 integral values, i.e., 256 AF evaluation values, are outputted from the AF evaluating circuit 24 in response to the vertical synchronization signal Vsync.
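The following sketch shows how such per-block evaluation values could be produced in software. The 16-by-16 division of the evaluation area EVA is inferred from the count of 256 integral values; the use of a single G plane for the AE integral and the first-order horizontal difference standing in for the high-frequency extraction are simplifications of this sketch.

```c
#include <stdlib.h>

#define BLOCKS 16   /* a 16-by-16 division of EVA yields the 256 values */

/* Per-Vsync evaluation sketch.  The AE integral here uses a single G plane
   rather than full RGB data, and the high-frequency extraction is a
   first-order horizontal difference; both are simplifications, since the
   patent does not define the exact filter. */
void evaluate_frame(const unsigned char *g_plane, int w, int h,
                    long ae[BLOCKS * BLOCKS], long af[BLOCKS * BLOCKS])
{
    int bw = w / BLOCKS, bh = h / BLOCKS;
    for (int i = 0; i < BLOCKS * BLOCKS; i++) ae[i] = af[i] = 0;
    for (int y = 0; y < h; y++) {
        int by = y / bh; if (by >= BLOCKS) by = BLOCKS - 1;
        for (int x = 0; x < w; x++) {
            int bx = x / bw; if (bx >= BLOCKS) bx = BLOCKS - 1;
            int idx = by * BLOCKS + bx;
            int g = g_plane[y * w + x];
            ae[idx] += g;                        /* AE: luminance integral    */
            if (x > 0)                           /* AF: high-frequency energy */
                af[idx] += abs(g - g_plane[y * w + x - 1]);
        }
    }
}
```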
The CPU 26 executes a simple AE process that is based on the output from the AE evaluating circuit 22, in parallel with a moving-image fetching process, so as to calculate an appropriate EV value. An aperture amount and an exposure time period that define the calculated appropriate EV value are set to the drivers 18b and 18c, respectively. As a result, a brightness of the through image is adjusted moderately.
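As a rough illustration, an appropriate EV value can be derived from the AE evaluation values and then split into an aperture amount and an exposure time period, as sketched below; the target level and the equal-split program line are assumptions, not taken from the text.

```c
#include <math.h>

/* Sketch of a "simple AE" update: derive an EV correction from the mean of
   the 256 AE evaluation values, then split the EV value into an aperture
   value AV = log2(F^2) and a time value TV = log2(1/t), so that EV = AV + TV.
   The target level and the equal split are illustrative assumptions. */
double ev_from_measurement(double mean_level, double target_level, double current_ev)
{
    return current_ev + log2(mean_level / target_level);
}

void split_ev(double ev, double *f_number, double *exposure_s)
{
    double av = ev / 2.0;                 /* toy program line               */
    double tv = ev - av;
    *f_number   = sqrt(pow(2.0, av));     /* aperture amount for driver 18b */
    *exposure_s = 1.0 / pow(2.0, tv);     /* exposure time for driver 18c   */
}
```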
When a shutter button 28sh is half-depressed in a state where the normal imaging mode is selected, the CPU 26 executes an AE process that is based on the output of the AE evaluating circuit 22 under the normal imaging task and respectively sets the aperture amount and the exposure time period that define an optimal EV value calculated thereby to the drivers 18b and 18c. As a result, the brightness of the through image is adjusted strictly. Moreover, the CPU 26 executes an AF process that is based on the output from the AF evaluating circuit 24 under the normal imaging task so as to set the focus lens 12 to a focal point through the driver 18a. Thereby, a sharpness of the through image is improved.
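The text states only that the AF process is based on the output from the AF evaluating circuit 24. One plausible realization is contrast-detection hill climbing, sketched below with an assumed step size and stop rule.

```c
/* Contrast-detection hill climbing as one plausible realization of the AF
   process; the metric would be a sum of AF evaluation values at the current
   lens position.  Step size and stop rule are assumptions of this sketch. */
typedef long (*af_metric_fn)(int lens_pos);

int contrast_af(af_metric_fn metric, int pos, int min_pos, int max_pos)
{
    int step = 8;
    long best = metric(pos);
    while (step > 0) {
        long fwd = (pos + step <= max_pos) ? metric(pos + step) : -1;
        long bwd = (pos - step >= min_pos) ? metric(pos - step) : -1;
        if (fwd > best)      { pos += step; best = fwd; }
        else if (bwd > best) { pos -= step; best = bwd; }
        else                 { step /= 2; }   /* narrow in on the peak */
    }
    return pos;   /* focal point to which the focus lens 12 is set */
}
```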
When the shutter button 28sh is shifted from a half-depressed state to a fully-depressed state, the CPU 26 starts up an I/F 40, for a recording process, under the normal imaging task. The I/F 40 reads out one frame of the display image data representing the scene at a time point at which the shutter button 28sh is fully depressed, from the display image area 32b through the memory control circuit 30, and records an image file in which the read-out display image data is contained onto a recording medium 42.
In a case where the pet imaging mode is selected, under a face detecting task executed in parallel with the pet imaging task, the CPU 26 searches for a face image of an animal from the image data accommodated in the search image area 32c. For such a face detecting task, dictionaries DC_1 to DC_3 shown in
According to
The register RGST1 shown in
The face-detection frame structure FD shown in
Firstly, the search area is set so as to cover the whole evaluation area EVA. Moreover, the maximum size SZmax is set to “200”, and the minimum size SZmin is set to “20”. Therefore, the face-detection frame structure FD, having a size which changes in a range of “200” to “20”, is scanned on the evaluation area EVA as shown in
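A software sketch of this whole-area search follows; the size decrement of 5 corresponds to the step S61 described later, while the raster stride is an assumption (the text says only “a predetermined amount”).

```c
/* Whole-area search sketch: the frame FD is raster-scanned over the search
   area at each size from SZmax down to SZmin. */
#define STRIDE 8   /* assumed raster step in pixels */

void scan_search_area(int left, int top, int right, int bottom,
                      int sz_max, int sz_min,
                      int (*check)(int x, int y, int size))
{
    for (int size = sz_max; size >= sz_min; size -= 5) {
        for (int y = top; y + size <= bottom; y += STRIDE) {
            for (int x = left; x + size <= right; x += STRIDE) {
                if (check(x, y, size))   /* characteristic-amount check */
                    return;              /* face discovered             */
            }
        }
    }
}
```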
The CPU 26 reads out the image data belonging to the face-detection frame structure FD from the search image area 32c through the memory control circuit 30 so as to calculate a characteristic amount of the read-out image data. The calculated characteristic amount is checked with the characteristic amount of the face pattern contained in each of the dictionaries DC_1 to DC_3.
On the assumption that the face of the cat stands upright, a checking degree corresponding to the characteristic amount of the face pattern contained in the dictionary DC_1 exceeds a reference REF when the face of the cat is captured in a posture in which the camera housing stands upright. Moreover, the checking degree corresponding to the characteristic amount of the face pattern contained in the dictionary DC_2 exceeds the reference REF when the face of the cat is captured in a posture in which the camera housing is inclined by 90 degrees to the right. Furthermore, the checking degree corresponding to the characteristic amount of the face pattern contained in the dictionary DC_3 exceeds the reference REF when the face of the cat is captured in a posture in which the camera housing is inclined by 90 degrees to the left.
When the checking degree exceeds the reference REF, the CPU 26 regards the face of the cat as being discovered, registers the position and size of the face-detection frame structure FD at the current time point as the face-image information on the register RGST1, and concurrently, issues a face-frame-structure character display command corresponding to the position and size of the face-detection frame structure FD at the current time point toward a graphic generator 46.
The graphic generator 46 creates graphic image data representing a face frame structure, based on an applied face-frame-structure character display command, and applies the created graphic image data to the LCD driver 36. The LCD driver 36 displays a face-frame-structure character KF1 on the LCD monitor 38, based on the applied graphic image data.
When a cat EM1 shown in
When the checking degree exceeds the reference REF, under the pet imaging task, the CPU 26 executes the AE process that is based on the output of the AE evaluating circuit 22 and the AF process that is based on the output from the AF evaluating circuit 24. The AE process and the AF process are executed in accordance with the above-described procedure, and therefore, the brightness of the through image is adjusted strictly, and the sharpness of the through image is improved.
However, a time period required for the AE process is fixed, while the time period required for the AF process differs depending on the positions of the focus lens 12 and/or the cat. Thus, if the time period taken for the AF process is too long, an orientation of the face of the cat may be changed to another orientation as shown in
The CPU 26 therefore measures the time period required for the AE process and the AF process under the pet imaging task.
If the measured time period is equal to or less than a threshold value TH1 (=one second, for example), the CPU 26 promptly executes the recording process. The recording process is executed at a timing shown in
If the measured time period exceeds the threshold value TH1, the CPU 26 searches for the face image of the cat under the face detecting task again. However, the CPU 26 sets a partial area covering the face image registered on the register RGST1 as the search area. As shown in
Thus, the face-detection frame structure FD, having a size which changes in a partial range defined by the maximum size SZmax and the minimum size SZmin, is scanned as shown in
Similarly to the above-described case, the CPU 26 reads out the image data belonging to the face-detection frame structure FD from the search image area 32c through the memory control circuit 30 so as to calculate the characteristic amount of the read-out image data. However, at a time point at which the limited search process is executed, the posture of the camera housing is specified, and therefore, the calculated characteristic amount is checked with the characteristic amount of the face pattern contained in the dictionary corresponding to the posture of the camera housing out of the dictionaries DC_1 to DC_3.
When the checking degree exceeds the reference REF, the CPU 26 regards the face of the cat as being rediscovered, and issues a face-frame-structure character display command corresponding to the position and size of the face-detection frame structure FD at the current time point toward the graphic generator 46. As a result, the face-frame-structure character KF1 is displayed on the LCD monitor 38.
The CPU 26 measures the time period required for the limited search process under the pet imaging task, and compares the measured time period with a threshold value TH2 (=three seconds, for example). When the checking degree exceeds the reference REF before the measured time period reaches the threshold value TH2, the CPU 26 promptly executes the recording process. The recording process is executed at a timing shown in
When the pet imaging mode is selected, the CPU 26 executes a plurality of tasks including the pet imaging task shown in
With reference to
The flag FLGpet is set to “0” as an initial setting under the started-up face detecting task, and is updated to “1” when the face image coincident with any one of the face patterns contained in the dictionaries DC_1 to DC_3 is discovered. In a step S11, it is determined whether or not the flag FLGpet indicates “1”, and as long as a determined result is NO, the simple AE process is repeatedly executed in a step S13. The brightness of the through image is moderately adjusted by the simple AE process.
When the determined result is updated from NO to YES, resetting and starting a timer TM1 is executed in a step S15, and the AE process and the AF process are executed in steps S17 and S19, respectively. As a result of the AE process and the AF process, the brightness and the focus of the through image are adjusted strictly.
In a step S21, it is determined whether or not the measured value of the timer TM1 at a time point at which the AF process is completed exceeds the threshold value TH1. When a determined result is NO, the process directly advances to a step S35 and executes the recording process. As a result, the image data representing the scene at a time point at which the AF process is completed is recorded on the recording medium 42 in a file format. Upon completion of the recording process, the process returns to the step S3.
When the determined result in the step S21 is YES, the process advances to a step S23, and sets a partial area covering the face image registered on the register RGST1 as the search area. The search area, having a size which is 1.3 times the face size registered on the register RGST1, is allocated at a position equivalent to the face position registered on the register RGST1. In a step S25, the maximum size SZmax is set to a value which is 1.3 times the face size registered on the register RGST1, and concurrently, the minimum size SZmin is set to a value which is 0.8 times the face size registered on the register RGST1.
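The setup of the steps S23 to S25 can be summarized in code as follows; the center-based rectangle representation of the registered face position is an assumption of this sketch, since the patent registers “position and size” without fixing a convention.

```c
/* Limited-search setup per the steps S23 to S25. */
typedef struct { int cx, cy, size; } face_info_t;   /* from register RGST1 */
typedef struct { int left, top, right, bottom, sz_max, sz_min; } search_t;

search_t limited_search_setup(face_info_t f)
{
    search_t s;
    int half = (int)(f.size * 1.3) / 2;
    s.left = f.cx - half;  s.right  = f.cx + half;   /* S23: 1.3x area  */
    s.top  = f.cy - half;  s.bottom = f.cy + half;
    s.sz_max = (int)(f.size * 1.3);                  /* S25: upper size */
    s.sz_min = (int)(f.size * 0.8);                  /* S25: lower size */
    return s;
}
```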
Upon completion of the process in the step S25, the face detecting task is started up again in a step S27, and resetting and starting the timer TM1 is executed in a step S29. As described above, the flag FLGpet is set to “0” as the initial setting under the started-up face detecting task, and is updated to “1” when the face image coincident with any one of the face patterns contained in the dictionaries DC_1 to DC_3 is discovered. In a step S31, it is determined whether or not the flag FLGpet indicates “1”, and in a step S33, it is determined whether or not the measured value of the timer TM1 exceeds the threshold value TH2.
When the flag FLGpet is updated from “0” to “1” before the measured time period of the timer TM1 reaches the threshold value TH2, YES is determined in the step S31, and the recording process is executed in a step S35. As a result, the image data representing the scene at a time point at which the flag FLGpet is updated to “1” is recorded on the recording medium 42 in a file format. Upon completion of the recording process, the process returns to the step S3.
When the measured value of the timer TM1 reaches the threshold value TH2 with the flag FLGpet indicating “0”, YES is determined in the step S33, and the process returns to the step S3 without executing the recording process in the step S35.
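The control flow of the steps S11 to S35 may be summarized as in the following sketch, in which the extern helper functions stand in for the processes described above, the busy-wait on clock() is a simplification of the timer TM1, and the example threshold values of one second and three seconds are used for TH1 and TH2.

```c
#include <time.h>

/* Control-flow sketch of the pet imaging task, steps S11 to S35.  The face
   detecting task resets FLGpet to "0" when started up. */
extern volatile int FLGpet;
extern void simple_ae(void), ae_process(void), af_process(void);
extern void start_face_detection_whole(void), start_face_detection_limited(void);
extern void recording_process(void);

void pet_imaging_task(void)
{
    const double TH1 = 1.0, TH2 = 3.0;           /* example thresholds (s)  */
    start_face_detection_whole();
    while (!FLGpet) simple_ae();                 /* S11, S13                */
    clock_t t0 = clock();                        /* S15: reset/start TM1    */
    ae_process();                                /* S17                     */
    af_process();                                /* S19                     */
    double dt = (double)(clock() - t0) / CLOCKS_PER_SEC;
    if (dt <= TH1) { recording_process(); return; }        /* S21 -> S35    */
    start_face_detection_limited();              /* S23 to S27              */
    t0 = clock();                                /* S29: restart TM1        */
    while ((double)(clock() - t0) / CLOCKS_PER_SEC <= TH2) /* S33           */
        if (FLGpet) { recording_process(); return; }       /* S31 -> S35    */
    /* TH2 expired without rediscovery: return without recording.          */
}
```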
With reference to
In a step S51, a checking process for checking the calculated characteristic amount with the characteristic amount of each of the face patterns contained in the dictionaries DC_1 to DC_3 is executed. Upon completion of the checking process, in a step S53, it is determined whether or not the flag FLGpet indicates “1”. When a determined result is YES, the process is ended, while when the determined result is NO, the process advances to a step S55.
In the step S55, it is determined whether or not the face-detection frame structure FD reaches a lower right position of the search area. When a determined result is NO, in a step S57, the face-detection frame structure FD is moved in a raster direction by a predetermined amount, and thereafter, the process returns to the step S49. When the determined result is YES, in a step S59, it is determined whether or not the size of the face-detection frame structure FD is equal to or less than “SZmin”. When the determined result is NO, in a step S61, the size of the face-detection frame structure FD is reduced by “5”, and the face-detection frame structure FD is placed at an upper left position of the search area in a step S63. Thereafter, the process returns to the step S49. When a determined result in the step S59 is YES, the process directly returns to the step S43.
The checking process in the step S51 shown in
In the step S73, the variable DIR is set to “1”. In a step S75, the characteristic amount of the image data belonging to the face-detection frame structure FD is checked with the characteristic amount of the face pattern contained in a dictionary DC_DIR. In a step S77, it is determined whether or not the checking degree exceeds the reference REF.
When a determined result is NO, the variable DIR is incremented in a step S79, and in a step S81, it is determined whether or not the incremented variable DIR exceeds “3”. If DIR≦3 is established, the process returns to the step S75, while if DIR>3 is established, the process returns to the routine in an upper hierarchy.
When the determined result in the step S77 is YES, the process advances to a step S83, and the current position and size of the face-detection frame structure FD are registered as the face image information on the register RGST1. In a step S85, the face-frame-structure character display command corresponding to the position and size of the face-detection frame structure FD at the current time point is issued toward the graphic generator 46. As a result, the face-frame-structure character KF1 is displayed on the through image in an OSD manner. Upon completion of the process in the step S85, the flag FLGpet is set to “1” in a step S87, and the process returns to the routine in the upper hierarchy.
In steps S89 to S91, processes similar to those in the above-described steps S75 to S77 are executed. In the step S89, the dictionary corresponding to the posture of the camera housing out of the dictionaries DC_1 to DC_3 is referred to. If the checking degree is equal to or less than the reference REF, the process directly returns to the routine in the upper hierarchy, and if the checking degree exceeds the reference REF, the process executes, in steps S93 to S95, processes similar to those in the above-described steps S85 to S87, and then returns to the routine in the upper hierarchy.
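The whole-area branch of the checking process (steps S73 to S87) maps onto code as sketched below; the characteristic amount and the checking degree are abstracted behind stand-in functions because the text does not define how they are computed, and the reference value REF shown here is illustrative.

```c
/* Sketch of the whole-area branch of the checking process, steps S73 to S87. */
#define REF 0.75   /* illustrative reference value */

extern double checking_degree(const double *feat, int dictionary_id);
extern void   register_face(int x, int y, int size);            /* S83 */
extern void   issue_face_frame_command(int x, int y, int size); /* S85 */
extern volatile int FLGpet;

int check_against_dictionaries(const double *feat, int x, int y, int size)
{
    for (int dir = 1; dir <= 3; dir++) {          /* S73, S79, S81: DC_1..DC_3 */
        if (checking_degree(feat, dir) > REF) {   /* S75, S77                  */
            register_face(x, y, size);            /* S83                       */
            issue_face_frame_command(x, y, size); /* S85                       */
            FLGpet = 1;                           /* S87                       */
            return 1;
        }
    }
    return 0;   /* DIR > 3: return to the routine in the upper hierarchy */
}
```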
As can be seen from the above-described explanation, the imager 16, having the imaging surface capturing the scene, repeatedly generates the scene image. The CPU 26 searches for the face image coincident with any one of the face patterns contained in the dictionaries DC_1 to DC_3 from the scene image generated by the imager 16 (S9), and adjusts an imaging parameter by noticing an animal equivalent to the discovered face image (S17, S19). The CPU 26 also searches for the face image coincident with any one of the face patterns contained in the dictionaries DC_1 to DC_3 from the scene image generated by the imager 16 after the adjusting process of the imaging parameter is completed (S27), and records the scene image corresponding to the discovered face image on the recording medium 42 (S31, S35).
Thus, the imaging parameter is adjusted by noticing the animal equivalent to the discovered face image. Moreover, the searching process of the face image is executed again after adjusting the imaging parameter. Furthermore, the scene image is recorded corresponding to the discovery of the face image by the searching process executed again. Thereby, the frequency of the face image of the animal appearing in the recorded scene image and the image quality of the face image of the animal appearing in the recorded scene image are improved. Thus, the imaging performance is improved.
It is noted that in this embodiment, the limited searching process is executed after the AE process and the AF process are completed (see steps S23 to S25 shown in
In this case, instead of the process according to the flowchart shown in
Moreover, according to
Furthermore, in this embodiment, the imaging condition is adjusted only immediately after the face image is discovered by the whole area searching process (see steps S17 to S19 shown in
Moreover, in this embodiment, a still camera which records a still image is assumed; however, the present invention may also be applied to a movie camera which records a moving image.
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.