The disclosure of Japanese Patent Application No. 2011-12514, which was filed on Jan. 25, 2011, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an electronic camera. More particularly, the present invention relates to an electronic camera which sets a distance from a focus lens to an imaging surface to a distance corresponding to a focal point.
2. Description of the Related Art
According to one example of this type of camera, in a display screen in which an animal which is an object and a cage which is an obstacle are displayed, a focal point is set by an operator moving a focus setting mark onto a part of the display region of the animal to be focused on. Moreover, a non-focal point is set by the operator moving a non-focal-point setting mark onto a part of the display region of the cage that is not to be focused on. The imaging lens is then moved, and shooting is performed so as to come into focus on the focal point thus set.
However, in the above-described camera, an operation by the operator is needed to set the focal point, and therefore, focusing performance may be degraded if the operator is unskilled.
An electronic camera according to the present invention, comprises: an imager, having an imaging surface capturing an optical image through a focus lens, which repeatedly outputs an electronic image corresponding to the optical image; a designator which designates, on the imaging surface, an area corresponding to a partial image coincident with a dictionary image out of the electronic image outputted from the imager; a changer which repeatedly changes a distance from the focus lens to the imaging surface after a designating process of the designator; a calculator which calculates a matching degree between a partial image outputted from the imager corresponding to the area designated by the designator and the dictionary image, corresponding to each of a plurality of distances defined by the changer; and an adjuster which adjusts the distance from the focus lens to the imaging surface based on a calculated result of the calculator.
According to the present invention, a computer program embodied in a tangible medium, which is executed by a processor of an electronic camera, comprises: an imaging step, having an imaging surface capturing an optical image through a focus lens, of repeatedly outputting an electronic image corresponding to the optical image; a designating step of designating, on the imaging surface, an area corresponding to a partial image coincident with a dictionary image out of the electronic image outputted from the imaging step; a changing step of repeatedly changing a distance from the focus lens to the imaging surface after a designating process of the designating step; a calculating step of calculating a matching degree between a partial image outputted from the imaging step corresponding to the area designated by the designating step and the dictionary image, corresponding to each of a plurality of distances defined by the changing step; and an adjusting step of adjusting the distance from the focus lens to the imaging surface based on a calculated result of the calculating step.
According to the present invention, an imaging control method executed by an electronic camera, comprises: an imaging step, having an imaging surface capturing an optical image through a focus lens, of repeatedly outputting an electronic image corresponding to the optical image; a designating step of designating, on the imaging surface, an area corresponding to a partial image coincident with a dictionary image out of the electronic image outputted from the imaging step; a changing step of repeatedly changing a distance from the focus lens to the imaging surface after a designating process of the designating step; a calculating step of calculating a matching degree between a partial image outputted from the imaging step corresponding to the area designated by the designating step and the dictionary image, corresponding to each of a plurality of distances defined by the changing step; and an adjusting step of adjusting the distance from the focus lens to the imaging surface based on a calculated result of the calculating step.
The above-described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to the drawings, an electronic camera according to one embodiment of the present invention basically operates as follows.
The distance from the focus lens to the imaging surface is repeatedly changed after the area corresponding to the partial image coincident with the dictionary image is designated on the imaging surface. The matching degree between the partial image corresponding to the designated area and the dictionary image is calculated corresponding to each of the plurality of distances defined by the changing process. The calculated matching degree is regarded as a focus degree corresponding to an object equivalent to the partial image, and the distance from the focus lens to the imaging surface is adjusted based on the matching degree. Thereby, a focus performance is improved.
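The loop described above can be summarized in code. The following is a minimal sketch, not the patent's implementation: the camera hook `capture_at`, the lens-position range, and the use of normalized cross-correlation as the characteristic-amount comparison are all assumptions made for illustration, and the designated area is assumed to have the same size as the dictionary image.

```python
import numpy as np

def match_degree(patch: np.ndarray, dictionary: np.ndarray) -> float:
    # Normalized cross-correlation as a stand-in for the characteristic-amount
    # comparison between a partial image and a dictionary image (assumption).
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    d = (dictionary - dictionary.mean()) / (dictionary.std() + 1e-9)
    return float((p * d).mean())

def autofocus(capture_at, lens_positions, area, dictionary):
    # capture_at(pos) returns the electronic image with the lens at pos
    # (hypothetical camera hook); area = (y, x, h, w) is the designated area.
    y, x, h, w = area
    scores = {}
    for pos in lens_positions:              # repeatedly change the lens distance
        frame = capture_at(pos)
        scores[pos] = match_degree(frame[y:y+h, x:x+w], dictionary)
    return max(scores, key=scores.get)      # matching degree doubles as focus degree
```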
With reference to the drawings, a digital camera 10 according to this embodiment includes a focus lens 12 driven by a driver 18a. An optical image of a scene that passed through the focus lens 12 enters an imaging surface of an image sensor 16.
When a power source is applied, in order to execute a moving-image taking process, a CPU 26 commands a driver 18c to repeat an exposure procedure and an electric-charge reading-out procedure under an imaging task. In response to a vertical synchronization signal Vsync periodically generated from an SG (Signal Generator) not shown, the driver 18c exposes the imaging surface and reads out the electric charges produced on the imaging surface in a raster scanning manner. From the image sensor 16, raw image data that is based on the read-out electric charges is cyclically outputted.
A pre-processing circuit 20 performs processes such as digital clamp, pixel defect correction, and gain control on the raw image data outputted from the image sensor 16. The raw image data on which these processes are performed is written into a raw image area 32a of an SDRAM 32 through a memory control circuit 30.
A post-processing circuit 34 reads out the raw image data accommodated in the raw image area 32a through the memory control circuit 30, and performs a color separation process, a white balance adjusting process and a YUV converting process on the read-out raw image data. Furthermore, the post-processing circuit 34 applies a zoom process for display and a zoom process for search, in a parallel manner, to the image data complying with a YUV format. As a result, display image data and search image data that comply with the YUV format are individually created. The display image data is written into a display image area 32b of the SDRAM 32 by the memory control circuit 30. The search image data is written into a search image area 32c of the SDRAM 32 by the memory control circuit 30.
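The following is a rough sketch of this dual-output flow. The actual color-separation, white-balance and YUV algorithms are not specified in the text, so trivial stand-ins are used, and the output sizes are illustrative.

```python
import numpy as np

def postprocess(rgb: np.ndarray, display_size=(480, 640), search_size=(120, 160)):
    # Stand-in for the YUV converting process: keep only the luma (Y) plane.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    def zoom(img, size):
        # Nearest-neighbor resize as a stand-in for the zoom processes.
        rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
        return img[rows][:, cols]

    # One frame yields both outputs: display image data and search image data.
    return zoom(y, display_size), zoom(y, search_size)
```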
An LCD driver 36 repeatedly reads out the display image data accommodated in the display image area 32b through the memory control circuit 30, and drives an LCD monitor 38 based on the read-out image data. As a result, a real-time moving image (a live view image) of the scene is displayed on a monitor screen.
With reference to the drawings, an evaluation area EVA is allocated to the imaging surface. The evaluation area EVA is divided into 256 partial evaluation areas.
An AE evaluating circuit 22 integrates RGB data belonging to the evaluation area EVA, out of the RGB data produced by the pre-processing circuit 20, every time the vertical synchronization signal Vsync is generated. Thereby, 256 integral values (256 AE evaluation values) are outputted from the AE evaluating circuit 22 in response to the vertical synchronization signal Vsync. An AF evaluating circuit 24 integrates a high-frequency component of the RGB data belonging to the evaluation area EVA, out of the RGB data generated by the pre-processing circuit 20, every time the vertical synchronization signal Vsync is generated. Thereby, 256 integral values (256 AF evaluation values) are outputted from the AF evaluating circuit 24 in response to the vertical synchronization signal Vsync. Processes based on the thus-obtained AE evaluation values and AF evaluation values will be described later.
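The count of 256 values suggests the evaluation area EVA is handled as a 16×16 grid of partial areas; that division is an inference from the count, not stated explicitly. A sketch of both integrations, with a simple horizontal difference standing in for the unspecified high-frequency extraction:

```python
import numpy as np

def evaluation_values(component: np.ndarray, grid=16) -> np.ndarray:
    # Integrate a component over a grid x grid division of the evaluation area,
    # yielding grid*grid (here 256) evaluation values, as the AE/AF circuits do.
    h, w = component.shape
    bh, bw = h // grid, w // grid
    trimmed = component[:bh * grid, :bw * grid]
    blocks = trimmed.reshape(grid, bh, grid, bw)
    return blocks.sum(axis=(1, 3)).ravel()      # 256 integral values

def high_frequency(component: np.ndarray) -> np.ndarray:
    # Stand-in for the high-frequency component integrated by the AF circuit.
    return np.abs(np.diff(component, axis=1, prepend=component[:, :1]))
```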
Under a person detecting task executed in parallel with the imaging task, the CPU 26 sets a flag FLG_f to “0” as an initial setting. Subsequently, the CPU 26 executes a face detecting process in order to search for a face image of a person in the search image data accommodated in the search image area 32c, every time the vertical synchronization signal Vsync is generated.
In the face detecting process, a face-detection frame structure FD of which the size is adjusted as described below is used. Moreover, a face dictionary DC_F containing five dictionary images, to which dictionary numbers “1” to “5” are respectively assigned, is referred to.
In the face detecting process, firstly, the whole evaluation area EVA is set as a face-portion search area. Moreover, in order to define a variable range of the size of the face-detection frame structure FD, a maximum size FSZmax is set to “200”, and a minimum size FSZmin is set to “20”.
The face-detection frame structure FD is moved by a predetermined amount at a time in the raster scanning manner, from a start position (an upper left position) toward an ending position (a lower right position) of the face-portion search area. Moreover, the size of the face-detection frame structure FD is reduced stepwise from “FSZmax” to “FSZmin” every time the frame structure reaches the ending position.
Partial search image data belonging to the face-detection frame structure FD is read out from the search image area 32c through the memory control circuit 30. A characteristic amount of the read-out search image data is compared with a characteristic amount of each of the five dictionary images contained in the face dictionary DC_F. When a matching degree equal to or more than a threshold value TH_F is obtained, it is regarded that a face image has been detected. A position and a size of the face-detection frame structure FD at a current time point, and the dictionary number of the matched dictionary image, are registered as face information in a face-detection register RGSTface.
Thus, when the persons HM1, HM2 and HM3 are captured by the imaging surface as illustrated in the drawings, face information corresponding to each detected face image is registered in the face-detection register RGSTface. The drawings likewise illustrate a scene in which the persons HM4, HM5 and HM6 are captured by the imaging surface.
After the face detecting process is completed, when there is no registration of the face information in the face-detection register RGSTface, i.e., when a face of a person has not been discovered, the CPU 26 executes a human-body detecting process in order to search for a human-body image from the search image data accommodated in the search image area 32c.
In the human-body detecting process, a human-body-detection frame structure BD of which the size is adjusted as described below is used. Moreover, a human-body dictionary DC_B containing a single dictionary image is referred to.
In the human-body detecting process, firstly, the whole evaluation area EVA is set as a human-body search area. Moreover, in order to define a variable range of the size of the human-body-detection frame structure BD, a maximum size BSZmax is set to “200”, and a minimum size BSZmin is set to “20”.
The human-body-detection frame structure BD is also moved by a predetermined amount at a time in the raster scanning manner, from a start position (an upper left position) toward an ending position (a lower right position) of the human-body search area. Moreover, the size of the human-body-detection frame structure BD is reduced stepwise from “BSZmax” to “BSZmin” every time the frame structure reaches the ending position.
Partial search image data belonging to the human-body-detection frame structure BD is read out from the search image area 32c through the memory control circuit 30. A characteristic amount of the read-out search image data is compared with a characteristic amount of the dictionary image contained in the human-body dictionary DC_B. When a matching degree equal to or more than a threshold value TH_B is obtained, it is regarded that a human-body image has been detected. A position and a size of the human-body-detection frame structure BD at a current time point are registered as human-body information in a human-body-detection register RGSTbody.
Thus, when the persons HM1, HM2 and HM3 are captured by the imaging surface as illustrated in the drawings, human-body information corresponding to each detected human-body image is registered in the human-body-detection register RGSTbody. The drawings likewise illustrate a scene in which the persons HM4, HM5 and HM6 are captured by the imaging surface.
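Both detecting processes share the same multi-scale raster scan; they differ only in the dictionary (five face images versus one human-body image) and the threshold (TH_F versus TH_B). The following sketch captures that shared structure under assumptions: the scan step, the size-reduction step and the resize-based comparison are illustrative, not taken from the source.

```python
import numpy as np

def _resize(img, shape):
    # Nearest-neighbor resize so a frame patch can be compared with a dictionary image.
    rows = np.linspace(0, img.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, shape[1]).astype(int)
    return img[rows][:, cols]

def _match(a, b):
    # Normalized cross-correlation as a stand-in for the characteristic-amount comparison.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def scan(search_image, dictionaries, threshold, size_max=200, size_min=20,
         step=10, reduction=20):
    # dictionaries: {dictionary_number: dictionary_image}; for the face dictionary
    # DC_F the numbers would be 1..5, for the human-body dictionary DC_B just {0: ...}.
    register = []
    size = size_max
    while size >= size_min:
        for y in range(0, search_image.shape[0] - size + 1, step):   # raster scan
            for x in range(0, search_image.shape[1] - size + 1, step):
                patch = search_image[y:y + size, x:x + size]
                for number, dic in dictionaries.items():
                    if _match(_resize(patch, dic.shape), dic) >= threshold:
                        register.append({"pos": (y, x), "size": size, "dict": number})
        size -= reduction        # shrink the frame, then rescan from the upper left
    return register
```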
After the face detecting process is completed, when the face information has been registered in the face-detection register RGSTface, the CPU 26 uses, as a target region of an AF process described later, a region indicated by the face information having the largest size out of the face information registered in the face-detection register RGSTface. When a plurality of face information having the largest size are registered, a region indicated by the face information whose position is the nearest to the center of the imaging surface, out of the plurality, is used as the target region of the AF process.
When the persons HM1, HM2 and HM3, or the persons HM4, HM5 and HM6, are captured by the imaging surface as illustrated in the drawings, a target region of the AF process is selected from the registered face information according to these criteria.
After the human-body detecting process is executed and completed, when the human-body information is registered in the human-body-detection register RGSTbody, the CPU 26 uses, as a target region of the AF process described later, a region indicated by the human-body information having the largest size out of the human-body information registered in the human-body-detection register RGSTbody. When a plurality of human-body information having the largest size are registered, a region indicated by the human-body information whose position is the nearest to the center of the imaging surface, out of the plurality, is used as the target region of the AF process.
When the persons HM1, HM2 and HM3, or the persons HM4, HM5 and HM6, are captured by the imaging surface as illustrated in the drawings, a target region of the AF process is likewise selected from the registered human-body information.
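The selection rule is the same for faces and human bodies: largest size first, then proximity to the center as the tie-break. A sketch operating on entries of the form produced by the scan sketch above (an illustrative layout, not the patent's register format):

```python
def select_target(register, center):
    # register: list of {"pos": (y, x), "size": s, "dict": n};
    # center: (cy, cx) of the imaging surface.
    if not register:
        return None
    largest = max(entry["size"] for entry in register)
    candidates = [e for e in register if e["size"] == largest]

    def center_distance_sq(entry):
        y, x = entry["pos"]
        s = entry["size"]
        return (y + s / 2 - center[0]) ** 2 + (x + s / 2 - center[1]) ** 2

    return min(candidates, key=center_distance_sq)  # nearest to the center wins the tie
```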
A position and a size of the face information or the human-body information used as the target region of the AF process, and the dictionary number of the matched dictionary image, are registered in an AF target register RGSTaf. When the human-body information is used, “0” is described as the dictionary number. Moreover, the CPU 26 sets the flag FLG_f to “1” in order to declare that a person has been discovered.
It is noted that, after the human-body detecting process is completed, when there is no registration of the human-body information in the human-body-detection register RGSTbody, i.e., when neither a face nor a human body has been discovered, the CPU 26 sets the flag FLG_f to “0” in order to declare that a person is undiscovered.
When a shutter button 28sh is in a non-operated state, the CPU 26 executes the following processes. When the flag FLG_f indicates “0”, the CPU 26 executes a simple AE process that is based on the output from the AE evaluating circuit 22 under the imaging task so as to calculate an appropriate EV value. The simple AE process is executed in parallel with the moving-image taking process, and an aperture amount and an exposure time period that define the calculated appropriate EV value are set to the drivers 18b and 18c, respectively. As a result, a brightness of the live view image is roughly adjusted.
When the flag FLG_f is updated to “1”, the CPU 26 requests a graphic generator 46 to display a person frame structure HF with reference to a registration content of the face-detection register RGSTface or the human-body-detection register RGSTbody. The graphic generator 46 outputs graphic information representing the person frame structure HF toward the LCD driver 36. The person frame structure HF is displayed on the LCD monitor 38 in a manner adapted to the position and size of any of the face image and the human-body image detected under the person detecting task.
Thus, when the persons HM1, HM2 and HM3 are captured by the imaging surface as illustrated in the drawings, the person frame structure HF is displayed so as to surround any of the detected face image and human-body image.
When the flag FLG_f is updated to “1”, the CPU 26 extracts, out of the 256 AE evaluation values outputted from the AE evaluating circuit 22, AE evaluation values corresponding to a position of a face image or a human-body image respectively registered in the face-detection register RGSTface or the human-body-detection register RGSTbody.
The CPU 26 executes a strict AE process that is based on the extracted partial AE evaluation values. An aperture amount and an exposure time period that define an optimal EV value calculated by the strict AE process are set to the drivers 18b and 18c, respectively. As a result, a brightness of a live view image is adjusted to a brightness in which a part of the scene equivalent to the position of the face image or the human-body image is noticed.
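A sketch of the extraction step follows, assuming the 16×16 grid inferred earlier; how the resulting brightness maps to the optimal EV value is not described in the text, so the sketch stops at averaging the extracted values.

```python
import numpy as np

def strict_ae_brightness(ae_values: np.ndarray, region, grid=16):
    # ae_values: the 256 AE evaluation values; region = (y, x, h, w) in grid-cell
    # coordinates covering the detected face or human-body image (assumption).
    y, x, h, w = region
    cells = ae_values.reshape(grid, grid)[y:y + h, x:x + w]
    return float(cells.mean())   # brightness on which the optimal EV value is based
```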
When the shutter button 28sh is half depressed, the CPU 26 executes a normal AF process or a person-priority AF process. When the flag FLG_f indicates “0”, the CPU 26 extracts, out of the 256 AF evaluation values outputted from the AF evaluating circuit 24, AF evaluation values corresponding to a predetermined region of a center of the scene. The CPU 26 executes a normal AF process that is based on the extracted partial AF evaluation values. As a result, the focus lens 12 is placed at a focal point in which the center of the scene is noticed, and thereby, a sharpness of a live view image is improved.
When the flag FLG_f indicates “1”, in order to finalize a target region of the AF process, the CPU 26 duplicates descriptions of the AF target register RGSTaf onto a finalization register RGSTdcd. Subsequently, the CPU 26 executes the person-priority AF process so as to place the focus lens 12 at a focal point in which a person is noticed. For the person-priority AF process, a comparing register RGSTref is prepared.
In the person-priority AF process, firstly, the focus lens 12 is placed at an infinite-side end. Every time the vertical synchronization signal Vsync is generated, the CPU 26 commands the driver 18a to move the focus lens 12 by a predetermined width, and the driver 18a moves the focus lens 12 from the infinite-side end toward a nearest-side end by the predetermined width.
Every time the focus lens 12 is moved, partial search image data belonging within the target region of the AF process described in the finalization register RGSTdcd is read out from the search image area 32c through the memory control circuit 30. A characteristic amount of the read-out search image data is compared with a characteristic amount of the dictionary image indicated by the dictionary number described in the finalization register RGSTdcd. When the dictionary number indicates any of “1” to “5”, the corresponding dictionary image contained in the face dictionary DC_F is used for the comparison, and when the dictionary number indicates “0”, the dictionary image contained in the human-body dictionary DC_B is used for the comparison. A position of the focus lens 12 at a current time point and the obtained matching degree are registered in the comparing register RGSTref.
When the focus lens 12 reaches the nearest-side end, the matching degrees registered for the respective positions of the focus lens 12 from the infinite-side end to the nearest-side end are evaluated, and a lens position indicating the maximum matching degree is detected as a focal point. The focal point thus discovered is set to the driver 18a, and the driver 18a places the focus lens 12 at the focal point. As a result, a sharpness of a person image included in the target region of the AF process is improved.
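A sketch of this sweep follows, reusing the `_match` and `_resize` helpers from the scan sketch above; the end positions, the predetermined width, and the `move_lens`/`capture` hooks are illustrative placeholders for the lens drive managed by the driver 18a.

```python
def person_priority_af(move_lens, capture, target, dictionary,
                       infinite_end=0.0, nearest_end=1.0, width=0.05):
    # target = (y, x, h, w): the target region of the AF process (from RGSTdcd);
    # dictionary: the dictionary image named by the registered dictionary number.
    y, x, h, w = target
    rgst_ref = []                            # (lens position, matching degree) pairs
    pos = infinite_end
    while pos <= nearest_end:
        move_lens(pos)
        frame = capture()
        degree = _match(_resize(frame[y:y + h, x:x + w], dictionary.shape), dictionary)
        rgst_ref.append((pos, degree))
        pos += width                         # predetermined width per Vsync
    focal, _ = max(rgst_ref, key=lambda entry: entry[1])
    move_lens(focal)                         # place the lens at the focal point
    return focal
```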
According to an example illustrated in the drawings, a person who is an object is captured together with a fence FC which is an obstacle, and a curve CV1 represents the matching degree obtained at each position of the focus lens 12.
According to the curve CV1, the matching degree is not maximum at a lens position LPS1 corresponding to a position of the fence FC. On the other hand, when a position of the focus lens 12 is at LPS2, the matching degree indicates a maximum value MV1, and therefore, the lens position LPS2 is detected as a focal point. Thus, the focus lens 12 is placed at the lens position LPS2.
When the shutter button 28sh is fully depressed after completion of the normal AF process or the person-priority AF process, the CPU 26 executes the still-image taking process and the recording process under the imaging task. One frame of image data at a time point at which the shutter button 28sh is fully depressed is taken into a still-image area 32d. The taken one frame of the image data is read out from the still-image area 32d by an I/F 40 which is activated in association with the recording process, and is recorded on a recording medium 42 in a file format.
The CPU 26 executes, in a parallel manner, a plurality of tasks including the imaging task and the person detecting task described above. Control programs corresponding to these tasks are stored in a flash memory 44.
With reference to the flowcharts, under the imaging task, the CPU 26 first executes the moving-image taking process. As a result, the live view image is displayed on the LCD monitor 38.
In a step S5, it is determined whether or not the shutter button 28sh is half depressed, and when a determined result is YES, the process advances to a step S17 whereas when the determined result is NO, in a step S7, it is determined whether or not the flag FLG_f is set to “1”. When a determined result of the step S7 is YES, in a step S9, the graphic generator 46 is requested to display the person frame structure HF with reference to a registration content of the face-detection register RGSTface or the human-body-detection register RGSTbody. As a result, the person frame structure HF is displayed on the LCD monitor 38 in a manner adapted to a position and a size of any of a face image and a human-body image detected under the person detecting task.
Upon completion of the process in the step S9, the strict AE process corresponding to the position of the face image or the human-body image is executed in a step S11. An aperture amount and an exposure time period that define an optimal EV value calculated by the strict AE process are set to the drivers 18b and 18c, respectively. As a result, a brightness of the live view image is adjusted to a brightness in which a part of the scene equivalent to the position of the face image or the human-body image is noticed. Upon completion of the process in the step S11, the process returns to the step S5.
When a determined result of the step S7 is NO, in a step S13, the graphic generator 46 is requested to hide the person frame structure HF. As a result, the person frame structure HF displayed on the LCD monitor 38 is hidden.
Upon completion of the process in the step S13, the simple AE process is executed in a step S15. An aperture amount and an exposure time period that define the appropriate EV value calculated by the simple AE process are set to the drivers 18b and 18c, respectively. As a result, a brightness of the live view image is roughly adjusted. Upon completion of the process in the step S15, the process returns to the step S5.
In the step S17, it is determined whether or not the flag FLG_f is set to “1”, and when a determined result is NO, the process advances to a step S23 whereas when the determined result is YES, in order to finalize a target region of the AF process, descriptions of the AF target register RGSTaf are duplicated onto the finalization register RGSTdcd in a step S19.
In a step S21, the person-priority AF process is executed so as to place the focus lens 12 at a focal point in which a person is noticed. As a result, a sharpness of a person image included in the target region of the AF process is improved. Upon completion of the process in the step S21, the process advances to a step S25.
In the step S23, the normal AF process is executed. As a result, the focus lens 12 is placed at a focal point in which the center of the scene is noticed, and thereby, a sharpness of the live view image is improved. Upon completion of the process in the step S23, the process advances to the step S25.
In the step S25, it is determined whether or not the shutter button 28sh is fully depressed, and when a determined result is NO, in a step S27, it is determined whether or not the operation of the shutter button 28sh is cancelled. When a determined result of the step S27 is NO, the process returns to the step S25 whereas when the determined result of the step S27 is YES, the process returns to the step S5.
When a determined result of the step S25 is YES, the still-image taking process is executed in a step S29, and the recording process is executed in a step S31. One frame of image data at a time point at which the shutter button 28sh is fully depressed is taken into the still-image area 32d by the still-image taking process. The taken one frame of the image data is read out from the still-image area 32d by the I/F 40 which is activated in association with the recording process, and is recorded on the recording medium 42 in a file format. Upon completion of the recording process, the process returns to the step S5.
With reference to the flowcharts, under the person detecting task, the flag FLG_f is set to “0” in a step S41, and it is determined in a step S43 whether or not the vertical synchronization signal Vsync is generated. When the Vsync is generated, the face detecting process is executed in a step S45.
Upon completion of the face detecting process, in a step S47, it is determined whether or not there is any registration of face information in the face-detection register RGSTface, and when a determined result is YES, the process advances to a step S53 whereas when the determined result is NO, the human-body detecting process is executed in a step S49.
Upon completion of the human-body detecting process, in a step S51, it is determined whether or not there is any registration of the human-body information in the human-body-detection register RGSTbody, and when a determined result is YES, the process advances to a step S59 whereas when the determined result is NO, the process returns to the step S41.
In the step S53, it is determined whether or not a plurality of face information in which the size is the largest are registered, out of the face information registered in the face-detection register RGSTface. When a determined result is NO, in a step S55, a region indicated by the face information in which the size is the largest is used as a target region of the AF process. When the determined result is YES, in a step S57, a region indicated by the face information in which the position is the nearest to the center of the imaging surface, out of the plurality of face information in which the size is the largest, is used as a target region of the AF process.
In the step S59, it is determined whether or not a plurality of human-body information in which the size is the largest are registered, out of the human-body information registered in the human-body-detection register RGSTbody. When a determined result is NO, in a step S61, a region indicated by the human-body information in which the size is the largest is used as a target region of the AF process. When the determined result is YES, in a step S63, a region indicated by the human-body information in which the position is the nearest to the center of the imaging surface, out of the plurality of human-body information in which the size is the largest, is used as a target region of the AF process.
Upon completion of the process in any of the steps S55, S57, S61 and S63, in a step S65, a position and a size of the face information or the human-body information used as the target region of the AF process, and the dictionary number of the matched dictionary image, are registered in the AF target register RGSTaf. It is noted that, when the human-body information is used, “0” is described as the dictionary number.
In a step S67, in order to declare that a person has been discovered, the flag FLG_f is set to “1”, and thereafter, the process returns to the step S43.
The person-priority AF process in the step S21 is executed according to a subroutine shown in the drawings. In a step S71, an expected position is set to the infinite-side end, in a step S73, generation of the vertical synchronization signal Vsync is waited for, and in a step S75, the focus lens 12 is moved to the expected position.
In a step S77, a characteristic amount of partial search image data belonging within the target region of the AF process described in the finalization register RGSTdcd is calculated, and in a step S79, the calculated characteristic amount is compared with a characteristic amount of the dictionary image indicated by the dictionary number described in the finalization register RGSTdcd. When the dictionary number indicates any of “1” to “5”, the corresponding dictionary image contained in the face dictionary DC_F is used for the comparison, and when the dictionary number indicates “0”, the dictionary image contained in the human-body dictionary DC_B is used for the comparison. In a step S81, a position of the focus lens 12 at a current time point and the obtained matching degree are registered in the comparing register RGSTref.
In a step S83, the expected position is updated by subtracting the predetermined width from the current expected position, and in a step S85, it is determined whether or not the newly set expected position exceeds the nearest-side end. When a determined result is NO, the process returns to the step S73 whereas when the determined result is YES, the process advances to a step S87.
In the step S87, the expected position is set to the lens position indicating the maximum matching degree out of the matching degrees registered in the comparing register RGSTref, and in a step S89, the focus lens 12 is moved to the expected position. Upon completion of the process in the step S89, the process returns to the routine in an upper hierarchy.
The face detecting process in the step S45 is executed according to a subroutine shown in the drawings.
In a step S93, the whole evaluation area EVA is set as a search area. In a step S95, in order to define a variable range of a size of the face-detection frame structure FD, a maximum size FSZmax is set to “200”, and a minimum size FSZmin is set to “20”.
In a step S97, the size of the face-detection frame structure FD is set to “FSZmax”, and in a step S99, the face-detection frame structure FD is placed at an upper left position of the search area. In a step S101, partial search image data belonging to the face-detection frame structure FD is read out from the search image area 32c so as to calculate a characteristic amount of the read-out search image data.
In a step S103, a variable N is set to “1”, and in a step S105, the characteristic amount calculated in the step S101 is compared with a characteristic amount of a dictionary image in the face dictionary DC_F in which a dictionary number is N. In a step S107, it is determined whether or not a matching degree equal to or more than a threshold value TH_F is obtained, and when a determined result is NO, the process advances to a step S111 whereas when the determined result is YES, the process advances to the step S111 via a process in a step S109.
In the step S109, a position and a size of the face-detection frame structure FD and a dictionary number of a comparing resource at a current time point are registered as face information in the face-detection register RGSTface.
In the step S111, the variable N is incremented, and in a step S113, it is determined whether or not the variable N exceeds “5”. When a determined result is NO, the process returns to the step S105 whereas when the determined result is YES, in a step S115, it is determined whether or not the face-detection frame structure FD has reached a lower right position of the search area.
When a determined result of the step S115 is NO, in a step S117, the face-detection frame structure FD is moved in a raster direction by a predetermined amount, and thereafter, the process returns to the step S101. When the determined result of the step S115 is YES, in a step S119, it is determined whether or not the size of the face-detection frame structure FD is equal to or less than “FSZmin”. When a determined result of the step S119 is NO, in a step S121, the size of the face-detection frame structure FD is reduced by a scale of “5”, in a step S123, the face-detection frame structure FD is placed at the upper left position of the search area, and thereafter, the process returns to the step S101. When the determined result of the step S119 is YES, the process returns to the routine in an upper hierarchy.
The human-body detecting process in the step S49 is executed according to a subroutine shown in the drawings.
In a step S133, the whole evaluation area EVA is set as a search area. In a step S135, in order to define a variable range of a size of the human-body-detection frame structure BD, a maximum size BSZmax is set to “200”, and a minimum size BSZmin is set to “20”.
In a step S137, the size of the human-body-detection frame structure BD is set to “BSZmax”, and in a step S139, the human-body-detection frame structure BD is placed at the upper left position of the search area. In a step S141, partial search image data belonging to the human-body-detection frame structure BD is read out from the search image area 32c so as to calculate a characteristic amount of the read-out search image data.
In a step S143, the characteristic amount calculated in the step S141 is compared with a characteristic amount of a dictionary image in the human-body dictionary DC_B. In a step S145, it is determined whether or not a matching degree equal to or more than a threshold value TH_B is obtained, and when a determined result is NO, the process advances to a step S149 whereas when the determined result is YES, the process advances to the step S149 via a process in a step S147. In the step S147, a position and a size of the human-body-detection frame structure BD at a current time point are registered as human-body information in the human-body-detection register RGSTbody.
In the step S149, it is determined whether or not the human-body-detection frame structure BD has reached a lower right position of the search area, and when a determined result is NO, in a step S151, the human-body-detection frame structure BD is moved in a raster direction by a predetermined amount, and thereafter, the process returns to the step S141. When the determined result is YES, in a step S153, it is determined whether or not the size of the human-body-detection frame structure BD is equal to or less than “BSZmin”. When a determined result of the step S153 is NO, in a step S155, the size of the human-body-detection frame structure BD is reduced by a scale of “5”, in a step S157, the human-body-detection frame structure BD is placed at the upper left position of the search area, and thereafter, the process returns to the step S141. When the determined result of the step S153 is YES, the process returns to the routine in an upper hierarchy.
As can be seen from the above-described explanation, the image sensor 16 has an imaging surface capturing a scene through the focus lens 12, and repeatedly outputs a scene image. The CPU 26 designates, on the imaging surface, an area corresponding to a partial image coincident with a dictionary image out of the scene image outputted from the image sensor 16. The CPU 26 and the driver 18a repeatedly change a distance from the focus lens 12 to the imaging surface after the designating process. The CPU 26 calculates a matching degree between a partial image outputted from the image sensor 16 corresponding to the area designated by the designating process and the dictionary image, corresponding to each of a plurality of distances defined by the changing process. Moreover, the CPU 26 adjusts the distance from the focus lens 12 to the imaging surface based on a calculated result of the calculating process.
The distance from the focus lens 12 to the imaging surface is repeatedly changed after the area corresponding to the partial image coincident with the dictionary image is designated on the imaging surface. The matching degree between the partial image corresponding to the designated area and the dictionary image is calculated corresponding to each of the plurality of distances defined by the changing process. The calculated matching degree is regarded as a focus degree corresponding to an object equivalent to the partial image, and the distance from the focus lens to the imaging surface is adjusted based on the matching degree. Thereby, a focus performance is improved.
It is noted that, in this embodiment, in the person-priority AF process, the lens position indicating the maximum matching degree, out of the matching degrees obtained at the positions of the focus lens 12 from the infinite-side end to the nearest-side end, is detected as the focal point. However, an AF evaluation value of the target region of the AF process may instead be measured at each lens position at which the matching degree exceeds a threshold value, and a lens position at which the measured AF evaluation value indicates a maximum value may be used as the focal point.
In this case, steps S161 to S165 are executed in place of the step S81, a step S167 is executed in place of the step S87, and an AF evaluation value register RGSTafv is prepared.
In the step S161, it is determined whether or not the matching degree obtained by the process in the step S79 exceeds a threshold value TH_R, and when a determined result is NO, the process advances to the step S83 whereas when the determined result is YES, the process advances to the step S83 via processes in the steps S163 and S165.
In the step S163, an AF evaluation value within the target region of the AF process described in the finalization register RGSTdcd is measured. The measurement is performed by evaluating an average value of the AF evaluation values within the target region of the AF process, out of the 256 AF evaluation values outputted from the AF evaluating circuit 24. In the step S165, a position of the focus lens 12 at a current time point and the measured AF evaluation value are registered in the AF evaluation value register RGSTafv.
In the step S167, the expected position is set to the lens position indicating the maximum value out of the AF evaluation values registered in the AF evaluation value register RGSTafv. Upon completion of the process in the step S167, the process advances to the step S89.
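A sketch of this modified sweep follows, again reusing the helpers above; `af_value` stands in for averaging the AF evaluating circuit's outputs over the target region, and TH_R, the end positions and the width are illustrative values, not taken from the source.

```python
def person_priority_af_variant(move_lens, capture, af_value, target, dictionary,
                               th_r=0.5, infinite_end=0.0, nearest_end=1.0,
                               width=0.05):
    y, x, h, w = target
    rgst_afv = []                            # (lens position, AF evaluation value) pairs
    pos = infinite_end
    while pos <= nearest_end:
        move_lens(pos)
        frame = capture()
        degree = _match(_resize(frame[y:y + h, x:x + w], dictionary.shape), dictionary)
        if degree > th_r:                    # step S161: gate on the matching degree
            rgst_afv.append((pos, af_value(target)))   # steps S163 and S165
        pos += width
    if not rgst_afv:                         # no lens position cleared TH_R
        return None
    focal, _ = max(rgst_afv, key=lambda entry: entry[1])   # step S167
    move_lens(focal)                         # step S89
    return focal
```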
With reference to the drawings, curves CV1 and CV2 represent the matching degrees obtained in the person-priority AF process.
According to the curve CV2, the matching degree exceeds the threshold value TH_R within a range of lens positions from LPS_s to LPS_e. Therefore, at the lens positions within this range, the AF evaluation value within the target region of the AF process described in the finalization register RGSTdcd is measured.
According to the curve CV1, at a lens position corresponding to the position of the fence FC, the matching degree does not exceed the threshold value TH_R, and therefore, the AF evaluation value within the target region of the AF process is not measured. On the other hand, according to the solid line portion of the curve CV2, when a position of the focus lens 12 is at LPS3, the AF evaluation value within the target region of the AF process indicates a maximum value MV2, and therefore, the lens position LPS3 is detected as a focal point. Therefore, the focus lens 12 is placed at the lens position LPS3.
Moreover, in this embodiment, the control programs equivalent to the multi-task operating system and the plurality of tasks executed thereby are previously stored in the flash memory 44. However, a communication I/F 50 may be arranged in the digital camera 10, a part of the control programs may be prepared in the flash memory 44 from the beginning, and the remaining part of the control programs may be acquired from an external server through the communication I/F 50. In this case, the above-described operations are realized by cooperation of the internally stored control programs and the externally acquired control programs.
Moreover, in this embodiment, the processes executed by the CPU 26 are divided into a plurality of tasks including the imaging task and the person detecting task described above. However, each of these tasks may be further divided into a plurality of small tasks, and a part of the divided small tasks may be integrated with another task.
Moreover, in this embodiment, the present invention is explained using a digital still camera; however, the present invention may also be applied to a digital video camera, a cell phone unit, a smartphone, and the like.
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.