Embodiments of the invention relate generally to imaging systems and more particularly to a method, apparatus, and system for autofocusing an imaging system.
A desirable feature in video imaging systems, such as digital video cameras, is continuous autofocus: the ability of the camera to maintain correct focus on a subject even as the camera or the subject moves.
System 100 typically includes a lens 170 for focusing images on an imager 120. System 100 generally also comprises a central processing unit (CPU) 150, such as a microprocessor, that communicates with an input/output (I/O) device 130 over a bus 110. The imager 120 also communicates with the CPU 150 over the bus 110. The system 100 also includes random access memory (RAM) 160, and can include removable memory 140, such as flash memory, which also communicates with the CPU 150 over the bus 110. The imager 120 may be combined with the CPU 150, with or without memory storage, on a single integrated circuit, or may reside on a chip separate from the CPU 150.
A processing circuit 260, which could be implemented as a separate hardware circuit, as a programmed processor, or as part of an image processing circuit employed in imager 120, receives successive captured image frames 250 from a pixel array of imager 120. The processing circuit 260 analyzes the received frames and adjusts the distance between the lens 170 and the imager 120 to bring images captured by the system 100 into focus. Processing circuit 260 could use any autofocusing technique, including techniques that consider more than one previously captured image, techniques that analyze a frame to determine which pixels represent the subject of the frame, and techniques that attempt to predict future autofocus moves from previous autofocus moves. More specific details of such lens adjustment methods and apparatuses are described in U.S. Patent Application Publication Nos. 2006/0012836 and 2007/0009248 and U.S. patent application Ser. Nos. 11/354,126 and 11/486,069, all of which are hereby incorporated herein by reference.
For example, one well known autofocusing method involves analyzing differences in sharpness between image objects in a frame and determining a sharpness score. By applying such a method to a first received frame, processing circuit 260 might determine that the system 100 is out of focus and then step lens 170 from position B to position A. Processing circuit 260 could then analyze a second frame and determine whether to step lens 170 back to position B, to allow lens 170 to remain at position A, or to step lens 170 to a third position C (not shown).
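By way of illustration only, the following sketch shows one way such a sharpness-based, hill-climbing decision could be expressed. The gradient-energy focus measure and the lens-position interface are illustrative assumptions, not details taken from the embodiments described above.

```python
import numpy as np

def sharpness_score(frame: np.ndarray) -> float:
    """Gradient-energy focus measure: a sharper image has stronger
    pixel-to-pixel intensity differences, so its score is higher."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.mean(gx * gx + gy * gy))

def autofocus_step(prev_score: float, curr_score: float,
                   position: int, step: int) -> tuple[int, int]:
    """One hill-climbing decision over lens positions.

    Keep stepping the lens in the same direction while the sharpness
    score improves; otherwise reverse direction. Returns the next
    (lens_position, step) pair."""
    if curr_score >= prev_score:
        return position + step, step      # improving: continue in same direction
    return position - step, -step         # worse: step back the other way
```

Under this sketch, a lower score after the step from position B to position A would send the lens back toward B, while an improved score would let it continue in the same direction.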
The CMOS imager 120 is operated by a sensor control and image processing circuit 350. Circuit 350 controls the row and column circuitry for selecting the appropriate row and column lines for pixel readout, outputs pixel data to other components of system 100, and could perform other processing functions. As is well known in the art, the functions of sensor control and image processing circuit 350, processing circuit 260, and CPU 150 could be implemented as separate components or as a single signal processing circuit located anywhere in system 100.
As noted, system 100 can be operated in a video mode in which successive image frames are captured at a predetermined capture rate. In this mode, imager 120 automatically stores or outputs a series of captured frames. This series of frames corresponds to a digital video, which can be stored in the memory 140 of the system or output from system 100. The same output frames are also used for performing an autofocus operation on a next-acquired frame in order to provide a continuous autofocus operation. However, unlike a non-video digital image capture, where successive image frames can be analyzed in an autofocus operation before an output image is captured, in a video stream all captured images are output, which reduces the number of frames available for an autofocus operation. Consequently, it is often difficult to keep an image in focus when performing such an autofocus operation on a video output frame stream, resulting in out-of-focus images in the output video stream.
Embodiments disclosed herein provide a system 100 with an improved continuous autofocus operation in the video mode, achieved by using additional “hidden frames” which are acquired and used for autofocus operations but which are not output as part of the video output frame stream.
First, at step 400 the system 100 operates imager 120 to capture a “hidden frame” 450. Frame 450 is referred to as a “hidden frame” because it is not output as part of the video output frame stream or otherwise made accessible to a user; it is neither output to a user through I/O device 130 nor stored to removable memory 140. Hidden frame 450 is used by system 100 for autofocus processing purposes, though it could also be used for other image acquisition and processing functions, and it is possible for system 100 to perform autofocusing operations using only hidden frames 450. After system 100 completes the processing functions requiring hidden frame 450, including autofocus, system 100 could overwrite or delete the hidden frame 450. Moreover, while capturing and processing hidden frames, system 100 could disable the signals used to control the output of frame data to users.
At step 410, system 100 performs an autofocus operation using the hidden frame 450 captured at step 400, for example by analyzing the sharpness of hidden frame 450 and adjusting lens 170 accordingly.
At step 420, system 100 captures an “output frame” 460. Unlike hidden frame 450, an output frame 460 is intended to be output or otherwise made available to a user. For example, output frame 460 may be output from the system 100 using I/O device 130, displayed on a video screen associated with system 100, or stored to removable memory 140. Thus, an output frame 460 and a hidden frame 450 differ in that the output frame 460 is available to a user of system 100 in some manner, while a hidden frame 450 is internal to system 100 and is not available to a user under normal operation of system 100.
After capturing an output frame 460, system 100, at step 430, can perform an autofocusing or other processing function using the output frame. For example, step 430 could use the same autofocusing algorithm used during step 410 or a different autofocus algorithm. Further, depending on the specific processing function performed, the processing at step 430 could use output frame 460 alone or together with previously captured hidden frames or output frames. System 100 repeats steps 400, 410, 420, and 430 in order to capture an additional hidden frame 470, an additional output frame 480, and subsequent hidden and output frames. Again, output frames, such as frames 460 and 480, will be available to a user in some way, while hidden frames, such as frames 450 and 470, will not be available to a user through normal operation of system 100.
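Read together, steps 400 through 430 amount to a simple interleaved capture loop. The sketch below is one possible rendering of that loop; the imager, autofocus, and emit_output interfaces are hypothetical placeholders rather than interfaces defined by the embodiments.

```python
def run_video(imager, autofocus, emit_output, hidden_per_output=1):
    """Interleave hidden and output frames (steps 400-430).

    Hidden frames feed only the autofocus logic; output frames are
    both autofocused on and passed to the user-visible video stream.
    """
    while imager.streaming():
        for _ in range(hidden_per_output):
            hidden = imager.capture()   # step 400: capture hidden frame
            autofocus(hidden)           # step 410: autofocus on hidden frame
            del hidden                  # hidden frame is never stored or output
        output = imager.capture()       # step 420: capture output frame
        autofocus(output)               # step 430: autofocus on output frame too
        emit_output(output)             # only output frames reach the user
```

The hidden_per_output parameter reflects the point made next: more than one hidden frame can be captured and processed between consecutive output frames.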
System 100 can capture and use multiple hidden frames 450, 470 between output frames.
Increasing the number of captured hidden frames and autofocusing operations improves the autofocusing function of system 100 and helps ensure that the output frames are in focus. Thus, the more hidden frames system 100 captures and uses in autofocusing, the better focused the output frames. The number of hidden frames that can be captured between output frames is limited, however, by the rate at which imager 120 can capture frames, as discussed below.
In a modified embodiment, system 100 does not need to capture hidden frames with the same resolution as output frames. It has been determined that system 100 can perform adequate autofocus processing using hidden frames which have a resolution as small as 5% to 10% of the resolution of the output frames, which reduces the load on the autofocus processing capabilities of system 100.
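As a rough illustration, hidden-frame dimensions whose pixel count falls in that 5% to 10% range could be derived from the output dimensions as follows; the sizing helper is purely illustrative, and an actual sensor would expose resolution selection through its own register interface.

```python
def hidden_frame_size(out_width: int, out_height: int,
                      fraction: float = 0.07) -> tuple[int, int]:
    """Pick hidden-frame dimensions whose pixel count is roughly
    `fraction` (here ~7%) of the output frame's pixel count."""
    scale = fraction ** 0.5              # linear scale for an area ratio
    return max(1, round(out_width * scale)), max(1, round(out_height * scale))

# e.g. a 1920x1080 output frame -> roughly a 508x286 hidden frame,
# so the autofocus logic processes only ~7% as many pixels.
print(hidden_frame_size(1920, 1080))
```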
In addition to having a lower resolution than output frames, hidden frames could also be captured using a different integration time than the integration time used when capturing an output frame. As is well known in the art, integration time refers to the time period during which a pixel acquires an image signal. For example, system 100 could capture a hidden frame using a shorter integration time than that used for an output frame.
System 100 could also apply a different gain to signals of the pixel array when capturing hidden frames than the gain applied to the pixel signals for output frames. For example, system 100 could apply a higher gain to the pixel signals of a hidden frame than to those of an output frame.
The use of different integration times and gains for hidden frames and output frames could also be combined in another embodiment. For example, system 100 could capture hidden frames using a shorter integration time than that used for output frames, while applying a higher gain than the gain used for output frames.
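For illustration, the two capture configurations might be modeled as follows. The specific integration times and gain values are assumed for the example only and are not values specified by the embodiments above.

```python
from dataclasses import dataclass

@dataclass
class CaptureConfig:
    integration_ms: float   # time window during which each pixel acquires signal
    analog_gain: float      # multiplier applied to the pixel signals

# Illustrative values only: the hidden frame integrates for a quarter of
# the output frame's time, with a 4x gain compensating for the shorter
# exposure so the focus metric still sees a usable signal level.
output_cfg = CaptureConfig(integration_ms=33.3, analog_gain=1.0)
hidden_cfg = CaptureConfig(integration_ms=8.3, analog_gain=4.0)
```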
In other embodiments, system 100 could process hidden frames differently from the way system 100 processes output frames. For example, when capturing and outputting output frames, it is well known to use various processing techniques, including binning and scaling. Such processing techniques could be disabled at step 430 before system 100 captures and processes hidden frames. Conversely, binning and scaling could be applied to the hidden frames to lower their resolution while being left unused on the output frames.
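As one concrete illustration of such binning, the sketch below averages each 2x2 block of pixels into one, halving both dimensions of a hidden frame; this is a generic software rendering of an operation that a sensor would typically perform in its readout hardware.

```python
import numpy as np

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one output pixel,
    halving both dimensions -- one way binning can lower the
    resolution of a hidden frame."""
    h, w = frame.shape
    h, w = h - h % 2, w - w % 2          # drop any odd edge row/column
    blocks = frame[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```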
In order to capture and process the hidden frames, system 100 should capture frames at a frame rate greater than the frame rate normally used for video capture. For example, consider a system 100 configured to capture and process one hidden frame for each output frame. Assuming that capturing and processing a hidden frame consumes the same amount of time as capturing and processing an output frame, then for an output video frame rate of 30 frames per second (“fps”), system 100 must be capable of capturing frames at 60 fps.
Various factors determine the difference between the user-defined frame rate and the actual frame rate system 100 would use. For example, increasing the number of hidden frames captured between each output frame increases the rate at which system 100 must capture images in order to output a video stream corresponding to the user-defined frame rate, but it also improves the performance of the autofocusing functions. On the other hand, decreasing the integration time of the hidden frames, decreasing their resolution, or disabling processing of the hidden frames reduces the frame rate at which system 100 is required to capture images.
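These trade-offs can be summarized in a short calculation. In the sketch below, hidden_cost is an assumed modeling term expressing how long a hidden frame takes to capture and process relative to an output frame; it is not a quantity defined in the embodiments above.

```python
def required_capture_fps(output_fps: float, hidden_per_output: int,
                         hidden_cost: float = 1.0) -> float:
    """Sensor frame rate needed to sustain `output_fps` user-visible
    frames when each output frame is preceded by `hidden_per_output`
    hidden frames. `hidden_cost` < 1.0 models hidden frames made
    cheaper by shorter integration, lower resolution, or disabled
    processing."""
    return output_fps * (1 + hidden_per_output * hidden_cost)

print(required_capture_fps(30, 1))         # 60.0 fps: the example above
print(required_capture_fps(30, 1, 0.25))   # 37.5 fps: cheap hidden frames
```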
The above description and drawings illustrate embodiments that achieve the objects, features, and advantages of the present invention. However, it is not intended that the present invention be strictly limited to the above-described and illustrated embodiments. Any modification, though presently unforeseeable, of the present invention that comes within the spirit and scope of the following claims should be considered part of the present invention.