Embodiments of the invention relate generally to an imaging device for capturing photo images and more particularly to an auto focusing technique for an imaging device.
Most cameras, including digital cameras, have an automatic focus feature (referred to herein as “auto focus”) in which subjects viewed through the camera can be focused on automatically. Auto focus systems are generally categorized as either active or passive systems. Active systems determine the distance between the camera and the subject of the scene by measuring the round-trip travel time of ultrasonic waves or infrared light emitted from the camera. Based on that travel time, the distance between the camera and the subject may be calculated and the appropriate lens position selected. Passive auto focus systems, on the other hand, do not require the emission of ultrasonic waves or infrared light, but instead rely on the light that is naturally reflected by the subject in the scene.
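By way of illustration only, the distance calculation for an active ultrasonic system reduces to halving the round-trip path length. The following sketch assumes a fixed speed of sound; the constant and function names are invented for this example:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # speed of sound in air at roughly 20 C (assumed)

def active_af_distance(round_trip_time_s: float) -> float:
    """Subject distance from the round-trip time of an emitted pulse.

    The pulse travels to the subject and back, so the one-way distance
    is half the total path length.
    """
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2.0

# A 10 ms round trip corresponds to roughly 1.7 m.
print(active_af_distance(0.010))
```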
One example of a passive auto focus system is a system that uses contrast analysis to determine the best focal position for the camera lens. In a contrast analysis auto focus system, adjacent areas of a scene are compared with each other to measure differences in intensity among the adjacent areas. An out-of-focus scene will include adjacent areas that have similar intensities, while a focused scene will likely show a significant contrast between areas in which the subject of the scene is located and other areas of the scene (e.g., background objects). As the camera incrementally moves the lens during the auto focus operation, each area of the scene is analyzed to determine differences in intensity between adjacent areas. When a particular lens position results in the maximum intensity difference between adjacent areas, the camera will use that lens position for its auto focus lens setting.
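As an illustrative sketch of such a contrast-analysis sweep (not any particular camera's implementation), the following uses the sum of absolute differences between adjacent pixels as the sharpness measure; the capture_frame and move_lens helpers are hypothetical:

```python
import numpy as np

def contrast_score(frame: np.ndarray) -> float:
    """Sum of absolute intensity differences between vertically and
    horizontally adjacent pixels; a sharper image scores higher."""
    f = frame.astype(np.float64)
    return float(np.abs(np.diff(f, axis=0)).sum() + np.abs(np.diff(f, axis=1)).sum())

def sweep_focus(lens_positions, capture_frame, move_lens):
    """Step the lens through candidate positions and return the position
    whose captured frame maximizes the contrast score."""
    best_pos, best_score = None, float("-inf")
    for pos in lens_positions:
        move_lens(pos)                            # hypothetical lens-controller call
        score = contrast_score(capture_frame())   # hypothetical capture call
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```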
Conventional auto focus systems have difficulty continuously auto focusing on a moving subject, especially when all focusing decisions must be made using only statistical information from the image frames. A standard approach to continuous auto focusing is to refocus on the subject each time motion in the scene is detected. In general, conventional auto focus systems perform a process of 1) focusing on a subject, 2) detecting motion in the scene, and 3) refocusing on the subject. In a passive system, refocusing on the subject may take several steps because the system makes incremental focusing adjustments until the optimal focus position is reached.
Accordingly, there is a need and a desire for an improved method of auto focusing an imaging device to capture a moving subject.
In the following detailed description, reference is made to various embodiments that are described with sufficient detail to enable those skilled in the art to practice them. It is to be understood that other embodiments may be employed, and that various structural, logical and electrical changes may be made.
Various embodiments described herein relate to a method and system for auto focusing an imaging device on a moving subject. In one embodiment, the distance of the moving subject from the imaging device is determined by measuring the change in brightness between light reflected from the subject at a first location and light reflected from the subject at a second location. The imaging device is then focused using the determined distance of the subject from the imaging device.
Turning to FIG. 1, a method 100 of auto focusing the imaging device 200 on a moving subject is now described. At step 102, the method 100 focuses the imaging device 200 on the subject, for example, using a passive auto focus technique as described above, resulting in a focused position of the lens 230.
At step 104, the method 100 determines the distance between the subject and the imaging device 200. This distance is referred to herein as the first distance X1. The microprocessor 210 determines the first distance X1 by correlating the focused position of the lens 230, as determined in step 102, to a corresponding subject distance. In one embodiment, the memory unit 260 includes a look-up table correlating known positions of the lens 230 to known distances between an in-focus subject and the imaging device 200. The first distance X1 is then stored in the memory unit 260.
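A minimal sketch of such a look-up table with linear interpolation between entries follows; the calibration values are invented for illustration:

```python
import bisect

# Hypothetical calibration: lens position (motor steps) -> distance (meters)
# at which a subject is in focus at that position.
LENS_POS_TO_DISTANCE = [
    (100, 0.5),
    (140, 1.0),
    (170, 2.0),
    (190, 5.0),
    (200, 100.0),  # treated as effectively infinity
]

def distance_for_lens_position(pos: int) -> float:
    """Linearly interpolate the subject distance for a lens position."""
    positions = [p for p, _ in LENS_POS_TO_DISTANCE]
    i = bisect.bisect_left(positions, pos)
    if i == 0:
        return LENS_POS_TO_DISTANCE[0][1]
    if i >= len(positions):
        return LENS_POS_TO_DISTANCE[-1][1]
    (p0, d0), (p1, d1) = LENS_POS_TO_DISTANCE[i - 1], LENS_POS_TO_DISTANCE[i]
    t = (pos - p0) / (p1 - p0)
    return d0 + t * (d1 - d0)
```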
At step 106, the method 100 determines the background brightness B0 of the scene including the subject at the first distance X1. To do so, an image of the scene under ambient light is captured by the imager 240, which outputs the image to the microprocessor 210. In one embodiment, the microprocessor 210 determines the background brightness B0 by calculating the total brightness of the scene from the image. In one embodiment, the total brightness of the scene is calculated by summing all of the pixel brightness values and dividing by the number of pixels, i.e., as the average pixel brightness. The microprocessor 210 stores the background brightness B0 in the memory unit 260.
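A sketch of this brightness measurement, assuming the captured image is available as a grayscale NumPy array (the capture_frame helper is hypothetical):

```python
import numpy as np

def scene_brightness(frame: np.ndarray) -> float:
    """Total scene brightness, computed as the sum of all pixel values
    divided by the number of pixels (i.e., the average pixel value)."""
    return float(frame.mean())

# B0: background brightness from a frame captured under ambient light,
# with the light source 250 off. capture_frame is hypothetical.
# B0 = scene_brightness(capture_frame())
```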
At step 108, the method 100 determines the brightness of the scene including the subject at the first distance X1 and as illuminated by light from the light source 250. To do so, the microprocessor 210 instructs the light source 250 to emit light thus illuminating the subject. In one embodiment, the light source 250 may be a flash, and may flash each time the subject is to be illuminated. In another embodiment, the light source 250 may be a continuous light source, such as a spotlight, and may emit light throughout the method 100, with the exception of step 106.
The image of the scene including the illuminated subject at the first distance X1 is captured by the imager 240, which sends the image to the microprocessor 210. The microprocessor 210 determines the brightness B1 of the scene by calculating the total brightness of the scene from the captured image. The microprocessor 210 stores the calculated brightness B1 in the memory unit 260.
At step 110, the method 100 determines that the subject has moved. In one embodiment, the microprocessor 210 detects that the subject has moved using a technique similar to passive focusing. That is, when the microprocessor 210 detects that the image of the subject from the imager 240 is no longer in focus, such as by performing contrast analysis as described above, the microprocessor 210 determines that the subject has moved. When the microprocessor 210 determines that the subject has moved, the method proceeds to step 112. In another embodiment, step 110 may be omitted and the method may proceed to step 112 after a timed interval. In yet another embodiment, if the subject does not move within a set time period, the method 100 may continue at step 102 or the method 100 may end.
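As a purely illustrative sketch of this motion check, the following reuses the contrast_score function from the contrast-analysis sketch above, with an invented 20% drop threshold:

```python
def subject_moved(capture_frame, focused_score: float,
                  drop_fraction: float = 0.2) -> bool:
    """Report motion when the current frame's contrast score falls more
    than drop_fraction below the score recorded when the subject was in
    focus. The 20% threshold is an assumption, not from the source."""
    return contrast_score(capture_frame()) < (1.0 - drop_fraction) * focused_score
```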
At step 112, because the subject has moved, the method 100 determines the brightness of the scene including the subject at a second distance X2 and as illuminated by the light source 250. The second distance X2 is the distance from the imaging device 200 to the location to which the subject has moved. After the microprocessor 210 has determined that the subject has moved, the microprocessor 210 instructs the light source 250 to emit light, thus illuminating the subject. The image of the scene including the illuminated subject at the second distance X2 is captured by the imager 240, which sends the image to the microprocessor 210. The microprocessor 210 determines the brightness B2 of the scene by calculating the total brightness of the scene from the newly captured image. The microprocessor 210 stores the brightness B2 in the memory unit 260.
At step 114, the imaging device determines the second distance X2 as described below. The brightness of the light reflected from a subject decreases in inverse proportion to the square of the distance between the subject and the light source 250. Equation 1 relates the subject's distance to the brightness of the illuminated scene, less the background brightness B0, at the first distance X1 and the second distance X2:

(B1 - B0) / (B2 - B0) = X2² / X1²   (Equation 1)
Solving Equation 1 for X2, the microprocessor 210 calculates the second distance X2 using Equation 2:

X2 = X1 · √((B1 - B0) / (B2 - B0))   (Equation 2)
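A direct sketch of this calculation, with variable names mirroring the description above:

```python
import math

def second_distance(x1: float, b0: float, b1: float, b2: float) -> float:
    """Second subject distance X2 per Equation 2.

    b0: background (ambient) scene brightness
    b1: illuminated scene brightness at the first distance x1
    b2: illuminated scene brightness at the second distance
    """
    return x1 * math.sqrt((b1 - b0) / (b2 - b0))

# If the illuminated component (B - B0) falls to one quarter of its
# previous value, the subject is twice as far away:
print(second_distance(1.0, b0=10.0, b1=50.0, b2=20.0))  # -> 2.0
```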
If the light source 250 is located away from the imaging device 200, Equation 2 may be modified to account for the distance from the light source 250 to the imaging device 200.
At step 116, the method 100 focuses on the subject at its new location. To do so, the microprocessor 210 instructs the lens controller 220 to adjust the lens 230 to a position that will properly focus an image of the subject located at the second distance X2 on the imager 240. In one embodiment, the memory unit 260 includes a look-up table correlating known distances of a subject to the imaging device 200 to known positions of the lens 230.
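The reverse lookup can be sketched by reusing the hypothetical calibration table introduced above, selecting the lens position whose calibrated focus distance is nearest the computed distance:

```python
def lens_position_for_distance(distance: float) -> int:
    """Lens position whose calibrated focus distance is closest to the
    requested subject distance (nearest-entry lookup, no interpolation)."""
    return min(LENS_POS_TO_DISTANCE, key=lambda pd: abs(pd[1] - distance))[0]
```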
The method 100 may be continuously repeated to keep the subject in focus over a period of time. In one embodiment, the method 100 may start over from step 102 and progress through step 116, to determine new values of B0, B1, B2, X1, and X2. In another embodiment, shown in FIG. 3, the method 100 may repeat only steps 110 through 116, with the second distance X2 and the brightness B2 from the previous iteration serving as the new first distance X1 and brightness B1.
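Pulling the steps together, a continuous-tracking loop under the assumptions of the earlier sketches (all capture and lens-control helpers remain hypothetical) might look like the following; it follows the variant in which the newest measurements replace X1 and B1:

```python
def track_subject(capture_frame, capture_flash_frame, move_lens, lens_position):
    """Continuously refocus on a moving subject using brightness ratios.

    capture_frame / capture_flash_frame: hypothetical helpers returning an
    ambient frame and a flash-illuminated frame, respectively.
    lens_position: the focused lens position found in step 102.
    """
    x1 = distance_for_lens_position(lens_position)    # step 104
    b0 = scene_brightness(capture_frame())            # step 106: ambient B0
    b1 = scene_brightness(capture_flash_frame())      # step 108: illuminated B1
    focused_score = contrast_score(capture_frame())
    while True:
        if not subject_moved(capture_frame, focused_score):   # step 110
            continue                                  # poll until motion
        b2 = scene_brightness(capture_flash_frame())  # step 112: illuminated B2
        x2 = second_distance(x1, b0, b1, b2)          # step 114: Equation 2
        move_lens(lens_position_for_distance(x2))     # step 116: refocus
        x1, b1 = x2, b2      # reuse newest values, as in the repeat variant
        focused_score = contrast_score(capture_frame())
```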
An imager 240, for example, a CMOS imager, for use with the imaging device 200 is shown in FIG. 4. The imager 240 includes a pixel array in which the pixels are arranged in rows and columns and are read out row by row.
The imager 240 is operated by the timing and control circuit 450, which controls address decoders 455, 470 for selecting the appropriate row and column lines for pixel readout. The control circuit 450 also controls the row and column driver circuitry 445, 460 such that they apply driving voltages to the drive transistors of the selected row and column select lines. The pixel column signals, which for a CMOS imager typically include a pixel reset signal (Vrst) and a pixel image signal (Vsig), are read by a sample and hold circuit 461. Vrst is read from a pixel immediately after its charge storage region is reset. Vsig represents the amount of charge generated by the pixel's photosensitive element and stored in the charge storage region in response to light applied to the pixel. A differential signal (Vrst − Vsig) is produced by a differential amplifier 462 for each pixel. The differential signal is digitized by an analog-to-digital converter (ADC) 475, which supplies the digitized pixel signals to an image processor 480; the image processor 480 forms and outputs a digital image. In one embodiment, the image processor 480 may perform some or all of the functions of the microprocessor 210 in the method 100 described above with reference to FIG. 1.
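The differential (correlated double sampling) step can be sketched as a per-pixel difference of the two samples; the array-based formulation below is purely illustrative:

```python
import numpy as np

def correlated_double_sample(v_rst: np.ndarray, v_sig: np.ndarray) -> np.ndarray:
    """Differential pixel signal: reset level minus image level.

    Subtracting Vsig from Vrst cancels per-pixel reset offsets, leaving a
    value proportional to the charge collected by the photosensitive element.
    """
    return v_rst - v_sig
```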
The processes and devices described above illustrate example methods and devices of the many that could be used to implement the various embodiments. For example, the various embodiments described herein could be used with a still or video camera. It is not intended that the invention be strictly limited to the above-described and illustrated embodiments; rather, the invention is limited only by the scope of the appended claims.