This invention relates to digital cameras, and to enhancing autofocus performance and image processing.
Cameras and other imaging devices normally have a single plane of focus, with a range of acceptable focus ("depth of field") near that plane. Large apertures are useful for low-light imaging, but create a narrower depth of field. This means that in some circumstances it is impossible to generate sharp images of multiple subjects at different focal distances without the aid of external lighting, narrower apertures, or other measures that can affect the desired image.
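By way of illustration only, the sketch below applies the standard thin-lens depth-of-field approximation to show how opening the aperture shrinks the in-focus range. The focal length, f-number, subject distance, and circle-of-confusion values are assumed example figures, not values from this specification.

```python
# Illustrative depth-of-field estimate using the standard thin-lens
# approximation; all numeric values are assumed examples.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near, far, total) acceptable-focus distances in mm."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

# Same 85 mm lens focused at 2 m: f/1.4 versus f/8.
for f_number in (1.4, 8.0):
    near, far, total = depth_of_field(85.0, f_number, 2000.0)
    print(f"f/{f_number}: in focus from {near:.0f} mm to {far:.0f} mm "
          f"(~{total:.0f} mm deep)")
```

At f/1.4 the acceptable-focus zone is only a few centimeters deep, while at f/8 it spans roughly a quarter meter, which is the trade-off described above.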
Modern digital cameras may employ a capability called "focus stacking" in which a fixed camera images a stationary inanimate subject (such as for product photography), taking a series of many images at regular focal distance intervals. Each image is at an incrementally different focal distance to cover the range of distances from the closest to the farthest point of the subject, with the distances selected for even spacing across the range, without regard to the elements of the subject or their distances. The intervals are narrow enough to be less than the depth of focus of each image, so that all subject points are in focus in at least one of the images. The images are then post-processed into a single image that uses the sharpest image segment for each area on the subject to generate an overall sharp image.
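A minimal sketch of how such a conventional stack's focal steps might be planned is shown below. The subject range and per-frame depth of focus are assumed values, and the uniform spacing deliberately ignores where the subject's features actually lie, exactly as described above.

```python
# Sketch of conventional focus-stack planning: evenly spaced focal
# distances from the nearest to the farthest subject point, with the
# step size kept below the per-frame depth of focus so every point is
# sharp in at least one frame. Values are illustrative assumptions.

def plan_conventional_stack(near_mm, far_mm, depth_of_focus_mm):
    step = depth_of_focus_mm * 0.8          # overlap margin between frames
    count = max(2, int((far_mm - near_mm) / step) + 1)
    spacing = (far_mm - near_mm) / (count - 1)
    return [near_mm + i * spacing for i in range(count)]

distances = plan_conventional_stack(near_mm=450.0, far_mm=900.0,
                                    depth_of_focus_mm=6.0)
print(f"{len(distances)} frames needed, e.g. {distances[:3]} ...")
```

Even this modest half-meter-deep subject at a shallow depth of focus calls for dozens of frames, which illustrates the frame counts discussed next.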
While effective for stationary subjects, this is not useful for moving subjects like people. Even a person sitting relatively still for a portrait may move enough to generate misalignment of the images. Moreover, the number of images required can be in the dozens or even hundreds for large subjects, requiring extended periods of motionlessness. For example, even a fast 20-frames-per-second shutter capturing a limited 20-frame stack requires one second of motionlessness, which exceeds the practical limits of hand-holding, subject stillness, and image stabilization. Moreover, the appearance of having all points of a subject in focus is unnatural and may be undesirable when only selected elements (at different focal distances) are desired to be in focus, for example a sharp image of each person in a small group (or of all facial features of a model) while the background is blurred to eliminate distractions.
Accordingly, there is a need for a camera system comprising a body that contains a lens with a range of focus settings and an image sensor operable to record an image. The camera system has a controller operably connected to the sensor to receive the image, and the controller is operably connected to the lens to control the focus setting. The controller is operable to focus the lens on a selected point, and to identify at least two different first and second subject elements. The controller is operable to focus the lens on the first subject element and record a first image, and to focus the lens on the second subject element and record a second image.
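By way of illustration, one way such a controller sequence might be expressed is sketched below. The `Lens` and `Sensor` interfaces and their method names are hypothetical stand-ins for illustration, not an actual camera API.

```python
# Sketch of the described capture sequence: the controller focuses the
# lens on each detected subject element in turn and records one image
# per element. The lens/sensor classes are hypothetical placeholders.

from dataclasses import dataclass
from typing import List

@dataclass
class SubjectElement:
    label: str
    focal_distance_mm: float   # distance reported by the AF system

class Controller:
    def __init__(self, lens, sensor):
        self.lens = lens
        self.sensor = sensor

    def capture_subjects(self, subjects: List[SubjectElement]):
        images = []
        for subject in subjects:            # e.g. first and second subject elements
            self.lens.focus_at(subject.focal_distance_mm)
            images.append((subject.label, self.sensor.record_image()))
        return images

class _StubLens:
    def focus_at(self, distance_mm):        # placeholder for a real lens drive
        print(f"focusing at {distance_mm:.0f} mm")

class _StubSensor:
    def record_image(self):                 # placeholder for a real exposure
        return "image-data"

controller = Controller(_StubLens(), _StubSensor())
controller.capture_subjects([SubjectElement("near face", 1200.0),
                             SubjectElement("far face", 3400.0)])
```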
The system operates to image a scene 24 shown in
When the two subjects are at different focal distances such as is illustrated, the camera operates to take two images in rapid succession.
The sequence of captured images may simply be stored, for the user (in one basic usage) to later select which single subject is desired in focus in an otherwise normally focused image. This avoids the need for the photographer to select from among subjects during the imaging event. An example might include a sideline image of a football line of scrimmage, with a rapid sequence in which each of the linemen is imaged in focus. Notably, for all embodiments, the system does not simply take a sequence of images at limited intervals irrespective of the subjects' actual locations in the hope of getting everything approximately in focus. It operates to select image focal distances based on the locations of actual subjects, and preferably to focus specifically on selected and identified subjects, with only that many images captured and each subject optimally in focus. Even two subjects at very different focal distances will have images captured rapidly in sequence, without intervening images that would create an undesired delay in capturing the two (or more) most critical subjects.
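A sketch of that selection logic follows, under assumed detection outputs: the detected subject distances are reduced to the minimal set of focus settings, one per subject, with subjects that already share a depth of focus merged into a single frame, rather than sweeping fixed intervals.

```python
# Sketch: choose focal distances from detected subjects only. Subjects
# whose distances fall within one depth of focus share a single frame,
# so no more images are captured than the scene actually needs.
# Detection outputs and the depth-of-focus value are assumed examples.

def focus_distances_for_subjects(subject_distances_mm, depth_of_focus_mm):
    planned = []
    for distance in sorted(subject_distances_mm):
        if not planned or distance - planned[-1] > depth_of_focus_mm:
            planned.append(distance)
    return planned

# Sideline example: several linemen detected at different distances.
detected = [3200.0, 3250.0, 4100.0, 5600.0]
print(focus_distances_for_subjects(detected, depth_of_focus_mm=120.0))
# -> [3200.0, 4100.0, 5600.0]  (the two nearest linemen share one frame)
```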
In more advanced embodiments, the images are processed and composited using techniques associated with conventional focus stacking or bracketing to create a single image with both (or all) subjects in focus. In a simple example, the sharply focused face is overlaid on the same subject's blurry face in the other image, in the manner of retouching a face in a group image to eliminate an eye blink. In this simple example, the face may be positioned in the same location in the frame, on the assumption that the brief interval between shots does not cause an objectionable offset.
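A minimal sketch of that simple compositing step is shown below, assuming the face's bounding box is already known and that the two frames are close enough in time to share pixel coordinates. The bounding-box and image-size values are assumed for illustration.

```python
# Sketch of the simple composite: copy the sharply focused face region
# from the second frame into the first frame at the same pixel
# coordinates. Bounding-box and frame sizes are assumed examples.

import numpy as np

def paste_sharp_region(base_image, sharp_image, box):
    """box = (top, left, height, width) of the in-focus face."""
    top, left, h, w = box
    composite = base_image.copy()
    composite[top:top + h, left:left + w] = sharp_image[top:top + h, left:left + w]
    return composite

base = np.zeros((1080, 1920, 3), dtype=np.uint8)       # frame focused on subject A
sharp = np.full((1080, 1920, 3), 255, dtype=np.uint8)  # frame focused on subject B
result = paste_sharp_region(base, sharp, box=(200, 1100, 300, 240))
print(result[350, 1200], result[0, 0])                 # pasted vs. untouched pixel
```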
In a more advanced system, the blurred image of the subject is analyzed to establish location datum points, and these are correlated with location datum points in the sharply focused face image, so that the sharp face is pasted onto the main image in registration with the face (or any subject) of the reference image. For instance, the blurry eyes of the left subject in
More than just the face or other key element of the second image may be composited with the first image. While the boundary of the face may be identified and pasted onto the other image, more advanced approaches may be employed. As with focus bracketing systems, each location may be assessed to determine which of the two or more images is in closer focus and can represent that location more sharply. This ensures smooth transitions where each image is about equally out of focus (or equally sharp) at a transition between images. And as with locating the sharp element in registration with the blurred image of the same element, the transition areas may be stretched and shifted to ensure registration, potentially at many different points along a perimeter between images.
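One way such per-location selection with smooth transitions might be sketched is shown below: local sharpness of each frame is estimated from a Laplacian magnitude, the resulting weights are blurred so the handoff between frames is gradual, and the frames are blended. This is a NumPy-only illustration with assumed grayscale inputs, not the specific algorithm of this disclosure.

```python
# Sketch of per-location focus selection: estimate local sharpness of
# each frame, smooth the weights so transitions are gradual, then blend.

import numpy as np

def laplacian_magnitude(img):
    # Local contrast estimate: larger where the frame is sharper.
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.abs(lap)

def box_blur(img, radius=8):
    # Simple separable blur to feather the selection weights.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, img)
    return img

def blend_by_sharpness(frame_a, frame_b):
    sharp_a = box_blur(laplacian_magnitude(frame_a))
    sharp_b = box_blur(laplacian_magnitude(frame_b))
    weight_a = sharp_a / (sharp_a + sharp_b + 1e-6)   # soft, not hard, selection
    return weight_a * frame_a + (1.0 - weight_a) * frame_b

a = np.random.rand(240, 320)   # frame focused on the near subject
b = np.random.rand(240, 320)   # frame focused on the far subject
print(blend_by_sharpness(a, b).shape)
```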
Foreground and background blur control may be employed. In the illustrated case of the nearer and farther subjects of
An additional implementation for a person's face 40 is depicted in
The system may be programmed to recognize any of these specific elements, and to shoot images in rapid succession, with each in focus, changing focus between images. As above, the images may be composited in the manner of focus stacking systems, or the sharp selected features may be composited into a selected master image.
In an alternative embodiment, the nearest and farthest desired subjects may be identified, and the camera operated to automatically shoot a number of evenly spaced images at focal distances between the selected subjects, creating an effective depth of field. The intervals may be evenly spaced, and may be based on an analysis by the processor or controller of whether the selected subjects are at focal distances far enough apart to necessitate one or more intermediate images to generate sharp images (such as of the nose of a subject when focusing primarily on a near eye and a far eye). The analysis may also determine, if the subjects are separated (as with two people in the foreground and background), how many images are required between the two selected focal-distance extremes, and whether other subjects are in view in the middle distance between the two primary subjects. Because the intermediate subject matter may be of lesser importance, the camera may first image the two or more primary subjects detected and identified as important, then capture the intermediate focal distance images, which are less problematic if lost due to subject movement or camera shake.
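Under assumed distance and depth-of-focus values, this planning might look like the following sketch, which captures the two identified primary subjects first and only then fills in any evenly spaced intermediate distances between them.

```python
# Sketch of the capture plan: the two primary subjects are shot first;
# intermediate, evenly spaced focal distances between them are added
# only if the gap exceeds the per-frame depth of focus. All numeric
# values are illustrative assumptions.

def plan_capture(near_subject_mm, far_subject_mm, depth_of_focus_mm):
    gap = far_subject_mm - near_subject_mm
    intermediates_needed = max(0, int(gap / depth_of_focus_mm) - 1)
    spacing = gap / (intermediates_needed + 1)
    intermediates = [near_subject_mm + i * spacing
                     for i in range(1, intermediates_needed + 1)]
    # Primary subjects first (most motion-sensitive), intermediates after.
    return [near_subject_mm, far_subject_mm] + intermediates

print(plan_capture(near_subject_mm=1500.0, far_subject_mm=2100.0,
                   depth_of_focus_mm=200.0))
# -> [1500.0, 2100.0, 1700.0, 1900.0]
```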
Conventional focus stacking may be improved (made faster and potentially hand-holdable) by employing similar principles: using a detected subject such as a face or eye as the starting point, and optionally a second detected subject as the end point, with appropriate intervals determined by automatic image analysis (e.g., more images at tighter intervals for large-aperture fast lens settings with a thin depth of focus, fewer for smaller-aperture slow lens settings).
The image processing may be performed in camera or in post-processing, and may be done interactively with the user, who may select from the recorded images.