The described technology is directed to the field of computational photography.
Computational photography refers to the capture and algorithmic processing of digital images. This processing can produce either a single result frame—a still image—or a sequence of result frames—a video clip or animation. For example, a High Dynamic Range (“HDR”) computational photography technique involves (1) capturing a sequence of frames at different exposure levels, and (2) selectively fusing these frames into a single result frame that is often more visually appealing than any of the captured frames.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
A facility for generating at least one image is described. In some examples, for each of multiple registered photography scenarios, the facility determines a suitability score for the scenario based upon state of a photography device, including state of a scene as represented by one or more preview frames from the image sensor and/or information from other sensors, such as ambient light sensors, a gyroscopic motion sensor, an accelerometer, a depth sensor, etc. The facility selects a scenario having a suitability score that is no lower than any other determined suitability score. The facility then captures a sequence of frames in a manner specified for the selected scenario, and processes that captured sequence of frames in a manner specified for the selected scenario to obtain at least one image. In some examples, the selected scenario specifies capture of a sequence of frames in a manner that is based upon state of the photography device. In some examples, the selected scenario specifies capture of a single frame.
Conventional implementations of computational photography techniques in a dedicated camera, a smart phone, or other photography devices typically require explicit user selection of a particular computational photography technique, such as by interacting with physical or on-screen camera configuration controls. The inventors have recognized that this makes conventional implementations ill-suited to less sophisticated users who do not understand particular computational photography techniques and how they stand to improve photographs under certain conditions, in that these less sophisticated users are unlikely to use and gain the benefit of computational photography techniques. Even among more sophisticated users who do understand computational photography techniques, the need to explicitly select a particular computational photography technique requires a certain amount of time and effort, making it less likely that the user will be able to act quickly to capture a short-lived scene. This is even more true where conventional techniques require a user to separately adjust a number of different settings in order to use a particular computational photography technique.
In order to address these shortcomings of conventional implementations of computational photography techniques, the inventors have conceived and reduced to practice a software and/or hardware facility for automatically selecting and applying an appropriate computational photography technique—or “scenario”—such as when a user takes a photograph (“the facility”).
In some examples, for a set of the scenarios, the facility tests how suited each scenario is to present conditions. Such testing can be performed with respect to a variety of inputs, including information about preview frames from the camera's image sensor; information from other sensors of the capture device, such as ambient light sensors, depth sensors, orientation sensors, and movement sensors; other state of the capture device, such as the state of configurable user preferences; and other information, including information retrieved wirelessly from another device by the capture device. For example, inputs from external devices and sensors may provide complementary or new information about such photographic considerations as lighting, scene content, structure, motion, depth, objects, coloring, or type of image to be captured—e.g., people, action scene, crowded, macro, etc.
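As a minimal illustration of testing a scenario's suitedness to present conditions, the following sketch scores a hypothetical low-light burst scenario from ambient light and device motion readings. The state keys and thresholds are illustrative assumptions only, not part of the described facility.

```python
def low_light_burst_score(state):
    """Hypothetical suitability score for a low-light burst scenario.

    `state` is a mapping of sensor readings gathered from the capture
    device; the key names and thresholds here are illustrative.
    """
    lux = state.get("ambient_lux", 1000.0)   # ambient light sensor reading
    motion = state.get("gyro_motion", 0.0)   # device motion magnitude
    score = 0.0
    if lux < 50.0:      # a dim scene favors a multi-frame burst
        score += 0.6
    if motion < 0.1:    # a steady device makes frame alignment easier
        score += 0.4
    return score
```

A scenario whose score is highest under present conditions would then be selected, as described above.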
In various examples, the facility does this testing either in response to user action (for example, when the user presses the camera's shutter button), or continuously while the camera is active.
When the user does press the camera's shutter button, the facility automatically implements the scenario determined to be best-suited to present conditions by (1) performing a series of frame captures specified as part of the scenario, and (2) processing the captured frames to obtain one or more result frames in a manner also specified as part of the scenario. In various embodiments, this processing produces a still image, a video clip, an animation sequence, and/or a 3-D image or video clip having depth information inferred with respect to the capture sequence.
In some examples, the set of scenarios available for selection by the facility is extensible. In particular, a new scenario can be added to this set by specifying (1) a way to calculate a suitability score for the scenario based on present conditions; (2) a recipe for performing frame captures when the scenario is selected, or a way to compute that recipe based on present conditions; and (3) a process for processing frames captured in accordance with the recipe when the scenario is selected. In some examples, a scenario may further be accompanied by one or more conditions that determine when the scenario is an active part of the set and available for selection.
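The extensible registration described above, including the optional activation conditions, might be sketched as a simple registry. The class and method names are hypothetical illustrations.

```python
class ScenarioRegistry:
    """Hypothetical extensible set of registered photography scenarios."""

    def __init__(self):
        self._entries = []

    def register(self, name, score_fn, capture_recipe, process_fn, active_if=None):
        # A new scenario supplies (1) a suitability scoring function,
        # (2) a capture recipe, and (3) a processing function, plus an
        # optional condition governing when it is active.
        self._entries.append({
            "name": name,
            "score": score_fn,
            "capture": capture_recipe,
            "process": process_fn,
            "active_if": active_if or (lambda state: True),
        })

    def active_scenarios(self, state):
        # Only scenarios whose conditions hold under present conditions
        # are available for selection.
        return [e for e in self._entries if e["active_if"](state)]
```

A scenario registered with an `active_if` condition tied to, say, ambient light would drop out of consideration whenever that condition fails.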
By performing in some or all of these ways, the facility enables any user to obtain the benefits of the computational photography technique best-suited to conditions without having to take any special action.
While
In some examples, the facility uses some or all of the scenarios described below, together with the bases identified therefor for determining each scenario's suitability score.
In some examples, the facility uses some or all of the capture recipes described in Table 2 below among the scenarios it implements.
In some examples, the facility includes among the registered scenarios a scenario in which High Dynamic Range Imaging is performed without exposure bracketing. In particular, the capture recipe is a burst of short-exposure frames, and the specified processing is to fuse these frames and to apply HDR tone mapping and local contrast enhancement techniques.
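A minimal sketch of fusing such a short-exposure burst follows. Averaging stands in for the fusion step, and a simple global gamma curve stands in for the HDR tone mapping and local contrast enhancement named above; frame alignment is omitted for brevity, and the function name is an illustrative assumption.

```python
def fuse_short_exposure_burst(frames):
    """Fuse a burst of equally short-exposed frames (no exposure bracketing).

    Each frame is a flat list of pixel intensities. Averaging the aligned
    burst suppresses noise; a global gamma curve is used here as a
    stand-in for HDR tone mapping and local contrast enhancement.
    """
    n = len(frames)
    fused = [sum(px) / n for px in zip(*frames)]  # per-pixel mean across the burst
    peak = max(fused) or 1.0                      # normalize to the brightest pixel
    return [(v / peak) ** (1 / 2.2) for v in fused]
```

The short exposures avoid blown-out highlights, while fusing many of them recovers detail in the shadows that a single short exposure would lose to noise.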
Returning to
Those skilled in the art will appreciate that the acts shown in
In some examples, a method for generating at least one image is provided. The method comprises: for each of a plurality of registered photography scenarios, determining a suitability score for the scenario based upon state of a photography device; selecting a scenario among the plurality of scenarios having a determined score no lower than any other determined score; capturing a sequence of frames in a manner specified for the scenario; and processing the captured sequence of frames in a manner specified for the scenario to obtain at least one image.
In some examples, a computer-readable medium having contents configured to cause a photography device to perform a process for generating at least one image is provided. The process comprises: for each of a plurality of registered photography scenarios, determining a suitability score for the scenario based upon state of the photography device; selecting a scenario among the plurality of scenarios having a determined score no lower than any other determined score; capturing a sequence of frames in a manner specified for the scenario; and processing the captured sequence of frames in a manner specified for the scenario to obtain at least one image.
In some examples, a computer-readable memory storing a photography scenario data structure is provided. The data structure comprises: a plurality of entries each representing a photography scenario, each entry comprising: first contents specifying how to determine a suitability score for the photography scenario based upon state information, second contents specifying how to capture a sequence of frames as part of the photography scenario, and third contents specifying how to process the captured sequence of frames as part of the photography scenario, such that the contents of the data structure is usable to determine a suitability score for each photography scenario represented by an entry, to select a photography scenario having a highest suitability score, and to perform frame capture and captured frame processing in accordance with the selected photography scenario.
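The data structure just described might be represented as follows. The field names (`first_contents`, etc.) mirror the wording above purely for illustration and are not a required encoding.

```python
def make_entry(score_spec, capture_spec, process_spec):
    # One entry per photography scenario, holding the three specified contents.
    return {
        "first_contents": score_spec,     # how to determine a suitability score
        "second_contents": capture_spec,  # how to capture a sequence of frames
        "third_contents": process_spec,   # how to process the captured frames
    }

def select_entry(entries, state):
    # The stored contents suffice to determine a suitability score for
    # each entry and to select the scenario having the highest score.
    return max(entries, key=lambda e: e["first_contents"](state))
```

Frame capture and captured-frame processing would then proceed using the selected entry's second and third contents.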
In some examples, a photography device is provided. The photography device comprises: a scoring subsystem configured to, for each of a plurality of computational photography scenarios, determine a suitability score for the scenario based upon state of the photography device; a scenario selection subsystem configured to select a scenario among the plurality of scenarios having a determined score no lower than any other determined score; an image sensor configured to capture a sequence of frames in a manner specified for the scenario; and a processing subsystem configured to process the sequence of frames captured by the image sensor in a manner specified for the scenario to obtain at least one image.
It will be appreciated by those skilled in the art that the above-described facility may be straightforwardly adapted or extended in various ways. While the foregoing description makes reference to particular examples, the scope of the invention is defined solely by the claims that follow and the elements recited therein.