The present application relates generally to imaging. In particular the present application relates to multiframe imaging.
In the field of computational photography, many algorithms use multiple captured frames that are combined into one frame. This enhances digital photography, because multiple pictures (i.e. frames) of the same object, taken with different capture settings, can be combined to extend the characteristics of the resulting picture. However, imaging devices having a single camera suffer from the time difference between the captured frames and from the exposure times used for them.
There is, therefore, a need for a solution that minimizes the problems relating to such differences.
Now there has been invented an improved method and technical equipment implementing the method, by which the above problems are alleviated. Various aspects of the invention include a method, an apparatus, a server, a client and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments of the invention are disclosed in the dependent claims.
According to a first aspect, there is provided a method comprising determining a level of motion in a target to be captured; adapting capture parameters to be used in multiple frame capture of the target according to the determined level of motion; and performing the multiple frame capture with the capture parameters.
According to a second aspect, an apparatus comprises at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: determining a level of motion in a target to be captured; adapting capture parameters to be used in multiple frame capture of the target according to the determined level of motion; and performing the multiple frame capture with the capture parameters.
According to a third aspect, an apparatus, comprises at least: means for determining a level of motion in a target to be captured; means for adapting capture parameters to be used in multiple frame capture of the target according to the determined level of motion; and means for performing the multiple frame capture with the capture parameters.
According to a fourth aspect, a computer program comprises code for determining a level of motion in a target to be captured; code for adapting capture parameters to be used in multiple frame capture of the target according to the determined level of motion; and code for performing the multiple frame capture with the capture parameters, when the computer program is run on a processor.
According to a fifth aspect, a computer-readable medium is encoded with instructions that, when executed by a computer, perform: determining a level of motion in a target to be captured; adapting capture parameters to be used in multiple frame capture of the target according to the determined level of motion; and performing the multiple frame capture with the capture parameters.
According to an embodiment, for high motion, short exposure times are set for the multiple frames.
According to an embodiment, for small motion, long exposure times are set for the multiple frames.
According to an embodiment, the number of frames to be captured is determined according to the determined level of motion, wherein for high motion, fewer frames are captured than for small motion.
According to an embodiment, two exposures are performed simultaneously during a capture.
According to an embodiment, one of the exposures is a main exposure, and the other exposure is relative to the main exposure.
According to an embodiment, it is automatically identified whether an exposure is a main exposure or a relative exposure.
According to an embodiment, the main exposure and the relative exposure are set in such a manner that the determined level of motion defines the difference between the main exposure and the relative exposure.
According to an embodiment, the apparatus comprises a computing device comprising: user interface circuitry and user interface software configured to facilitate a user to control at least one function of the apparatus through use of a display and further configured to respond to user inputs; and display circuitry configured to display at least a portion of a user interface of the apparatus, the display and the display circuitry configured to facilitate the user to control at least one function of the apparatus.
According to an embodiment, the computing device comprises a mobile phone.
In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings, in which
An autoexposure (AE) algorithm is conventionally used to set exposure parameters for a normal image (i.e. an image not processed by computational photography) before the image is captured. If a computational algorithm requires different parameters (e.g. overexposure or underexposure), these are usually set with fixed offsets from the AE reference. A typical use case is high dynamic range (HDR) imaging. In some cases the offset from the AE reference is set adaptively by analyzing the statistics of the image. However, such analysis is based on the exposure, i.e. intensity, and ignores other important factors, such as motion blur. If such factors were taken into account in a fixed (non-adaptive) way, the power of the algorithms would be limited.
The present embodiments enable decreasing the amount of artifacts in multiframe imaging, e.g. HDR. In other words, they enable increasing the quality of the algorithm by letting it use better parameters than would be safe by default.
In HDR and other multiframe algorithms, artifacts are caused by differences between the input frames. Differences can be caused by the time difference between the frames or by different exposure times in the frames. Especially motion blur (global or local) may cause problems and artifacts in the processed output images.
There are various ways to detect whether, and how much, there is movement in the scene. Hardware sensors (e.g. a gyroscope or an accelerometer) and software analysis (motion vectors, contrast- and gradient-based calculations, etc.) can be mentioned as examples.
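As a minimal sketch of the hardware-sensor approach, the level of motion could be estimated by thresholding the peak magnitude of gyroscope angular-rate samples. The function name and threshold values below are illustrative assumptions, not part of the embodiments; a real implementation would tune the thresholds per multiframe algorithm, according to how much motion blur it can handle:

```python
def motion_level(gyro_samples, high_threshold=0.5, small_threshold=0.1):
    """Classify scene motion as 'small', 'medium' or 'high' from
    gyroscope angular-rate samples (rad/s). Thresholds are placeholders."""
    peak = max(abs(s) for s in gyro_samples)
    if peak >= high_threshold:
        return "high"
    if peak <= small_threshold:
        return "small"
    return "medium"
```

A software-analysis variant would compute the same label from motion-vector magnitudes instead of gyroscope samples.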
Instead of fixing the camera parameters for each capture based on exposure analysis only, the present embodiments propose including motion information in the decision making. Motion information relates to the amount of motion in the scene, i.e. "small motion" or "high motion" (in some cases also "medium motion"). The borderline between small and high motion depends on the algorithms used, i.e. on how much motion blur an algorithm can handle. For example, some algorithms can handle a certain amount of motion blur, while others cannot handle any motion blur. Also the quantity of the motion can vary greatly. Therefore, for the purposes of the present solution, it does not matter how small and high motion are defined, because the determination may be made for each use case depending on e.g. user preferences, multiframe algorithm behavior, etc. What matters in these embodiments is that the distinction between small and high motion has been made and that this information is further utilized for optimizing the capture parameters.
For scenes with small movement, longer exposure times can be set, or more images can be taken as input for the algorithm. This reduces noise in the image and increases the dynamic range for HDR. For scenes with high movement, the parameters can be optimized for quick capture (e.g. high frame rate, short exposure times, higher gains, fewer images). By each of these parameters, the visual quality of the output image is optimized. The present embodiments relate to pre-processing of images, which means that the processing algorithm takes place before images are captured. Therefore, problems occurring in known solutions can be avoided beforehand.
The present embodiments can be used for generic optimization purposes. In particular, they can be used with HDR, and they are applicable to traditional HDR imaging with multiple captures (e.g. three captured frames with different exposures).
The apparatus 50 shown in
There may be a number of servers connected to the network, and in the example of
There are also a number of end-user devices for the purposes of the present embodiments, such as mobile phones and smart phones 251, Internet access devices (Internet tablets) 250, personal computers 260 of various sizes and formats, and computing devices 261, 262 of various sizes and formats. These devices 250, 251, 260, 261, 262 and 263 can also be made of multiple parts. In this example, the various devices are connected to the networks 210 and 220 via communication connections such as a fixed connection 270, 271, 272 and 280 to the internet, a wireless connection 273 to the internet 210, a fixed connection 275 to the mobile network 220, and a wireless connection 278, 279 and 282 to the mobile network 220. The connections 271-282 are implemented by means of communication interfaces at the respective ends of the communication connection. All or some of these devices 250, 251, 260, 261, 262 and 263 are configured to access a server 240, 241, 242 and a social network service.
A method according to an embodiment is described by means of the following example:
The autoexposure proposes a 30 ms exposure time and 1× gain. The HDR algorithm needs two additional frames, e.g. with +/−1 exposure value (EV) shifts. Known methods would use parameters such as 15 ms with 1× gain and 60 ms with 1× gain (or similar).
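The +/−1 EV shifts in this example follow directly from the definition of exposure value: each +1 EV doubles the exposure and each −1 EV halves it. A minimal sketch (the function name is illustrative):

```python
def shifted_exposure_ms(base_ms, ev_shift):
    """Exposure time after an EV shift: +1 EV doubles the exposure,
    -1 EV halves it, at constant gain."""
    return base_ms * (2 ** ev_shift)

# A 30 ms base with +/-1 EV yields the 60 ms and 15 ms frames of the example.
```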
By the present embodiments, the used capture parameters are adaptive to motion. For scenes with small movement, longer exposure times can be set, or more images can be taken as input for the algorithm. For scenes with high movement, the parameters can be optimized for quick capture (e.g. high frame rate, short exposure times, higher gains, fewer images).
For example:
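One possible instantiation of such motion-adaptive capture parameters can be sketched as follows; the concrete frame counts, exposure times and gains below are illustrative assumptions, chosen only to show the direction of the adaptation (quicker capture under high motion, longer exposures and more frames under small motion):

```python
def capture_parameters(motion, base_exposure_ms=30.0):
    """Pick multiframe capture parameters from a motion level.
    All numeric values are illustrative assumptions."""
    if motion == "high":
        # Quick capture: shorter exposure, higher gain, fewer frames.
        return {"frames": 2, "exposure_ms": base_exposure_ms / 2, "gain": 2.0}
    # Small motion: longer exposures and more frames are safe.
    return {"frames": 5, "exposure_ms": base_exposure_ms * 2, "gain": 1.0}
```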
As another use case, consider HDR with different exposures during a single capture (e.g. half of the lines in the sensor are exposed longer than the rest). Such a use case is common in HDR video recording. The ratio of the exposure times between the differently exposed lines causes a tradeoff between motion artifacts and improvement in dynamic range: the higher the ratio, the better the dynamic range achieved, but the more artifacts occur. Traditionally the ratio is fixed during the recording. According to an embodiment, the ratio is made adaptive according to the detected motion (global and/or local). The benefit is that the visual quality is optimized.
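The adaptive line-exposure ratio described above can be sketched as a simple mapping from the detected motion level to a ratio; the specific ratio values are assumptions for illustration, not values from the application:

```python
def line_exposure_ratio(motion):
    """Ratio of long-line to short-line exposure in interleaved HDR video.
    A higher ratio gives more dynamic range but more motion artifacts,
    so the ratio shrinks as detected motion grows. Values are illustrative."""
    ratios = {"small": 8.0, "medium": 4.0, "high": 2.0}
    return ratios[motion]
```

During recording, this ratio would be re-evaluated as the detected motion changes, rather than being fixed for the whole clip.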
The process according to an embodiment may contain the following:
In the present embodiments, two exposures can be captured simultaneously in the sensor. One is a main exposure and the other is a relative exposure. The relative exposure can be shorter or longer than the main exposure, but is relative to it (i.e. the main exposure multiplied by some factor). In the present embodiments, it is possible to identify whether an exposure is a main exposure or a relative exposure. The information on the used exposure can be located in the frame metadata. However, with adaptive algorithms, the information on the used exposure may not be necessary. The higher the difference between the main exposure and the relative exposure, the better the dynamic range obtained. In other words, when there is high motion, a small difference is desired and a penalty in dynamic range is accepted. With low motion, the difference is increased.
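The relationship described above (relative exposure = main exposure multiplied by some factor, with the factor kept close to 1 under high motion and moved further from 1 under low motion) can be sketched as follows; the specific factor values are assumptions for illustration:

```python
def relative_exposure_ms(main_ms, motion, longer=True):
    """Relative exposure as the main exposure times a motion-dependent
    factor. Under high motion the factor stays close to 1 (less dynamic
    range, fewer artifacts); under small motion it moves further from 1.
    Factor values are illustrative assumptions."""
    factor = {"small": 4.0, "medium": 2.0, "high": 1.25}[motion]
    return main_ms * factor if longer else main_ms / factor
```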
In the above, embodiments have been described for optimizing camera parameters at the beginning (i.e. in pre-processing) to avoid or reduce any difference problems. This means that with the present embodiments, the input images will be as sharp as needed: for some algorithms a small amount of motion blur is allowed, while most multiframe algorithms need images that are as sharp as possible for the best result. The invention optimizes the captured images (before the capture) in order to avoid many problems.
An embodiment of a method is illustrated in
The various embodiments of the invention can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the invention. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
It is obvious that the present invention is not limited solely to the above-presented embodiments, but it can be modified within the scope of the appended claims.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/FI2013/050396 | 4/11/2013 | WO | 00