The present disclosure is related generally to auto High Dynamic Range (“HDR”) capture decision making and more particularly to a system and method for automatically capturing an HDR image.
A number of cameras available today have built-in HDR functionality. Such cameras generally include an Auto Exposure Bracketing ("AEB") option. When the HDR mode is selected, the AEB option is invoked, and the camera automatically takes three or more shots, each frame at a different exposure; the shots are later combined using various techniques to produce a final image. AEB is thus a useful option for capturing HDR scenes. However, in such a scenario, the user always has to manually enable or disable the HDR mode depending on the prevailing conditions, and the user may or may not be able to decide appropriately. Consequently, the best possible shot of the scene might not be captured.
While the appended claims set forth the features of the present techniques with particularity, these techniques, together with their objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:
Turning to the drawings, wherein like reference numerals refer to like elements, techniques of the present disclosure are illustrated as being implemented in a suitable environment. The following description is based on embodiments of the claims and should not be taken as limiting the claims with regard to alternative embodiments that are not explicitly described herein.
Before providing a detailed discussion of the figures, a brief overview is given to guide the reader. Generally, HDR is defined as a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible using standard digital imaging or photographic techniques. HDR images can represent the range of intensity levels found in real scenes more accurately and are often captured by way of a plurality of differently exposed pictures of the same subject matter. The differently exposed pictures are then combined to produce the final image. However, with existing technologies, a user has to manually enable the HDR mode for a particular scene depending on various factors. This does not always provide the best possible shot of the scene to be captured.
The present techniques provide a method and apparatus for automatically invoking the HDR mode in a camera. The user simply uses the camera, and the camera assists in taking the best shot of the scene. Automatically capturing the HDR image eases the user's experience. The detection of the various signals, and the decision making based on those signals, occur while the camera is operating in a preview or viewfinder mode.
Briefly, in a specific embodiment, when the camera detects that the camera is in a preview mode, the camera determines a dynamic range, auto-exposure (“AE”) metadata, and a motion level associated with frames captured in the preview mode. Further, the camera invokes the HDR mode based on the determined values of the dynamic range, AE metadata, and the motion level.
More generally, methods and apparatuses for invoking an HDR mode in a camera are disclosed. The method comprises detecting that the camera is operating in a preview mode. The method further comprises determining a dynamic range, AE metadata, and a motion level associated with a plurality of frames captured in the preview mode. Finally, the method comprises invoking the HDR mode when the determined dynamic range, AE metadata, and motion level are above a first threshold value, a second threshold value, and a third threshold value, respectively.
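By way of a rough, non-limiting illustration of this decision rule, the Python sketch below encodes the three-way threshold comparison; the function name, parameter names, and the example threshold and signal values are assumptions introduced for illustration only and are not taken from the disclosure.

```python
def should_invoke_hdr(dynamic_range: float,
                      ae_metadata: float,
                      motion_level: float,
                      first_threshold: float,
                      second_threshold: float,
                      third_threshold: float) -> bool:
    """Return True when every preview-frame signal exceeds its threshold,
    which is the condition described above for invoking the HDR mode."""
    return (dynamic_range > first_threshold
            and ae_metadata > second_threshold
            and motion_level > third_threshold)


# Example with made-up manufacturer thresholds and preview measurements:
# a high-contrast, steady scene triggers the HDR mode.
print(should_invoke_hdr(dynamic_range=2.4, ae_metadata=1.8, motion_level=0.9,
                        first_threshold=1.5, second_threshold=1.0,
                        third_threshold=0.5))  # True
```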
Turning now to the drawings, one example of the present system is described in detail, although any suitable example may be employed. A block diagram of a camera 100 that captures images is illustrated in the drawings and described below.
The image sensor 102 captures an image. Specifically, during image-capture operations, light from a scene may be focused onto the image sensor 102. During a preview or viewfinder mode, the image sensor 102 captures the image, converts it into an electrical signal, and provides the signal to the image-processing engine 104. It may be understood by one skilled in the art that various types of image sensors 102 may be used in the camera 100.
The camera 100 further comprises the image-processing engine 104. The image-processing engine 104 receives the electrical signal from the image sensor 102 and processes the signal to detect various parameters. Specifically, the image-processing engine 104 comprises the dynamic-range analysis unit 106, the AE-metadata analysis unit 108, and the motion-level analysis unit 110. The dynamic-range analysis unit 106, on receiving the electrical signal, determines a dynamic range of a plurality of frames associated with the image for a particular scene. The dynamic range may be defined as the luminance ratio of the brightest element in a given scene to the darkest element in the given scene. Typically, cameras and other imaging devices capture images having a dynamic range that is smaller than that of real-world scenes.
Further, during the image-sensor operations, the dynamic range of the scene to be captured may vary. For example, the dynamic range of a scene that is imaged may change as the image sensor 102 is pointed in different directions, as the lighting conditions of the scene change, etc. The dynamic range determined by the dynamic-range analysis unit 106 indicates whether the scene would benefit from an improved dynamic range, for example to bring out details in dark or shadow regions or in bright regions. This is one of the factors in determining whether the HDR mode should be invoked.
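The disclosure does not prescribe how the dynamic-range analysis unit 106 computes this ratio; one plausible sketch, shown below under that assumption, estimates it from preview-frame luminance values using percentiles to reject outlier pixels. The function name and percentile choices are illustrative.

```python
import numpy as np

def estimate_dynamic_range(luminance: np.ndarray,
                           low_pct: float = 1.0,
                           high_pct: float = 99.0) -> float:
    """Approximate the scene dynamic range as the ratio of a bright
    percentile to a dark percentile of the preview-frame luminance."""
    bright = float(np.percentile(luminance, high_pct))
    dark = max(float(np.percentile(luminance, low_pct)), 1e-6)  # avoid division by zero
    return bright / dark

# Synthetic preview frame with deep shadows and bright highlights:
frame = np.concatenate([np.full(1000, 4.0), np.full(1000, 240.0)])
print(estimate_dynamic_range(frame))  # large ratio suggests HDR may help
```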
Further, the AE-metadata analysis unit 108 determines a delta between the current scene's AE-tone characteristics, based on raw-image statistics, and the target AE-tone characteristics. The resulting AE metadata indicate the bi-modal strength of the scene, how much the image or parts of the image should be pushed down for tone mapping, how much saturation should be applied to the color space, and the tone-map steepness of the scene. The AE-metadata analysis unit 108 therefore determines the AE metadata for the frames associated with the image in the preview mode.
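The exact AE statistics are not spelled out in the disclosure; purely as a sketch of the kind of delta the AE-metadata analysis unit 108 might produce, the snippet below derives a few illustrative tone measures from raw luminance values. Every statistic name, cutoff, and target value here is an assumption.

```python
import numpy as np

def ae_metadata_delta(raw_luminance: np.ndarray,
                      target_mean: float = 118.0) -> dict:
    """Compare the current scene's tone statistics against illustrative AE
    targets: how far the mean exposure is from target, and how strongly the
    histogram is split between shadows and highlights (a rough proxy for
    the bi-modal strength mentioned above)."""
    shadow_frac = float(np.mean(raw_luminance < 32))
    highlight_frac = float(np.mean(raw_luminance > 224))
    return {
        "mean_delta": float(np.mean(raw_luminance)) - target_mean,
        "bimodal_strength": shadow_frac + highlight_frac,
        "highlight_push_down": highlight_frac,  # rough measure of tone-mapping pressure
    }

# Example on a synthetic bi-modal scene:
scene = np.concatenate([np.full(500, 10.0), np.full(500, 245.0)])
print(ae_metadata_delta(scene))
```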
Furthermore, the motion-level analysis unit 110 detects motion associated with the plurality of frames within the scene in the preview mode, allowing the motion-level analysis unit 110 to determine whether motion of the subject, or of the entire scene, would result in a poor capture. Based on this determination, the motion-level analysis unit 110 determines the motion level associated with the frames of the image in the preview mode. In accordance with an embodiment, the motion-level analysis unit 110 may comprise a motion sensor, for example, an accelerometer, a gyroscope, or the like, for detecting the motion.
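Because the disclosed rule invokes the HDR mode when the motion level is above its threshold, the motion level is treated in the following sketch as a steadiness-style score (higher means less disruptive motion); that interpretation, the gyroscope-based metric, and the mapping below are all assumptions rather than the disclosed implementation.

```python
import numpy as np

def motion_level_from_gyro(gyro_samples: np.ndarray) -> float:
    """Map gyroscope angular-rate samples (rad/s, one row per reading) to a
    steadiness score in (0, 1]: a nearly stationary camera scores close to 1,
    a rapidly moving one close to 0."""
    rms_rate = float(np.sqrt(np.mean(np.sum(gyro_samples ** 2, axis=1))))
    return 1.0 / (1.0 + rms_rate)

# A hand-held but steady camera:
samples = np.array([[0.01, 0.00, 0.02],
                    [0.00, 0.01, 0.00]])
print(motion_level_from_gyro(samples))  # close to 1.0
```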
Further, three signals comprising the dynamic range, the AE metadata, and the motion level are provided to the processing circuitry 112. In accordance with an embodiment, the filter 120 in the processing circuitry 112 receives the three signals and filters them to protect against oscillation. The filter 120 may be a temporal hysteresis filter. In accordance with an embodiment, the filter 120 tracks the dynamic range, the AE metadata, and the motion level for a specified time period. The specified time period may be dynamically determined by the processor 114. The filter 120 determines a stability value of the dynamic range, the AE metadata, and the motion level for the plurality of frames captured in the specified time period. The stability value predicts a future stability of the measured dynamic range, AE metadata, and motion level. In an embodiment, the filter 120 predicts this future stability based on historical scene motion, historical dynamic range, and historical AE statistics evaluated over the specified time period.
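As a rough sketch of what such a filter could do for one of the three signals, the class below applies hysteresis around that signal's threshold (so the on/off indication does not oscillate) and reports a simple stability flag once a full window of recent values stays within a small band. The margin, window length, spread test, and class interface are illustrative assumptions, not the disclosed design of the filter 120.

```python
from collections import deque

class TemporalHysteresisFilter:
    """Filter one preview signal against its threshold with hysteresis and
    track whether its recent history is steady enough to trust."""

    def __init__(self, threshold: float, margin: float = 0.1, window: int = 30):
        self.high = threshold + margin   # must rise above this to switch on
        self.low = threshold - margin    # must fall below this to switch off
        self.history = deque(maxlen=window)
        self.above = False

    def update(self, value: float) -> tuple[bool, bool]:
        """Return (above_threshold, stable) for the latest preview frame."""
        self.history.append(value)
        if value > self.high:
            self.above = True
        elif value < self.low:
            self.above = False   # values inside the band keep the previous state
        full = len(self.history) == self.history.maxlen
        stable = full and (max(self.history) - min(self.history)) <= 2 * (self.high - self.low) / 2
        return self.above, stable

# One filter per signal; e.g. the dynamic-range signal against the first threshold.
dr_filter = TemporalHysteresisFilter(threshold=1.5, margin=0.1, window=5)
for reading in (1.45, 1.55, 1.62, 1.58, 1.61):
    above, stable = dr_filter.update(reading)
print(above, stable)  # True, True: steadily above the first threshold
```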
In accordance with an embodiment, the filtered data from the filter 120 are provided to the processor 114 in the processing circuitry 112. In another embodiment, the processor 114 constantly receives three signals, comprising the dynamic range, the AE metadata, and the motion level, from the image-processing engine 104 as the preview frames are processed, and then the signals are filtered by the filter 120.
In addition, the processor 114 in the processing circuitry 112 may include one or more integrated circuits (e.g., image-processing circuits, microprocessors, storage devices such as random-access memory and non-volatile memory, etc.) and may be implemented using components that are separate from the processing circuitry 112 or that form part of the processing circuitry 112 (e.g., circuits that form part of an integrated circuit that controls or reads pixel signals from image pixels in an image pixel array on image-processing engine 104 or an integrated circuit within the processing circuitry 112). Image data that have been captured by the image sensor 102 may be processed by the processor 114 and stored in the memory 116. Processed image data may, if desired, be provided to external equipment (e.g., a computer or other device) using wired or wireless communications paths coupled to the image-processing engine 104.
Further, the processing circuitry 112 automatically invokes the HDR mode based on the three input values received from the dynamic-range analysis unit 106, the AE-metadata analysis unit 108, and the motion-level analysis unit 110 in the image-processing engine 104. Specifically, when the dynamic range, the AE metadata, and the motion level are greater than a first threshold value, a second threshold value, and a third threshold value, respectively, the HDR mode is invoked. The values of the first, second, and third thresholds may be set by the manufacturer of the camera.
The processor 114 is further coupled to the memory 116 and the timer 118. The processor 114 writes data to and reads data from the memory 116. In an embodiment, the memory 116 stores the values of the dynamic range, the AE metadata, the motion level, the first threshold value, the second threshold value, and the third threshold value. The timer 118 is used to set the specified time period which can be stored in the memory 116.
Therefore, in accordance with the embodiments of the present disclosure, the HDR mode is invoked based on the values of the dynamic range, AE metadata, and the motion level.
It is to be understood that
With this general background in mind, turn now to the example process 200.
The example process 200 begins at step 202 when the image sensor 102 in the camera 100 detects the preview mode. In one embodiment, detecting the preview mode comprises receiving a plurality of frames associated with the preview mode. In other words, the image sensor 102 receives the frames associated with the current preview of the scene or the image captured during the preview mode.
The method 200 then determines 204 a dynamic range, AE metadata, and a motion level associated with the plurality of frames. In one embodiment, the dynamic range and the AE metadata are measured for each frame of the plurality of frames captured. In accordance with the embodiment, the dynamic range, the AE metadata, and the motion level are constantly generated as preview frames are captured by the image sensor 102.
The method 200 then determines 206 if each of the dynamic range, the AE metadata, and the motion level is above a first threshold value, a second threshold value, and a third threshold value, respectively. If it is determined that each of the dynamic range, the AE metadata, and the motion level is above the first threshold value, the second threshold value, and the third threshold value, respectively, then the HDR mode is invoked. However, if at least one of the dynamic range, the AE metadata, and the motion level is below the first threshold, the second threshold, or the third threshold, respectively, then the method 200 invokes 210 the non-HDR mode. In accordance with another embodiment, the HDR mode is invoked when at least one of the dynamic range and the AE metadata is above the first threshold or the second threshold, respectively, and the motion level is above the third threshold. In other words, it is possible for one of the dynamic range or the AE metadata to be below the first threshold or the second threshold, respectively, and still invoke the HDR mode.
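A sketch of this decision step, covering both the default and the alternative embodiment just described, could look like the following; the function name and the `relaxed` flag used to select between the two embodiments are assumptions for illustration.

```python
def decide_capture_mode(dynamic_range: float, ae_metadata: float,
                        motion_level: float,
                        first_threshold: float, second_threshold: float,
                        third_threshold: float,
                        relaxed: bool = False) -> str:
    """Return "HDR" or "non-HDR" for the current preview frames.

    Default embodiment: all three signals must exceed their thresholds.
    Alternative embodiment (relaxed=True): the motion level must exceed the
    third threshold, but only one of the dynamic range or the AE metadata
    needs to exceed its respective threshold."""
    if relaxed:
        hdr = (motion_level > third_threshold
               and (dynamic_range > first_threshold
                    or ae_metadata > second_threshold))
    else:
        hdr = (dynamic_range > first_threshold
               and ae_metadata > second_threshold
               and motion_level > third_threshold)
    return "HDR" if hdr else "non-HDR"

# Under the alternative embodiment, a low dynamic range can still yield HDR
# when the AE metadata and motion level both clear their thresholds.
print(decide_capture_mode(0.8, 1.9, 0.9, 1.5, 1.0, 0.5, relaxed=True))  # HDR
```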
Therefore, in accordance with the embodiments, the decision to invoke the HDR mode is based on accessing the three input signals generated by the dynamic-range analysis unit 106, the AE-metadata analysis unit 108, and the motion-level analysis unit 110. Further, the three input signals are combined such that an accurate decision can be made as to whether to invoke the HDR mode.
The electronic device 300 includes a transceiver 304 configured for sending and receiving data. The transceiver 304 is linked to one or more antennas 302. The electronic device 300 also includes a processor 306 that executes stored programs. The processor 306 may be implemented as any programmed processor and may be configured to operate with the different antennas and transceivers for different 3G or other networks. However, the functionality described herein may also be implemented on a general-purpose or special-purpose computer, a programmed microprocessor or microcontroller, peripheral integrated-circuit elements, an application-specific integrated circuit or other integrated circuits, hardware logic circuits such as a discrete-element circuit, or a programmable logic device such as a programmable logic array, a field-programmable gate array, or the like.
The electronic device 300 further includes a memory 308. The processor 306 writes data to and reads data from the memory 308. The electronic device 300 includes a user-input interface 310 that may include one or more of a keypad, display screen, touch screen, and the like. The electronic device 300 also includes an audio interface 312 that includes a microphone and a speaker. The electronic device 300 also includes a component interface 314 to which additional elements may be attached. Possible additional elements include a universal serial bus interface. The electronic device 300 includes a power-management module 316. The power-management module 316, under the control of the processor 306, controls the amount of power used by the transceiver 304 to transmit signals. Finally, the electronic device 300 also includes a camera 318 (for example, the camera 100 described above).
In an embodiment, the user interface 310 includes a display screen, such as a touch-sensitive display that displays, to the user, the output of various application programs executed by the processor 306. The user interface 310 additionally includes on-screen buttons that the user can press in order to cause the electronic device 300 to respond. The content shown on the user interface 310 is generally provided to the user interface at the direction of the processor 306. Similarly, information received through the user interface 310 is provided to the processor 306, which may then cause the electronic device 300 to carry out a function whose effects may or may not necessarily be apparent to a user.
In view of the many possible embodiments to which the principles of the present discussion may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the claims. Therefore, the techniques as described herein contemplate all such embodiments as may come within the scope of the following claims and equivalents thereof.