The invention described herein relates generally to the field of high dynamic range (HDR) imaging. Previous attempts at creating HDR images have experienced shortcomings related to the temporal disparity between multiple image captures, and the expense and complication associated with multiple imaging paths. The temporal proximity of multiple image captures is limited by the rate at which the images can be read out from the image sensor. Greater temporal disparity among captures increases the likelihood of motion existing among the captures, whether camera motion related to hand jitter, or scene motion resulting from objects moving within the scene. Motion increases the difficulty of merging multiple images into a single output image. The use of multiple imaging paths and sensors introduces a correspondence problem among the multiple images and thus generates images having different perspectives.
Described herein is a method and camera for generating a high dynamic range (HDR) image. The method comprises the first step of receiving a first optical signal from a lens and generating a first output signal at a first image acquisition chip. The first image acquisition chip is coated with a first partial reflection coating. The second step provides for reflecting the first optical signal off the first partial reflection coating to create a second optical signal such that the second optical signal has a lower intensity than the first optical signal. The third step provides for receiving the second optical signal and generating a second output signal at a second image acquisition chip. The fourth step provides for combining the first and second output signals to create the HDR image.
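The four steps above can be sketched numerically. The following is a minimal NumPy sketch, not the claimed implementation: the transmittance `T` and reflectance `R` of the coating, the unit full-well level, and the assumption that the second chip is uncoated (so it captures the entire reflected portion) are all illustrative choices, not values from the description.

```python
import numpy as np

# Hypothetical coating values (not specified in the description): the first
# chip's coating transmits a fraction T of the light and reflects a fraction R.
T, R = 0.9, 0.1

def capture(radiance, gain, full_well=1.0):
    """Simulate one chip: scale the scene radiance by the fraction of light
    the chip receives, then clip at the sensor's saturation level."""
    return np.clip(radiance * gain, 0.0, full_well)

scene = np.array([0.05, 0.5, 2.0, 6.0])  # linear scene radiance (arbitrary units)

first_output  = capture(scene, T)  # bright exposure: transmitted portion
second_output = capture(scene, R)  # dim exposure: reflected portion, captured
                                   # simultaneously by the second chip
```

With these values the bright exposure saturates on the two brightest pixels while the dim exposure retains them; that retained headroom is what the combining step exploits.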
The HDR camera may be described as comprising a lens, first and second image acquisition chips, and a processor. The first image acquisition chip is positioned to receive a first optical signal from the lens and configured to generate a first output signal in response to receiving the first optical signal. Also, the first image acquisition chip is coated with a first partial reflection coating such that an intensity-reduced portion of the first optical signal is reflected off the first partial reflection coating to create a second optical signal. The second image acquisition chip is positioned to receive the second optical signal and configured to generate a second output signal in response to receiving the second optical signal. The processor is configured to combine the first and second output signals to create an HDR image.
Alternatively, the HDR camera may be described as comprising a lens, first and second image acquisition chips, and a processor. The first image acquisition chip is positioned to receive a first optical signal from the lens. The first image acquisition chip is coated with a first partial reflection coating such that a first portion of the first optical signal is transmitted through the first partial reflection coating and a second portion of the first optical signal is reflected off the first partial reflection coating to create a second optical signal. The first image acquisition chip is configured to generate a first output signal in response to receiving the first portion of the first optical signal. The second image acquisition chip is positioned to receive the second optical signal and configured to generate a second output signal in response to receiving the second optical signal. The processor is operatively coupled to receive the first and second output signals and configured to combine the first and second output signals to create an HDR image.
Throughout the several views, like elements are referenced using like references. The elements in the figures are not drawn to scale and some dimensions are exaggerated for clarity.
Described herein is a method 10 for generating a high dynamic range (HDR) image. Generally, method 10 may be described as a process for multiplexing a live image into multiple images with different light-exposure levels with no time delay between images.
The lens 14 may be any optical device capable of transmitting and/or refracting light. The lens may be a simple lens with a single optical element or a compound lens having an array of optical elements with a common axis. Suitable examples of the lens include, but are not limited to, spherical, convex, concave, and planar lenses. The first and second image acquisition chips 16 and 18, and all other image acquisition chips described herein, may be any device capable of converting an optical signal into an output signal that can be communicated to the processor 20. Suitable examples for each of the first and second image acquisition chips 16 and 18 include, but are not limited to, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, and Lytro®'s light field sensor that incorporates a micro-lens array.
The processor 20 may be any device capable of combining the first and second output signals 24 and 30 into an HDR image. Accordingly, the processor 20 is in communication with the first and second image acquisition chips 16 and 18. The processor 20 may comprise electrical circuitry and software for merging the first and second output signals 24 and 30, and any additional output signals, into an HDR image. The processor 20 may be internal or external to the camera 12. The processor 20 may be configured to run any suitable image processing software. Suitable image processing software is known and commercially available. Examples include HDR software packages available from Nikon® and Canon®, such as Canon®'s Digital Photo Professional software. Included within the suitable software packages is software that utilizes a bilateral filter algorithm, such as is described in Kuang et al., "Evaluating HDR Rendering Algorithms", ACM Transactions on Applied Perception, Vol. 4, Issue 2, July 2007.
The processor 20 may be operatively coupled to a non-transitory computer-readable medium. The non-transitory computer-readable medium has computer-readable instructions stored thereon that, when executed by the processor 20, cause the camera 12 to: allow the first and second image acquisition chips 16 and 18 to receive the first and second optical signals 22 and 28, respectively; receive the first and second output signals 24 and 30 at the processor 20; and merge, with the processor 20, the image data from the first and second output signals 24 and 30 into a single image having an increased dynamic range relative to that of the lower-dynamic-range images from which it is produced. The image data may be transferred in various formats, including raw and Joint Photographic Experts Group (JPEG) formats.
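The merging step can be illustrated with a short NumPy sketch. This is not the commercial software named above: it assumes the relative exposure (gain) of each chip is known from the coating's transmittance and reflectance, excludes clipped pixels, and averages the rest after dividing out each gain; the gain values and sample outputs are hypothetical.

```python
import numpy as np

def merge_hdr(outputs, gains, sat=1.0):
    """Merge linear sensor outputs into one radiance estimate per pixel.
    Saturated pixels are excluded; the remaining measurements are averaged
    after dividing out each chip's exposure gain."""
    outputs = np.asarray(outputs, dtype=float)
    gains = np.asarray(gains, dtype=float).reshape(-1, 1)
    weights = (outputs < 0.99 * sat).astype(float)   # drop clipped pixels
    radiance = (weights * outputs / gains).sum(axis=0)
    return radiance / np.maximum(weights.sum(axis=0), 1e-12)

bright = np.array([0.045, 1.0, 1.0])   # chip 1 (gain 0.9), clipped at 1.0
dim    = np.array([0.005, 0.2, 0.6])   # chip 2 (gain 0.1), unclipped
hdr = merge_hdr([bright, dim], [0.9, 0.1])
```

The recovered radiance spans values well above the saturation level of either single capture, which is the sense in which the merged image has increased dynamic range.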
The first partial reflection coating 26, and all other partial reflection coatings described herein, may consist of any substance that will allow a portion of an optical signal to pass through the coating while simultaneously reflecting a portion of the optical signal. The first partial reflection coating 26, and all other partial reflection coatings described herein, may be deposited directly onto a corresponding image acquisition chip such as by chemical vapor deposition, or each partial reflection coating may be a separate optical device, optically coupled to its corresponding image acquisition chip. The first partial reflection coating 26, and all other partial reflection coatings described herein, may be any desired thickness. Suitable examples of a partial reflection coating include, but are not limited to, a neutral density filter, clear glass, and shaded glass.
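Because each coated chip records the transmitted portion of its incident light and passes only the reflected portion down the chain, the light reaching successive chips falls off geometrically. The helper below is an illustrative sketch; the transmittance and reflectance values are hypothetical, absorption in the coatings is ignored, and the last chip (whose coating is optional) is assumed uncoated.

```python
# Illustrative (hypothetical) per-coating transmittance T and reflectance R;
# absorption in the coatings is ignored, so T + R <= 1.
def captured_fractions(n_chips, T=0.9, R=0.1):
    """Fraction of the original light captured by each chip in the chain."""
    incident, fractions = 1.0, []
    for k in range(n_chips):
        if k == n_chips - 1:
            fractions.append(incident)       # last chip: no coating assumed
        else:
            fractions.append(incident * T)   # transmitted portion is recorded
            incident *= R                    # reflected portion goes to chip k+1
    return fractions
```

Under these values each added chip records an exposure roughly one order of magnitude dimmer than the previous one, which is what extends the overall dynamic range of the combined image.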
Method 10 is able to capture multiple images at different exposure levels simultaneously, with no time delay between images. Diffraction theory, geometrical optics, aberration theory, and digital image processing may be applied to multiplex a live image into multiple images with different exposure levels. Under Fraunhofer diffraction theory, the optical field at the focal plane of a lens is, up to scaling, the two-dimensional (2D) Fourier transform of the field at the lens aperture, and may therefore be computed with a 2D fast Fourier transform (FFT). Aberrations are imperfections in the image, and all imaging systems will have some degree of aberration. If an aberration is known, it may be filtered from the output signal according to known image processing techniques.
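The focal-plane relationship can be demonstrated with a few lines of NumPy. The square aperture and grid size below are arbitrary choices for illustration; the result is the familiar sinc-squared diffraction pattern of a uniformly lit square opening.

```python
import numpy as np

# Fraunhofer-diffraction sketch: the field at the focal plane of a lens is
# (up to scaling) the 2D Fourier transform of the field at the aperture.
aperture = np.zeros((64, 64))
aperture[24:40, 24:40] = 1.0                 # uniformly lit square aperture

focal_field = np.fft.fftshift(np.fft.fft2(aperture))
focal_intensity = np.abs(focal_field) ** 2   # a 2D sinc^2 diffraction pattern
```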
In like manner, the additional image acquisition chip 362 is positioned to receive a portion of the fourth optical signal 42 that is transmitted through the partial reflection coating 382. The additional image acquisition chip 362 is configured to generate, and send to the processor 20, a fourth output signal 44 upon receiving the portion of the fourth optical signal 42 that is transmitted through the partial reflection coating 382. Another portion of the fourth optical signal 42 is reflected off the partial reflection coating 382 to create a fifth optical signal 45 that has a lower intensity than the fourth optical signal 42. This arrangement and process are repeated for each of the n additional image acquisition chips and partial reflection coatings. The camera 12 may have any desired number of additional image acquisition chips 36.
With multiple image acquisition chips, each with its own partial reflection coating (with the exception of the last image acquisition chip in the optical chain where the partial reflection coating is optional), the image of object 23 is focused in the center of each acquisition chip of camera 12, while the rest of each chip will experience misfocus aberration and astigmatism. The misfocus aberration and astigmatism can be corrected by using fractional Fourier transform algorithms as is known in the art. An example of using fractional Fourier transform algorithms to correct for misfocus aberration may be found in U.S. Pat. No. 6,687,418 issued to Lester F. Ludwig, which is incorporated herein by reference. The same algorithms may be applied by the processor 20 to the output signals from each of the image acquisition chips to reconstruct and/or restore the respective images in each output signal. The processor 20 is configured to combine n output signals to create the HDR image.
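U.S. Pat. No. 6,687,418 describes the actual correction algorithms; as a toy illustration of the family of operators involved, the sketch below builds one simple (non-unique) discrete fractional Fourier transform by raising the unitary DFT matrix to a fractional power via its eigendecomposition. It is a sketch of the operator, not the patented misfocus-correction method.

```python
import numpy as np

def frft_matrix(n, a):
    """One simple discrete fractional Fourier transform of order a:
    a = 0 gives the identity, a = 1 the ordinary DFT; intermediate
    orders are the kind of operator used to model partial defocus."""
    F = np.fft.fft(np.eye(n), norm="ortho")   # unitary DFT matrix
    w, V = np.linalg.eig(F)                   # eigenvalues lie on the unit circle
    return V @ np.diag(w ** a) @ np.linalg.inv(V)

half = frft_matrix(8, 0.5)
# Applying the half-order transform twice reproduces the full DFT.
```

Note that the discrete fractional Fourier transform is not unique: within degenerate eigenspaces of the DFT the fractional power depends on the eigenvector choice, though any such choice composes correctly (half-order applied twice yields the full DFT).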
From the above description of method 10 and camera 12, it is manifest that various techniques may be used for implementing the concepts of method 10 and camera 12. The described embodiments are to be considered in all respects as illustrative and not restrictive. Further, the method and apparatus disclosed herein may be practiced in the absence of any element that is not specifically claimed and/or disclosed herein. For example, the method 10 may be practiced without the use of beam splitters. It should also be understood that method 10 and camera 12 are not limited to the particular embodiments described herein, but are capable of many embodiments without departing from the scope of the claims.
The United States Government has ownership rights in this invention. Licensing and technical inquiries may be directed to the Office of Research and Technical Applications, Space and Naval Warfare Systems Center, Pacific, Code 72120, San Diego, Calif., 92152; voice (619) 553-5118; ssc_pac_t2@navy.mil. Reference Navy Case Number 101438.
Number | Name | Date | Kind |
---|---|---|---|
5165068 | Baldwin | Nov 1992 | A |
5640206 | Kinoshita et al. | Jun 1997 | A |
5692226 | Hall | Nov 1997 | A |
5726709 | Kinoshita et al. | Mar 1998 | A |
6342980 | Omuro | Jan 2002 | B1 |
6590679 | Edgar et al. | Jul 2003 | B1 |
6687418 | Ludwig | Feb 2004 | B1 |
7202891 | Ingram | Apr 2007 | B1 |
7423679 | Szajewski et al. | Sep 2008 | B2 |
7535622 | Hewlett | May 2009 | B2 |
7742211 | Riley et al. | Jun 2010 | B2 |
8784301 | McDowall | Jul 2014 | B2 |
20020162973 | Cordingley et al. | Nov 2002 | A1 |
20050029458 | Geng et al. | Feb 2005 | A1 |
20050087829 | Merrill et al. | Apr 2005 | A1 |
20050093982 | Kuroki | May 2005 | A1 |
20050168746 | Ohtsuka | Aug 2005 | A1 |
20050200852 | Kimura | Sep 2005 | A1 |
20070081086 | Ingram | Apr 2007 | A1 |
20070115376 | Igarashi | May 2007 | A1 |
20070201738 | Toda et al. | Aug 2007 | A1 |
20080218851 | Chen et al. | Sep 2008 | A1 |
20080303927 | Khanh | Dec 2008 | A1 |
20090244717 | Tocci | Oct 2009 | A1 |
20100328780 | Tocci | Dec 2010 | A1 |
20110069189 | Venkataraman et al. | Mar 2011 | A1 |
20110200319 | Kravitz et al. | Aug 2011 | A1 |
20120001072 | Toda et al. | Jan 2012 | A1 |
20120200535 | Stienstra et al. | Aug 2012 | A1 |
20130194675 | Tocci | Aug 2013 | A1 |
20130319121 | Hill et al. | Dec 2013 | A1 |
20140235945 | McDowall | Aug 2014 | A1 |
20140331875 | Frye et al. | Nov 2014 | A1 |
20150003584 | Weisfield et al. | Jan 2015 | A1 |
Other Publications |
---|
McGuire, Morgan et al.; Optical Splitting Trees for High-Precision Monocular Imaging; Computational Photography, IEEE Computer Graphics and Applications; Mar. 2007. |
Tocci, Michael D. et al.; A Versatile HDR Video Production System; ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2011, vol. 30 Issue 4; Jul. 2011. |
Kiser, Chris et al.; Real Time Automated Tone Mapping System for HDR Video; IEEE International Conference on Image Processing; Sep. 30-Oct. 3, 2012. |
Ellerbroek, B. L. et al.; Inverse Problems in Astronomical Adaptive Optics; (Topical Review Article), Inverse Problems, 063001, vol. 25, No. 6; Jun. 2009. |
Rice University; Diffraction and Fourier Optics; Jul. 2011, available online at http://www.owlnet.rice.edu/˜dodds/Files332/fourier.pdf. |