An embodiment of the invention relates generally to devices having multiple cameras, and in particular, to devices for and methods of processing image data in a composite image.
Digital cameras are electronic devices that capture an image that is stored in a digital format. Other electronic devices, such as smart phones, tablets or other portable devices, are often equipped with a camera to enable the capture of images in a digital format. As the demands for improved functionality of cameras or electronic devices having cameras have increased, multiple cameras having different functionality have been implemented in electronic devices. According to some implementations, a dual camera module in an electronic device may contain two different lenses/sensors, or may have different camera characteristics, such as exposure settings.
Many scenes captured by a device having digital cameras contain a very large dynamic range (i.e. both very bright and very dark areas exist in the same scene). It is difficult for a camera to record the scene using a single exposure, so multiple exposures are often used, where the image frames are combined using high dynamic range (HDR) algorithms. These algorithms may select the parts of each image frame that are best exposed among the multiple exposures. These parts are composited together using blending. In blending, multiple image frames are combined together using corresponding blending weights to control how each frame should contribute to the final result. A specific artifact, known as the halo artifact, is often created around a transition point between light and dark areas of a scene.
Therefore, devices and methods of blending images that reduce transition artifacts such as the halo artifact are beneficial.
A method of processing image data in a composite image is described. The method comprises analyzing a multi-level blending of images of the composite image; identifying sharp transition boundary areas of the composite image; applying less attenuation for higher levels of multi-level blending of the composite image in the sharp transition boundary areas; and applying more attenuation for lower levels of multi-level blending of the composite image in the sharp transition boundary areas.
A device for enhancing quality of an image is also described. The device comprises a processor circuit configured to: analyze a multi-level blending of images of the composite image; identify sharp transition boundary areas of the composite image; apply less attenuation for higher levels of multi-level blending of the composite image in the sharp transition boundary areas; and apply more attenuation for lower levels of multi-level blending of the composite image in the sharp transition boundary areas.
A non-transitory computer-readable storage medium having data stored therein representing software executable by a computer for enhancing quality of an image is also described. The software comprises instructions for: analyzing a multi-level blending of images of the composite image; identifying sharp transition boundary areas of the composite image; applying less attenuation for higher levels of multi-level blending of the composite image in the sharp transition boundary areas; and applying more attenuation for lower levels of multi-level blending of the composite image in the sharp transition boundary areas.
The devices and methods set forth below detect transition boundaries associated with a scene where the blending control may change quickly from a dark region to a bright region. After detection of the transition boundaries, the devices and methods provide control of the blending to suppress the halo artifact. The devices and methods may detect where the halo artifact will be created and control the blending to prevent the blended result from exhibiting the halo artifact. For example, the devices and methods may look for large changes in the blending weights, where areas with large changes may mark the sharp transition boundaries. The circuits and methods may be beneficial when used with images having different exposures, or when an image generated using a flash is blended with an image generated using no flash.
According to some implementations, the devices and methods provide an extension to pyramid blending, where detection may be applied at multiple levels of a blending pyramid. Fine details are recorded in the high-resolution levels of the blending pyramid. Less control is applied at these levels to avoid smoothing fine details in the output image. Large scale transitions are recorded in lower levels, so it is possible to apply detection and more attenuation, also known as suppression, in the areas detected in the lower levels to control the halo artifact. Therefore, the devices and methods allow halo attenuation to be performed in the pyramid without losing fine details in the output image. Attenuation may be applied at each level using controllable thresholds. The detail level in low resolution levels may be reduced at points where there is quick transition in the blending weights. Control adaptation may be based upon scene content. The devices and methods allow the halo region to be detected and control how much high-pass energy to pass to the output to avoid the halo overshoot and undershoot.
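The detection described above, which looks for large changes in the blending weights, can be sketched as follows. This is an illustrative reconstruction rather than the patented implementation; the gradient operator and the threshold value are assumptions not taken from the text.

```python
import numpy as np

def detect_sharp_transitions(weights, threshold=0.2):
    """Mark pixels where the blending weights change quickly.

    Large local changes in the weight map (e.g. at a dark-to-bright
    boundary) are where halo artifacts tend to form. The threshold
    value here is an illustrative choice.
    """
    # Finite-difference gradients of the weight map.
    gy, gx = np.gradient(weights.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # Boolean mask of sharp-transition areas.
    return magnitude > threshold

# Example: a weight map with an abrupt dark/bright boundary at column 4.
w = np.zeros((8, 8))
w[:, 4:] = 1.0
mask = detect_sharp_transitions(w)
```

Only the columns adjacent to the jump in the weights are flagged, so subsequent attenuation can be confined to the transition boundary area.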
Further, while the specification includes claims defining the features of one or more implementations of the invention that are regarded as novel, it is believed that the circuits and methods will be better understood from a consideration of the description in conjunction with the drawings. While various circuits and methods are disclosed, it is to be understood that the circuits and methods are merely exemplary of the inventive arrangements, which can be embodied in various forms. Therefore, specific structural and functional details disclosed within this specification are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the inventive arrangements in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting, but rather to provide an understandable description of the circuits and methods.
Turning first to
The processor 102 may be coupled to a display 106 for displaying information to a user. The processor 102 may also be coupled to a memory 108 that enables storing information related to data or information associated with image data. The memory 108 could be implemented as a part of the processor 102, or could be implemented in addition to any cache memory of the processor, as is well known. The memory 108 could include any type of memory, such as a solid state drive (SSD), Flash memory, Read Only Memory (ROM) or any other memory element that provides long-term memory, where the memory could be any type of internal memory of the electronic device or external memory accessible by the electronic device.
A user interface 110 is also provided to enable a user to both input data and receive data. Some aspects of recording images may require a user's manual input. The user interface 110 could include a touch screen user interface commonly used on a portable communication device, such as a smart phone, smart watch or tablet computer, and other input/output (I/O) elements, such as a speaker and a microphone. The user interface 110 could also comprise devices for inputting or outputting data that could be attached to the mobile device by way of an electrical connector, or by way of a wireless connection, such as a Bluetooth or a Near Field Communication (NFC) connection.
The processor 102 may also be coupled to other elements that receive input data or provide data, including various sensors 120, an inertial measurement unit (IMU) 112 and a Global Positioning System (GPS) device 113 for activity tracking. For example, the IMU 112 can provide various information related to the motion or orientation of the device, while the GPS 113 provides location information associated with the device. The sensors, which may be a part of or coupled to a mobile device, may include by way of example a light intensity (e.g. ambient light or UV light) sensor, a proximity sensor, an environmental temperature sensor, a humidity sensor, a heart rate detection sensor, a galvanic skin response sensor, a skin temperature sensor, a barometer, a speedometer, an altimeter, a magnetometer, a Hall sensor, a gyroscope, a Wi-Fi transceiver, or any other sensor that may provide information related to achieving a goal. The processor 102 may receive input data by way of an input/output (I/O) port 114 or a transceiver 116 coupled to an antenna 118. While the elements of the electronic device are shown by way of example, it should be understood that other elements could be implemented in the electronic device of
Turning now to
According to some implementations, a circuit and method of implementing a blending pyramid accepts two different images, such as a long exposure input and a short exposure input, along with a frame of blending weights, as will be described in more detail in reference to
According to the example of
Turning now to
In a reconstruction pyramid, when the base and detail images are combined together, the original input image is reconstructed. For the blending application, each level of the image pyramid also contains a blending block that combines information from both input images to create a blended detail image. The lowest level of the pyramid contains the low frequency views of the two input images. These are also blended together to form the blended base layer. The blended base layer along with the blended detail layers from the blending pyramid are given to the reconstruction pyramid to form the final blended output image. It should be noted that the correction for transition effects such as the halo effect may be performed during the blending process, rather than after blending is completed. According to one implementation, the blending pyramid may comprise 8 levels, where blending may be performed in certain levels. For example, blending at the portion of the image that is detected as having a transition artifact may not be performed in the top 3 levels having the highest resolution (i.e. Levels 1-3 of the example of
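A minimal sketch of such a blending and reconstruction pyramid follows. Simple 2x2 block-average downsampling and nearest-neighbor upsampling stand in for the filtering steps the text leaves unspecified, and no halo attenuation is applied here; it is a simplified illustration, not the patented implementation.

```python
import numpy as np

def downsample(img):
    # 2x2 block average; a stand-in for a blur-and-decimate step.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbor expansion; a stand-in for interpolation.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def blend_pyramid(i1, i2, w, levels=3):
    """Blend two exposures with a Laplacian-style pyramid.

    At each level the detail (high-pass) images of both inputs are
    combined using the weight map for that level; the low-frequency
    residuals are blended to form the base layer, and the output is
    then reconstructed level by level.
    """
    details = []
    for _ in range(levels):
        i1_lo, i2_lo, w_lo = downsample(i1), downsample(i2), downsample(w)
        d1 = i1 - upsample(i1_lo)                 # detail layer, image 1
        d2 = i2 - upsample(i2_lo)                 # detail layer, image 2
        details.append(w * d1 + (1.0 - w) * d2)   # blended detail layer
        i1, i2, w = i1_lo, i2_lo, w_lo
    out = w * i1 + (1.0 - w) * i2                 # blended base layer
    for d in reversed(details):                   # reconstruction pyramid
        out = upsample(out) + d
    return out
```

When the weights are 1.0 everywhere, the base and detail layers telescope and the first input image is reconstructed exactly, matching the perfect-reconstruction property described above.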
Turning now to
As shown in the graph of
Out=(Wi*I1+(1−Wi)*I2)*A (1)
The attenuation factor, A, may be computed using a linear curve, as shown in
y=1−m(x−t1), (2)
where the slope m can be:
m=(1−Amin)/(t2−t1) (3).
Therefore, the blended output in the regions identified as having transition artifacts is attenuated by multiplying the standard output for that level by an attenuation value A that is less than or equal to one. According to one implementation, the value of A may be equal to 1.0 until a first threshold value t1 associated with a Wdi value is reached, and then decrease linearly to a minimum of Amin, which is reached at a second threshold value t2.
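The piecewise-linear attenuation curve built from equations (2) and (3) can be sketched as follows; applying it to the magnitude of the weight difference, and the particular values of t1, t2 and Amin, are illustrative assumptions.

```python
import numpy as np

def attenuation_factor(wd, t1=0.1, t2=0.4, a_min=0.5):
    """Piecewise-linear attenuation factor A.

    A stays at 1.0 until the weight difference magnitude reaches t1,
    then falls with slope m = (1 - Amin) / (t2 - t1) per equation (3),
    and is clamped at Amin beyond t2. Threshold values are illustrative.
    """
    x = np.abs(wd)
    m = (1.0 - a_min) / (t2 - t1)   # slope, equation (3)
    a = 1.0 - m * (x - t1)          # linear curve, equation (2)
    return np.clip(a, a_min, 1.0)   # flat at 1.0 below t1, Amin above t2
```

Multiplying a level's blended output by this factor, per equation (1), suppresses the high-pass energy only where the blending weights change quickly.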
In order to determine values of t1 and t2, a difference image Wd is determined, where
Wd=Wi−Wdi, (4)
where Wi may be based upon an input image to a low pass filter (the output of which is a downsampled image) and Wdi is based upon an upsampled version of the corresponding downsampled Wi image. According to one implementation, the Wdi values may be generated after a Laplacian transform for example (in order to generate a difference image based upon images of the same resolution). The values of t1 and t2 are selected so that the resulting difference image Wd avoids halo effects at transition edges between light and dark areas. While a Laplacian technique may be employed, other techniques such as wavelet decomposition could be used instead.
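The difference image of equation (4) can be sketched as follows, again assuming block-average downsampling and nearest-neighbor upsampling as stand-ins for the unspecified filtering; Wd then captures the quickly changing part of the blending weights.

```python
import numpy as np

def weight_difference(wi):
    """Difference image Wd = Wi - Wdi per equation (4).

    Wdi is an upsampled version of the downsampled weight map Wi, so
    both images have the same resolution and Wd isolates the
    high-frequency component of the weights. The filter choices here
    are illustrative assumptions.
    """
    h, w = wi.shape
    lo = wi.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # downsample
    wdi = lo.repeat(2, axis=0).repeat(2, axis=1)             # upsample
    return wi - wdi
```

A constant weight map yields Wd of zero everywhere, while a sharp step in the weights produces large positive and negative Wd values on either side of the edge, exactly the locations where attenuation is applied.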
The parameters (i.e. controllable thresholds Amin, t1 and t2) of the graph of
Turning now to
Turning now to
According to some implementations, analyzing a multi-level blending of images may comprise analyzing a pyramid blending of images of the composite image. The method may also comprise selectively applying attenuation at predetermined levels of the plurality of levels, including applying attenuation using controllable thresholds. For example, the controllable thresholds may comprise a first threshold for starting attenuation, a second threshold for ending attenuation, and a minimum attenuation value applied during attenuation. Different controllable thresholds may be applied to different levels of a pyramid blending of the images of the composite image. The method may also comprise generating a blended output of the images of the composite image after applying less attenuation for higher levels of multi-level blending of the composite image in the sharp transition boundary areas and applying more attenuation for lower levels of multi-level blending of the composite image in the sharp transition boundary areas. While specific elements of the method are described, it should be understood that additional elements of the method, or additional details related to the elements, could be implemented according to the disclosure of
Accordingly, the halo attenuation technique may be implemented as a modification to a conventional blending algorithm. However, the same technique can be applied in any blending algorithm where there is a distinction between a base level image and a detail level image. Modification of the details at lower resolutions reduces the halo effect because the halo is an artifact that affects large-scale transitions in a scene. The technique is presented in the context of high dynamic range imaging, where multiple exposures are captured to increase the dynamic range of the combined image output. The technique can also be used to help reduce artifacts when combining flash and no-flash images, for example. The technique avoids overshoots and undershoots around transition regions between different exposures.
It can therefore be appreciated that new circuits for and methods of implementing a device having multiple cameras have been described. It will be appreciated by those skilled in the art that numerous alternatives and equivalents will be seen to exist that incorporate the disclosed invention. As a result, the invention is not to be limited by the foregoing implementations, but only by the following claims.
Number | Date | Country | |
---|---|---|---|
20200265566 A1 | Aug 2020 | US |