The present invention is directed generally to methods of reducing or removing chromatic noise in images and digital video.
Luminance noise refers to fluctuations in brightness. Luminance noise may appear as light and dark specks (e.g., within a region of an image in which pixels should have the same or similar brightness). Chromatic or chroma noise refers to fluctuations in color. Chroma noise may appear as specks or blotches of unexpected color(s) (e.g., within a region of an image in which pixels should have the same or similar colors). Chroma noise is often more apparent in very dark or very light areas of an image and may give the image an unnatural appearance.
Image editing software often includes a user input (e.g., slider) that may be used to remove chroma noise manually. Software may also automatically remove chroma noise by decolorizing any pixels that have an unexpected color when compared to their neighboring pixels. Decolorized pixels are set to black, which essentially converts the chroma noise to luminance noise. Then, other image processing techniques may be applied to the image to remove the luminance noise and improve the overall appearance of the image.
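A minimal sketch of this conventional approach, assuming a 3 × 3 neighborhood, a median comparison, and an arbitrary threshold (none of which are specified above), might look like the following; the function name and parameter values are illustrative only:

```python
import numpy as np

def decolorize_chroma_outliers(rgb, threshold=0.15):
    """Decolorize pixels whose color deviates strongly from their neighbors.

    rgb: H x W x 3 float array in [0, 1].
    threshold: illustrative chroma-difference cutoff (an assumed value).
    """
    # Per-pixel luminance (Rec. 709 weights) and chroma (color minus luminance).
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    chroma = rgb - luma[..., None]

    h, w, _ = rgb.shape
    out = rgb.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Median chroma of the eight neighbors of (x, y).
            patch = chroma[y - 1:y + 2, x - 1:x + 2].reshape(9, 3)
            neighbor_chroma = np.median(np.delete(patch, 4, axis=0), axis=0)
            # If this pixel's chroma is an outlier, decolorize it (set it to black,
            # as described above), converting chroma noise into luminance noise.
            if np.linalg.norm(chroma[y, x] - neighbor_chroma) > threshold:
                out[y, x] = 0.0
    return out
```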
The camera 204 is mounted on the housing 202. The camera 204 is configured to capture the digital video 203 and store that digital video 203 in the memory 208. The captured digital video 203 includes a series of root images (e.g., including a root image 240) of a scene. By way of a non-limiting example, the camera 204 may be implemented as a camera or video capture device 158 (see
The processor(s) 206 is/are configured to execute software instructions stored in the memory 208. By way of a non-limiting example, the processor(s) 206 may be implemented as a central processing unit (“CPU”) 150 (see
The display 210 is positioned to be viewed by the user while the user operates the video capture system 200. The display 210 is configured to display a preview of the digital video 203 being captured by the camera 204. By way of a non-limiting example, the display 210 may be implemented as a conventional display device, such as a touch screen. The display 210 may be mounted on the housing 202. For example, the display 210 may be implemented as a display 154 (see
The manual control(s) 220 is/are configured to be operated by the user and may affect properties (e.g., focus, exposure, and the like) of the digital video 203 being captured. The manual control(s) 220 may be implemented as software controls that generate virtual controls displayed by the display 210. In such embodiments, the display 210 may be implemented as a touch screen configured to receive user input that manually manipulates the manual control(s) 220. Alternatively, the manual control(s) 220 may be implemented as physical controls (e.g., buttons, knobs, and the like) disposed on the housing 202 and configured to be manually manipulated by the user. In such embodiments, the manual control(s) 220 may be connected to the processor(s) 206 and the memory 208 by the bus 212.
By way of non-limiting examples, the manual control(s) 220 may include a focus control 220A, an exposure control 220B, and the like. The focus control 220A may be used to change the focus of the digital video being captured by the camera 204. The exposure control 220B may change an ISO value, shutter speed, aperture, or an exposure value (“EV”) of the digital video being captured by the camera 204.
The memory 208 stores a noise decay module 230 implemented by the processor(s) 206. In some embodiments, the noise decay module 230 may generate and display the virtual controls implementing the manual control(s) 220. Alternatively, the manual control(s) 220 may be implemented by other software instructions stored in the memory 208.
In first block 282 (see
In decision block 284 (see
When the decision in decision block 284 (see
Then, the noise decay module 230 advances to block 288 (see
At this point, the noise decay module 230 processes each root pixel of the root image 240 one at a time. Thus, in block 288 (see
Then, in block 290 (see
The relative luminance (“Y”) of a particular pixel may be calculated using the following function in which a variable “s” represents the three linearized RGB color values (R_linear, G_linear, and B_linear) of the particular pixel expressed as an RGB vector:

Y = dot(s, vec3(0.2126, 0.7152, 0.0722))  Eq. 1

Y = [R_linear, G_linear, B_linear] · [0.2126, 0.7152, 0.0722] = (R_linear × 0.2126) + (G_linear × 0.7152) + (B_linear × 0.0722)  Eq. 2
Using the above equation, the relative luminance (“Y”) may be calculated for each pixel in a two-dimensional region of the root image 240 centered at the selected root pixel. For example, the region may be three pixels by three pixels. In this example, the selected root pixel may be characterized as being an origin of the region (which includes the root pixel and its eight surrounding neighbors) and assigned a coordinate value of (0, 0). Thus, a separate relative luminance value may be calculated for each of the eight root pixels neighboring the selected root pixel as well as for the selected root pixel. In this example, the following set of nine relative luminance values would be calculated: Y(−1,−1), Y(−1,0), Y(−1,1), Y(0,−1), Y(0,0), Y(0,1), Y(1,−1), Y(1,0), and Y(1,1). Then, these relative luminance values may be combined to determine the relative luminance (“Y”) of the selected root pixel. For example, an average or a median of the relative luminance values may be calculated and used as the relative luminance (“Y”) of the selected root pixel.
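A brief sketch of this neighborhood computation, assuming a 3 × 3 region combined by a simple average (the function and variable names are illustrative, not taken from this disclosure):

```python
import numpy as np

# Relative-luminance weights from Eq. 2 (Rec. 709 / sRGB primaries).
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def neighborhood_relative_luminance(linear_rgb, x, y):
    """Relative luminance Y of the root pixel at (x, y), combined over its 3 x 3 region.

    linear_rgb: H x W x 3 array of linearized RGB values in [0, 1].
    Border pixels (with fewer than eight neighbors) are assumed to be handled elsewhere.
    """
    # Y = dot(s, [0.2126, 0.7152, 0.0722]) for each pixel at offsets (-1,-1) through (1,1).
    region = linear_rgb[y - 1:y + 2, x - 1:x + 2]   # 3 x 3 x 3 block centered on (x, y)
    y_values = region @ LUMA_WEIGHTS                # the nine values Y(-1,-1) ... Y(1,1)
    # The specification permits either an average or a median of the nine values.
    return float(np.mean(y_values))
```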
If the color values of the selected root pixel (represented by the RGB vector “s”) are linear, the perceptual luminance (“p”) of the selected root pixel equals the relative luminance (“Y”) of the selected root pixel. Otherwise, the relative luminance (“Y”) may be linearized to obtain the perceptual luminance (“p”) using the following formula:
The perceptual luminance (“p”) in the RGB color space may be used by the method 280 (see
Next, in block 292 (see
linear monochromatic RGB vector = [p, p, p]  Eq. 4
In block 294 (see
biased monochromatic RGB vector = [o*p, o*p, o*p]  Eq. 5
The relative-luminance weighted saturation bias (“o”) may be calculated using the following formula:
o = 0.16667 × ln(p) + 1.0  Eq. 6
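As a minimal sketch of Eqs. 4 through 6, given the perceptual luminance p of the selected root pixel, the biased monochromatic vector might be computed as follows (the function names are illustrative):

```python
import math

def saturation_bias(p):
    """Eq. 6: o = 0.16667 * ln(p) + 1.0, where p is the perceptual luminance (0 < p <= 1)."""
    return 0.16667 * math.log(p) + 1.0

def biased_monochromatic_vector(p):
    """Eq. 4 builds the linear monochromatic vector [p, p, p];
    Eq. 5 scales it by the bias o, giving [o*p, o*p, o*p]."""
    o = saturation_bias(p)
    return [o * p, o * p, o * p]

# Example: a dark pixel (p = 0.1) yields a bias o of roughly 0.62, so its
# monochromatic vector is pulled further toward black than a bright pixel's.
print(biased_monochromatic_vector(0.1))  # -> approximately [0.0616, 0.0616, 0.0616]
```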
In block 296 (see
Next, in decision block 298 (see
On the other hand, the decision in decision block 298 (see
At this point, a new pixel has been generated for each of the root pixels. Combined, the new pixels define the denoised image 250. Optionally, the denoised image 250 may be remapped to a different color space. For example, the linear RGB values may be remapped to the sRGB color space. The denoised image 250 may be subject to one or more additional operations, such as gamma curve remapping, luma curve augmentation (shadow/highlight repair), histogram equalization, additional spatial denoising, RGB mixing, and lookup table application. Optionally, the denoised image 250 may be displayed to the user using the display 210.
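For instance, the standard linear-to-sRGB encoding could be used for the remapping step mentioned above; the sketch below shows that general-purpose transfer function and is not asserted to be the specific remapping performed by the noise decay module 230:

```python
def linear_to_srgb(c):
    """Encode one linear channel value c in [0, 1] into the sRGB color space."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * (c ** (1.0 / 2.4)) - 0.055

# Remap each channel of a linear RGB new pixel into sRGB.
pixel_linear = [0.25, 0.5, 0.75]
pixel_srgb = [linear_to_srgb(c) for c in pixel_linear]
```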
The method 280 (see
Referring to
The method 280 decays the chrominance of the root image 240 and generates the denoised image 250 within the gamut of the original color space (e.g., the sRGB color space) of the root image 240.
The mobile communication device 140 includes the CPU 150. Those skilled in the art will appreciate that the CPU 150 may be implemented as a conventional microprocessor, application specific integrated circuit (ASIC), digital signal processor (DSP), programmable gate array (PGA), or the like. The mobile communication device 140 is not limited by the specific form of the CPU 150.
The mobile communication device 140 also contains the memory 152. The memory 152 may store instructions and data to control operation of the CPU 150. The memory 152 may include random access memory, read-only memory, programmable memory, flash memory, and the like. The mobile communication device 140 is not limited by any specific form of hardware used to implement the memory 152. The memory 152 may also be integrally formed in whole or in part with the CPU 150.
The mobile communication device 140 also includes conventional components, such as a display 154 (e.g., operable to display the denoised image 250), the camera or video capture device 158, and a keypad or keyboard 156. These conventional components operate in a known manner and need not be described in greater detail. Other conventional components found in wireless communication devices, such as a USB interface, a Bluetooth interface, an infrared device, and the like, may also be included in the mobile communication device 140. For the sake of clarity, these conventional elements are not illustrated in the functional block diagram of
The mobile communication device 140 also includes a network transmitter 162 such as may be used by the mobile communication device 140 for normal network wireless communication with a base station (not shown).
The mobile communication device 140 may also include a conventional geolocation module (not shown) operable to determine the current location of the mobile communication device 140.
The various components illustrated in
The memory 152 may store instructions executable by the CPU 150. The instructions may implement portions of one or more of the methods described above (e.g., the method 280 illustrated in
The foregoing described embodiments depict different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims.

It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.).

It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations.

In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
Accordingly, the invention is not limited except as by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/468,063, filed on Mar. 7, 2017, and U.S. Provisional Application No. 62/468,874, filed on Mar. 8, 2017, both of which are incorporated herein by reference in their entireties.