The present invention is directed generally to methods of quantifying and visualizing micro-contrast in images and digital video.
Within an image, contrast may be defined as a difference in luminance and/or color that makes an object distinguishable from other features (e.g., other objects) within the image. Thus, contrast may be most visible at an edge or boundary between different objects. On the other hand, micro-contrast refers to contrast between adjacent (or nearly adjacent) pixels. Therefore, micro-contrast is not concerned with edges or boundaries. Further, while micro-contrast differs from sharpness, an image with good micro-contrast may appear sharper.
Unfortunately, some image processing techniques (e.g., physical anti-aliasing filters and/or software-based moiré pattern reduction) can result in relatively homogeneous adjacent pixels. In other words, these techniques may reduce micro-contrast within a resultant image. This problem can be further exacerbated by a color filter array ("CFA") demosaicing scheme that prioritizes high output spatial resolution over native acutance. To overcome the problem of near-pixel homogeneity, overlays used to visualize focus are commonly derived from pixel-submatrix convolution operators (e.g., Sobel, Roberts Cross, or Prewitt). Unfortunately, these approaches struggle to discriminate between high-frequency detail and Gaussian noise.
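By way of a non-limiting example, a conventional gradient-operator overlay of the kind described above may be sketched as follows. This sketch illustrates the prior-art approach, not the claimed method; the function name, the default threshold, and the use of the SciPy library are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_peaking_mask(gray, threshold=0.25):
    """Mark pixels whose gradient magnitude exceeds `threshold`.

    `gray` is a 2-D array of luminance values scaled to [0, 1]."""
    gx = convolve2d(gray, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(gray, SOBEL_Y, mode="same", boundary="symm")
    magnitude = np.hypot(gx, gy)
    # Both genuine high-frequency detail and Gaussian noise raise the
    # gradient magnitude, so a threshold alone cannot separate them.
    return magnitude > threshold
```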
The camera 204 is mounted on the housing 202. The camera 204 is configured to capture the digital video 203 and store that digital video 203 in the memory 208. The captured digital video 203 includes a series of root images (e.g., including a root image 240) of a scene. By way of a non-limiting example, the camera 204 may be implemented as a camera or video capture device 158 (see
The processor(s) 206 is/are configured to execute software instructions stored in the memory 208. By way of a non-limiting example, the processor(s) 206 may be implemented as a central processing unit (“CPU”) 150 (see
The display 210 is positioned to be viewed by the user while the user operates the video capture system 200. The display 210 is configured to display a preview of the digital video 203 being captured by the camera 204. By way of a non-limiting example, the display 210 may be implemented as a conventional display device, such as a touch screen. The display 210 may be mounted on the housing 202. For example, the display 210 may be implemented as a display 154 (see
The manual control(s) 220 is/are configured to be operated by the user and may affect properties (e.g., focus, exposure, and the like) of the digital video 203 being captured. The manual control(s) 220 may be implemented as software controls that generate virtual controls displayed by the display 210. In such embodiments, the display 210 may be implemented as a touch screen configured to receive user input that manually manipulates the manual control(s) 220. Alternatively, the manual control(s) 220 may be implemented as physical controls (e.g., buttons, knobs, and the like) disposed on the housing 202 and configured to be manually manipulated by the user. In such embodiments, the manual control(s) 220 may be connected to the processor(s) 206 and the memory 208 by the bus 212.
By way of non-limiting examples, the manual control(s) 220 may include a focus control 220A, an exposure control 220B, and the like. The focus control 220A may be used to change the focus of the digital video being captured by the camera 204. The exposure control 220B may change an ISO value, shutter speed, aperture, or an exposure value (“EV”) of the digital video being captured by the camera 204.
The memory 208 stores an Inductive Micro-Contrast Evaluation (“IMCE”) module 230 implemented by the processor(s) 206. In some embodiments, the IMCE module 230 may generate and display the virtual controls implementing the manual control(s) 220. Alternatively, the manual control(s) 220 may be implemented by other software instructions stored in the memory 208.
In first block 282 (see
At this point, the IMCE module 230 processes each root pixel of the root image 240 one at a time. Thus, in block 284 (see
Next, in block 286 (see
Referring to
In block 290 (see
Alternatively, the following Equation 2 may be used to calculate the value of the micro-contrast score (“MCS”):
In Equations 1 and 2 above, the sample pixels SP1 and SP2 are a first pair of pixels along a first diagonal and the sample pixels SP3 and SP4 are a second pair of pixels along a second diagonal. In Equations 1 and 2, the length of the difference between a first pixel and a second pixel within one of the first and second pairs (e.g., length(SP1-SP2)) is determined by subtracting the RGB color values of the second pixel from the RGB color values of the first pixel to obtain a resultant set of RGB color values. Then, the resultant RGB color values obtained for each of the first and second pairs are treated as three-dimensional coordinates and each converted into a length using the following Equation 3:
length = √(R² + G² + B²)  Eq. 3
Thus, the value of length(SP1-SP2) is calculated using the following Equation 4:
length(SP1-SP2) = √((R_SP1 - R_SP2)² + (G_SP1 - G_SP2)² + (B_SP1 - B_SP2)²)  Eq. 4
Similarly, the value of length(SP3-SP4) is calculated using the following Equation 5:
length(SP3-SP4) = √((R_SP3 - R_SP4)² + (G_SP3 - G_SP4)² + (B_SP3 - B_SP4)²)  Eq. 5
In the equations above, the RGB color values may each be scaled to range from zero to one. In such embodiments, a maximum possible value of each of the length(SP1-SP2) and the length(SP3-SP4) is the square root of three. A total maximum distance may be defined as twice the square root of three (2√3 ≈ 3.464). Thus, the IMCE method 280 generates the micro-contrast score ("MCS") as a sum of a first distance between a first pair of RGB vectors (length(SP1-SP2)) and a second distance between a second pair of RGB vectors (length(SP3-SP4)), divided by the total maximum distance. Therefore, the micro-contrast score ("MCS") may be characterized as a percentage of the total maximum distance.
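By way of a non-limiting example, the per-pixel score calculation described above may be sketched as follows; the function name and the array-based interface are illustrative assumptions rather than part of the claimed method.

```python
import numpy as np

TOTAL_MAX_DISTANCE = 2.0 * np.sqrt(3.0)  # 2 * sqrt(3), about 3.464

def micro_contrast_score(sp1, sp2, sp3, sp4):
    """Score one root pixel from its four sample pixels.

    Each argument is an RGB triple scaled to [0, 1]; SP1/SP2 form the
    first diagonal pair and SP3/SP4 the second."""
    sp1, sp2, sp3, sp4 = (np.asarray(p, dtype=float)
                          for p in (sp1, sp2, sp3, sp4))
    d1 = np.linalg.norm(sp1 - sp2)  # length(SP1-SP2), Eq. 4
    d2 = np.linalg.norm(sp3 - sp4)  # length(SP3-SP4), Eq. 5
    # Sum of the two diagonal distances, expressed as a fraction of
    # the total maximum distance.
    return (d1 + d2) / TOTAL_MAX_DISTANCE
```

For example, micro_contrast_score((1, 1, 1), (0, 0, 0), (1, 1, 1), (0, 0, 0)) returns 1.0, the maximum possible score, because both diagonal pairs span the full length of the RGB cube's diagonal.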
Next, referring to
On the other hand, the decision in decision block 292 is “NO,” when the IMCE module 230 (see
In block 296 (see
For example, the IMCE module 230 may use a blend ratio 234 to blend the assistive overlay 232 and the root image 240 together. The blend ratio 234 determines how much the assistive overlay 232 and the root image 240 each contribute to the mutated image 250. The blend ratio 234 may be characterized as including first and second weights that sum to one. The assistive overlay 232 may be weighted by the first weight and the root image 240 may be weighted by the second weight. The pixels of the assistive overlay 232 are blended with the root pixels by applying the first weight to each of the pixels of the assistive overlay 232 and the second weight to each of the root pixels. Then, the weighted pixels of the assistive overlay 232 are added to the weighted root pixels to obtain the mutated pixels. Thus, each pixel of the assistive overlay 232 may be blended with a corresponding root pixel using a per-pixel linear-mix operation.
Optionally, only those portions of the assistive overlay 232 having a micro-contrast score greater than or equal to a minimum evaluation threshold value may be blended with the root image 240. Thus, portions of the assistive overlay 232 having a micro-contrast score less than the minimum evaluation threshold value may be omitted from the mutated image 250. For any omitted portion(s) of the assistive overlay 232, the corresponding root pixel is used in the mutated image 250. The minimum evaluation threshold value may vary based on desired sensitivity.
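By way of a non-limiting example, the weighted blend and threshold gating described above may be sketched as follows; the function name, the default overlay weight, and the default threshold value are illustrative assumptions.

```python
import numpy as np

def blend_overlay(root, overlay, scores, overlay_weight=0.5,
                  min_threshold=0.1):
    """Per-pixel linear mix of the assistive overlay and the root image.

    `root` and `overlay` are H x W x 3 arrays in [0, 1]; `scores` is an
    H x W array of micro-contrast scores. The two blend weights sum to
    one: the overlay contributes `overlay_weight` and the root image
    contributes 1 - overlay_weight."""
    mixed = overlay_weight * overlay + (1.0 - overlay_weight) * root
    keep = (scores >= min_threshold)[..., np.newaxis]  # broadcast to RGB
    # Below the minimum evaluation threshold, the root pixel is used.
    return np.where(keep, mixed, root)
```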
The assistive overlay 232 visualizes the entire focal plane and may appear as colored lines drawn on top of regions of the root image 240 (e.g., regions assigned micro-contrast scores greater than or equal to the minimum evaluation threshold value). Unlike prior art colorized edge-detection overlays (often referred to as "focus peaking"), the IMCE module 230 does not use a single arbitrary color to visualize focus or micro-contrast. Instead, the IMCE module 230 may paint a linear blend of the first and second overlay colors (which together form the assistive overlay 232) on the root image 240.
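By way of a non-limiting example, mapping each micro-contrast score to a linear blend of the first and second overlay colors may be sketched as follows; using the score itself as the blend factor is an illustrative assumption, as the exact mapping may vary by embodiment.

```python
import numpy as np

def score_to_overlay_color(scores, color_low, color_high):
    """Map each score in [0, 1] to a blend of two overlay colors.

    `scores` is an H x W array; `color_low` and `color_high` are RGB
    triples in [0, 1] used for the lowest and highest scores."""
    t = np.clip(scores, 0.0, 1.0)[..., np.newaxis]
    low = np.asarray(color_low, dtype=float)
    high = np.asarray(color_high, dtype=float)
    return (1.0 - t) * low + t * high  # H x W x 3 overlay image
```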
The assistive overlay 232 may be challenging to see if the underlying root image 240 is chromatically vibrant. Optionally, to make the assistive overlay 232 easier to see, the IMCE module 230 may globally desaturate the root image 240 to generate a desaturated version of the root image 240. Then, the IMCE module 230 creates the mutated image 250 by blending the assistive overlay 232 with the desaturated version of the root image 240.
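By way of a non-limiting example, the optional desaturation step may be sketched as follows; the Rec. 601 luma weights and the adjustable amount parameter are illustrative assumptions rather than requirements of the method.

```python
import numpy as np

# Rec. 601 luma weights; the exact weights are an implementation choice.
LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])

def desaturate(image, amount=1.0):
    """Blend an H x W x 3 RGB image (values in [0, 1]) toward its
    grayscale luminance; amount=1.0 yields full desaturation."""
    gray = image @ LUMA_WEIGHTS            # per-pixel luminance, H x W
    gray_rgb = np.repeat(gray[..., np.newaxis], 3, axis=-1)
    return (1.0 - amount) * image + amount * gray_rgb
```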
Then, the IMCE method 280 (see
Referring to
Referring to
The assistive overlay 232 helps the user visualize degrees of micro-contrast, not focus. Therefore, the assistive overlay 232 (or micro-contrast map) generated by the IMCE method 280 (see
The micro-contrast score ("MCS") is generated on a per-pixel basis and is always subject to user interpretation. Neither the video capture system 200 nor the IMCE method 280 (see
Using the assistive overlay 232, the user may adjust the manual control(s) 220 (e.g., the focus control 220A) or other parameters to improve the micro-contrast in the root image 240. For example, the user may adjust the lighting and then view the assistive overlay 232 (e.g., in a preview on the display 210) to see whether the change in lighting improved the micro-contrast in the current root image 240.
The mobile communication device 140 includes the CPU 150. Those skilled in the art will appreciate that the CPU 150 may be implemented as a conventional microprocessor, application specific integrated circuit (ASIC), digital signal processor (DSP), programmable gate array (PGA), or the like. The mobile communication device 140 is not limited by the specific form of the CPU 150.
The mobile communication device 140 also contains the memory 152. The memory 152 may store instructions and data to control operation of the CPU 150. The memory 152 may include random access memory, read-only memory, programmable memory, flash memory, and the like. The mobile communication device 140 is not limited by any specific form of hardware used to implement the memory 152. The memory 152 may also be integrally formed in whole or in part with the CPU 150.
The mobile communication device 140 also includes conventional components, such as a display 154 (e.g., operable to display the mutated image 250), the camera or video capture device 158, and a keypad or keyboard 156. These are conventional components that operate in a known manner and need not be described in greater detail. Other conventional components found in wireless communication devices, such as a USB interface, a Bluetooth interface, an infrared device, and the like, may also be included in the mobile communication device 140. For the sake of clarity, these conventional elements are not illustrated in the functional block diagram of
The mobile communication device 140 also includes a network transmitter 162, which may be used by the mobile communication device 140 for normal wireless network communication with a base station (not shown).
The mobile communication device 140 may also include a conventional geolocation module (not shown) operable to determine the current location of the mobile communication device 140.
The various components illustrated in
The memory 152 may store instructions executable by the CPU 150. The instructions may implement portions of one or more of the methods described above (e.g., the IMCE method 280 illustrated in
The foregoing described embodiments depict different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
Accordingly, the invention is not limited except as by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/468,063, filed on Mar. 7, 2017, and U.S. Provisional Application No. 62/468,874, filed on Mar. 8, 2017, both of which are incorporated herein by reference in their entireties.