The present invention relates generally to images. More particularly, an embodiment of the present invention relates to the adaptive prevention of false contouring artifacts in layered coding of images with extended dynamic range.
As used herein, the term ‘dynamic range’ (DR) may relate to a capability of the human psychovisual system (HVS) to perceive a range of intensity (e.g., luminance, luma) in an image, e.g., from darkest darks to brightest brights. In this sense, DR relates to a ‘scene-referred’ intensity. DR may also relate to the ability of a display device to adequately or approximately render an intensity range of a particular breadth. In this sense, DR relates to a ‘display-referred’ intensity. Unless a particular sense is explicitly specified at any point in the description herein, the term may be used in either sense, e.g., interchangeably.
As used herein, the term high dynamic range (HDR) relates to a DR breadth that spans some 14-15 orders of magnitude of the human visual system (HVS). For example, well adapted humans with essentially normal vision (e.g., in one or more of a statistical, biometric or ophthalmological sense) have an intensity range that spans about 15 orders of magnitude. Adapted humans may perceive dim light sources of as few as a mere handful of photons. Yet, these same humans may perceive the near painfully brilliant intensity of the noonday sun in desert, sea or snow (or even glance into the sun, however briefly, to prevent damage). This span, though, is available only to ‘adapted’ humans, e.g., those whose HVS has had a time period in which to reset and adjust.
In contrast, the DR over which a human may simultaneously perceive an extensive breadth in intensity range may be somewhat truncated, in relation to HDR. As used herein, the terms ‘extended dynamic range’, ‘visual dynamic range’ or ‘variable dynamic range’ (VDR) may individually or interchangeably relate to the DR that is simultaneously perceivable by a HVS. As used herein, VDR may relate to a DR that spans 5-6 orders of magnitude. Thus while perhaps somewhat narrower in relation to true scene referred HDR, VDR nonetheless represents a wide DR breadth. As used herein, the term VDR images or pictures may relate to images or pictures wherein each pixel component is represented by more than 8 bits.
Until fairly recently, displays have had a significantly narrower DR than HDR or VDR. Television (TV) and computer monitor apparatus that use typical cathode ray tube (CRT), liquid crystal display (LCD) with constant fluorescent white back lighting or plasma screen technology may be constrained in their DR rendering capability to approximately three orders of magnitude. Such conventional displays thus typify a low dynamic range (LDR), also referred to as a standard dynamic range (SDR), in relation to VDR and HDR.
As with the scalable video coding and HDTV technologies, extending image DR typically involves a bifurcate approach. For example, scene referred HDR content that is captured with a modern HDR capable camera may be used to generate either a VDR version or an SDR version of the content, which may be displayed on either a VDR display or a conventional SDR display. To conserve bandwidth or for other considerations, one may transmit VDR signals using a layered or hierarchical approach, using an SDR base layer (BL) and an enhancement layer (EL). Legacy decoders that receive the layered bit stream may use only the base layer to reconstruct an SDR picture; however, VDR-compatible decoders can use both the base layer and the enhancement layer to reconstruct a VDR stream.
In such layered VDR coding, images may be represented at different spatial resolutions, bit depths, and color spaces. For example, typical VDR signals are represented using 12 or more bits per color component, while typical SDR signals are represented using 8 bits per color component. Furthermore, base layer and enhancement layer signals may be further compressed using a variety of image and video compression schemes, such as those defined by the ISO/IEC specifications of the Moving Picture Experts Group (MPEG), such as MPEG-1, MPEG-2, MPEG-4 Part 2, and H.264/AVC.
Layered VDR coding introduces quantization in at least two segments of the coding pipeline: a) during the transformation of the VDR signal from a first bit depth (e.g., 12 bits per color component) to an SDR signal of a second, lower, bit depth (e.g., 8 bits per color component), and b) during the compression process of the base and enhancement layers. False contours may appear on reconstructed images as an artifact of such quantization.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, issues identified with respect to one or more approaches should not be assumed to have been recognized in any prior art on the basis of this section, unless otherwise indicated.
An embodiment of the present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
Adaptive prevention of false contouring artifacts in VDR layered coding is described herein. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are not described in exhaustive detail, in order to avoid unnecessarily occluding, obscuring, or obfuscating the present invention.
Overview
Example embodiments described herein relate to the adaptive quantization of VDR video signals to prevent false contouring artifacts in layered coding. In an embodiment, an encoder receives a sequence of images in extended or visual dynamic range (VDR). For each image, a dynamic range compression function and associated parameters are selected to convert the input image into a second image with a lower dynamic range. Using the input image and the second image, a residual image is computed. The input VDR image sequence is coded using a layered codec that uses the second image as a base layer and a residual image that is derived from the input and second images as one or more residual layers. Using the residual image, a false contour detection method (FCD) estimates the number of potential perceptually visible false contours in the decoded VDR image and iteratively adjusts the dynamic range compression parameters to reduce the number of false contours.
In an example embodiment the dynamic range compression function comprises a uniform quantizer.
In an example embodiment, the FCD method comprises a pixel-level contour detector and a picture-level contour detector.
In another embodiment, an encoder receives a scene (e.g., a group of pictures) of VDR images. A uniform dynamic range compression function comprising frame-dependent parameters CL[i] and CH[i] is applied to each frame i to convert each VDR image into an SDR image, where the SDR image has a lower dynamic range than the VDR image. After setting initial CL[i] and CH[i] values, in an iterative process, a residual image is computed using each VDR image and its corresponding SDR image. Using the residual image, a false contour detection method (FCD) computes the number of perceptually visible false contours in the residual image and iteratively adjusts either CL[i] or CH[i] to reduce the occurrence of false contours. After processing all frames in the scene, a scene-dependent CH value is computed based on the maximum of all the CH[i] values, and a scene-dependent CL value is computed based on the minimum of all the CL[i] values. During the compression of the input scene, the uniform dynamic range compression function is applied to all the images in the scene using the computed scene-dependent CH and CL values.
In another embodiment, a false contour detection metric is computed by comparing the edge contrast of a false contour to a visibility threshold based on system parameters and a model of a contrast sensitivity function (CSF).
Example Layered VDR System
In an embodiment, input signal V 105 may represent an input VDR signal represented by a high bit-depth resolution (e.g., 12 or more bits per color component in a 4:4:4 color format, such as RGB 4:4:4). This VDR signal may be processed by a dynamic range compression process (e.g., a tone mapping operator or a quantizer) 110 to generate signal S 112. In some embodiments, the dynamic range compression process may also comprise other non-linear or linear image transformation processes. Signal S may be at the same or a lower spatial resolution than signal V. Signal S may be represented at a lower bit-depth resolution than V, e.g., 8 bits per color component. Signal S may be in the same color format as V, or, in other embodiments, it may be in a different color format, e.g., YCbCr 4:2:0.
In an embodiment, base layer (BL) signal S 112 may be processed by BL Encoder 130 to derive compressed signal 135. In an embodiment, coding 130 may be implemented by any of the existing video encoders, such as an MPEG-2 or MPEG-4 video encoder, as specified by the Moving Picture Experts Group (MPEG) specifications.
An enhancement (or residual) layer 175 may be generated by decoding signal 135 in BL decoder 140, generating a predicted value of the original VDR signal V (165), and subtracting the predicted value (165) from the original to generate a residual signal 175. In an embodiment, predictor 160 may be implemented using multivariate multiple-regression models as described in International Patent Application No. PCT/US2012/033605 filed 13 Apr. 2012. Residual signal 175 may be further compressed by residual encoder 180 to generate an encoded residual signal 185. In an embodiment, coding 180 may be implemented by any of the existing video encoders, such as an MPEG-2 or MPEG-4 video encoder, as specified by the Moving Picture Experts Group (MPEG) specifications, or by other image and video encoders, such as JPEG2000, VP8, Flash video, and the like. Encoding 180 may also be preceded by other image processing operations, such as color transformations and/or non-linear quantization.
Due to the quantization processes in either dynamic range compressor 110 or encoders 130 and 180, in a VDR decoder, the reconstructed VDR signal may exhibit quantization-related artifacts, such as false contouring artifacts. One approach to reduce these artifacts may incorporate post-processing techniques, such as de-contouring; however, such techniques increase the computational complexity of the decoder, may not completely remove false contours, and may even introduce other undesired artifacts. A second approach to reduce false contouring may incorporate applying encoding pre-processing methods, such as dithering. However, pre-dithering increases the entropy of the input signal and thus degrades overall compression efficiency. An embodiment proposes a novel approach to prevent false contouring artifacts by adaptively adjusting the VDR-to-SDR dynamic range compression process 110 so that the number of potential false contours is minimized.
In an embodiment, encoding system 100 may also include a false contour detector (FCD) 120, which estimates the potential severity or visibility of false contouring artifacts in the decoded stream. As will be described later herein, given signals V and S, FCD 120 adjusts the parameters of dynamic range compressor 110 so that the number of potential false contouring artifacts is minimized. Such parameters may be transmitted to a decoder, for example, as part of the coded bit stream using metadata 167, or they may be used by the encoder to derive the prediction parameters being used in predictor 160.
As defined herein, the term “metadata” may relate to any auxiliary information that is transmitted as part of the coded bitstream and assists a decoder to render a decoded image. Such metadata may include, but are not limited to, information related to: color space or gamut transformations, dynamic range, and/or quantization parameters, such as those described herein.
Layered VDR coding system 100 includes the dynamic range compressor 110, which maps the original VDR signal 105 to a base layer (e.g., SDR) signal 112. In an embodiment, the input-output characteristics of the dynamic range compressor can be defined by a function Q( ), and a set of parameters P. Thus, given VDR input vi, the SDR output si can be expressed as
si=Q(vi, P), (1)

where P denotes a set of dynamic range compression parameters. For example, for a uniform quantizer, equation (1) may take the form

si=floor(((CH−CL)/(vH−vL))·(vi−vL)+CL+O), (2)

where O is a rounding offset and P={CH, CL}. In equation (2), the vL and vH values are typically defined based on the dynamic range characteristics of a group of frames in the input video sequence. Given vL and vH, the {CH, CL} parameters control the amount of dynamic range quantization and the number of potential perceptually visible false contours.
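The uniform quantizer of equations (1)-(2) may be sketched in Python as follows. The function and parameter names are illustrative, and the final clipping to the SDR code range is an assumption, not mandated by the text:

```python
import numpy as np

def quantize_uniform(v, v_low, v_high, c_low, c_high, offset=0.5):
    """Map VDR code values in [v_low, v_high] to SDR values in [c_low, c_high].

    A sketch of s_i = Q(v_i, P) with P = {c_high, c_low}; `offset` plays the
    role of the rounding offset O in equation (2).
    """
    v = np.asarray(v, dtype=np.float64)
    slope = (c_high - c_low) / (v_high - v_low)
    s = np.floor(slope * (v - v_low) + c_low + offset)
    # Values outside [v_low, v_high] map outside the SDR range and are clipped.
    return np.clip(s, c_low, c_high).astype(np.int64)
```

A wider [c_low, c_high] spread yields a finer quantization step, and hence fewer potential false contours, at the cost of a larger residual.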
In one embodiment, the quantizer is uniform and is applied only to the gamma-corrected luminance component of the VDR signal (e.g., the Y component in a YCbCr signal). In other embodiments, the quantizer may be non-uniform, may operate independently on more than one color component of the input signal, and may operate in a perceptual quantization domain.
Given equation (2), an embodiment of the de-quantization (or dynamic range expansion) process can be expressed as

v̂i=((vH−vL)/(CH−CL))·(si−CL)+vL. (3)
In one embodiment, the linear quantized data will be compressed via lossy compression 130 and the inverse quantization function may be obtained via linear regression, for example, by fitting a low order polynomial between the VDR and base layer data set. In an embodiment, given a parametric model of the dynamic range expansion function, such as:
Q−1(si)=a2·si²+a1·si+a0, (4)

the parameters of such a model (e.g., a0, a1, and a2) can be solved by minimizing the mean square error (MSE) between the predicted VDR values and the input VDR values, as depicted in equation (5) below:

{a0, a1, a2} = argmin Σi (vi − Q−1(si))². (5)
When the dynamic range compression is followed by clipping, prediction of the VDR data may be improved by using the unclipped SDR data to compute equation (5).
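The least-squares fit of equations (4)-(5) can be sketched as follows; `np.polyfit` is one of several equivalent MSE minimizers, and the function names are illustrative:

```python
import numpy as np

def fit_dequantizer(s, v):
    """Fit Q^-1(s) = a2*s^2 + a1*s + a0 by minimizing the mean square error
    between predicted and input VDR values, as in equations (4)-(5)."""
    a2, a1, a0 = np.polyfit(np.asarray(s, float), np.asarray(v, float), deg=2)
    return a0, a1, a2

def dequantize(s, a0, a1, a2):
    """Evaluate the second-order dynamic range expansion model of equation (4)."""
    s = np.asarray(s, float)
    return a2 * s**2 + a1 * s + a0
```

As noted above, when dynamic range compression is followed by clipping, the fit would use the unclipped SDR data.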
Given the dynamic range compression function of equation (1), an embodiment selects a set of parameters P to minimize the number of potential false contours in the reconstructed VDR signal. For example, given the linear quantizer of equation (2), if the distribution of the pixel values in the VDR picture has a main peak closer to zero pixel values and a long trail towards the maximum pixel value, CL may be set to zero (CL=0), and then a value for CH which minimizes the false contouring artifact in the final reconstructed VDR may be derived. A higher CH value may reduce the false contouring artifacts; however, high CH values may increase the bit rate of the residual stream 185, and thus affect overall coding efficiency.
In an embodiment, a selection of dynamic range compression parameters (e.g., [CL CH]) may be kept constant over the frames of a video scene (e.g., a group of pictures), which may support maintaining constant luminance within the scene and facilitate the motion estimation processes within the base layer and enhancement layer codecs (e.g., 130 and 180).
In an embodiment, a residual signal may be computed as

ri=vi−Q−1(si), (6)

which is input for further processing to FCD Counter 320. Given a residual image ri 319, FCD Counter unit 320 outputs (according to a given criterion) the number of potential perceptually visible false contours in the reconstructed VDR signal due to the parameters chosen for dynamic range compression.
In an embodiment, in equation (6), the residual may be computed as ri=vi−v̂i, where v̂i denotes the predicted VDR signal 165, derived directly from predictor 160.
In FCD Counter 320, detecting and counting false contours in the input residual comprises a pixel level detection component and a picture (or frame) level detection component.
For a given dynamic range compressor 110, let mi denote the range of vi input values that correspond to the same si output value. For example, for the uniform quantizer of equation (2),

mi=(vH−vL)/(CH−CL). (7)

In this embodiment, uniform quantization results in residual values that are approximately uniformly distributed within each such interval; hence, the standard deviation of the residual pixels in the area of interest is strongly correlated with the parameters [CL CH] that define the slope of the uniform quantizer.
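The correlation between the residual statistics and the quantizer slope can be checked numerically. The sketch below assumes a round-to-nearest uniform quantizer of step m (in VDR codes per SDR code), so that the residual in a smooth area is approximately uniform over an interval of width m, with standard deviation close to m/√12; the signal model is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

v_low, v_high, c_low, c_high = 0.0, 65535.0, 16.0, 235.0
step = (v_high - v_low) / (c_high - c_low)   # VDR codes per SDR code

# A smooth mid-tone region, as found in sky or vignette areas prone to contouring.
v = rng.uniform(20000.0, 30000.0, size=100_000)

s = np.round((v - v_low) / step + c_low)     # quantize (round to nearest)
v_hat = (s - c_low) * step + v_low           # de-quantize
residual = v - v_hat

# Standard deviation of a uniform distribution of width `step`.
predicted = step / np.sqrt(12.0)
measured = residual.std()
```

A steeper quantizer slope (larger CH−CL for fixed [vL, vH]) reduces `step` and, with it, the residual deviation that drives the false-contour detector.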
Let σ̄i denote the normalized standard deviation (step 430) of all the residual pixels in Ai (e.g., their standard deviation divided by mi), and let

σ̃i=median_filter(σ̄i), (12)

denote the median (step 440) of all normalized standard deviations in a pixel area Ãi surrounding pixel ri. Denote as αi a pixel-level false contour indicator; then, given thresholds TH, TL, LL, and LH, residual ri 319 is associated with a potential false contour if αi=1, where (step 450)

αi=(σ̃i<TH)·(σ̃i>TL)·(vi>LL)·(vi<LH). (13)
In one embodiment, in equation (12), area Ãi=Ai. In other embodiments, area Ãi may be larger than Ai. In an example embodiment, TH=1/5 and TL=1/16, and for 16-bit VDR inputs, LL=10,000 and LH=60,000. The false contouring artifact exists in areas where σ̃i is smaller than the threshold TH. However, very small values of σ̃i may indicate areas that were already very smooth; hence, to avoid false positives, σ̃i must also be higher than the threshold TL. From a perceptual point of view, false contouring artifacts are difficult to observe in very dark and very bright areas; hence, thresholds LL and LH may be used to define the very dark and very bright areas, respectively, where detecting false contours may be less significant.
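The pixel-level test of equation (13) can be sketched as follows. Square sliding windows stand in for areas Ai and Ãi, the local standard deviation is normalized by the quantization step, and border pixels where the windows do not fit are dropped; window sizes and these border choices are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def pixel_level_fcd(residual, v, step, win=5,
                    t_h=1/5, t_l=1/16, l_l=10_000, l_h=60_000):
    """Per-pixel false-contour indicator alpha_i of equation (13).

    `residual` and `v` are 2-D arrays; `step` is the quantization step used
    to normalize the local standard deviation.  The returned map is smaller
    than the input by (win - 1) on each axis because border windows are dropped.
    """
    # Normalized local standard deviation over win x win areas (cf. eq. (11)).
    windows = sliding_window_view(residual, (win, win))
    sigma = windows.std(axis=(-1, -2)) / step
    # Median of the normalized deviations over a surrounding area (eq. (12)).
    med_w = sliding_window_view(sigma, (win, win))
    sigma_t = np.median(med_w, axis=(-1, -2))

    pad = win - 1                              # total border lost by both passes
    v_c = v[pad:-pad, pad:-pad]                # align v with sigma_t
    alpha = ((sigma_t < t_h) & (sigma_t > t_l) &
             (v_c > l_l) & (v_c < l_h))
    return alpha
```

The thresholds default to the example values quoted in the text (TH=1/5, TL=1/16, LL=10,000, LH=60,000 for 16-bit input).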
While αi of equation (13) provides an indication of false contours at the pixel level, from a perceptual point of view, detecting false contours at the picture level may be more significant. For example, a false contouring artifact will be perceptually visible only if a large picture area has multiple pixels with αi=1 and these pixels are connected.
In an embodiment, a picture or video frame is divided into non-overlapping blocks Bj (e.g., 16×16 or 32×32 pixel blocks) (step 510). Let βj denote a binary, block-level false contour indicator that represents how noticeable false contouring artifacts are in that block. In an embodiment, given the block-level measures bj and cj of equations (14) and (15), where |Bj| denotes the number of pixels in area Bj (e.g., bj may denote the fraction of pixels in Bj for which αi=1), then

βj=(bj>TB)·(cj<TB). (16)
The variable cj and the threshold TB of equations (15) and (16) are introduced to compensate for the false contour detection across two blocks. In an embodiment, TB=0.1.
After computing βj for each block (step 520), a potential false contour is counted when a sufficiently large group of blocks with βj=1 is connected (step 530). In an embodiment, given a threshold Tc (e.g., Tc=0), let

{θk}=connected_component{βj}, (17)

denote the set of 4-connected components among the blocks for which βj=1. For example, using the MATLAB programming language, {θk} can be computed using the function bwconncomp(βj, 4). Then, in step 540, the value of the FCD metric

FCD=|{θk : |θk|>Tc}|, (18)

denotes the number of potential perceptually visible false contouring artifacts in the picture area of interest.
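Steps 510-540 can be sketched end to end in Python. The block measure bj is assumed here to be the fraction of flagged pixels in the block, and the cross-block term cj of equation (15) is omitted for brevity; 4-connected labeling is done with a plain flood fill rather than MATLAB's bwconncomp:

```python
import numpy as np

def fcd_count(alpha, block=16, t_b=0.1, t_c=0):
    """Count potential perceptually visible false contours (cf. eqs. (14)-(18)).

    `alpha` is the binary pixel-level indicator map.  Blocks whose fraction of
    flagged pixels exceeds t_b are marked, 4-connected groups of marked blocks
    are labeled, and groups of more than t_c blocks are counted.
    """
    h, w = alpha.shape
    hb, wb = h // block, w // block
    tiles = alpha[:hb * block, :wb * block].reshape(hb, block, wb, block)
    b = tiles.mean(axis=(1, 3))              # fraction of flagged pixels (b_j)
    beta = b > t_b                           # block-level indicator (beta_j)

    # 4-connected component labeling via iterative flood fill.
    labels = np.zeros_like(beta, dtype=int)
    sizes = []
    for i in range(hb):
        for j in range(wb):
            if beta[i, j] and labels[i, j] == 0:
                stack, size = [(i, j)], 0
                labels[i, j] = len(sizes) + 1
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < hb and 0 <= nx < wb and
                                beta[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = len(sizes) + 1
                            stack.append((ny, nx))
                sizes.append(size)
    return sum(s > t_c for s in sizes)       # FCD metric of equation (18)
```

Raising t_c discounts small, isolated groups of flagged blocks, matching the observation that only large connected areas are perceptually visible.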
In general, thresholds LL, LH, and TC depend on both the display characteristics of the target VDR display system and the characteristics of the human visual system.
As used herein, the term ‘scene’ relates to a group of consecutive frames or pictures, or to a collection of consecutive video frames or pictures that in general have similar dynamic range and color characteristics.
As used herein, the term ‘high clipping’ relates to a preference during dynamic range compression to clip mostly towards the high pixel values (e.g., highlights). Such clipping is preferable when a histogram of the pixel values in a scene indicates that the majority of pixel values tend to be closer to the dark area and the histogram shows a long tail in the highlights (bright) area.
As used herein, the term ‘low clipping’ relates to a preference during dynamic range compression to clip mostly towards the low pixel values (e.g., dark shadows). Such clipping is preferable when a histogram of the pixel values in a scene indicates that the majority of pixel values tend to be closer to the bright area and the histogram shows a long tail in the dark area.
In an embodiment, the decision whether to perform low clipping or high clipping (620) may be determined by computing the skewness of the histogram of pixel values in a scene. Given μ and σ, the estimated mean and standard deviation of the luminance values of all N input vi pixels in a scene, skewness may be defined as

skew=(1/(N·σ³))·Σi (vi−μ)³. (19)

If skewness is negative, the data is spread out more towards the left of the mean. If skewness is positive, the data is spread out more to the right of the mean.
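The clipping-mode decision can be sketched as follows; the standard third-moment skewness estimate is assumed, and the mapping of positive skew to high clipping follows the histogram-shape discussion above:

```python
import numpy as np

def clipping_mode(v):
    """Choose 'high' or 'low' clipping from the skewness of the luminance data.

    A long bright tail (positive skew, majority of pixels dark) favors high
    clipping; a long dark tail (negative skew) favors low clipping.
    """
    v = np.asarray(v, dtype=np.float64).ravel()
    mu, sigma = v.mean(), v.std()
    skew = np.mean(((v - mu) / sigma) ** 3)   # skewness estimate
    return 'high' if skew > 0 else 'low'
```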
Under low clipping (637), for each frame i, the iterative process starts (step 637-2) with an initial CL[i] value (e.g., CL[i]=0) and computes the FCD metric using equation (18) (step 637-3). If the number of detected perceptually visible false contours is equal to or lower than a given threshold Tf (e.g., Tf=0), then the process continues to the next frame (step 637-1), until all frames have been processed. Otherwise, if the number of detected perceptually visible contours is higher than the given threshold, then in step 637-2 the CL[i] value is decreased by a specified step (e.g., by 5 or by 10) and the process repeats. When all the frames in a scene have been processed, the CL value for the dynamic range compression of the scene is selected as the minimum value among all computed CL[i] values.
Under high clipping (635), for each frame i, the iterative process starts (step 635-2) with an initial CH[i] value (e.g., CH[i]=235) and computes the FCD metric using equation (18) (635-3). If the number of detected perceptually visible false contours is equal to or lower than a given threshold Tf (e.g., Tf=0), then the process continues to the next frame (635-1), until all frames have been processed. Otherwise, if the number of detected perceptually visible contours is higher than the given threshold, then in step 635-2 the CH[i] value is increased by a specified step (e.g., by 5 or by 10) and the process repeats. When all the frames in a scene have been processed, the CH value for the dynamic range compression of the scene is selected as the maximum value among all computed CH[i] values.
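The per-frame high-clipping search and the scene-level merge can be sketched as below. `fcd_of(frame, c_l, c_h)` is a hypothetical callback that runs the dynamic range compressor and the FCD counter of equation (18); the upper bound `ch_max` is an assumption to guarantee termination:

```python
def scene_high_clipping_ch(frames, fcd_of, ch_init=235, step=5,
                           ch_max=1023, t_f=0):
    """Per-frame search for C_H[i] under high clipping (steps 635-1..635-3);
    the scene-level C_H is the maximum over all frames.  C_L is held fixed
    at 0 here, as in the high-clipping example of the text."""
    ch_values = []
    for frame in frames:
        ch = ch_init
        # Raise C_H until the frame's false-contour count drops to T_f.
        while fcd_of(frame, 0, ch) > t_f and ch + step <= ch_max:
            ch += step
        ch_values.append(ch)
    return max(ch_values)
```

The low-clipping case is symmetric: CL[i] is decreased by `step` per iteration and the scene-level CL is the minimum over all frames.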
For each connected component (e.g., 795), average code values may be computed for the picture areas on either side of the detected contour edge.
Next, in steps 740-2A and 740-2B, the average code values are converted to gamma-corrected luminance values L1 and L2. There are many alternatives for computing gamma-corrected luminance values, based on a variety of standard display calibration models. In one embodiment,

L=(Vavg)^γ, (20)

where Vavg is the average code value and γ is the display gamma.
Given the luminance values L1 and L2, step 750 may compute the edge contrast CE, e.g., as the Michelson contrast

CE=|L1−L2|/(L1+L2). (21)
Based on the local light adaptation level and other system parameters, in step 760-10, a contrast sensitivity function (CSF) model (e.g., CSF 760-15) is either computed or selected from a family of suitable pre-computed models. Given the CSF model (e.g., 760-15), a contrast sensitivity value SC is derived. In some embodiments, SC may be defined as the peak contrast sensitivity (e.g., PCS in 760-15). In other embodiments, SC may be defined as the intersection of the contrast sensitivity function with the Y-axis (e.g., SCS in 760-15). In step 760-20, given the computed local contrast sensitivity value SC, the local contrast threshold may be computed, e.g., as the reciprocal of SC:

CT=1/SC. (22)
Returning to the detection process, the computed edge contrast CE may then be compared against the local contrast threshold to determine whether a detected contour is perceptually visible.
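The visibility test of steps 740-760 can be sketched as follows, under two labeled assumptions: edge contrast is taken as a Michelson-type contrast, and the visibility threshold as the reciprocal of the contrast sensitivity value SC. Neither choice is mandated by the text, which allows other contrast definitions and CSF-derived thresholds:

```python
def contour_visible(v1_avg, v2_avg, gamma=2.4, s_c=200.0):
    """Decide whether a contour edge between two flat areas with average
    code values v1_avg, v2_avg (normalized to [0, 1]) is perceptually visible.

    Assumptions: Michelson contrast for C_E, and a contrast threshold of
    1/S_C, where S_C is a contrast sensitivity value (e.g., the CSF peak).
    """
    l1 = v1_avg ** gamma                 # gamma-corrected luminance (eq. (20))
    l2 = v2_avg ** gamma
    c_e = abs(l1 - l2) / (l1 + l2)       # edge contrast (assumed Michelson)
    threshold = 1.0 / s_c                # minimum visible contrast (assumed)
    return c_e > threshold
```

With these assumptions, a one-step jump between nearby mid-gray code values is visible, while a sub-quantization-step difference is not.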
The methods described herein for a uniform quantizer can easily be extended to other types of quantizers, such as non-uniform quantizers. By dividing the original input range [vL, vH] into p non-overlapping input ranges [vLi, vHi] for i=1, 2, . . . , p, the problem of preventing false contour artifacts can be expressed as the problem of identifying a set of parameters Pi, e.g., Pi={CHi, CLi}, for i=1, 2, . . . , p, so that the FCD metric within each input segment (e.g., FCDi) is equal to or below a given threshold.
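The segmented extension can be sketched as a per-range application of the uniform quantizer; the `segments` tuple layout is a hypothetical encoding of the per-segment parameters Pi:

```python
import numpy as np

def quantize_segmented(v, segments):
    """Apply an independent uniform quantizer per input segment.

    `segments` is a list of (v_lo, v_hi, c_lo, c_hi) tuples covering
    non-overlapping input ranges [v_lo, v_hi); each tuple would be tuned
    so that its per-segment FCD_i metric falls below the target threshold.
    """
    v = np.asarray(v, dtype=np.float64)
    s = np.empty_like(v)
    for v_lo, v_hi, c_lo, c_hi in segments:
        mask = (v >= v_lo) & (v < v_hi)
        slope = (c_hi - c_lo) / (v_hi - v_lo)
        s[mask] = np.floor(slope * (v[mask] - v_lo) + c_lo + 0.5)
    return s
```

Segments covering contour-prone ranges (e.g., smooth sky gradients) would receive a steeper slope than the rest of the range.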
Example Computer System Implementation
Embodiments of the present invention may be implemented with a computer system, systems configured in electronic circuitry and components, an integrated circuit (IC) device such as a microcontroller, a field programmable gate array (FPGA), or another configurable or programmable logic device (PLD), a discrete time or digital signal processor (DSP), an application specific IC (ASIC), and/or apparatus that includes one or more of such systems, devices or components. The computer and/or IC may perform, control, or execute instructions relating to detecting and preventing false contours, such as those described herein. The computer and/or IC may compute any of a variety of parameters or values that relate to detecting and preventing false contours as described herein. The image and video embodiments may be implemented in hardware, software, firmware and various combinations thereof.
Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method of the invention. For example, one or more processors in a display, an encoder, a set top box, a transcoder or the like may implement methods to detect and prevent false contouring artifacts as described above by executing software instructions in a program memory accessible to the processors. The invention may also be provided in the form of a program product. The program product may comprise any medium which carries a set of computer-readable signals comprising instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, physical media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted.
Where a component (e.g. a software module, processor, assembly, device, circuit, etc.) is referred to above, unless otherwise indicated, reference to that component (including a reference to a “means”) should be interpreted as including as equivalents of that component any component which performs the function of the described component (e.g., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated example embodiments of the invention.
Equivalents, Extensions, Alternatives and Miscellaneous
Example embodiments that relate to detecting and preventing false contours in coding VDR sequences are thus described. In the foregoing specification, embodiments of the present invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application claims priority to U.S. Provisional Patent Application No. 61/554,294 filed 1 Nov. 2011, hereby incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind | 371(c) Date
PCT/US2012/062958 | 11/1/2012 | WO | 00 | 4/21/2014
Number | Date | Country
61554294 | Nov 2011 | US