1. Field of the Invention
The present invention relates to the field of digital video processing, and more particularly to methods and apparatuses for decoding and enhancing sampled video streams.
2. Description of the Prior Art
As video sources march towards ever higher resolutions for improved display quality, existing distribution and playback technologies do not always keep pace. Transmitting and recording higher quality video over the existing transmission and writable media infrastructure requires video processing techniques to compensate for system deficiencies and to meet the demands of higher quality video presentation.
Methods such as interlacing and scalable coding are used to compress digital video sources for transmission and/or distribution on writeable media, and to decompress the resultant video stream (defined herein as an array of pixels comprising a set of image data) to provide a higher quality facsimile of the original source video stream. De-interlacing takes lower resolution interlaced video sequences and converts them to higher resolution progressive image sequences. Scalable coding takes a lower-quality video sequence and manipulates the video data in order to create a higher quality sequence.
Applying today's video coding methods to proportionally higher quality video streams requires a commensurate increase in channel capacity for transmission on existing channels. To support both legacy and new resolutions, systems today transmit two distinct video streams so that both low resolution and high resolution presentation systems can be served. This approach requires a separate channel for each of the low resolution and high resolution streams.
Removable media for use in playback systems today that support low resolution video lack the storage capacity to simultaneously carry a low resolution version of a typical feature-length video as well as an encoded high resolution version of the video. Further, encoding media with optional high resolution presentation techniques often precludes use of that media with systems that support low resolution-only playback.
Today, when presented with a standard resolution video stream, high-resolution display systems up-sample the stream to match the display resolution. Up-sampling produces a picture visually inferior to that of a native high resolution video stream. For example, images from such up-sampling are often slightly blurry or soft. To compensate, these systems apply global filters over an entire image to sharpen the otherwise soft picture. However, such techniques introduce perceptible artifacts as they attempt to emulate a higher resolution video stream without adequate information about the original high resolution stream.
Today's digital video standards rely upon block-based compression, which is lossy, introducing visually perceptible block artifacts upon presentation of the decoded image stream. Artifacts may be reduced by applying de-blocking filters to the decoded image stream; however, this method introduces additional inaccuracies relative to a true reconstruction of the original video stream. Another method reduces the resolution of the video stream before encoding, resulting in a loss of image fidelity proportional to the image reduction. Another method uses increasingly smaller block sizes to further reduce inaccuracies introduced by compression. This method reduces the compression ratio and increases the size of the transmitted data stream. Still another method encodes the highest possible resolution video stream for transmission, with similar trade-offs as the previous method.
In an effort to reconstruct an output image that is more true to the original source (before encoding), classic decoders may combine two images, a temporally predicted image and an up-sampled image, on a block-by-block basis. This method of combining images requires an explicit signal for every change in block processing of every image, increasing stream complexity and size. More advanced techniques such as CABAC require side information signaling that performs substantially the same function on a per-block and per-image basis.
Accordingly, the present invention is directed to systems and methods for obtaining both a low resolution and a high resolution video stream from an encoded baseline low resolution video stream. The encoded baseline low resolution video stream is employed together with an enhancement video stream at a video decoder.
Baseline video stream is defined herein as a bit stream of low resolution video images. Enhancement stream is defined herein as a bit stream that directs a decoder to produce improvements in fidelity to a decoded baseline video stream. The terms low resolution and high resolution are applied herein to distinguish the relative resolutions of two images; no specific numerical range or quantitative measure is implied by the use of these terms. A video stream is defined herein as an array of pixels comprising a set of image data.
It is understood that the terms forward and backward, as used herein when referencing motion compensation, predictors, and reference images, refer to two distinct images that may not be temporally after or before the current image. For example, forward motion vector and backward motion vector refer only to motion vectors derived from two distinct reference images.
Various embodiments of the present invention highlight a number of features, including:
According to one aspect of the present invention, a method is provided for decoding and enhancing a video image stream from a bitstream containing at least sampled baseline image data and image enhancement data, comprising: separating the bitstream into blocks of sampled baseline image data and image enhancement data; adaptively upsampling the sampled baseline image data on a block-by-block basis to produce upsampled baseline image data, the adaptive upsampling controlled at least in part by a portion of the image enhancement data for each block; enhancing the upsampled baseline image data by applying to the upsampled baseline image data residual corrections, the residual corrections compressed using a predetermined transform, to thereby obtain enhanced image data; and outputting the enhanced image data.
According to a further aspect of the present invention, a method is provided for decoding and enhancing a video image stream from a bitstream containing at least sampled baseline image data and image enhancement data, comprising: separating the bitstream into blocks of sampled baseline image data and image enhancement data; adaptively upsampling the sampled baseline image data on a block-by-block basis to produce upsampled baseline image data, the adaptive upsampling controlled at least in part by a portion of the image enhancement data for each block; determining motion vector data from a portion of the image enhancement data; enhancing the upsampled baseline image data by applying to the upsampled baseline image data residual corrections, the residual corrections compressed using a predetermined transform, to thereby obtain enhanced image data; resampling the enhanced image data based on the motion vector data to thereby obtain resampled enhanced image data; blending the resampled enhanced image data with the upsampled baseline image data to produce predicted image data; enhancing the predicted image data by applying to the predicted image data residual corrections, the residual corrections compressed using a predetermined transform, to thereby obtain resampled further enhanced image data; upsampling the resampled further enhanced image data to obtain further enhanced image data; and outputting the further enhanced image data for display.
According to a still further aspect of the present invention, a method is provided for decoding and enhancing a video image stream from an enhanced initial image frame and a bitstream containing at least sampled baseline image data and image enhancement data, comprising: separating the bitstream into blocks of sampled baseline image data and image enhancement data; upsampling the sampled baseline image data to produce a first image frame; determining motion vector data based on said first image frame; determining from the motion vector data mismatch image data; resampling the enhanced initial image frame based on the motion vector data to thereby obtain a resampled enhanced initial image frame; blending the resampled enhanced initial image frame with the first image frame, the blending control provided at least in part by the mismatch image data, to produce a predicted image; enhancing the predicted image by applying to the predicted image residual corrections, the residual corrections compressed using a predetermined transform, to thereby obtain an enhanced first image frame; and outputting the enhanced first image frame for display.
According to yet another aspect of the present invention, a method is provided for decoding and enhancing a video image stream from an enhanced initial image frame and a bitstream containing at least sampled baseline image data and image enhancement data, comprising: separating the bitstream into blocks of sampled baseline image data and image enhancement data; upsampling the sampled baseline image data to produce a first image frame; determining motion vector data from a portion of the image enhancement data; resampling the enhanced initial image frame based on the motion vector data to thereby obtain a resampled enhanced initial image frame; blending the resampled enhanced initial image frame with the first image frame to produce a predicted image; enhancing the predicted image by applying correction data to individual pixels, control for the correction data comprising a set of weighted texture maps identified on a block-by-block or pixel-by-pixel basis by a portion of the image enhancement data, to thereby obtain an enhanced first image frame; and outputting the enhanced first image frame for display.
According to still another aspect of the present invention, a method is provided for decoding and enhancing a video image stream from an enhanced initial image frame and a bitstream containing at least sampled baseline image data and image enhancement data, comprising: separating the bitstream into blocks of sampled baseline image data and image enhancement data; adaptively upsampling the sampled baseline image data on a block-by-block basis to produce a first image frame, the adaptive upsampling controlled at least in part by a portion of the image enhancement data for each block; determining motion vector data based on said first image frame; determining from the motion vector data mismatch image data; resampling the enhanced initial image frame based on the motion vector data to thereby obtain a resampled enhanced initial image frame; blending the resampled enhanced initial image frame with the first image frame, the blending control provided at least in part by the mismatch image data, to produce a predicted image; enhancing the predicted image by applying correction data to individual pixels, control for the correction data comprising a set of weighted texture maps identified on a block-by-block or pixel-by-pixel basis by a portion of the image enhancement data, to thereby obtain an enhanced first image frame; and outputting the enhanced first image frame for display.
The above is a summary of a number of the unique aspects, features, and advantages of the present invention. However, this summary is not exhaustive. Thus, these and other aspects, features, and advantages of the present invention will become more apparent from the following detailed description and the appended drawings, when considered in light of the claims provided herein.
In the drawings appended hereto like reference numerals denote like elements between the various drawings. While illustrative, the drawings are not drawn to scale. In the drawings:
In one aspect of the present invention, a low-quality version of a video source, typically a low resolution video sequence, is up-sampled and treated to provide a high-quality version of the video source, typically a high resolution video sequence. This process is generally referred to as spatial scalability of a video source. Scalable coding methods and systems according to various embodiments of the present invention take a low-quality video sequence as a starting point for creating a higher-quality sequence. In one example, the low-quality version may be standard resolution video and the high-quality version may be high definition video. One of ordinary skill in the art will readily understand that the present invention may be used for other applications in which additional information beyond the base video stream is used to enhance the resultant video stream. In one alternative example, additional information may be provided in an enhancement stream. The enhancement stream may carry, for example, chrominance data relating to a high quality master version of the video sequence, where the base layer stream is monochromatic (carries only luminance).
Briefly, both a baseline video stream and an enhancement stream are received in encoded format, on a packet basis. Demultiplexer 21 separates the two streams based on header information in each packet, directing the baseline video stream packets 21b to a decoder 11 and the enhancement packets to a parser 23. Decoder 11 decodes the baseline video stream and delivers baseline images 11a to up-sampler 13. The decoded baseline images are then up-sampled, guided in part by the decoded enhancement stream 23a. Motion estimation is then applied to derive motion vectors 17a and mismatch images 17b, which are then utilized by portions of the enhancement decoding described below.
In the enhancement decoding branch of the flow chart, predicted images 31a are enhanced by a selected enhancement process at 51. At this point it should be noted that reference herein to “images” is intended in its broadest sense. While a video is typically divided into frames, images as used herein can refer to portions of a frame, an entire frame, or multiple frames. The enhanced images are buffered at 53 and made available to a motion compensation process 18 utilizing the aforementioned motion vectors 17a and mismatch images 17b from 17. By buffering the enhanced images at 53, a temporal selection of blocks of previously enhanced pixels is available for reuse as reference frames in subsequent construction.
The manner in which motion compensation is applied derives efficiency by using the decoded baseline images as a source. Up-sampled baseline images 15a are used to derive motion vectors 17a, which are predictors applied to previously decoded enhanced images 53b to create motion compensated images 18a. Blending functions 43 are applied to these motion compensated enhanced images using both forward and backward prediction. Guided by a Selector Control 23d signal from the decoded enhancement stream, the selector 31 switches on a block-by-block basis between an up-sampled decoded image block 19 and a motion predicted block 43a.
The baseline image decoder 11 produces standard resolution or baseline output images 11a which are up-sampled at up-sampler 13 in a manner directed by up-sampler Control 23a parsed from the enhancement stream. Further details of the preferred method for up-sampling are described hereinbelow with reference to
Motion vectors 17a which are derived from the up-sampled baseline images 13b provide the coordinates of image samples to be referenced from previously enhanced images 53. We have discovered that these provide the best motion predictors, as predictors derived from comparisons between the current up-sampled image and the previously enhanced images are not as accurate. Since the desired enhanced image is, at this point, being created by this process, predictors from the up-sampled baseline images serve as good estimates for the otherwise unobtainable ideal predictors from the enhanced images residing in the enhancement buffer 53. Additional motion prediction steps are detailed in
Using the coordinates derived from the motion vectors at 17, samples from enhancement buffer 53 are motion compensated at 18 to create predictors 18a, typically one for each forward and backward reference, that are combined at 43 to serve as a best motion predictor 43a for selection at 31. Additional motion compensation steps are detailed in
The selector 31 finally blends the best spatial predictor 19 with the best motion compensated temporal predictor 43a to produce the best overall predictor 31a. In the preferred embodiment, the blending function is a block-by-block selection between one of two sources, 19 or 43a, to produce the optimal output predicted images 31a. For a majority of blocks comprising the enhanced image, this predicted image 31a is often good enough. For those blocks for which the predictor is not sufficient, further residual enhancement is added at 51 to the predicted image 31a to achieve the enhanced images 51a. Residual enhancement is directed by the enhancement stream's residual control 23b. Additional steps are detailed in
To increase bitrate efficiency and to match the resolution to the typical level of detail present in any content, the intermediate enhanced image 53a may be coded at a resolution slightly lower than the final output image 55a. Quality may be improved, and implementation simplified, if, for example, the coded enhanced image 53a is twice the size of the baseline image 11a both horizontally and vertically. A typical size is 720×480 for the baseline image, enhanced to a resolution of 1440×960, and then resampled to a standard HDTV output resolution grid of 1920×1080.
In summary, the enhancement image branch of the flowchart (from 31a to 53a/b) is primed first by the up-sampled baseline images 13b via the path 13b to 15 to 19, and continually primed by subsequently up-sampled baseline images. From there, enhancement images are cycled through the enhancement branch and modified by predictors derived from up-sampled baseline image sets. Selection is guided by the selector control 23d, as residual enhancement is guided by residual control 23b. Residual enhancement is added in where the selected (either spatial or temporal) predictors are not adequate, as indicated by the enhancement stream and as predetermined at the encoder.
Apparatus
A bitstream buffer 60 holds data packets 10 received from a communications channel or storage medium, which are buffered out at 10a and demultiplexed by the demultiplexer 71 to feed the enhancement and baseline image decoding stages with bitstream data 21a, 21b as said data is needed by the respective decoding stages.
A baseline decoder 61 processes a base bitstream 21b to produce decoded baseline images 11a. This decoder can be any video decoder, including but not limited to the various standard video decoders such as MPEG-1, MPEG-2, MPEG-4, or MPEG-4 part 10, also known as AVC/H.264.
A parser 73 isolates stream tokens 23a, 23b, 23c, and 23d packed within the enhancement bitstream 21a. Tokens needed for enhancement decoding may be packed by token type, or multiplexed together with other tokens that represent a coded description of a geometric region within an image, such as a neighborhood of blocks. Similar to MPEG-2 and H.264 video, one advantageous method according to the present invention packs tokens needed for a given block together to minimize the amount of hardware buffering needed to hold the tokens until they are required by decoding stages.
These tokens may be coded with a variable-length entropy coder that maps each token to a stream symbol with an average bit length approximating the information content of the token; more specifically, the bit length approximates −log2(probability). The probability or likelihood of a token is initialized in the higher level picture headers and further dynamically modeled by explicit stream directives (such as probability resets or state updates), the stream of previously sent tokens, and contexts such as measurements taken inside the decoder state. Features 13a (discussed further below with regard to
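By way of illustration only, the following short Python fragment (which does not form part of the disclosed embodiments; the token names and probabilities are hypothetical) shows the ideal code-length relationship just described, in which a token of modelled probability p is coded with roughly −log2(p) bits:

```python
import math

# Hypothetical token probabilities; real values come from picture headers
# and dynamic context modelling as described above.
token_probability = {'SKIP': 0.70, 'SPATIAL': 0.20, 'TEMPORAL': 0.08, 'ESCAPE': 0.02}

for token, p in token_probability.items():
    ideal_bits = -math.log2(p)           # ideal variable-length code length
    print(f"{token:9s} p={p:.2f}  ideal length ~ {ideal_bits:.2f} bits")

# Average (entropy) length under this model:
entropy = sum(-p * math.log2(p) for p in token_probability.values())
print(f"average ~ {entropy:.2f} bits/token")
```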
Upsampler 63 processes baseline images 11a in accordance with the upsampler control 23a. These control signals and functions are described in more detail in
A motion estimator 67 analyzes the current upsampled image and the previously upsampled versions of the forward and backward reference images stored in the upsampled Image Buffer 65. This analysis consists of determining the motion between each block of the current upsampled image with respect to the reference images. This process may be performed via any manner of block matching or other similarity identification mechanism well known in the art which results in a motion vector indicating the direction and magnitude of relative displacement between each block's position in the current frame and its correspondingly matching location in the reference frame. Each motion vector therefore can also be associated with a pixel-wise error map reflecting the degree of mismatch between the current block and its corresponding block in each reference frame. These motion vectors 17a and mismatch images 17b are then sent to the Motion Compensated predictor 81.
A motion compensated predictor 81 receives the current spatially upsampled image 13b together with enhanced images 53b to produce a blended bidirectionally predicted frame 43a as directed in part by the motion vectors 17a and mismatch information 17b.
A selector 75 picks the best overall predictor among the best sub-predictors, including the up-sampled spatial predictor 19 and temporal predictor 43a. The selection is first estimated by context models and then finally selected by block mode tokens 23d parsed from the enhanced video layer bitstream 21a. If a run of several correctly estimated block modes is present, a run length token optionally is used to indicate that the estimated mode is sufficient for enhancement coding purposes, and no explicit mode tokens are sent for the corresponding blocks within the run. A residual decoder 77 provides additional enhancements to the predicted image 31a as guided by a residual control 23b. The process used within the Decode Residual 77 block is detailed below (
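What follows is a minimal sketch, in Python, of one plausible reading of this run-length mode signaling; the tuple-based token layout, the mode names, and the context-model estimates are illustrative assumptions and do not reflect the actual coded syntax of the enhancement layer bitstream 21a:

```python
def decode_block_modes(tokens, estimated_modes):
    """tokens: list of ('run', n) or ('mode', m) entries parsed from the stream;
    estimated_modes: one context-model estimate per block, in block order.
    Returns the final selected mode per block."""
    modes = []
    i = 0  # index of the next block to decide
    for kind, value in tokens:
        if kind == 'run':                      # the next `value` estimated modes are accepted
            modes.extend(estimated_modes[i:i + value])
            i += value
        else:                                  # explicit mode token overrides the estimate
            modes.append(value)
            i += 1
    return modes

# Example: estimates are accepted for the first 3 blocks, block 4 is signalled explicitly.
print(decode_block_modes([('run', 3), ('mode', 'temporal')],
                         ['spatial', 'temporal', 'spatial', 'spatial']))
```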
Up-Sampling Method
Returning now to
With reference now to
More specifically, baseline images 11a are input to a simple polyphase resampling filtering 310 process which produces full resolution images 310a, equivalent in resolution to enhanced images (51a from
Next, features are computed at step 320 from the full resolution images 310a on a block-by-block basis. In the preferred embodiment, block size is 8×8; however, block size may be image dependent. Block features may include average pixel intensity (luminance), wherein the average of all pixels within the block is computed. Another useful feature is variance. Here, the absolute value of the difference between the overall image average pixel intensity and each pixel within the block is summed to produce a single number for that feature of the block. The output of the compute block features step 320 is the feature vector 320a, which represents an ordered list of features for each block in an image.
The up-sampler classification process 330 is provided by the bitstream (10a shown in
Next, the up-sampler class 330a is input into a look-up filter at step 340, which outputs a filter 340a for that class. This filter is selected by class and applied as a predetermined weighted sum over neighboring pixels to produce the best match to the expected output of the source video stream. The filter 340a corresponding to a particular class is then applied 350 to the pixels 310a in the block belonging to that class, producing spatially up-sampled images 13b. Note that it is mathematically feasible to combine the filter 340a's weighted values with the weights used in the simple polyphase resampling 310, thus combining steps 310 and 350. The preferred embodiment keeps these stages separate for design reasons.
In summary, the up-sampling method computes image features on a block basis, classifies the feature vectors into a small number of classes as directed by the enhancement stream, and identifies a class for each block within the image. Corresponding to each class is a specific filter. The method applies the corresponding filter to the pixels of the classified block. The filters, which are typically sharpening filters, are designed for each class of blocks to give the best match to the expected output or the original source video stream.
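The following simplified Python sketch illustrates the block-feature, classification, and filter-application steps summarized above. The 8×8 block size and the variance definition (sum of absolute differences from the image average) follow the text; the two-class threshold rule and the 3×3 kernels are toy stand-ins for the classification and filter specifications that the enhancement stream would actually deliver:

```python
import numpy as np

def block_features(img, bs=8):
    """Per-block feature vector: (average intensity, 'variance' computed, per the
    description above, as the sum of |pixel - image average| within the block)."""
    img_mean = img.mean()
    feats = {}
    for by in range(0, img.shape[0], bs):
        for bx in range(0, img.shape[1], bs):
            blk = img[by:by+bs, bx:bx+bs]
            feats[(by, bx)] = (blk.mean(), np.abs(blk - img_mean).sum())
    return feats

def classify(feat, var_threshold=4000.0):
    # Toy two-class rule standing in for the bitstream-delivered classification spec.
    return 1 if feat[1] > var_threshold else 0

# Toy per-class 3x3 kernels standing in for the bitstream-delivered filter spec:
# class 0 -> identity (flat block), class 1 -> mild sharpening (textured block).
FILTERS = {0: np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),
           1: np.array([[0, -0.25, 0], [-0.25, 2, -0.25], [0, -0.25, 0]], float)}

def filter_block(blk, kernel):
    pad = np.pad(blk, 1, mode='edge')
    out = np.zeros_like(blk, float)
    for y in range(blk.shape[0]):
        for x in range(blk.shape[1]):
            out[y, x] = (pad[y:y+3, x:x+3] * kernel).sum()   # weighted sum over neighbors
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (16, 16)).astype(float)         # stands in for 310a
for (by, bx), feat in block_features(image).items():
    k = FILTERS[classify(feat)]
    image[by:by+8, bx:bx+8] = filter_block(image[by:by+8, bx:bx+8], k)
```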
Residual Decoder Method
Inverse quantization is next performed at step 513, based upon the quantization specification determined at step 514 from the data headers, to expand the residual coefficients 511a to the full dynamic range of dequantized coefficients. The coefficients are then multiplied by enhancement basis vectors at step 515, from an enhancement basis vector specification determined at step 516 from the data headers, to obtain difference data, the residual decoded image 515a. As an alternative to determination from data headers, the decompression specification, inverse quantization specification, and enhancement basis vector specification may be preset in the decoder. The residual decoding steps 511, 513, and 515 therefore transform parsed compact stream tokens in bitstream 23b into de-compressed difference samples which comprise the residual data 515a. Predicted image 31a may then be added to the residual data 515a at step 517. This step 517 of adding enhancement to the raw image follows traditional addition arithmetic with saturation, as found in many reconstruction stages that combine prediction data with residual data to form the final reconstructed data.
Optionally, each residual decoder step 511, 513, 515, and 517 may also be fed Up-sampler Control 23a from the parser (73 of
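A minimal Python sketch of the dequantization, basis-vector expansion, and saturating-add path of steps 513, 515, and 517 follows. The uniform quantizer, the identity basis used in the example, and the function names are illustrative assumptions rather than the header-specified values:

```python
import numpy as np

def decode_residual_block(coeff_tokens, qstep, basis, predicted_block):
    """coeff_tokens: quantized coefficient indices (one per basis vector).
    qstep: quantization step from the header specification (assumed uniform here).
    basis: matrix whose rows are enhancement basis vectors (e.g. 64 x 64 for 8x8 blocks).
    predicted_block: predicted image block 31a, 8-bit samples."""
    dequant = np.asarray(coeff_tokens, float) * qstep        # inverse quantization (513)
    residual = dequant @ basis                                # multiply by basis vectors (515)
    residual = residual.reshape(predicted_block.shape)        # residual decoded image 515a
    # add prediction and residual with saturation to the 8-bit range (517)
    return np.clip(predicted_block.astype(float) + residual, 0, 255).astype(np.uint8)

# Toy usage with an identity basis (each coefficient maps to one pixel difference):
basis = np.eye(16)
pred = np.full((4, 4), 120, np.uint8)
tokens = [2, 0, -1, 0] + [0] * 12
print(decode_residual_block(tokens, qstep=8.0, basis=basis, predicted_block=pred))
```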
The motion estimator 67 finds the best temporal predictor referenced from previously stored spatial predictor images in up-sampled image buffer 65. Although accurate optical flow field measurements are desirable, the preferred motion estimation steps provide a good approximation to true single motion vector per pixel accuracy.
As shown in the flow chart in
The motion vector 171a relating the 16×16 block area is used to initialize the block search for each of four 8×8 blocks split in equal quadrants from the single 16×16 block. The 16×16 motion vector 171a is scaled to the appropriate coordinate grid of the 8×8 block and serves as a starting point for the 8×8 refinement search 173.
A scaled and adjusted version of the 8×8 vector 173a in turn initializes the search 175 for each of the four 4×4 blocks split from the single 8×8 block. Because the small size of the block lends the block search to a false optical match (but potentially minimum numerical match), a large overlap (relative to the small size of the block) of two border pixels is added to constrain the block match to a better contextual fit, in a similar manner to the overlap in 171. The 4×4 block size is close enough to the ideal of a single vector per pixel to produce results closely approximating a true optical flow field in many cases.
The resulting motion vectors 17a for each 4×4 block are passed onto the motion compensator stage 18. The mismatch image 17b produced as a by-product of the matching algorithm is used in feature calculations as discussed below with regard to
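For illustration, a simplified Python sketch of the 16×16 to 8×8 to 4×4 refinement cascade with a two-pixel context overlap, assuming SAD block matching on a common coordinate grid (the scaling of vectors between grids and the mismatch-image output are omitted); the search ranges and function names are illustrative:

```python
import numpy as np

def sad(a, b):
    return np.abs(a.astype(float) - b.astype(float)).sum()

def refine(cur, ref, top, left, size, init_mv, search=2, overlap=2):
    """Refine one motion vector around init_mv for a size x size block at (top, left),
    matching an overlapped (size + 2*overlap) window to constrain the contextual fit."""
    h, w = cur.shape
    y0, x0 = max(top - overlap, 0), max(left - overlap, 0)
    y1, x1 = min(top + size + overlap, h), min(left + size + overlap, w)
    tgt = cur[y0:y1, x0:x1]
    best, best_mv = None, init_mv
    for dy in range(init_mv[0] - search, init_mv[0] + search + 1):
        for dx in range(init_mv[1] - search, init_mv[1] + search + 1):
            ry0, rx0 = y0 + dy, x0 + dx
            if ry0 < 0 or rx0 < 0 or ry0 + tgt.shape[0] > h or rx0 + tgt.shape[1] > w:
                continue
            cost = sad(tgt, ref[ry0:ry0 + tgt.shape[0], rx0:rx0 + tgt.shape[1]])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

def motion_field(cur, ref):
    """16x16 -> 8x8 -> 4x4 cascade; returns one vector per 4x4 block."""
    mvs = {}
    for ty in range(0, cur.shape[0], 16):
        for tx in range(0, cur.shape[1], 16):
            mv16 = refine(cur, ref, ty, tx, 16, (0, 0), search=4)
            for sy in (0, 8):
                for sx in (0, 8):
                    mv8 = refine(cur, ref, ty + sy, tx + sx, 8, mv16)
                    for qy in (0, 4):
                        for qx in (0, 4):
                            mvs[(ty + sy + qy, tx + sx + qx)] = refine(
                                cur, ref, ty + sy + qy, tx + sx + qx, 4, mv8)
    return mvs

# Example: a reference frame shifted by (1, 2) should yield vectors near (-1, -2).
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32)).astype(float)
cur = np.roll(ref, (1, 2), axis=(0, 1))
print(list(motion_field(cur, ref).items())[:2])
```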
Compensation and Blending
The forward 186 and backward reference images 187 reside in the enhancement buffer (53 as referred in
Beginning at the top of
Similarly, the backward motion vectors and mismatch image 181b are input to backward motion compensation step 183. This step also receives two images; the corresponding backward reference image 187 and the current up-sampled image 13b. By applying the backward motion vectors and backward mismatch image, the two input images 187 and 13b are combined to produce an output, backward predicted image 183a. Motion compensation control 23c from the enhancement bit stream 21a overrides inaccurate motion vectors. The output, backward predicted image 183a, together with the forward predicted image 185a, are input to the bi-directional blended prediction 189, which produces the final output bi-directional predicted image 43a. A detail of the backward motion prediction process (
Referring now to
As
Likewise for the spatially predicted image, step 1453 of computing image features is applied to the current up-sampled image 13b. The up-sampled image features 1453a, also computed on a block by block basis, may include average pixel intensity or brightness level, average variance, or the like. For each block, up-sampled image features 1453a and mismatch features 1452a are input to classify features step 1454 and converted into one of a small set of classes 1454a. For example, a set of 32 classes may be composed of five bits of concatenated feature indices having the following bit assignments:
The output class 1454a is used at step 1455 to select an optimally defined filter to be applied to the block so classified. Both the class definitions that determine the manner of classification at step 1454 and the filter parameters at step 1455 that are assigned to each class may be embedded in the received bitstream 10 at the decoder input. There is a one to one correspondence between classes 1454a and filters 1455a.
Whereas classic decoders require signaling on a block-by-block basis to combine two images, the method according to the present invention applies automated decoder-based feature extraction and classification to blend two images, thereby reducing signaling requirements while still providing blending. The filter 1455a is now input to the step 1456 of using the filter parameters to compute the blending factor. Also input are the forward mismatch image 17b and up-sampled image features 1453a, such as per-pixel variance, which influence the block based filter 1455a at the pixel level in order to adjust the forward blending factor af 1456a for each pixel. Factor 1456a is input to step 1457, which blends according to FMC*af+(1−af)*(current up-sampled image 13b), so that the blending factor, together with the corresponding pixels from the motion compensated reference image 53b (FMC) and the current up-sampled image 19, produces the final output motion compensated and blended forward reference image 1457a.
As an example of describing a filter 1455a according to a block's class 1454a, consider that image variance as a feature 1453a in the current up-sampled image 13b contributes two high order bits to the class 1454a output after processing in step 1454, described as:
{00xx=low variance, 01xx=moderately low variance, 10xx=moderately high variance, 11xx=high variance}.
Variance suggests texture in a block, which may be true to the original source image or may be an artifact of the encoding and decoding process. Now consider the other source image, the motion compensated forward reference image 53b. Its corresponding mismatch image feature 17b contributes two low order bits to the class 1454a output after processing in step 1454, and would be described as:
{xx00=low mismatch, xx01=moderately low mismatch, xx10=moderately high mismatch, xx11=high mismatch}.
The mismatch image 17b feature is considered together with the variance to determine a weighting or blending factor between the two source images. For example, if the variance index is low and the mismatch index is high, the class is 0011. It is likely that the filter for this class will be one such that, for pixels with moderate levels of mismatch, the generated filter value af will have a value close to zero, thereby generating an output pixel value predominantly weighted toward the current up-sampled image 13b. With the same filter, if the mismatch pixel value is very small, the filter generated weighting value af may be closer to 1.0, thereby generating an output pixel value predominantly weighted toward the forward motion compensated image 53b. Conversely, if the variance index is high and the mismatch index is low, the motion compensated forward reference image 53b would predominate. Degrees of blending are selected for the intermediate indices. Also, we have found that an average block intensity index of the current up-sampled image 13b improves the reliability and accuracy of choosing an optimal blending factor.
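A toy Python sketch of this class-driven blending follows, assuming the variance index supplies the two high order bits and a mean mismatch index supplies the two low order bits of the class, and assuming the per-class parameter is attenuated per pixel by the mismatch magnitude. The class-to-parameter table and the scaling are illustrative assumptions, not values carried in the enhancement stream:

```python
import numpy as np

def blend_forward(fmc, upsampled, variance_index, mismatch):
    """fmc: forward motion-compensated reference block (from 53b); upsampled: current
    up-sampled block (13b); variance_index: block feature index 0..3; mismatch:
    per-pixel mismatch magnitudes (17b) for the block, scaled to [0, 1]."""
    cls = (variance_index << 2) | min(int(mismatch.mean() * 4), 3)   # e.g. 0b0011
    base_af = {0b0011: 0.2, 0b1100: 0.9}.get(cls, 0.5)               # toy per-class parameter
    af = np.clip(base_af * (1.0 - mismatch), 0.0, 1.0)               # small mismatch -> af near base
    return af * fmc + (1.0 - af) * upsampled                          # per-pixel blend (step 1457)

# Low variance, high mismatch: output stays close to the up-sampled block.
blk = np.full((4, 4), 100.0)
print(blend_forward(blk + 20, blk, variance_index=0, mismatch=np.full((4, 4), 0.8)))
```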
The flow chart of
Referring now to the flow chart of
The preferred method computes features 1491, 1492, and 1493 on a block basis. Forward computed features 1491a and backward computed features 1493a may incorporate the average value of af and ab, respectively, for each block. Brightness average and variance may be two computed image features 1492 applied to the current up-sampled image 13b. These three sets of features are input to step 1494, which classifies the features in a manner similar to the feature classification discussed in previous examples, to produce a class 1494a. From this class 1494a, filter parameters are extracted at step 1495 reflecting the image blending preferences exhibited by the feature classification 1494. Next, the filter parameters 1495a are input to step 1496, which uses them, together with the per-pixel values for af 1456a and ab 1436a, to compute the per-pixel blending factors b. In the final step 1497, the two input images, the forward and backward motion compensated and blended reference images 18a, are blended on a pixel-by-pixel basis according to the computed blending factor b 1496a, producing the final output bi-directionally predicted image 43a. Note that FMBC=18a and BBMC=18a as illustrated in step 1497.
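A minimal sketch of the final bi-directional blend, assuming (as one plausible reading of step 1497) a per-pixel convex combination of the forward and backward predicted images under factor b:

```python
import numpy as np

def blend_bidirectional(fwd_blended, bwd_blended, b):
    """Per-pixel combination of the forward- and backward-predicted images (18a)
    using blending factor b (1496a); b = 1 weights fully toward the forward image."""
    b = np.clip(b, 0.0, 1.0)
    return b * fwd_blended + (1.0 - b) * bwd_blended    # final bi-directional prediction 43a

fwd = np.full((2, 2), 110.0)
bwd = np.full((2, 2), 90.0)
print(blend_bidirectional(fwd, bwd, b=np.full((2, 2), 0.5)))   # -> 100.0 everywhere
```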
Referring to
A compute block features 2300 process may comprise computing various block features such as, for example, variance, average brightness, and the like. The features to be computed may be explicitly controlled by the up-sampling feature specifications 2320 in the bitstream. The features taken together may be referred to as a feature vector.
In a further stage, the process performs up-sampler classification 2500. This stage assigns an up-sampling class 2590 to each feature vector 2390. The classification process is specified in the enhancement bitstream as the up-sampling classification specification 2520 and may consist of one or more of the following mechanisms: Table (lattice), K-means (VQ), hierarchical tree split, etc.
In a look-up filter 2700 process, each class has an associated filter or filters that may be H&V, or 2D, or non-linear edge adaptive. This is delivered in the bitstream as the up-sampling filter specification 2720. An explicit filter may optionally be selected at 2800. If the up-sampling explicit bitstream filter selection 2810 is in the bitstream, then it overrides the classified feature based filter. If this filter is one that corresponds to a classified filter, then this signal could be sent one stage earlier as an up-sampling explicit bitstream class selection (not shown).
Finally, an up-sampling filter 2900 is applied. In this step, the process may apply a filter, such as for example a sharpening filter, to an already up-sampled image. Alternatively, the filter may be applied to the base image by combining the polyphase resampler and the sharpening filter into a single pass, which avoids a separate polyphase resampling stage.
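Because convolution is associative, applying the interpolation filter and a sharpening filter in separate passes over a zero-inserted signal gives the same result as a single combined kernel, which is one way to read the single-pass alternative above (and the combinability of steps 310 and 350 noted earlier). A one-dimensional Python check, with illustrative kernels:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])           # base (low-resolution) samples
up = np.zeros(2 * len(x)); up[::2] = x            # zero-insertion 2x up-sampling

h_interp  = np.array([0.5, 1.0, 0.5])             # simple linear-interpolation kernel
h_sharpen = np.array([-0.25, 1.5, -0.25])         # simple sharpening kernel

two_pass = np.convolve(np.convolve(up, h_interp), h_sharpen)
one_pass = np.convolve(up, np.convolve(h_interp, h_sharpen))   # combined kernel, one pass
print(np.allclose(two_pass, one_pass))            # True: a single combined filter suffices
```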
While a plurality of preferred exemplary embodiments have been presented in the foregoing detailed description, it should be understood that a vast number of variations exist, and these preferred exemplary embodiments are merely representative examples, and are not intended to limit the scope, applicability or configuration of the invention in any way. For example, it will be appreciated that while a method and device have been disclosed that contain a plurality of novel elements, any one of such novel elements described herein, such as the method of adaptive upsampling, the methods of residual coding, decoder-based motion estimation and compensation, or adaptive blending, may form the basis for a novel decoder method and system. In such a case, for example, other elements of a decoding method and system may be those known in the art. Likewise select combinations of those novel elements disclosed herein may form a portion of a novel method and system for decoding, as appropriate to a particular application of the present invention, the remaining elements being as known in the art. Therefore, the foregoing detailed description provides those of ordinary skill in the art with a convenient guide for implementation of the invention, and contemplates that various changes in the functions and arrangements of the described embodiments may be made without departing from the spirit and scope of the invention defined by the claims thereto.
The subject matter herein relates to U.S. Provisional Patent Application 60/724,997, filed Oct. 7, 2005, which is incorporated by reference herein and to which priority is claimed, and also relates to pending U.S. patent application Ser. No. 10/446,347 titled “Predictive Interpolation of a Video Signal”, Ser. No. 10/447,213 titled “Video Interpolation Coding”, and Ser. No. 10/447,296 titled “Maintaining a Plurality of Codebooks Related to a Video Signal”, each of said applications being incorporated by reference herein.