The present disclosure is directed to video coding systems.
Many modern electronic devices support video coding techniques, which find use in video conferencing applications, media delivery applications and the like. Many of these coding applications, particularly video conferencing and video streaming applications, require coding and decoding to be performed in real-time.
In real-time applications, communication bandwidth can change erratically and, for many communication networks (such as cellular networks), bandwidth can be very low (e.g., lower than 50 Kbps for 480×360, 30 fps video sequences). To meet these bandwidth limitations, video coders compress video sequences far more heavily than in scenarios where bandwidth is plentiful. Heavy compression can introduce severe coding artifacts, such as blocking artifacts, which lower the perceived quality of such coding sessions. And while it may be possible to reduce the resolution of an input sequence and code the lower resolution representation at higher relative quality, doing so causes the sequence to look blurred on decode because content lost by sub-sampling to a smaller resolution cannot be recovered.
Accordingly, the inventors have identified a need in the art for a coding/decoding technique that responds to loss of bandwidth by compressing video sequences without introducing visual artifacts in areas of viewer interest.
Embodiments of the present disclosure provide coding techniques that can accommodate low bandwidth events and preserve visual quality, at least in areas of an image that have high significance to a viewer. According to these techniques, region(s) of interest may be identified from content of an input frame that will be coded. Two representations of the input frame may be generated at different resolutions. A low resolution representation of the input frame may be coded according to predictive coding techniques in which a portion outside the region of interest is coded at higher quality than a portion inside the region of interest. A high resolution representation of the input frame may be coded according to predictive coding techniques in which a portion inside the region of interest is coded at higher quality than a portion outside the region of interest. Doing so preserves visual quality, at least in areas of the input image that correspond to the region of interest.
These techniques may take advantage of scalable extensions (colloquially, scalable video coding or “SVC”) of a coding protocol under which the coder operates. For example, the H.264/AVC and H.265/HEVC coding protocols permit coding of image data in different layers at different resolutions. Thus, a single video sequence can be encoded at lower resolution in a base layer and, with inter-layer prediction, at higher resolution in an enhancement layer. SVC may be used to generate scalable bit streams, which can be decoded into sequences of different resolutions according to users' requirements and network conditions, for example, in multicast.
Although the terminals 110, 120 are illustrated as smartphones and tablet computers in FIG. 1, the principles of the present disclosure are not so limited.
Each resampler 220.1, 220.2, . . . , 220.N may alter the resolution of source frames presented to its respective pipeline to a resolution of the respective layer. By way of example, a base layer may code video at Quarter Video Graphics Array (commonly, “QVGA”) resolution, which is 320×240 pixels in width and height, and an enhancement layer may code video at Video Graphics Array (“VGA”) resolution, which is 640×480 pixels in width and height. Each respective resampler 220.1, 220.2, . . . , 220.N may resample input video to meet the resolution defined for its respective layer. In many cases, source video may be resampled to meet the resolution of the respective layer but, in some cases, resampling may be omitted if the source video resolution is equal to the resolution of the layer. The principles of the present disclosure find application with other coding formats described herein, and even with formats that may be defined in the future, in which coding resolutions may meet or exceed the resolutions of the video sources that provide image data for coding.
As discussed herein, in some embodiments, coding resolutions of each layer may change dynamically during operation, for example, to meet HVGA (480×320), WVGA (768×480), FWVGA (854×480), SVGA (800×600), DVGA (960×640) or WSVGA (1024×576/600) formats, in which case operations of the resamplers 220.1, 220.2, . . . , 220.N may change dynamically to meet the layer's changing coding requirements. Video data in the enhancement layer pipeline 270.2 may have higher resolution than video data in the base layer pipeline 270.1. Where multiple enhancement layers are used, video data in higher level enhancement layer pipelines (say, pipeline 270.N) may have higher resolution than video data in lower level enhancement layer pipelines (say, pipeline 270.2).
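By way of illustration, a per-layer resampler might be sketched as follows (a minimal example assuming grayscale frames held in numpy arrays and nearest-neighbor index mapping; a production resampler would apply proper low-pass filtering):

```python
import numpy as np

def resample(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resample a grayscale frame to (out_h, out_w) by nearest-neighbor
    index mapping; stands in for the per-layer resamplers 220.1-220.N."""
    in_h, in_w = frame.shape
    if (in_h, in_w) == (out_h, out_w):
        return frame  # resampling may be omitted when resolutions match
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    return frame[rows[:, None], cols[None, :]]

# e.g., a VGA source feeding a QVGA base layer and a VGA enhancement layer
src = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
base_in = resample(src, 240, 320)   # QVGA input for the base layer
enh_in = resample(src, 480, 640)    # VGA input for the enhancement layer
```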
The region detector 230 may identify regions of interest (“ROIs”) within image content. ROIs represent areas of image content that are deemed by analysis to represent important image content. ROIs, for example, may be identified from object detection performed on image content (e.g., faces, textual elements or other objects with predetermined characteristics). Alternatively, they may be identified from foreground/background discrimination, which may be based on image activity (e.g., regions of high motion activity may represent foreground objects) or on image activity that contradicts estimates of overall motion in a field of view (for example, an object that is maintained in a center of the field of view against a moving background). Similarly, ROIs may be identified from the location of image content within a field of view (for example, image content in a center area of an image as compared to image content toward a peripheral area). And, of course, multiple ROIs may be identified simultaneously in a common image. The region detector 230 may output identifiers of ROI(s) to the controller 260.
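One possible sketch of the activity-based discrimination described above (an illustrative assumption, not the prescribed algorithm of the region detector 230):

```python
import numpy as np

def detect_roi(prev: np.ndarray, curr: np.ndarray,
               block: int = 16, thresh: float = 12.0) -> np.ndarray:
    """Return a boolean map, one entry per block, flagging blocks whose
    motion activity (mean absolute frame difference) is high; such
    high-activity blocks may represent foreground objects."""
    h, w = curr.shape
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    bh, bw = h // block, w // block
    tiles = diff[:bh * block, :bw * block].reshape(bh, block, bw, block)
    activity = tiles.mean(axis=(1, 3))  # per-block mean abs difference
    return activity > thresh            # True marks candidate ROI blocks
```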
The coders 240.1, 240.2, . . . 240.N may code the video data presented to them according to predictive coding techniques. The coding techniques may conform to a predetermined coding protocol defined for the video coding system and for the layer to which the respective coder belongs. Typically, each frame of video data is parsed into predetermined arrays of pixels (called “pixel blocks” herein for convenience) and coded. Partitioning may occur according to a predetermined partitioning scheme, which may be defined by the coding protocol to which the coders 240.1, 240.2, . . . 240.N conform. For example, HEVC-based coders may partition images recursively into coding units of various sizes. H.264-based coders may partition images into macroblocks and blocks. Other coding systems may partition image data into other arrays of image data.
The coders 240.1, 240.2, . . . 240.N may code each input pixel block according to a coding mode. For example, pixel blocks may be assigned a coding type, such as intra-coding (I-coding), uni-directionally predictive coding (P-coding), bi-directionally predictive coding (B-coding) or SKIP coding. SKIP coding causes no coded information to be generated for the pixel block; at a decoder (not shown), its content will be derived wholly from a pixel block in a preceding frame, located by motion vectors derived from neighboring blocks. For I-, P- and B-coding, an input pixel block is coded differentially with respect to a predicted pixel block that is derived according to an I-, P- or B-coding mode, respectively. Prediction residuals representing a difference between content of the input pixel block and content of the predicted pixel block may be coded by transform coding, quantization and entropy coding. The coders 240.1, 240.2, . . . 240.N may include decoders and reference picture caches (not shown) that decode data of coded frames that are designated reference frames; these reference frames provide data from which predicted pixel blocks are generated to code new input pixel blocks.
During operation, an enhancement layer coding pipeline 270.2 may be configured to code image data that belongs to an ROI at higher image quality than image data outside the ROI. Similarly, the base layer coding pipeline 270.1 may be configured to code image data outside the ROI at a higher image quality than image data within the ROI. When a decoder at a far end terminal (not shown) decodes the coded enhancement layer and base layer streams, it may obtain a high quality, high resolution representation of ROI data primarily from the enhancement layer and a high quality albeit lower resolution representation of non-ROI data primarily from the base layer. In this manner, it is expected that a visually pleasing image will be obtained at a decoder even when resource limitations and other constraints prevent terminals from exchanging coded high resolution data for an entire image.
In an embodiment, the controller 260 may select coding parameters or, alternatively, a range of parameters that will be applied by the coders 240.1, 240.2, . . . 240.N, which may differ between regions of an input frame that belong to ROIs and regions of the input frame that do not. For example, the controller 260 may cause the base layer pipeline 270.1 to code ROI data at lower quality than non-ROI data. In one embodiment, the controller 260 may assign coding modes to ROI data in the base layer corresponding to SKIP mode coding, which causes the pixel blocks to be omitted from predictive coding and, by extension, yields an extremely low coding rate. Alternatively, the base layer pipeline 270.1 may be controlled to code pixel blocks within ROIs according to P- and/or B-coding modes but using a higher quantization parameter (QP) than for pixel blocks outside the ROI. Higher quantization parameters typically lead to higher compression with increased loss of data. By contrast, non-ROI data may be coded at relatively high quality within a bit budget allocated to the base layer data. Thus, under either technique (SKIP mode coding or predictive coding with high QPs), the base layer pipeline causes ROI data to be coded at lower quality than it codes non-ROI data.
The controller 260 may cause the enhancement layer pipeline 270.2 to code ROI data at higher quality than it codes non-ROI data. In one embodiment, the controller 260 may assign coding modes to non-ROI data in the enhancement layer corresponding to SKIP mode coding, which causes the pixel blocks to be omitted from predictive coding and, by extension, yields an extremely low coding rate. Alternatively, the enhancement layer pipeline 270.2 may be controlled to code pixel blocks outside the ROIs according to P- and/or B-coding modes but using a higher quantization parameter (QP) than for pixel blocks inside the ROI. Again, higher quantization parameters typically lead to higher compression with increased loss of data. Thus, in either technique—SKIP mode coding or predictive coding with high QPs—the enhancement layer pipeline 270.2 causes non-ROI data to be coded at lower quality than it codes ROI data.
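The complementary quality assignments of the two pipelines might be expressed as follows (a sketch; the QP offset and the SKIP assignments are illustrative assumptions, and SKIP eligibility in a real coder also depends on residual content and protocol rules):

```python
import numpy as np

def assign_layer_params(roi_map: np.ndarray, base_qp: int,
                        offset: int = 8, use_skip: bool = False):
    """Build per-block (mode, QP) grids for the base and enhancement
    layers. roi_map: boolean array, True where a block lies in an ROI."""
    base_qp_map = np.where(roi_map, base_qp + offset, base_qp)  # ROI coarser
    enh_qp_map = np.where(roi_map, base_qp, base_qp + offset)   # non-ROI coarser
    if use_skip:
        base_mode = np.where(roi_map, 'SKIP', 'INTER')  # drop ROI in base layer
        enh_mode = np.where(roi_map, 'INTER', 'SKIP')   # drop non-ROI in enh layer
    else:
        base_mode = np.full(roi_map.shape, 'INTER')
        enh_mode = np.full(roi_map.shape, 'INTER')
    return (base_mode, base_qp_map), (enh_mode, enh_qp_map)
```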
Coded data output from the coding pipelines 270.1, 270.2, . . . , 270.N may be output to a syntax unit. The syntax unit 250 may merge the coded video data from each pipeline into a unitary bit stream according to the syntax of a governing coding protocol. For example, the syntax unit 250 may generate a bit stream that conforms to the Scalable Video Coding (SVC) extensions of H.264/AVC, the scalability extensions (SHVC) of HEVC and the like. The syntax unit 250 may output a protocol-compliant bit stream to other components of a terminal (FIG. 1).
For base layer coding, the method 400 may code content of the low resolution version of the source image according to a bitrate budget that is assigned to the base layer. Specifically, the method may code content of the non-ROI region according to a portion of the base layer budget that is assigned to the non-ROI region (box 430). The method 400 also may code content of the ROI region according to any remaining base layer budget that is not consumed by coding of the non-ROI region (box 440). In some embodiments, the non-ROI region may be assigned most of the budget assigned for base layer coding, in which case the ROI region may not be coded substantively (e.g., content within the ROI region may be coded by SKIP mode coding). In other embodiments, however, the non-ROI region may be assigned some lower amount of the base layer budget, for example 90% or 80% of the overall base layer bit rate budget, in which case coarse coding of the ROI region can occur in the base layer.
For enhancement layer coding, the method 400 may code content of the high resolution version of the source image according to a bitrate budget that is assigned to the enhancement layer. Specifically, the method may code content of the ROI region according to a portion of the enhancement layer budget that is assigned to the ROI region (box 450). The method 400 also may code content of the non-ROI region according to any remaining enhancement layer budget that is not consumed by coding of the ROI region (box 460). In some embodiments, the ROI region may be assigned most of the budget assigned for enhancement layer coding, in which case the non-ROI region may not be coded substantively (e.g., content within the non-ROI region may be coded by SKIP mode coding). In other embodiments, however, the ROI region may be assigned some lower amount of the enhancement layer budget, for example 90% or 80% of the overall enhancement layer bit rate budget, in which case coarse coding of the non-ROI region can occur in the enhancement layer.
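A sketch of the budget arithmetic of boxes 430-460 (the layer split and the 80/20 primary share are illustrative assumptions, not prescribed values):

```python
def layer_budgets(total_bps: float, base_share: float = 0.4,
                  primary_share: float = 0.8):
    """Split a session bitrate into per-layer, per-region budgets.
    In the base layer the non-ROI region is primary; in the
    enhancement layer the ROI region is primary."""
    base = total_bps * base_share
    enh = total_bps - base
    return {
        'base_non_roi': base * primary_share,        # box 430
        'base_roi': base * (1.0 - primary_share),    # box 440, ~0 if SKIP-ed
        'enh_roi': enh * primary_share,              # box 450
        'enh_non_roi': enh * (1.0 - primary_share),  # box 460
    }

print(layer_budgets(50_000))  # e.g., a 50 Kbps session
```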
Coding operations performed in the base layer coding (boxes 430, 440) and in enhancement layer coding (boxes 450, 460) may be performed predictively. Predictive coding involves a selection of a coding mode (e.g., I-coding, P-coding, B-coding or SKIP coding, etc.) and a selection of coding parameters that define how coding according to the selected mode is performed. Some parameter selections, particularly motion vectors, involve a resource intensive search for a best parameter for use in coding. For example, a motion vector search often involves a comparison of image data between a block of a frame being coded and blocks of candidate prediction data at several different locations in a reference frame to identify a block that provides a closest prediction match to the input block. In an embodiment, when the method 400 performs enhancement layer coding of ROI data (box 450), coding mode selections and/or motion vectors may be derived from mode selections and motion vectors selected during coding of the ROI at the base layer (box 440). Similarly, when the method 400 performs enhancement layer coding of non-ROI data (box 460), coding mode selections and/or motion vectors may be derived from mode selections and motion vectors selected during coding of the non-ROI region at the base layer (box 430). Such derivations, however, need not occur in all embodiments. For example, in box 450, SKIP mode decisions made during base layer coding (box 440) may not be used in coding of ROI data in the enhancement layer.
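Such a derivation might be sketched as follows (assuming a uniform 2x scale factor between layers and motion vectors stored as integer pairs; the function names are illustrative):

```python
def derive_enh_mv(base_mv: tuple, scale: float = 2.0) -> tuple:
    """Scale a base layer motion vector to enhancement layer resolution,
    seeding (or replacing) a motion search for the co-located block."""
    mvx, mvy = base_mv
    return (round(mvx * scale), round(mvy * scale))

def derive_enh_mode(base_mode: str) -> str:
    """Reuse the base layer mode decision, except that SKIP decisions
    made in the base layer are not carried into ROI coding (box 450)."""
    return 'INTER' if base_mode == 'SKIP' else base_mode
```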
For example, for non-ROI data, an enhancement layer coder 240.2 may conserve processing resources that otherwise would be spent on motion prediction searches simply by applying a motion vector of a pixel block from a common location in image data, as determined by a base layer coder 240.1. As shown in FIG. 5, predicted content of an enhancement layer pixel block 522 may be derived from a prediction reference Pe taken from enhancement layer data and a prediction reference Pb taken from a co-located base layer pixel block 512 (scaled according to resolution differences between the layers), as:
T = w1*Pe + w2*Pb, (1)
where T represents the predicted content of the enhancement layer pixel block 522 and w1 and w2 represent respective weights. The weights w1, w2 may be set to predetermined values (e.g., w1=w2=0.5) or they may be derived by an encoder and signaled to a decoder in coded video data.
Alternatively, prediction may occur as:
T = w1*HighFreq(Pe) + w2*Pb, (2)
where T represents the predicted content of the enhancement layer pixel block 522, w1 and w2 represent respective weights, and the HighFreq(Pe) operator represents a process that extracts high frequency content from the reference enhancement layer pixel block Pe. In an embodiment, the HighFreq(Pe) operator simply may be a selector that selects transform coefficients (e.g., DCT or wavelet coefficients) that correspond to the resolution differences between the enhancement layer and the base layer.
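As a sketch of how Eqs. (1) and (2) might be realized (a pure-numpy illustration; the half-block cutoff inside HighFreq( ) mirrors an assumed 2:1 resolution ratio between layers, and pb_up is assumed to be the base layer reference already upsampled to enhancement layer resolution):

```python
import numpy as np

def dct_mat(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def high_freq(block: np.ndarray, cut: int) -> np.ndarray:
    """HighFreq( ) operator of Eq. (2): zero the low-frequency DCT
    coefficients that the base layer already represents, then invert."""
    n = block.shape[0]
    d = dct_mat(n)
    coef = d @ block @ d.T        # forward 2-D DCT
    coef[:cut, :cut] = 0          # drop the base-layer-covered band
    return d.T @ coef @ d         # inverse 2-D DCT

def predict(pe: np.ndarray, pb_up: np.ndarray, w1: float = 0.5,
            w2: float = 0.5, use_high_freq: bool = False) -> np.ndarray:
    """Eq. (1), or Eq. (2) when use_high_freq is set."""
    ref_e = high_freq(pe, pe.shape[0] // 2) if use_high_freq else pe
    return w1 * ref_e + w2 * pb_up
```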
Alternatively, instead of relying solely on a base layer motion vector mvb as the basis of an enhancement layer motion vector mve, motion vectors of other base layer pixel blocks neighboring the co-located base layer pixel block 512 may be tested as candidates for coding.
In an embodiment, improved visual quality is expected to be obtained by preferentially coding portions of non-ROI regions according to a refresh selection pattern. In a default coding mode, particularly where bandwidth allocated to enhancement layer coding of non-ROI regions is small, many pixel blocks may be coded according to a SKIP coding mode, which causes co-located data from preceding frames to be reused for a new frame being coded. Image content of the SKIP-ed blocks may not be perfectly static and, therefore, the reuse of image content may cause abrupt discontinuities when the SKIP-ed blocks eventually are coded according to some other mode. In an embodiment, enhancement layer coding may be performed according to a refresh coding policy that preferentially allocates bandwidth assigned to enhancement layer coding of non-ROI data to a sub-set of the pixel blocks belonging to the non-ROI region of each frame.
According to this embodiment, while enhancement layer coding non-ROI regions of a high resolution frame (box 460), the method 400 may select a sub-set of non-ROI pixel blocks according to a refresh selection pattern (box 462). The method 400 then may predictively code the selected pixel blocks from the non-ROI region (box 464), which causes coding according to a mode other than a SKIP mode. In this manner, the method 400 may force non-SKIP coding of a sub-set of non-ROI pixel blocks in each frame, which imparts some amount of precision to those pixel blocks when they are decoded. The remaining pixel blocks likely will be coded according to SKIP mode coding in the enhancement layer, which will cause them to appear as low resolution versions when decoded. Those other pixel blocks may be selected by the refresh selection pattern during coding of some other frame, and thus high resolution components of the non-ROI region may be refreshed, albeit at a lower rate than ROI pixel blocks of the enhancement layer.
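One possible refresh selection pattern (a simple stride pattern offered as an illustration; the disclosure does not mandate any particular pattern):

```python
def refresh_subset(non_roi_blocks: list, frame_idx: int, period: int = 8):
    """Select the sub-set of non-ROI pixel blocks to force-code in this
    frame (boxes 462-464); each block is revisited once every `period`
    frames, so high resolution detail is refreshed at a reduced rate."""
    return [b for i, b in enumerate(non_roi_blocks)
            if i % period == frame_idx % period]
```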
The principles of the present disclosure accommodate other processing techniques to smooth out visual artifacts that may be observed between coded high resolution and coded low resolution content. In one embodiment, video coders may vary coding parameters applied to video content along boundaries between ROI content and non-ROI content.
For example, when coding a high resolution enhancement layer image 610, an encoder may code an ROI 612 at a first, relatively high level of quality, non-ROI content 614 at a second, lower level of quality, and intermediate zones 616, 618 between them at intermediate levels of quality. Similarly, when coding a low resolution base layer image 630, an encoder may code a non-ROI region 634 at a first, relatively high level of quality, the ROI 632 at a second, lower level of quality and the intermediate zones 638, 636 at intermediate levels of quality. Such quality levels may be defined by application of coding budgets and quantization parameters.
Smoothing of visual artifacts may be performed at a decoder as well. For example, a decoder may apply various filtering operations, such as deblocking filters, smoothing filters and pixel blending across boundaries between the ROI content 612 and non-ROI content 614, between those regions 612, 614 and the zones 616, 618 and between the zones 616, 618 themselves as needed.
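Decoder-side blending across such boundaries might be sketched as follows (a minimal example assuming aligned full-resolution arrays; the box-filtered mask standing in for a blending ramp is an assumption):

```python
import numpy as np

def blend_boundary(enh: np.ndarray, base_up: np.ndarray,
                   roi_mask: np.ndarray, band: int = 8) -> np.ndarray:
    """Cross-fade between enhancement layer content (inside the ROI) and
    upsampled base layer content (outside) over a transition band of
    roughly `band` pixels, softening the ROI/non-ROI boundary."""
    alpha = roi_mask.astype(np.float32)
    kernel = np.ones(band) / band
    for axis in (0, 1):  # smooth the binary mask into a gradual ramp
        alpha = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode='same'), axis, alpha)
    return alpha * enh + (1.0 - alpha) * base_up
```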
The operation of base layer coding units 711-717 typically is determined by the coding protocols to which the coder 710 conforms, such as H.263, H.264 or H.265. Generally speaking, the base layer coder 710 operates on a pixel block-by-pixel block basis as determined by the coding protocol to assign a coding mode to the pixel block and then code the pixel block according to the selected mode. When a prediction mode selects data from the prediction cache 720 for prediction of a pixel block from the base layer image, the subtractor 711 may generate pixel residuals representing differences between the input pixel block and the prediction pixel block on a pixel-by-pixel basis. The transform unit 712 may convert the pixel residuals from the pixel domain to a coefficient domain by a predetermined transform, such as a discrete cosine transform, a wavelet transform, or other transform that may be defined by the coding protocol. The quantization unit 713 may quantize transform coefficients generated by the transform unit 712 by a quantization parameter (QP) that is communicated to a decoder (not shown).
The transform coefficients typically represent content of the pixel block residuals across predetermined frequencies in the pixel block. Thus, the transform coefficients represent frequencies of image content that are observable in the base layer image.
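As a rough illustration of the forward path of units 711-713 (a sketch only; the orthonormal DCT and the QP-to-step-size mapping, patterned on the H.264-style Qstep of roughly 2^((QP-4)/6), are assumptions rather than the protocol's exact arithmetic):

```python
import numpy as np

def code_block(block: np.ndarray, pred: np.ndarray, qp: int) -> np.ndarray:
    """Subtract prediction (unit 711), apply a 2-D DCT (unit 712), and
    uniformly quantize the coefficients (unit 713)."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    d = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    d[0, :] /= np.sqrt(2.0)                  # orthonormal DCT-II basis
    resid = block.astype(np.float64) - pred  # subtractor 711
    coef = d @ resid @ d.T                   # transform unit 712
    qstep = 2.0 ** ((qp - 4) / 6.0)          # assumed QP-to-step mapping
    return np.round(coef / qstep).astype(np.int32)  # quantization unit 713
```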
The base layer coder 710 may generate prediction reference data by inverting the quantization, transform and subtractive processes for base layer images that are designated to serve as reference pictures for other frames. These inversion processes are represented as units 714-716, respectively. Reassembled decoded reference frames may be stored in the base layer prediction cache 720 for use in prediction of later-coded frames.
The base layer coder 710 also may include a predictor 717 that assigns a coding mode to each coded pixel block and, when a predictive coding mode is selected, outputs the prediction pixel block to the subtractor 711.
The enhancement layer coder 730 may have an architecture that is determined by the coding protocol to which it conforms. Generally, the enhancement layer coder 730 may include a forward coding pipeline that includes a pair of subtractors 731, 732, a transform unit 733 and a quantization unit 734, as well as other units to code pixel blocks of the enhancement layer image (such as an entropy coder). The enhancement layer coder 730 also may include a prediction system that includes an inverse quantizer 735, an inverse transform unit 736, an adder 737 and a predictor 738. Operation of the enhancement layer coder 730 may be controlled by a controller 739.
The enhancement layer coder 730 also may operate on a pixel block-by-pixel block basis as determined by the coding protocol to assign a coding mode to the pixel block and then code the pixel block according to the selected mode. The enhancement layer coder 730 may accept two sets of prediction data: a prediction pixel block from the base layer coder (which is scaled according to resolution differences between the enhancement layer image and the base layer image) and prediction data from the enhancement layer cache 750. Thus, the first subtractor 731 may generate first prediction residuals from comparison with the base layer prediction data and the second subtractor 732 may revise the first prediction residuals from comparison with enhancement layer prediction data. The revised prediction residuals may be input to the transform unit 733.
The transform unit 733 and the quantizer 734 may operate in a manner similar to their counterparts in the base layer coder 710. The transform unit 733 may convert the pixel residuals from the pixel domain to the coefficient domain by a predetermined transform, such as a discrete cosine transform, a wavelet transform, or other transform that may be defined by the coding protocol. The quantization unit 734 may quantize transform coefficients generated by the transform unit 733 by a quantization parameter (QP) that is communicated to a decoder (not shown).
The enhancement layer coder 730 may generate prediction reference data by inverting the quantization, transform and subtractive processes for enhancement layer images that are designated to serve as reference pictures for other frames. These inversion processes are represented as units 735-737, respectively. Reassembled decoded reference frames may be stored in the enhancement layer prediction cache 750 for use in prediction of later-coded frames. The predictor 738 may assign a coding mode to each coded pixel block and, when a predictive coding mode is selected, output the prediction pixel block to the subtractor 732.
As with the base layer coder 710, transform coefficients generated within the enhancement layer coder 730 typically represent content of the pixel block residuals across predetermined frequencies in the pixel block. The enhancement layer image will have higher resolution than its corresponding base layer image and, therefore, the transform coefficients generated in the enhancement layer coder 730 will represent a higher range of frequencies than the corresponding coefficients generated in the base layer coder 710. In an embodiment, a controller 739 in the enhancement layer coder may nullify frequency coefficients that are generated in the enhancement layer that are redundant to those generated in the base layer coder 710. This process is represented by the “MASK” unit illustrated in FIG. 7.
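A minimal sketch of such a masking step (assuming square coefficient blocks and a 2:1 resolution ratio between layers):

```python
import numpy as np

def mask_redundant(enh_coef: np.ndarray, ratio: int = 2) -> np.ndarray:
    """MASK unit sketch: nullify the low-frequency region of an
    enhancement layer coefficient block that the base layer, at
    1/ratio resolution, already represents."""
    out = enh_coef.copy()
    n = enh_coef.shape[0]
    out[:n // ratio, :n // ratio] = 0  # band redundant with the base layer
    return out
```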
Image reconstruction at a decoder (not shown) may perform operations represented by the inverse coding units 714-716, 735-737 and predictors 717, 738 of the base layer and enhancement layer coders 710, 730 respectively. For a given source pixel block ORG in a source image, an upsampled prediction of the base layer coded pixel block will be taken to represent low frequency content of the pixel block ORG and coded enhancement layer data will be taken to represent the source pixel block at higher frequencies. Therefore a decoded pixel block ORG′ will be derived as:
ORG′ = LOW(ORG) + HIGH(ORG), (3)
where the LOW( ) and HIGH( ) operators represent low frequency and high frequency predictions from the base layer coding and enhancement layer coding, respectively.
In Eq. (3), the high frequency components of ORG may be derived by HIGH(ORG) = ORG − LOW(ORG), where LOW(ORG) may be derived by upsampling the base layer image data from the base layer image's native resolution to a resolution of the enhancement layer image. Similarly, prediction references for the enhancement layer data may be derived as HIGH(REF) = REF − LOW(REF), where LOW(REF) may be derived by upsampling downsampled reference pictures REF.
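The relationship of Eq. (3) can be illustrated numerically (a sketch in which nearest-neighbor upsampling and area-average downsampling stand in for the codec's actual filters; because real coding quantizes the HIGH component, the identity holds exactly only in the lossless limit assumed here):

```python
import numpy as np

def upsample2x(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x upsampling (stand-in for a decoder filter)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def downsample2x(img: np.ndarray) -> np.ndarray:
    """2x2 area-average downsampling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

org = np.random.rand(16, 16)
low = upsample2x(downsample2x(org))  # LOW(ORG): base layer contribution
high = org - low                     # HIGH(ORG): enhancement contribution
org_rec = low + high                 # Eq. (3)
assert np.allclose(org_rec, org)     # exact only when losslessly coded
```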
The principles of the present disclosure find application with variable resolution adaptation (VRA) techniques, which permit coders to vary resolution of frames being coded within a coding session. VRA techniques are described generally in U.S. Pat. No. 9,215,466 and U.S. Publication No. 2012/0195376, the disclosures of which are incorporated herein.
Thus, integration of VRA techniques with the coding techniques described in the foregoing embodiments permits a coding system to respond to changes in coding bandwidth in a graceful manner. Resolution of the multiple coding layers may be selected to optimize coding quality given an overall bandwidth available for coding. When bandwidth increases, a coding system may first increase the coding resolution applied to regions of interest, which are represented most accurately in the enhancement layer, and then increase the resolution applied to non-ROI regions in the base layer if supplementary bandwidth is available. Similarly, if coding circumstances change and bandwidth decreases, an encoder may respond by lowering resolution first in the base layer, which preserves coding resolution for the regions of interest, before changing the resolution of the enhancement layer.
In an embodiment, the coding resolutions may progress through a sequence such as that illustrated in FIG. 8.
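Such a progression might be managed as sketched below (the resolution pairs and bitrate floors are illustrative assumptions only, not values prescribed by this disclosure):

```python
# Illustrative ladder only; the disclosure does not fix specific pairs.
LADDER = [  # (base layer, enhancement layer) resolutions, low to high rate
    ((160, 120), (320, 240)),   # QQVGA / QVGA
    ((240, 160), (480, 320)),   # HQVGA / HVGA
    ((320, 240), (640, 480)),   # QVGA / VGA
]

def pick_rung(available_bps: float, floors=(40_000, 120_000, 300_000)):
    """Pick the highest ladder rung whose bitrate floor is met; on a loss
    of bandwidth this lowers base layer resolution before touching the
    enhancement layer that carries the ROI."""
    rung = 0
    for i, floor in enumerate(floors):
        if available_bps >= floor:
            rung = i
    return LADDER[rung]
```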
The principles of the disclosure also find application with frame rate adaptation. In this embodiment, base layer images may be coded at lower frame rates than enhancement layer frames. On decode, a decoder (not shown) may interpolate base layer content at temporal positions that coincide with temporal positions of the decoded enhancement layer images and merge the interpolated base layer content and decoded enhancement layer content into a final representation of the decoded frame.
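A minimal sketch of the decoder-side interpolation (simple linear blending between decoded base layer frames, assuming float-valued arrays; a real decoder might use motion-compensated interpolation instead):

```python
import numpy as np

def interpolate_base(prev: np.ndarray, nxt: np.ndarray,
                     t: float) -> np.ndarray:
    """Interpolate base layer content at an enhancement layer timestamp
    t in [0, 1] between two decoded base layer frames, for merging with
    the decoded enhancement layer image at that instant."""
    return (1.0 - t) * prev + t * nxt
```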
The operation of coding units 915-950 typically is determined by the coding protocols to which the coder 910 conforms, such as H.263, H.264 or H.265. Generally speaking, the coder 910 operates on a pixel block-by-pixel block basis as determined by the coding protocol to assign a coding mode to the pixel block and then code the pixel block according to the selected mode. When a prediction mode selects data from the prediction cache 960 for prediction of a pixel block from the input image, the subtractor 915 may generate pixel residuals representing differences between the input pixel block and the prediction pixel block on a pixel-by-pixel basis. The transform unit 920 may convert the pixel residuals from the pixel domain to a coefficient domain by a predetermined transform, such as a discrete cosine transform, a wavelet transform, or other transform that may be defined by the coding protocol. The quantization unit 925 may quantize transform coefficients generated by the transform unit 920 by a quantization parameter (QP) that is communicated to a decoder (not shown).
The pixel block coder 910 may generate prediction reference data by inverting the quantization, transform and subtractive processes for coded images that are designated to serve as reference pictures for other frames. These inversion processes are represented as units 930-940, respectively. Reassembled decoded reference frames may be stored in the prediction cache 960 for use in prediction of later-coded frames. The predictor 945 may assign a coding mode to each coded pixel block and, when a predictive coding mode is selected, output the prediction pixel block to the subtractor 915.
The system 900 of FIG. 9 may apply the coefficient masking techniques described above within a single coding layer.
HEVC coding employs a significance map to identify to a decoder pixel blocks that have non-zero coefficients. In an embodiment, an encoder may choose coefficient groups adaptively to maximize coding efficiency.
Returning to FIG. 11, transform coefficients of each pixel block may be organized into first sets, which represent frequencies unique to the enhancement layer, and second sets 1120, 1140, 1160, which are redundant to frequencies coded in the base layer.
In an embodiment, rather than setting coefficient values in the second sets 1120, 1140, 1160 (FIG. 11) to zero, an encoder may assign them values derived from corresponding base layer coefficients, which a decoder can replicate without additional signaling.
When estimating the number of coefficients to use for coding, an encoder may balance the image quality contributed by additional coefficients against the bit rate budget available for the frame.
Additionally, the techniques of the foregoing embodiments may find application in single-layer coding systems as well as in scalable coding systems.
Embodiments of the present disclosure also accommodate multi-resolution coding of image data in a single layer coder by coding frames of different resolutions in logically separated sessions.
The embodiment of FIG. 12 illustrates one such arrangement.
The syntax unit 1310 may parse coded data into its constituent streams and forward those streams to respective decoders. Thus, the syntax unit 1310 may route coded base layer data and coded enhancement layer data to the predictive decoders 1320.1, 1320.2, . . . , 1320.N to which they belong. The predictive decoders 1320.1, 1320.2, . . . , 1320.N may decode the coded data of their respective layers and may output recovered frame data. The recovered frame data from each layer's decoder 1320.1, 1320.2, . . . , 1320.N may be output at the resolution(s) at which those layers were coded. The resamplers 1330.1, 1330.2, . . . , 1330.N may change the resolution of the streams to a common resolution representation, typically a resolution that matches the resolution of the highest-resolution enhancement layer. The formatter 1340 may merge the output from the resamplers 1330.1, 1330.2, . . . , 1330.N into a common output signal, which may be displayed or stored for further use.
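The resampling-and-merge stage might be sketched as follows (assuming grayscale frames, an integer scale factor between layers, and a per-pixel ROI mask recovered from the bit stream; all names are illustrative):

```python
import numpy as np

def merge_layers(base: np.ndarray, enh: np.ndarray,
                 roi_mask: np.ndarray) -> np.ndarray:
    """Formatter 1340 sketch: upsample decoded base layer output to the
    enhancement layer resolution, then take ROI content from the
    enhancement layer and non-ROI content from the base layer."""
    scale = enh.shape[0] // base.shape[0]
    base_up = np.repeat(np.repeat(base, scale, axis=0), scale, axis=1)
    return np.where(roi_mask, enh, base_up)
```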
The foregoing discussion has described operation of the foregoing embodiments in the context of terminals, coders and decoders. Commonly, these components are provided as electronic devices. They can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors. Alternatively, they can be embodied in computer programs that execute on personal computers, notebook computers, computer servers or mobile computing platforms such as smartphones and tablet computers. As such, these programs may be stored in memory of those devices and be executed by processors within them. Similarly, decoders can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors, or they can be embodied in computer programs that execute on personal computers, notebook computers, computer servers or mobile computing platforms such as smartphones and tablet computers. Decoders commonly are packaged in consumer electronics devices, such as gaming systems, DVD players, portable media players and the like and they also can be packaged in consumer software applications such as video games, browser-based media players and the like. Again, these programs may be stored in memory of those devices and be executed by processors within them. And, of course, these components may be provided as hybrid systems that distribute functionality across dedicated hardware components and programmed general purpose processors as desired.
Several embodiments of the disclosure are specifically illustrated and/or described herein. However, it will be appreciated that modifications and variations of the disclosure are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the disclosure.