The present application for patent is related to U.S. patent application Ser. No. 11/373,577 entitled “CONTENT CLASSIFICATION FOR MULTIMEDIA PROCESSING” filed on Mar. 10, 2006, assigned to the assignee hereof and hereby expressly incorporated by reference herein.
1. Field
The present application is directed to apparatus and methods for transcoding of video data for real-time streaming and, more particularly, to transcoding video data for real-time streaming in mobile broadcast applications.
2. Background
Efficient video compression is useful in many multimedia applications such as wireless video streaming and video telephony, due to the limited bandwidth resources and the variability of available bandwidth. Certain video coding standards, such as MPEG-4 (ISO/IEC), H.264 (ITU), or similar video coding standards, provide high-efficiency coding well suited for applications such as wireless broadcasting. Some multimedia data, for example, digital television presentations, is generally coded according to other standards such as MPEG-2. Accordingly, transcoders are used to transcode or convert multimedia data coded according to one standard (e.g., MPEG-2) to another standard (e.g., H.264) prior to wireless broadcasting.
Improvements in rate-optimized codecs could offer advantages in error resiliency, error recovery, and scalability. Moreover, use of information determined from the multimedia data itself could also offer additional improvements for encoding, including error resiliency, error recovery, and scalability. Accordingly, a need exists for a transcoder that provides highly efficient processing and compression of multimedia data, uses information determined from the multimedia data itself, is scalable, and is error resilient for use in many multimedia data applications including mobile broadcasting of streaming multimedia information.
Each of the inventive content-based transcoding apparatuses and methods described and illustrated has several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure, its more prominent features will now be discussed briefly. After considering this discussion, and particularly after reading the section entitled “Detailed Description,” one will understand how the features of this content-driven transcoding provide improvements for multimedia data processing apparatuses and methods.
Inventive aspects described herein relate to using content information for various methods of encoding multimedia data and in various modules or components of an encoder, for example, an encoder used in a transcoder. A transcoder can orchestrate transcoding multimedia data using content information. The content information can be received from another source, for example, metadata that is received with the video. The transcoder can be configured to generate content information through a variety of different processing operations. In some aspects, the transcoder generates a content classification of the multimedia data, which is then used in one or more encoding processes. In some aspects, a content driven transcoder can determine spatial and temporal content information of the multimedia data, and use the content information for content-aware uniform quality encoding across channels, and content classification based compression/bit allocation.
In some aspects, content information (e.g., metadata, content metrics and/or a content classification) of multimedia data is obtained or calculated, and then provided to components of the transcoder for use in processing the multimedia data for encoding. For example, a preprocessor can use certain content information for scene change detection, performing inverse telecine (“IVTC”), de-interlacing, motion compensation and noise suppression (e.g., 2D wavelet transform) and spatio-temporal noise reduction, e.g., artifact removal, de-ringing, de-blocking, and/or de-noising. In some aspects, a preprocessor can also use the content information for spatial resolution down-sampling, e.g., determining appropriate “safe” and “action handling” areas when down-sampling from standard definition (SD) to Quarter Video Graphics Array (QVGA).
In some aspects, an encoder includes a content classification module that is configured to calculate content information. The encoder can use content classification for bit rate control (e.g., bit allocation) in determining quantization parameters (QP) for each MB, for motion estimation, for example, performing color motion estimation (ME), performing motion vector (MV) prediction, scalability in providing a base layer and an enhancement layer, and for error resilience by using a content classification to affect prediction hierarchy and error resiliency schemes including, e.g., adaptive intra refresh, boundary alignment processes, and providing redundant I-frame data in an enhancement layer. In some aspects, the transcoder uses the content classification in coordination with a data multiplexer for maintaining optimal multimedia data quality across channels. In some aspects, the encoder can use the content classification information for forcing I-frames to periodically appear in the encoded data to allow fast channel switching. Such implementations can also make use of I-blocks that may be required in the encoded data for error resilience, such that random access switching and error resilience (based on, e.g., content classification) can be effectively combined through prediction hierarchy to improve coding efficiency while increasing robustness to errors.
In one aspect a method of processing multimedia data includes obtaining content information of multimedia data, and encoding the multimedia data so as to align a data boundary with a frame boundary in a time domain, wherein said encoding is based on the content information. The content can comprise a content classification. Obtaining the content information can include calculating the content information from the multimedia data. In some cases, the data boundary comprises an I-frame data boundary. The data boundary can also be a boundary of independently decodable encoded data of the multimedia data. In some cases, the data boundary comprises a slice boundary. The data boundary can also be an intra-coded access unit boundary. The data boundary can also be a P-frame boundary or a B-frame boundary. The content information can include a complexity of the multimedia data, and the complexity can comprise temporal complexity, spatial complexity, or temporal complexity and spatial complexity.
In another aspect, an apparatus for processing multimedia data includes a content classifier configured to determine a content classification of multimedia data, and an encoder configured to encode the multimedia data so as to align a data boundary with a frame boundary in a time domain, wherein said encoding is based on the content classification.
In another aspect, an apparatus for processing multimedia data includes means for obtaining content information of multimedia data, and means for encoding the multimedia data so as to align a data boundary with a frame boundary in a time domain, wherein said encoding is based on the content information.
In another aspect, a processor is configured to obtain content information of multimedia data, and encode the multimedia data so as to align a data boundary with a frame boundary in a time domain, wherein said encoding is based on the content information.
Another aspect includes a machine readable medium comprising instructions that upon execution cause a machine to obtain content information of multimedia data, and encode the multimedia data so as to align a data boundary with a frame boundary in a time domain, wherein said encoding is based on the content information.
Another aspect comprises a method of processing multimedia data comprising obtaining a content classification of the multimedia data, and encoding blocks in the multimedia data as intra-coded blocks or inter-coded blocks based on the content classification to increase the error resilience of the encoded multimedia data.
In yet another aspect, an apparatus for processing multimedia data comprises a content classifier configured to obtain a content classification of the multimedia data, and an encoder configured to encode blocks in the multimedia data as intra-coded blocks or inter-coded blocks based on the content classification to increase the error resilience of the encoded multimedia data.
Another aspect comprises a processor being configured to obtain a content classification of the multimedia data, and encode blocks in the multimedia data as intra-coded blocks or inter-coded blocks based on the content classification to increase the error resilience of the encoded multimedia data.
In another aspect, an apparatus for processing multimedia data comprises means for obtaining a content classification of the multimedia data, and means for encoding blocks in the multimedia data as intra-coded blocks or inter-coded blocks based on the content classification to increase the error resilience of the encoded multimedia data.
Another aspect comprises a machine-readable medium comprising instructions that upon execution cause a machine to obtain a content classification of the multimedia data, and encode blocks in the multimedia data as intra-coded blocks or inter-coded blocks based on the content classification to increase the error resilience of the encoded multimedia data.
It is noted that, where appropriate, like numerals refer to like parts throughout the several views of the drawings.
The following detailed description is directed to certain aspects discussed in this disclosure. However, the invention can be embodied in a multitude of different ways. Reference in this specification to “one aspect” or “an aspect” means that a particular feature, structure, or characteristic described in connection with the aspect is included in at least one aspect. The appearances of the phrase “in one aspect,” “according to one aspect,” or “in some aspects” in various places in the specification are not necessarily all referring to the same aspect, nor are separate or alternative aspects mutually exclusive of other aspects. Moreover, various features are described which may be exhibited by some aspects and not by others. Similarly, various requirements are described which may be requirements for some aspects but not other aspects.
The following description includes details to provide a thorough understanding of the examples. However, it is understood by one of ordinary skill in the art that the examples may be practiced even if every detail of a process or device in an example or aspect is not described or illustrated herein. For example, electrical components may be shown in block diagrams that do not illustrate every electrical connection or every electrical element of the component in order not to obscure the examples in unnecessary detail. In other instances, such components, other structures and techniques may be shown in detail to further explain the examples.
The present disclosure relates to controlling encoding and transcoding apparatus and methods using content information of the multimedia data being encoded. “Content information” or “content” (of the multimedia data) are broad terms meaning information relating to the content of multimedia data and can include, for example, metadata, metrics calculated from the multimedia data, and content related information associated with one or more metrics, for example a content classification. Content information can be provided to an encoder or determined by an encoder, depending on the particular application. The content information can be used for many aspects of multimedia data encoding, including scene change detection, temporal processing, spatio-temporal noise reduction, down-sampling, determining bit rates for quantization, scalability, error resilience, maintaining optimal multimedia quality across broadcast channels, and fast channel switching. Using one or more of these aspects, a transcoder can orchestrate processing multimedia data and produce content-related encoded multimedia data. Descriptions and figures herein that describe transcoding aspects can also be applicable to encoding aspects and decoding aspects.
The transcoder apparatus and methods relate to transcoding from one format to another, and are specifically described herein as relating to transcoding MPEG-2 video to enhanced, scalable H.264 format for transmission over wireless channels to mobile devices, illustrative of some aspects. However, the description of transcoding MPEG-2 video to H.264 format is not intended as limiting the scope of the invention, but is merely exemplary of some aspects of the invention. The disclosed apparatus and methods provide a highly efficient architecture that supports error resilient encoding with random access and layering capabilities, and can be applicable to transcoding and/or encoding video formats other than MPEG-2 and H.264 as well.
“Multimedia data” or simply “multimedia” as used herein, is a broad term that includes video data (which can include audio data), audio data, or both video data and audio data. “Video data” or “video” as used herein is a broad term referring to frame-based or field-based data, which includes one or more images or related sequences of images, containing text, image information and/or audio data, and can be used to refer to multimedia data (e.g., the terms can be used interchangeably) unless otherwise specified.
Described below are examples of various components of a transcoder and examples of processes that can use content information for encoding multimedia data.
The transcoder 130 or the preprocessor 140 (configured for transcoding), or components thereof, and processes contained therein, can be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. For example, a parser, decoder, preprocessor, or encoder may be a standalone component, incorporated as hardware, firmware, or middleware in a component of another device, or be implemented in microcode or software that is executed on a processor, or a combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments that perform the motion compensation, shot classifying and encoding processes may be stored in a machine readable medium such as a storage medium. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
Illustrative Example of a Transcoder Architecture
The transcoder 200 described herein can be configured to transcode a variety of multimedia data, and many of the processes apply to whatever type of multimedia data is transcoded. Although some of the examples provided herein relate particularly to transcoding MPEG-2 data to H.264 data, these examples are not meant to limit the disclosure to such data. Encoding aspects described below can be applied to transcoding any suitable multimedia data standard to another suitable multimedia data standard.
Parser/Decoder
Referring again to
The parser 205 communicates the video ES 206 to a decoder 214 which is part of the parser/decoder 202 shown here. In other configurations the parser 205 and the decoder 214 are separate components. The PTS 210 are sent to a transcoder PTS generator 215, which can generate separate presentation time stamps particular to the transcoder 200 for use in arranging data to be sent from the transcoder 200 to a broadcast system. The transcoder PTS generator 215 can be configured to provide data to a sync layer 240 of the transcoder 200 to coordinate the synchronization of the data broadcast.
As illustrated in block 304, initialization can include acquiring a command syntax verification, performing a first pass PSI/PSIP/SI (program specific information/program and system information protocol/system information) processing, performing processing specifically related to either the acquisition command or the PSI/PSIP/SI consistency verification, allocating a PES buffer for each PES, and setting timing (e.g., for alignment with the desired acquisition start instant). The PES buffers hold the parsed ES data and communicate each parsed ES data to a corresponding audio decoder 216, text encoder 220, decoder 214, or transcoder PTS generator 215.
After initialization, process 300 proceeds to block 310 for main processing of the received multimedia data 104. Processing in block 310 can include target packet identifier (PID) filtering, continuous PSI/PSIP/SI monitoring and processing, and a timing process (e.g., for achieving a desired acquisition duration) so that the incoming multimedia data is passed into the appropriate PES buffers. As a result of processing the multimedia data in block 310, a program descriptor and an indication of the PES buffer ‘read’ are generated, which will interface with the decoder 214 (
After block 310, the process 300 proceeds to block 314, where termination of the parsing operations occurs, including generating a timer interrupt and freeing of PES buffers consequent to their consumption. It is noted that PES buffers will exist for all relevant elementary streams of the program cited in its descriptor such as audio, video, and subtitle streams.
Referring again to
The parser/decoder 202 also includes the decoder 214, which receives the video ES 206. The decoder 214 can generate metadata associated with the video data, decode the encoded video packetized elementary stream into raw video 224 (for example, in standard definition format), and process the video closed caption data in the video ES.
After initialization at block 404, the process 400 proceeds to block 408 where the main processing of the video ES is performed by the decoder 214. Main processing includes polling the video PES buffer ‘read’ information or “interface” for new data availability, decoding the video ES, reconstructing and storing pixel data at picture boundaries, synchronizing video and audio/video (a/v), generating metadata and storing it at picture boundaries, and storing closed caption (CC) data at picture boundaries. The results of the main processing 408, shown at block 410, include generation of sequence descriptors, decoded picture buffer descriptors, metadata buffer descriptors, and CC data buffer descriptors.
After the main processing 408, process 400 proceeds to block 412 where it performs a termination process. The termination process can include determining termination conditions, including no new data occurring for a particular duration above a predetermined threshold, detection of a sequence end code, and/or detection of an explicit termination signal. The termination process can further include freeing decoded picture, associated metadata, and CC data buffers consequent to their consumption by a preprocessor to be described below. Process 400 ends at block 414, where it can enter a state of waiting for video ES to be received as input.
Preprocessor
The preprocessor 226 can use metadata from the decoder to affect one or more of the preprocessing operations. Metadata can include information relating to, describing, or classifying the content of the multimedia data (“content information”); in particular, the metadata can include a content classification. In some aspects, the metadata does not include content information desired for encoding operations. In such cases the preprocessor 226 can be configured to determine content information and use the content information for preprocessing operations and/or provide content information to other components of the transcoder 200, e.g., the encoder 228. In some aspects, the preprocessor 226 can use such content information to influence GOP partitioning, determine appropriate types of filtering, and/or determine encoding parameters that are communicated to an encoder.
At block 601, the preprocessor 226 determines if the received video data 222, 224 is progressive video. In some cases, this can be determined from the metadata if the metadata contains such information, or by processing of the video data itself. For example, an inverse telecine process, described below, can determine if the received video 222 is progressive video. If it is, the process proceeds to block 607 where filtering (e.g., denoiser) operations are performed on the video to reduce noise, such as white Gaussian noise. If the video data 222, 224 is not progressive video, the process proceeds from block 601 to a phase detector at block 604.
Phase detector 604 distinguishes between video that originated in a telecine and video that began in a standard broadcast format. If the decision is made that the video was telecined (the YES decision path exiting phase detector 604), the telecined video is returned to its original format in inverse telecine 606. Redundant frames are identified and eliminated, and fields derived from the same video frame are rewoven into a complete image. Since the reconstructed film images were photographically recorded at regular intervals of 1/24 of a second, the motion estimation process performed in a GOP partitioner 612 or the encoder 228 is more accurate using the inverse telecined images rather than the telecined data, which has an irregular time base.
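As an illustration of the field re-weaving step, the sketch below reverses one common 3:2 pull-down cadence once the pull-down phase is known. It is a minimal example assuming a single fixed phase and NumPy field arrays; the actual phase detector 604 selects among the five phases described below, and the function names here are purely illustrative.

```python
import numpy as np

# 3:2 pulldown cadence (one common phase): film frames A B C D become video
# frames A/A, B/B, B/C, C/D, D/D (top field / bottom field). Reversing it
# recovers four progressive film frames from five interlaced video frames.

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave a top field and a bottom field into one progressive frame."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2, :] = top_field      # even lines come from the top field
    frame[1::2, :] = bottom_field   # odd lines come from the bottom field
    return frame

def inverse_telecine_group(fields):
    """fields: list of five (top, bottom) tuples in the cadence above.
    Returns four reconstructed film frames; redundant field copies are dropped."""
    (a_t, a_b), (b_t, b_b), (_, c_b), (c_t, _), (d_t, d_b) = fields
    return [weave(a_t, a_b),   # film frame A
            weave(b_t, b_b),   # film frame B
            weave(c_t, c_b),   # film frame C (fields split across two video frames)
            weave(d_t, d_b)]   # film frame D
```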
In one aspect, the phase detector 604 makes certain decisions after receipt of a video frame. These decisions include (i) whether the present video is from a telecine output and the 3:2 pull-down phase is one of the five phases P0, P1, P2, P3, and P4 shown in
When conventional NTSC video is recognized (the NO decision path exiting phase detector 604), it is transmitted to deinterlacer 605 for compression. The deinterlacer 605 transforms the interlaced fields to progressive video, and denoising operations can then be performed on the progressive video. One illustrative example of deinterlacing processing is described below.
Traditional analog video devices like televisions render video in an interlaced manner, i.e., such devices transmit even-numbered scan lines (even field), and odd-numbered scan lines (odd field). From the signal sampling point of view, this is equivalent to a spatio-temporal subsampling in a pattern described by:
where Θ stands for the original frame picture, F stands for the interlaced field, and (x,y,n) represents the horizontal, vertical, and temporal position of a pixel respectively.
Without loss of generality, it can be assumed n=0 is an even field throughout this disclosure so that Equation 1 above is simplified as
Since decimation is not conducted in the horizontal dimension, the subsampling pattern can be depicted in the n-y coordinate plane.
The goal of a deinterlacer is to transform interlaced video (a sequence of fields) into non-interlaced progressive frames (a sequence of frames). In other words, interpolate even and odd fields to “recover” or generate full-frame pictures. This can be represented by Equation 3:
where Fi represents the deinterlacing results for missing pixels.
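The bodies of Equations 1 through 3 do not survive in this text. A plausible reconstruction, assuming the conventional notation for interlaced sampling defined in the surrounding paragraphs (the retained lines are those whose parity matches the field parity), is:

```latex
% Eq. 1 (assumed form): interlaced sampling keeps only lines whose parity matches the field index
F(x,y,n) \;=\;
\begin{cases}
\Theta(x,y,n), & y \bmod 2 = n \bmod 2,\\[2pt]
\text{not transmitted}, & \text{otherwise.}
\end{cases}

% Eq. 2 (assumed form): with n = 0 taken to be an even field, this reduces to
F(x,y,n) \;=\; \Theta(x,y,n) \quad \text{for } y \bmod 2 = n \bmod 2.

% Eq. 3 (assumed form): the deinterlaced output keeps transmitted lines and interpolates the rest
F_o(x,y,n) \;=\;
\begin{cases}
F(x,y,n),   & y \bmod 2 = n \bmod 2,\\[2pt]
F_i(x,y,n), & \text{otherwise.}
\end{cases}
```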
The deinterlacer 605 can also include a denoiser (denoising filter) 4006 configured to filter the spatio-temporal provisional deinterlaced frame generated by the Wmed filter 4004. Denoising the spatio-temporal provisional deinterlaced frame makes the subsequent motion search process more accurate, especially if the source interlaced multimedia data sequence is contaminated by white noise. It can also at least partly remove aliasing between even and odd rows in a Wmed picture. The denoiser 4006 can be implemented as a variety of filters including a wavelet shrinkage and wavelet Wiener filter based denoiser. A denoiser can be used to remove noise from the candidate Wmed frame before it is further processed using motion compensation information, and can remove noise that is present in the Wmed frame and retain the signal present regardless of the signal's frequency content. Various types of denoising filters can be used, including wavelet filters. Wavelets are a class of functions used to localize a given signal in both space and scaling domains. The fundamental idea behind wavelets is to analyze the signal at different scales or resolutions such that small changes in the wavelet representation produce a correspondingly small change in the original signal.
A wavelet shrinkage or a wavelet Wiener filter can also be applied as the denoiser. Wavelet shrinkage consists of a wavelet transformation of the noisy signal, followed by a shrinking of the small wavelet coefficients to zero (or a smaller value), while leaving the large coefficients unaffected. Finally, an inverse transformation is performed to acquire the estimated signal.
The denoising filtering boosts the accuracy of motion compensation in noisy environments. Wavelet shrinkage denoising can involve shrinking in the wavelet transform domain, and typically comprises three steps: a linear forward wavelet transform, a nonlinear shrinkage denoising, and a linear inverse wavelet transform. The Wiener filter is an MSE-optimal linear filter which can be used to improve images degraded by additive noise and blurring. Such filters are generally known in the art and are described, for example, in “Ideal spatial adaptation by wavelet shrinkage,” referenced below, and by S. P. Ghael, A. M. Sayeed, and R. G. Baraniuk, “Improved Wavelet denoising via empirical Wiener filtering,” Proceedings of SPIE, vol. 3169, pp. 389-399, San Diego, July 1997, which is expressly incorporated by reference herein in its entirety.
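A minimal sketch of the three-step wavelet shrinkage just described (forward transform, soft thresholding of the detail coefficients, inverse transform) is given below. It uses PyWavelets with a generic biorthogonal wavelet and the universal threshold as stand-ins; the specific (4, 2) bi-orthogonal cubic B-spline filter and threshold rule of the transcoder are not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_shrinkage_denoise(frame: np.ndarray, wavelet: str = "bior2.2",
                              level: int = 2) -> np.ndarray:
    """Forward 2-D wavelet transform, soft-shrink small detail coefficients
    toward zero, then inverse transform."""
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=level)

    # Estimate the noise level from the finest diagonal detail band
    # (median absolute deviation rule), then apply the universal threshold.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(frame.size))

    denoised = [coeffs[0]]  # approximation band is left untouched
    for detail_bands in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(band, thresh, mode="soft")
                              for band in detail_bands))
    return pywt.waverec2(denoised, wavelet)
```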
In some aspects, a denoising filter is based on an aspect of a (4, 2) bi-orthogonal cubic B-spline wavelet filter. One such filter can be defined by the following forward and inverse transforms:
Application of a denoising filter can increase the accuracy of motion compensation in a noisy environment. Implementations of such filters are further described in “Ideal spatial adaptation by wavelet shrinkage,” D. L. Donoho and I. M. Johnstone, Biometrika, vol. 81, pp. 425-455, 1994, which is expressly incorporated by reference herein in its entirety.
The bottom part of
It is possible to decouple deinterlacing prediction schemes comprising inter-field interpolation from intra-field interpolation with a Wmed+MC deinterlacing scheme. In other words, the spatio-temporal Wmed filtering can be used mainly for intra-field interpolation purposes, while inter-field interpolation can be performed during motion compensation. This reduces the peak signal-to-noise ratio of the Wmed result, but the visual quality after motion compensation is applied is more pleasing, because bad pixels from inaccurate inter-field prediction mode decisions will be removed from the Wmed filtering process.
After the appropriate inverse telecine or deinterlacing processing, at block 608 the progressive video is processed for alias suppression and resampling (e.g., resizing). In some resampling aspects, a poly-phase resampler is implemented for picture size resizing. In one example of downsampling, the ratio between the original and the resized picture can be p/q, where p and q are relatively prime integers. The total number of phases is p. The cutoff frequency of the poly-phase filter in some aspects is 0.6 for resizing factors around 0.5. The cutoff frequency does not exactly match the resizing ratio in order to boost the high-frequency response of the resized sequence. This inevitably allows some aliasing. However, it is well known that human eyes prefer sharp but slightly aliased pictures to blurry and alias-free pictures.
where fc is the cutoff frequency. The above 1-D poly-phase filter can be applied to both the horizontal dimension and the vertical dimension.
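For illustration, a separable poly-phase resize of an SD luma plane to QVGA can be sketched with SciPy's poly-phase resampler, as below. Note that resample_poly designs its own Kaiser-windowed low-pass filter whose cutoff tracks the resampling ratio, so it does not reproduce the deliberately higher 0.6 cutoff discussed above; the frame sizes and ratios are assumptions for the example.

```python
import numpy as np
from scipy.signal import resample_poly

def resize_poly_phase(frame: np.ndarray, up_h: int, down_h: int,
                      up_w: int, down_w: int) -> np.ndarray:
    """Resize a luma plane with separable 1-D poly-phase filtering,
    first along the vertical axis, then along the horizontal axis."""
    tmp = resample_poly(frame.astype(np.float64), up_h, down_h, axis=0)
    out = resample_poly(tmp, up_w, down_w, axis=1)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: standard definition (720x480) down to QVGA (320x240);
# p/q = 1/2 vertically and 4/9 horizontally.
sd_frame = np.zeros((480, 720), dtype=np.uint8)
qvga = resize_poly_phase(sd_frame, 1, 2, 4, 9)   # -> shape (240, 320)
```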
Another aspect of resampling (resizing) is accounting for overscan. In an NTSC television signal, an image has 486 scan lines, and in digital video it could have 720 pixels on each scan line. However, not all of the image is visible on the television due to mismatches between the image size and the screen format. The part of the image that is not visible is called overscan.
To help broadcasters put useful information in the area visible by as many televisions as possible, the Society of Motion Picture & Television Engineers (SMPTE) defined specific sizes of the action frame called the safe action area and the safe title area. See SMPTE recommended practice RP 27.3-1989 on Specifications for Safe Action and Safe Title Areas Test Pattern for Television Systems. The safe action area is defined by the SMPTE as the area in which “all significant action must take place.” The safe title area is defined as the area where “all the useful information can be confined to ensure visibility on the majority of home television receivers.”
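A minimal cropping helper along these lines is sketched below; the 90% and 80% fractions are the conventional approximations of the SMPTE safe action and safe title areas and are used here only as illustrative defaults.

```python
import numpy as np

def crop_to_safe_area(frame: np.ndarray, fraction: float = 0.9) -> np.ndarray:
    """Keep only the centered 'safe' portion of the picture before resizing.
    fraction=0.9 approximates the SMPTE safe action area; 0.8 approximates
    the safe title area (illustrative values)."""
    h, w = frame.shape[:2]
    dh = int(h * (1 - fraction) / 2)
    dw = int(w * (1 - fraction) / 2)
    return frame[dh:h - dh, dw:w - dw]
```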
For example, referring to
Usually black borders may be seen in the overscan. For example, in
Referring again to
In one example of deblocking processing, a deblocking filter can be applied to all the 4×4 block edges of a frame, except edges at the boundary of the frame and any edges for which the deblocking filter process is disabled. This filtering process shall be performed on a macroblock basis after the completion of the frame construction process with all macroblocks in a frame processed in order of increasing macroblock addresses. For each macroblock, vertical edges are filtered first, from left to right, and then horizontal edges are filtered from top to bottom. The luma deblocking filter process is performed on four 16-sample edges and the deblocking filter process for each chroma component is performed on two 8-sample edges, for the horizontal direction and for the vertical direction, as shown in
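The traversal order described above can be sketched as follows. This is only an outline of the edge-visiting order on a luma plane; the simple blend applied at each edge stands in for the normative H.264 deblocking filter, which adapts its strength to boundary strength, quantization parameter, and pixel gradients.

```python
import numpy as np

def deblock_luma(frame: np.ndarray, strength: float = 0.25) -> np.ndarray:
    """Walk macroblocks in raster order; for each one, filter vertical 4x4
    block edges from left to right, then horizontal edges from top to bottom.
    Frame-boundary edges are skipped."""
    out = frame.astype(np.float64)          # astype makes a working copy
    h, w = out.shape
    for mb_y in range(0, h, 16):
        for mb_x in range(0, w, 16):
            for x in range(mb_x, min(mb_x + 16, w), 4):      # vertical edges
                if x == 0:
                    continue
                p = out[mb_y:mb_y + 16, x - 1].copy()
                q = out[mb_y:mb_y + 16, x].copy()
                out[mb_y:mb_y + 16, x - 1] = p + strength * ((p + q) / 2 - p)
                out[mb_y:mb_y + 16, x] = q + strength * ((p + q) / 2 - q)
            for y in range(mb_y, min(mb_y + 16, h), 4):      # horizontal edges
                if y == 0:
                    continue
                p = out[y - 1, mb_x:mb_x + 16].copy()
                q = out[y, mb_x:mb_x + 16].copy()
                out[y - 1, mb_x:mb_x + 16] = p + strength * ((p + q) / 2 - p)
                out[y, mb_x:mb_x + 16] = q + strength * ((p + q) / 2 - q)
    return out
```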
In an example of deringing processing, a 2-D filter can be adaptively applied to smooth out areas near edges. Edge pixels undergo little or no filtering in order to avoid blurring.
GOP Partitioner
After deblocking and deringing, the progressive video is processed by a GOP partitioner 612. GOP partitioning can include detecting shot changes, generating complexity maps (e.g., temporal, spatial bandwidth maps), and adaptive GOP partitioning. These are each described below.
A. Scene Change Detection
Shot detection relates to determining when a frame in a group of pictures (GOP) exhibits data that indicates a scene change has occurred. Generally, within a GOP, the frames may have no significant changes in any two or three (or more) adjacent frames, or there may be slow changes, or fast changes. Of course, these scene change classifications can be further broken down to a greater level of changes depending on a specific application, if necessary.
Detecting shot or scene changes is important for efficient encoding of video. Typically, when a GOP is not changing significantly, an I-frame at the beginning of the GOP followed by a number of predictive frames can sufficiently encode the video so that subsequent decoding and display of the video is visually acceptable. However, when a scene is changing, either abruptly or slowly, additional I-frames and less predictive encoding (P-frames and B-frames) may be necessary to produce subsequently decoded, visually acceptable results.
Shot detection and encoding systems and methods that improve the performance of existing encoding systems are described below. Such aspects can be implemented in the GOP partitioner 612 of the preprocessor 226 (
The illustrative example of a shot detector described herein only needs to utilize statistics from a previous frame, a current frame, and a next frame, and accordingly has very low latency. The shot detector differentiates several different types of shot events, including abrupt scene change, cross-fading and other slow scene change, and camera flashlight. By determining different types of shot events with different strategies in the encoder, the encoding efficiency and visual quality are enhanced.
Scene change detection can be used by any video coding system to intelligently conserve bits by inserting I-frames only where needed rather than at a fixed interval. In some aspects, the content information obtained by the preprocessor (e.g., either incorporated in metadata or calculated by the preprocessor 226) can be used for scene change detection. For example, depending on the content information, threshold values and other criteria described below may be dynamically adjusted for different types of video content.
Video encoding usually operates on a structured group of pictures (GOP). A GOP normally starts with an intra-coded frame (I-frame), followed by a series of P (predictive) or B (bi-directional) frames. Typically, an I-frame can store all the data required to display the frame, a B-frame relies on data in the preceding and following frames (e.g., it only contains data changed from the preceding frame or different from data in the next frame), and a P-frame contains data that has changed from the preceding frame. In common usage, I-frames are interspersed with P-frames and B-frames in encoded video. In terms of size (e.g., number of bits used to encode the frame), I-frames are typically much larger than P-frames, which in turn are larger than B-frames. For efficient encoding, transmission, and decoding processing, the length of a GOP should be long enough to reduce the efficiency loss from large I-frames, and short enough to combat mismatch between encoder and decoder, or channel impairment. In addition, macroblocks (MBs) in P-frames can be intra-coded for the same reason.
Scene change detection can be used for a video encoder to determine a proper GOP length and insert I-frames based on the GOP length, instead of inserting an often unneeded I-frame at a fixed interval. In a practical streaming video system, the communication channel is usually impaired by bit errors or packet losses. Where to place I frames or I MBs may significantly impact decoded video quality and viewing experience. One encoding scheme is to use intra-coded frames for pictures or portions of pictures that have significant change from collocated previous pictures or picture portions. Normally these regions cannot be predicted effectively and efficiently with motion estimation, and encoding can be done more efficiently if such regions are exempted from inter-frame coding techniques (e.g., encoding using B-frames and P-frames). In the context of channel impairment, those regions are likely to suffer from error propagation, which can be reduced or eliminated (or nearly so) by intra-frame encoding.
Portions of the GOP video can be classified into two or more categories, where each region can have different intra-frame encoding criteria that may depend on the particular implementation. As an example, the video can be classified into three categories: abrupt scene changes, cross-fading and other slow scene changes, and camera flashlights.
Abrupt scene changes include frames that are significantly different from the previous frame, usually caused by a camera operation. Since the content of these frames is different from that of the previous frame, abrupt scene change frames should be encoded as I-frames.
Cross-fading and other slow scene changes include slow switching of scenes, usually caused by computer processing of camera shots. Gradual blending of two different scenes may look more pleasing to human eyes, but poses a challenge to video coding. Motion compensation cannot reduce the bitrate of those frames effectively, and more intra MBs can be updated for these frames.
Camera flashlights, or camera flash events, occur when the content of a frame includes camera flashes. Such flashes are relatively short in duration (e.g., one frame) and extremely bright, such that the pixels in a frame portraying the flashes exhibit unusually high luminance relative to a corresponding area on an adjacent frame. Camera flashlights shift the luminance of a picture suddenly and swiftly. Usually the duration of a camera flashlight is shorter than the temporal masking duration of the human vision system (HVS), which is typically defined to be 44 ms. Human eyes are not sensitive to the quality of these short bursts of brightness and therefore they can be encoded coarsely. Because the flashlight frames cannot be handled effectively with motion compensation and they are poor prediction candidates for future frames, coarse encoding of these frames does not reduce the encoding efficiency of future frames. Scenes classified as flashlights should not be used to predict other frames because of the “artificial” high luminance, and other frames cannot effectively be used to predict these frames for the same reason. Once identified, these frames can be taken out because they can require a relatively high amount of processing. One option is to remove the camera flashlight frames and encode a DC coefficient in their place; such a solution is simple, computationally fast and saves many bits.
When any of the above categories of frames is detected, a shot event is declared. Shot detection is not only useful to improve encoding quality, it can also aid in video content searching and indexing. One illustrative aspect of a scene detection process is described hereinbelow. In this example, a shot detection process first calculates information, or metrics, for a selected frame being processed for shot detection. The metrics can include information from bi-directional motion estimation and compensation processing of the video, and other luminance-based metrics.
To perform bi-directional motion estimation/compensation, a video sequence can be preprocessed with a bi-directional motion compensator that matches every 8×8 block of the current frame with blocks in the two most adjacent neighboring frames, one in the past, and one in the future. The motion compensator produces motion vectors and difference metrics for every block.
After determining bi-directional motion information (e.g., motion information which identifies best-matched MBs in corresponding adjacent frames), additional metrics can be generated (e.g., by a motion compensator in the GOP partitioner 612 or another suitable component) by various comparisons of the current frame to the next frame and the previous frame. The motion compensator can produce a difference metric for every block. The difference metric can be a sum of squared differences (SSD) or a sum of absolute differences (SAD). Without loss of generality, SAD is used here as an example.
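A brute-force version of this per-block matching, accumulating the forward and backward SAD sums over a frame, might look like the following; the 8x8 block size matches the text, while the full-search window and function names are illustrative.

```python
import numpy as np

def block_sad(block: np.ndarray, ref: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(block.astype(np.int32) - ref.astype(np.int32)).sum())

def best_sad(cur: np.ndarray, ref: np.ndarray, bx: int, by: int,
             bsize: int = 8, search: int = 8) -> float:
    """Full-search matching of one block of the current frame against a
    reference frame; returns the smallest SAD in a +/- search window."""
    h, w = ref.shape
    block = cur[by:by + bsize, bx:bx + bsize]
    best = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= h - bsize and 0 <= x <= w - bsize:
                best = min(best, block_sad(block, ref[y:y + bsize, x:x + bsize]))
    return best

def frame_sads(prev: np.ndarray, cur: np.ndarray, nxt: np.ndarray, bsize: int = 8):
    """Sum the per-block best SADs against the previous frame (SAD_P)
    and against the next frame (SAD_N)."""
    sad_p = sad_n = 0.0
    for by in range(0, cur.shape[0] - bsize + 1, bsize):
        for bx in range(0, cur.shape[1] - bsize + 1, bsize):
            sad_p += best_sad(cur, prev, bx, by, bsize)
            sad_n += best_sad(cur, nxt, bx, by, bsize)
    return sad_p, sad_n
```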
For every frame, a SAD ratio, also referred to as a “contrast ratio,” is calculated as below:
where SADP and SADN are the sum of absolute differences of the forward and the backward difference metric, respectively. It should be noted that the denominator contains a small positive number ε to prevent the “divide-by-zero” error. The numerator also contains an ε to balance the effect of the unity in the denominator. For example, if the previous frame, the current frame, and the next frame are identical, motion search should yield SADP=SADN=0. In this case, the above calculation generates γ=1 instead of 0 or infinity.
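The ratio itself is not reproduced above; given the roles of SADP, SADN, and the ε terms just described, it presumably takes the form

```latex
\gamma \;=\; \frac{\varepsilon + \mathrm{SAD}_P}{\varepsilon + \mathrm{SAD}_N},
```

which indeed yields γ = 1 when SADP = SADN = 0.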
A luminance histogram can be calculated for every frame. Typically the multimedia images have a luminance depth (e.g., number of “bins”) of eight bits. The luminance depth used for calculating the luminance histogram according to some aspects can be set to 16 to obtain the histogram. In other aspects, the luminance depth can be set to an appropriate number which may depend upon the type of data being processed, the computational power available, or other predetermined criteria. In some aspects, the luminance depth can be set dynamically based on a calculated or received metric, such as the content of the data.
The equation below illustrates one example of calculating a luminance histogram difference (lambda):
where NPi is the number of blocks in the ith bin for the previous frame, NCi is the number of blocks in the ith bin for the current frame, and N is the total number of blocks in a frame. If the luminance histograms of the previous and the current frames are completely dissimilar (or disjoint), then λ=2.
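Consistent with the 16-bin histogram and the λ = 2 bound for disjoint histograms stated here, the histogram difference presumably has the form:

```latex
\lambda \;=\; \frac{\sum_{i=1}^{16}\left|N_{P i}-N_{C i}\right|}{N}.
```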
Using this information, a frame difference metric (D) is calculated as follows:
where A is a constant chosen by application,
The selected (current) frame is classified as an abrupt scene change frame if the frame difference metric meets the criterion shown in Equation 9:
where A is a constant chosen by application, and T1 is a threshold.
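The metric and criterion are not reproduced above; one form consistent with the normalization by γP and the non-linear use of λ described below is:

```latex
D \;=\; \frac{\gamma_C}{\gamma_P} \;+\; A\,\lambda\,(2\lambda+1),
\qquad \text{declare an abrupt scene change when } D \ge T_1 .
```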
In one example, simulation shows that setting A=1 and T1=5 achieves good detection performance. If the current frame is an abrupt scene change frame, then γC should be large and γP should be small. The ratio
can be used instead of γC alone so that the metric is normalized to the activity level of the context.
It should be noted that the above criterion uses the luminance histogram difference lambda (λ) in a non-linear way.
The current frame is determined to be a cross-fading or slow scene change if the scene strength metric D meets the criterion shown in Equation 10:
T2≦D<T1 [10]
for a certain number of continuous frames, where T1 is the same threshold used above and T2 is another threshold value.
A flashlight event usually causes the luminance histogram to shift to the brighter side. In this illustrative aspect, the luminance histogram statistics are used to determine if the current frame comprises camera flashlights. A shot detection process can determine if the luminance of the current frame is greater than the luminance of the previous frame by a certain threshold T3, and the luminance of the current frame is greater than the luminance of the next frame by the threshold T3, as shown in Equations 11 and 12:
If the above criteria are not met, the current frame is not classified as comprising camera flashlights. If the criteria are met, the shot detection process determines if the backward difference metric SADP and the forward difference metric SADN are greater than a certain threshold T4, as illustrated in the equations below:
SADP≧T4 [13]
SADN≧T4 [14]
where
The shot detection process determines camera flash events by first determining if the luminance of a current frame is greater than the luminance of the previous frame and the luminance of the next frame. If not, the frame is not a camera flash event; but if so, it may be. The shot detection process then can evaluate whether the backward difference metric is greater than the threshold T4, and whether the forward difference metric is greater than the threshold T4; if both these conditions are satisfied, the shot detection process classifies the current frame as having camera flashlights. If the criteria are not met, the frame is not classified as any type of shot event, or it can be given a default classification that identifies the encoding to be done on the frame (e.g., drop frame, encode as I-frame).
Some exemplary values for T1, T2, T3, and T4 are shown above. Typically, these threshold values are selected through testing of a particular implementation of shot detection. In some aspects, one or more of the threshold values T1, T2, T3, and T4 are predetermined and such values are incorporated into the shot classifier in the encoding device. In some aspects, one or more of the threshold values T1, T2, T3, and T4 can be set during processing (e.g., dynamically) based on using information (e.g., metadata) supplied to the shot classifier or based on information calculated by the shot classifier itself.
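Putting the three tests together, a compact sketch of the decision logic might read as follows. The frame difference metric uses the form assumed above; only A = 1 and T1 = 5 are quoted in the text, so T2, T3, and T4 here are placeholders, and the cross-fade branch would additionally need to persist over several consecutive frames.

```python
def classify_shot(gamma_c, gamma_p, lam, luma_cur, luma_prev, luma_next,
                  sad_p, sad_n,
                  A=1.0, T1=5.0, T2=2.0, T3=10.0, T4=40.0):
    """Classify the current frame from the metrics described above.
    T2, T3 and T4 are illustrative placeholders."""
    D = gamma_c / gamma_p + A * lam * (2 * lam + 1)

    if D >= T1:
        return "abrupt_scene_change"

    # Camera flash: the current frame is much brighter than both neighbours
    # and both motion-compensated differences are large.
    if (luma_cur - luma_prev >= T3 and luma_cur - luma_next >= T3
            and sad_p >= T4 and sad_n >= T4):
        return "camera_flash"

    if T2 <= D < T1:
        return "cross_fade_or_slow_change"   # confirmed over several frames

    return "normal"
```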
Encoding the video using the shot detection information is typically performed in the encoder, but is described here for completeness of the shot detection disclosure. Referring to
In the above-described aspect, the amount of difference between the frame to be compressed and its adjacent two frames is indicated by a frame difference metric D. If a significant amount of a one-way luminance change is detected, it signifies a cross-fade effect in the frame. The more prominent the cross-fade is, the more gain may be achieved by using B-frames. In some aspects, a modified frame difference metric is used as shown in the equation below:
where dP=|YC−YP| and dN=|YC−YN| are the luma difference between the current frame and the previous frame, and the luma difference between the current frame and the next frame, respectively, Δ represents a constant that can be determined in normal experimentation as it can depend on the implementation, and α is a weighting variable having a value between 0 and 1.
B. Bandwidth Map Generation
The preprocessor 226 (
Human visual quality V can be a function of both encoding complexity C and allocated bits B (also referred to as bandwidth).
To achieve constant visual quality, a bandwidth (Bi) is assigned to the ith object (frame or MB) to be encoded that satisfies the criteria expressed in the two equations immediately below:
In the two equations immediately above, Ci is the encoding complexity of the ith object, B is the total available bandwidth, and V is the achieved visual quality for an object. Human visual quality is difficult to formulate as an equation. Therefore, the above equation set is not precisely defined. However, if it is assumed that the 3-D model is continuous in all variables, bandwidth ratio (Bi/B) can be treated as unchanged within the neighborhood of a (C, V) pair. The bandwidth ratio βi is defined in the equation shown below:
βi=Bi/B [18]
Bit allocation can then be defined as expressed in the following equations:
where δ indicates the “neighborhood.”
The encoding complexity is affected by human visual sensitivity, both spatial and temporal. Girod's human vision model is an example of a model that can be used to define the spatial complexity. This model considers the local spatial frequency and ambient lighting. The resulting metric is called Dcsat. At a pre-processing point in the process, whether a picture is to be intra-coded or inter-coded is not known and bandwidth ratios for both are generated. Bits are allocated according to the ratio between βINTRA of different video objects. For intra-coded pictures, the bandwidth ratio is expressed in the following equation:
βINTRA = β0INTRA·log10(1 + αINTRA·Y²·Dcsat) [20]
In the equation above, Y is the average luminance component of a macroblock, αINTRA is a weighting factor for the luminance-squared and Dcsat terms following it, and β0INTRA is a normalization factor to guarantee
For example, a value for αINTRA=4 achieves good visual quality. Content information (e.g., a content classification) can be used to set αINTRA to a value that corresponds to a desired good visual quality level for the particular content of the video. In one example, if the video content comprises a “talking head” news broadcast, the visual quality level may be set lower because the information image or displayable portion of the video may be deemed of less importance than the audio portion, and less bits can be allocated to encode the data. In another example, if the video content comprises a sporting event, content information may be used to set αINTRA to a value that corresponds to a higher visual quality level because the displayed images may be more important to a viewer, and accordingly more bits can be allocated to encode the data.
To understand this relationship, it should be noted that bandwidth is allocated logarithmically with encoding complexity. The luminance squared term Y2 reflects the fact that coefficients with larger magnitude use more bits to encode. To prevent the logarithm from getting negative values, unity is added to the term in the parenthesis. Logarithms with other bases can also be used.
The temporal complexity is determined by a measure of a frame difference metric, which measures the difference between two consecutive frames, taking into account the amount of motion (e.g., motion vectors) along with a residual difference measure such as the sum of the absolute differences (SAD).
Bit allocation for inter-coded pictures can consider spatial as well as temporal complexity. This is expressed below:
βINTER = β0INTER·log10(1 + αINTER·SSD·Dcsat·exp(−γ‖MVP + MVN‖²)) [21]
In the above equation, MVP and MVN are the forward and the backward motion vectors for the current MB (see
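A direct transcription of Equations 20 and 21 into code, plus a proportional bit allocation step, is sketched below. Only αINTRA = 4 is suggested by the text; the remaining constants, and the normalization of the β0 factors so that the ratios sum to one over a frame, are left as illustrative placeholders.

```python
import math

def beta_intra(y_mean, d_csat, alpha_intra=4.0, beta0_intra=1.0):
    """Equation 20: bandwidth ratio of an intra-coded macroblock.
    beta0_intra is the normalization factor (left at 1 here; in practice it
    is chosen so that the ratios over a frame sum to 1)."""
    return beta0_intra * math.log10(1.0 + alpha_intra * y_mean ** 2 * d_csat)

def beta_inter(ssd, d_csat, mvp, mvn,
               alpha_inter=1.0, beta0_inter=1.0, gamma=1.0):
    """Equation 21: bandwidth ratio of an inter-coded macroblock. mvp and mvn
    are the forward and backward motion vectors as (dx, dy) tuples; the
    constants are implementation-dependent placeholders."""
    mv_sum_sq = (mvp[0] + mvn[0]) ** 2 + (mvp[1] + mvn[1]) ** 2
    return beta0_inter * math.log10(
        1.0 + alpha_inter * ssd * d_csat * math.exp(-gamma * mv_sum_sq))

def allocate_bits(total_bits, betas):
    """Distribute the available bandwidth proportionally to the ratios."""
    s = sum(betas)
    return [total_bits * b / s for b in betas]
```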
C. Adaptive GOP Partitioning
In another illustrative example of processing that may be performed by the preprocessor 226, the GOP Partitioner 612 of
The concept underlying the use of P-frames and B-frames, and, in more recent compression algorithms, the skipping of frames to reduce the data rate needed to represent the video, is the elimination of temporal redundancy. When temporal redundancy is high—i.e., there is little change from picture to picture—use of P, B, or skipped pictures efficiently represents the video stream, because I or P pictures decoded earlier are used later as references to decode other P or B pictures.
Adaptive GOP partitioning is based on using this concept adaptively. Differences between frames are quantified and a decision to represent the picture by a I, P, B, or skipped frame is automatically made after suitable tests are performed on the quantified differences. An adaptive structure has advantages not available in a fixed GOP structure. A fixed structure would ignore the possibility that little change in content has taken place; an adaptive procedure would allow far more B-frames to be inserted between each I and P, or two P frames, thereby reducing the number of bits needed to adequately represent the sequence of frames. Conversely when the change in video content is significant, the efficiency of P frames is greatly reduced because the difference between the predicted and the reference frames is too large. Under these conditions, matching objects may fall out of the motion search regions, or the similarity between matching objects is reduced due to distortion caused by changes in camera angle. At that point the P frames or the I and its adjacent P frame should be chosen to be closer to each other and fewer B-frames should be inserted. A fixed GOP could not make that adjustment.
In the system disclosed here, these conditions are automatically sensed. The GOP structure is flexible and is made to adapt to these changes in content. The system evaluates a frame difference metric, which can be thought of as measure of distance between frames, with the same additive properties of distance. In concept, given frames F1, F2, and F3 having the inter-frame distances d12 and d23, the distance between F1 and F3 is taken as being at least d12+d23. Frame assignments are made on the basis of this distance-like metric.
The GOP partitioner operates by assigning picture types to frames as they are received. The picture type indicates the method of prediction that may be required in coding each block:
I-pictures are coded without reference to other pictures. Since they stand alone they provide access points in the data stream where decoding can begin. An I encoding type is assigned to a frame if the “distance” to its predecessor frame exceeds a scene change threshold.
P-pictures can use the previous I or P pictures for motion compensated prediction. They use blocks in the previous fields or frames that may be displaced from the block being predicted as a basis for encoding. After the reference block is subtracted from the block being considered, the residual block is encoded, typically using the discrete cosine transform for the elimination of spatial redundancy. A P encoding type is assigned to a frame if the “distance” between it and the last frame assigned to be a P frame exceeds a second threshold, which is typically less than the first.
B-frame pictures can use the previous and next P- or I-pictures for motion compensation as described above. A block in a B picture can be forward, backward or bi-directionally predicted; or it could be intra-coded without reference to other frames. In H.264 a reference block can be a linear combination of as many as 32 blocks from as many frames. If the frame cannot be assigned to be an I or P type, it is assigned to be a B type, if the “distance” from it to its immediate predecessor is greater than a third threshold, which typically is less than the second threshold.
If the frame cannot be assigned to be a B-frame, it is assigned “skip frame” status. This frame can be skipped because it is virtually a copy of a previous frame.
Evaluating a metric that quantifies the difference between adjacent frames in the display order is the first part of this processing that takes place. This metric is the distance referred to above; with it, every frame is evaluated for its proper type. Thus, the spacing between the I and adjacent P, or two successive P frames, can be variable. Computing the metric begins by processing the video frames with a block-based motion compensator, a block being the basic unit of video compression, composed usually of 16×16 pixels, though other block sizes such as 8×8, 4×4 and 8×16 are possible. For frames consisting of two deinterlaced fields, the motion compensation can be done on a field basis, the search for the reference blocks taking place in fields rather than frames. For a block in the first field of the current frame a forward reference block is found in fields of the frame that follows it; likewise a backward reference block is found in fields of the frame that immediately precedes the current field. The current blocks are assembled into a compensated field. The process continues with the second field of the frame. The two compensated fields are combined to form a forward and a backward compensated frame.
For frames created in the inverse telecine 606, the search for reference blocks is on a frame basis only, since only reconstructed film frames are available. Two reference blocks and two differences, forward and backward, are found, leading also to a forward and backward compensated frame. In summary, the motion compensator produces motion vectors and difference metrics for every block; but a block is part of an NTSC field in the case of the output of deinterlacer 605 being processed and is part of a film frame if the inverse telecine's output is processed. Note that the differences in the metric are evaluated between a block in the field or frame being considered and a block that best matches it, either in a preceding field or frame or a field or frame that immediately follows it, depending on whether a forward or backward difference is being evaluated. Only luminance values enter into this calculation.
The motion compensation step thus generates two sets of differences. These are between blocks of current values of luminance and the luminance values in reference blocks taken from frames that are immediately ahead and immediately behind the current one in time. The absolute value of each forward and each backward difference is determined for each pixel and each is separately summed over the entire frame. Both fields are included in the two summations when the deinterlaced NTSC fields that comprise a frame are processed. In this way, SADP, and SADN, the summed absolute values of the forward and backward differences are found.
For every frame a SAD ratio is calculated using the relationship,
where SADP and SADN are the summed absolute values of the forward and backward differences, respectively. A small positive number ε is added to the numerator to prevent the “divide-by-zero” error. A similar ε term is added to the denominator, further reducing the sensitivity of γ when either SADP or SADN is close to zero.
In an alternate aspect, the difference metric can be the SSD, the sum of squared differences; the SAD, the sum of absolute differences; or the SATD, in which the blocks of pixel values are transformed by applying the two-dimensional Discrete Cosine Transform to them before differences in block elements are taken. The sums are evaluated over the area of active video, though a smaller area may be used in other aspects.
The luminance histogram of every frame as received (non-motion compensated) is also computed. The histogram operates on the DC coefficient, i.e., the (0,0) coefficient, in the 16×16 array of coefficients that is the result of applying the two dimensional Discrete Cosine Transform to the block of luminance values if it were available. Equivalently the average value of the 256 values of luminance in the 16×16 block may be used in the histogram. For images whose luminance depth is eight bits, the number of bins is set at 16. The next metric evaluates the histogram difference
In the above, NPi is the number of blocks from the previous frame in the ith bin, NCi is the number of blocks from the current frame that belong in the ith bin, and N is the total number of blocks in a frame.
These intermediate results are assembled to form the current frame difference metric as
where γC is the SAD ratio based on the current frame and γP is the SAD ratio based on the previous frame. If a scene has smooth motion and its luma histogram barely changes, then D≈1. If the current frame displays an abrupt scene change, then γC will be large and γP should be small. The ratio γC/γP, rather than γC alone, is used so that the metric is normalized to the activity level of the context.
If the current frame difference is less than the scene change threshold, the NO path is followed to block 4212, where the current frame difference is added to the accumulated frame difference. Continuing through the flowchart at decision block 4214, the accumulated frame difference is compared with a threshold t, which is in general less than the scene change threshold. If the accumulated frame difference is larger than t, control transfers to block 4216 and the frame is assigned to be a P frame; the accumulated frame difference is then reset to zero in step 4218. If the accumulated frame difference is less than t, control transfers from block 4214 to block 4220. There the current frame difference is compared with τ, which is less than t. If the current frame difference is smaller than τ, the frame is assigned to be skipped in block 4222 and then the process returns; if the current frame difference is larger than τ, the frame is assigned to be a B-frame in block 4226.
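The frame-type decision described above can be sketched in Python as follows; the I-frame branch taken when the current frame difference meets or exceeds the scene change threshold, and the convention of returning the updated accumulator to the caller, are assumptions used for illustration:
def assign_frame_type(frame_diff, accumulated, scene_change_threshold, t, tau):
    # tau < t < scene_change_threshold
    if frame_diff >= scene_change_threshold:
        return 'I', 0.0                # assumed: an abrupt scene change starts a new GOP with an I frame
    accumulated += frame_diff          # block 4212: add the current difference to the accumulated difference
    if accumulated > t:                # block 4214
        return 'P', 0.0                # blocks 4216/4218: assign a P frame and reset the accumulator
    if frame_diff < tau:               # block 4220
        return 'skip', accumulated     # block 4222: the frame is skipped
    return 'B', accumulated            # block 4226: assign a B-frame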
Encoder
Referring back to
Base Layer and Enhancement Layer Encoding
The encoder 228 can be an SNR scalable encoder, which can encode the raw video and the metadata from the preprocessor 226 into a first group of encoded data, also referred to herein as a base layer, and one or more additional groups of encoded data, also referred to herein as enhancement layers. An encoding algorithm generates base layer and enhancement layer coefficients which may be combined at the decoder when both layers are available for decoding. When both layers are not available, the encoding of the base layer allows it to be decoded as a single layer.
One aspect of such a multilayer encoding process is described in reference to
At block 323, an encoder encodes base layer data and enhancement layer data for P and/or B-frames in the GOP being processed. Encoding means, such as encoder 228, can perform the encoding at block 323. At block 325, the encoding process checks if there are more P or B-frames to encode. Encoding means, such as SNR scalable encoder 228, can perform act 325. If more P or B-frames remain, step 323 is repeated until all the frames in the GOP have been encoded. P and B-frames comprise inter-coded macroblocks (inter-coded MBs), although there can be intra-coded MBs in P and B-frames, as will be discussed below.
In order for a decoder to distinguish between base layer and enhancement layer data, the encoder 228 encodes overhead information, block 327. The types of overhead information include, for example, data identifying the number of layers, data identifying a layer as a base layer, data identifying a layer as an enhancement layer, data identifying inter-relationships between layers (such as, layer 2 is an enhancement layer for base layer 1, or layer 3 is an enhancement layer for enhancement layer 2), or data identifying a layer as a final enhancement layer in a string of enhancement layers. The overhead information can be contained in headers connected with the base and/or enhancement layer data that it pertains to, or contained in separate data messages. Encoding means, such as encoder 228 of
To have single layer decoding, the coefficients of the two layers must be combined before inverse quantization. Therefore, the coefficients of the two layers have to be generated interactively; otherwise this could introduce a significant amount of overhead. One reason for the increased overhead is that the base layer encoding and the enhancement layer encoding could use different temporal references. An algorithm is needed to generate base layer and enhancement layer coefficients that can be combined at the decoder before dequantization when both layers are available. At the same time, the algorithm should provide acceptable base layer video when the enhancement layer is not available or the decoder decides not to decode the enhancement layer for reasons such as, for example, power savings. The details of an illustrative example of such a process are discussed further below, following a brief discussion of standard predictive coding.
P-frames (or any inter-coded sections) can exploit temporal redundancy between a region in a current picture and a best matching prediction region in a reference picture. The location of the best matching prediction region in the reference frame can be encoded in a motion vector. The difference between the current region and the best matching reference prediction region is known as residual error (or prediction error).
The encoded quantized coefficients of residual error 343, along with encoded motion vector 341 can be used to reconstruct current macroblock 335 in the encoder for use as part of a reference frame for subsequent motion estimation and compensation. The encoder can emulate the procedures of a decoder for this P Frame reconstruction. The emulation of the decoder will result in both the encoder and decoder working with the same reference picture. The reconstruction process, whether done in an encoder, for further inter-coding, or in a decoder, is presented here. Reconstruction of a P Frame can be started after the reference frame (or a portion of a picture or frame that is being referenced) is reconstructed. The encoded quantized coefficients are dequantized 351 and then 2D Inverse DCT, or IDCT, 353 is performed resulting in decoded or reconstructed residual error 355. Encoded motion vector 341 is decoded and used to locate the already reconstructed best matching macroblock 357 in the already reconstructed reference picture 337. Reconstructed residual error 355 is then added to reconstructed best matching macroblock 357 to form reconstructed macroblock 359. Reconstructed macroblock 359 can be stored in memory, displayed independently or in a picture with other reconstructed macroblocks, or processed further for image enhancement.
B-frames (or any section coded with bi-directional prediction) can exploit temporal redundancy between a region in a current picture and a best matching prediction region in a previous picture and a best matching prediction region in a subsequent picture. The subsequent best matching prediction region and the previous best matching prediction region are combined to form a combined bi-directional predicted region. The difference between the current picture region and the best matching combined bi-directional prediction region is a residual error (or prediction error). The locations of the best matching prediction region in the subsequent reference picture and the best matching prediction region in the previous reference picture can be encoded in two motion vectors.
The transformed coefficients are then parsed into base layer and enhancement layer coefficients in selector 379. The parsing of selector 379 can take on several forms, as discussed below. One common feature of the parsing techniques is that the enhancement layer coefficient, C′enh, is calculated such that it is a differential refinement to the base layer coefficient C′base. Calculating the enhancement layer to be a refinement to the base layer allows a decoder to decode the base layer coefficient by itself and have a reasonable representation of the image, or to combine the base and enhancement layer coefficients and have a refined representation of the image. The coefficients selected by selector 379 are then quantized by quantizers 381 and 383. The quantized coefficients {tilde over (C)}′base and {tilde over (C)}′enh (calculated with quantizers 381 and 383 respectively) can be stored in memory or transmitted over a network to a decoder.
To match the reconstruction of the macroblock in a decoder, dequantizer 385 dequantizes the base layer residual error coefficients. The dequantized residual error coefficients are inverse transformed 387 and added 389 to the best matching macroblock found in buffer 371, resulting in a reconstructed macroblock that matches what will be reconstructed in the decoder. Quantizer 383, dequantizer 391, inverse transformer 393, adder 397 and buffer 373 perform similar calculations in enhancement loop 365 as were done in base layer loop 363. In addition, adder 393 is used to combine the dequantized enhancement layer and base layer coefficients used in the reconstruction of the enhancement layer. The enhancement layer quantizer and dequantizer will generally utilize a finer quantizer step size (a lower QP) than the base layer.
where the “min” function can be either a mathematical minimum or a minimum magnitude of the two arguments. Equation 25 is depicted as block 401 and Equation 26 is depicted as adder 510 in
Adder 407 computes the enhancement layer coefficient as shown in the following two equations:
C′enh=Cenh−Qb−1(Qb(C′base)) [28]
where C′base is given by Equation 27.
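The differential refinement of Equation 28 can be sketched in Python with Qb and Qe modeled as uniform scalar quantizers; the step sizes, the rounding rule, and the pass-through choice for C′base (the disclosure permits several selector strategies per Equations 25-27) are assumptions:
def quantize(c, step):
    # Uniform scalar quantizer Q: coefficient -> integer level
    return int(round(c / step))

def dequantize(level, step):
    # Inverse quantizer Q^-1: level -> reconstructed coefficient
    return level * step

def layer_coefficients(c_base, c_enh, base_step=8.0, enh_step=2.0):
    # Assumed selector: pass the base coefficient through unchanged.
    c_base_prime = c_base
    # Equation 28: the enhancement coefficient is a differential refinement of the
    # reconstructed (quantized, then dequantized) base layer coefficient, so the two
    # layers can be combined at the decoder while the base layer alone stays decodable.
    c_enh_prime = c_enh - dequantize(quantize(c_base_prime, base_step), base_step)
    return quantize(c_base_prime, base_step), quantize(c_enh_prime, enh_step)
The smaller enh_step reflects the finer quantizer step size (lower QP) generally used by the enhancement layer, as noted above.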
In addition to the base and enhancement layer residual error coefficients, the decoder needs information identifying how MB's are encoded. Encoding means such as encoder component 228 of
P-frames and B-frames can contain intra-coded MBs as well as inter MBs. It is common for hybrid video encoders to use rate distortion (RD) optimization to decide to encode certain macroblocks in P or B-frames as intra-coded MBs. In order to have single layer decoding where intra-coded MB's do not depend on enhancement layer inter MB's, any neighboring inter MBs are not used for spatial prediction of base layer intra-coded MBs. In order to keep the computational complexity unchanged for the enhancement layer decoding, for the intra-coded MBs in the base layer P or B-frame, the refinement at the enhancement layer could be skipped.
Intra-coded MBs in P or B-frames require many more bits than inter MBs. For this reason, intra-coded MBs in P or B-frames could be encoded only at base layer quality at a higher QP. This will introduce some deterioration in video quality, but this deterioration should be unnoticeable if it is refined in a later frame with the inter MB coefficients in the base and enhancement layer as discussed above. Two reasons make this deterioration unnoticeable. The first is a feature of the human visual system (HVS) and the other one is that Inter MBs refine intra MBs. With objects that change position from a first frame to a second frame, some pixels in the first frame are invisible in the second frame (to-be-covered information), and some pixels in the second frame are visible for the first time (uncovered information). Human eyes are not sensitive to the uncovered and to-be-covered visual information. So for the uncovered information, even though it is encoded at a lower quality, the eyes may not tell the difference. If the same information remains in the following P frame, there will be a high chance that the following P frame at the enhancement layer can refine it because the enhancement layer has lower QP.
Another common technique that introduces intra-coded MBs in P or B-frames is known as Intra Refresh. In this case, some MBs are coded as intra-coded MBs even though standard R-D optimization would dictate that they should be inter-coded MBs. These intra-coded MBs, contained in the base layer, can be encoded with either QPb or QPe. If QPe is used for the base layer, then no refinement is needed at the enhancement layer. If QPb is used for the base layer, then refinement may be needed; otherwise the drop of quality at the enhancement layer will be noticeable. Since inter-coding is more efficient than intra-coding in the sense of coding efficiency, these refinements at the enhancement layer will be inter-coded. This way, the base layer coefficients will not be used for the enhancement layer, and the quality is improved at the enhancement layer without introducing new operations.
B-frames are commonly used in enhancement layers because of the high compression qualities they offer. However, B-frames may have to reference intra-coded MBs of a P frame. If the pixels of the B-frame were to be encoded at enhancement layer quality, it could require too many bits due to the lower quality of the P frame intra-coded MBs, as discussed above. By taking advantage of the qualities of the HVS, as discussed above, the B-frame MBs could be encoded at a lower quality when referencing lower quality intra-coded MB's of P frames.
One extreme case of intra-coded MBs in P or B-frames is when all MBs in a P or B-frame are encoded in Intra mode due to the presence of a scene change in the video being encoded. In this case the whole frame can be coded at the base layer quality with no refinement at the enhancement layer. If a scene change occurs at a B-frame, and B-frames are encoded only in the enhancement layer, then the B-frame could be encoded at base layer quality or simply dropped. If a scene change occurs at a P frame, no changes may be needed, but the P frame could be dropped or encoded at base layer quality. Scalable layer encoding is further described in U.S. patent application Ser. No. 11/373,604, filed on Mar. 9, 2006 [Attorney docket/ref. no. 050078] entitled “S
Encoder First Pass Portion
The encoder 228 receives metadata and raw video from the preprocessor 226. The metadata can include any metadata received or calculated by the preprocessor 226, including metadata related to content information of the video. The first pass portion 702 of encoder 228 illustrates exemplary processes that can be included in first pass encoding 702, which is described below in terms of its functionality. As one of skill in the art will know, such functionality can be embodied in various forms (e.g., hardware, software, firmware, or a combination thereof).
As stated above, the content classification module 712 receives the metadata and raw video supplied by the preprocessor 226. In some examples, the preprocessor 226 calculates content information from the multimedia data and provides the content information to the content classification module 712 (e.g., in the metadata), which can use the content information to determine a content classification for the multimedia data. In some other aspects, the content classification module 712 is configured to determine various content information from the multimedia data, and can also be configured to determine a content classification.
The content classification module 712 can be configured to determine a different content classification for video having different types of content. The different content classifications can result in different parameters being used in aspects of encoding the multimedia data, for example, in determining a bit rate (e.g., bit allocation) for determining quantization parameters, in motion estimation, scalability, error resiliency, maintaining optimal multimedia data quality across channels, and in fast channel switching schemes (e.g., forcing I-frames periodically to allow fast channel switching). According to one example, the encoder 228 is configured to determine rate-distortion (R-D) optimization and bit rate allocations based on the content classification. Determining a content classification allows multimedia data to be compressed to a given quality level corresponding to a desired bit rate based on the content classification. Also, by classifying the content of the multimedia data (e.g., determining a content classification based on the Human Visual System), the resulting perceptive quality of communicated multimedia data on a display of a receiving device is made dependent on the video content.
As an example of a procedure that content classification module 712 undergoes to classify content,
Content Information
The content classification module 712 can be configured to calculate a variety of content information from the multimedia data, including content-related metrics such as spatial complexity, temporal complexity, contrast ratio values, standard deviations, and frame difference metrics, described further below.
The content classification module 712 can be configured to determine spatial complexity and temporal complexity of the multimedia data, and also to associate a texture value to the spatial complexity and a motion value to the temporal complexity. The content classification module 712 receives preprocessed content information relating to the contents of the multimedia data being encoded from the preprocessor 226, or alternatively, the preprocessor 226 can be configured to calculate the content information. As described above, the content information can include, for example, one or more Dcsat values, contrast ratio values, motion vectors (MVs), and sum of absolute differences (SADs).
In general, multimedia data includes one or more sequences of images, or frames. Each frame can be broken up into blocks of pixels for processing. Spatial complexity is a broad term which generally describes a measure of the level of spatial detail within a frame. Scenes with mainly plain, unchanging or slowly changing areas of luminance and chrominance have low spatial complexity. The spatial complexity is associated with the texture of the video data. Spatial complexity is based on, in this aspect, a human visual sensitivity metric called Dcsat, which is calculated for each block as a function of local spatial frequency and ambient lighting. Those of ordinary skill in the art are aware of techniques for using spatial frequency patterns and lighting and contrast characteristics of visual images to take advantage of the human visual system. A number of sensitivity metrics are known for taking advantage of the perceptual limitations of the human visual system and could be used with the methods described herein.
Temporal complexity is a broad term which is used to generally describe a measure of the level of motion in multimedia data as referenced between frames in a sequence of frames. Scenes (e.g., sequences of frames of video data) with little or no motion have a low temporal complexity. Temporal complexity can be calculated for each macroblock, and can be based on the Dcsat value, motion vectors and the sum of absolute pixel differences between one frame and another frame (e.g., a reference frame).
The frame difference metric gives a measure of the difference between two consecutive frames, taking into account the amount of motion (e.g., motion vectors or MVs) along with the residual energy represented as the sum of absolute differences (SAD) between a predictor and the current macroblock. Frame difference also provides a measure of bidirectional or unidirectional prediction efficiency.
One example of a frame difference metric based on the motion information received from a pre-processor potentially performing motion compensated de-interlacing is as follows. The deinterlacer performs a bidirectional motion estimation and thus bidirectional motion vector and SAD information is available. A frame difference represented by SAD_MV for each macroblock can be derived as follows:
SAD_MV=log10 [SAD*exp(−min(1,MV))] [29]
where MV=Square_root(MVx^2+MVy^2), SAD=min(SADN, SADP), SADN is the SAD computed from the backward reference frame, and SADP is the SAD computed from the forward reference frame.
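Equation 29 translates directly into a short Python sketch; the small floor added inside the logarithm is an added guard against a zero SAD and is not part of the equation:
import math

def sad_mv(sad_p, sad_n, mv_x, mv_y):
    # Per-macroblock frame difference from bidirectional motion estimation;
    # larger motion discounts the residual energy before taking the logarithm.
    mv = math.sqrt(mv_x ** 2 + mv_y ** 2)
    sad = min(sad_n, sad_p)
    return math.log10(max(sad, 1e-6) * math.exp(-min(1.0, mv)))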
Another approach to estimating a frame difference was described above in reference to Equations 6-8. A SAD ratio (or contrast ratio) γ can be calculated as described above in Equation 6. A luminance histogram of every frame can also be determined, the histogram difference λ being calculated using Equation 7. The frame difference metric D can be calculated as shown in Equation 8.
In one illustrative example, a contrast ratio and a frame difference metric are utilized in the following manner to obtain a video content classification, which can reliably predict the features in a given video sequence. Although described here as occurring in the encoder 228, a preprocessor 226 can also be configured to determine a content classification (or other content information) and pass the content classification to the encoder 228 via metadata. The process described in the example below classifies the content into eight possible classes, similar to the classification obtained from the R-D curve based analysis. The classification process outputs a value in the range between 0 and 1 for each superframe depending on the complexity of the scene and the number of scene change occurrences in that superframe. The content classification module in the preprocessor can execute the following steps (1)-(5) for each superframe to obtain a content classification metric from the frame contrast and frame difference values (a simplified sketch follows the list).
1. Calculate Mean Frame Contrast and Frame Contrast Deviation from the macroblock contrast values.
2. Normalize Frame Contrast and Frame Difference values using the values obtained from simulations, which are 40 and 5 respectively.
3. Compute a content classification metric using, e.g., the generalized equation:
CCMetric=CCW1*I_Frame_Contrast_Mean+CCW2*Frame_Difference_Mean−CCW3*I_Contrast_Deviation^2*exp(CCW4*Frame_Difference_Deviation^2) [30]
where CCW1, CCW2, CCW3 and CCW4 are weighting factors. In this example, the values are chosen to be 0.2 for CCW1, 0.9 for CCW2, 0.1 for CCW3 and −0.00009 for CCW4.
4. Determine the number of scene changes in the super frame. Generally, a super frame refers to a group of pictures or frames that can be displayed in a particular time period. Typically, the time period is one second. In some aspects, a super frame comprises 30 frames (for 30 fps video). In other aspects, a super frame comprises 24 frames (for 24 fps video). Depending upon the number of scene changes, one of the following cases gets executed.
5. A correction may be used for the metric in the case of low motion scenes when the Frame Difference mean is less than 0.05. An offset (CCOFFSET) of 0.33 would be added to the CCMetric.
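A hedged Python sketch of steps (1)-(3) and (5) above for one superframe follows; step (4), the per-scene-change cases, is not reproduced in this text and is therefore omitted, and the use of the population standard deviation for the "deviation" statistics is an assumption:
import math
import statistics

def content_classification_metric(mb_contrast_values, frame_diff_values):
    contrast_mean = statistics.mean(mb_contrast_values) / 40.0    # steps 1-2: mean frame contrast, normalized by 40
    contrast_dev = statistics.pstdev(mb_contrast_values) / 40.0   # steps 1-2: contrast deviation, normalized by 40
    diff_mean = statistics.mean(frame_diff_values) / 5.0          # step 2: frame difference mean, normalized by 5
    diff_dev = statistics.pstdev(frame_diff_values) / 5.0         # step 2: frame difference deviation, normalized by 5
    ccw1, ccw2, ccw3, ccw4 = 0.2, 0.9, 0.1, -0.00009               # step 3: weighting factors from the text
    cc_metric = (ccw1 * contrast_mean + ccw2 * diff_mean
                 - ccw3 * contrast_dev ** 2 * math.exp(ccw4 * diff_dev ** 2))  # Equation 30
    if diff_mean < 0.05:                                           # step 5: low motion correction
        cc_metric += 0.33                                          # CCOFFSET
    return cc_metric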
The content classification module 712 uses the Dcsat value, motion vectors and/or the sum of absolute differences to determine a value indicating a spatial complexity for the macroblock (or designated amount of video data). The temporal complexity is determined by a measure of the frame difference metric (the difference between two consecutive frames taking into account the amount of motion, with motion vectors, and the sum of absolute differences between the frames).
In some aspects, the content classification module 712 can be configured to generate a bandwidth map. For example, bandwidth map generation can be performed by the content classification module 712 if the preprocessor 226 does not generate a bandwidth map.
Determining Texture and Motion Values
For each macroblock in the multimedia data, the content classification module 712 associates a texture value with the spatial complexity and a motion value with the temporal complexity. The texture value relates to the luminance values of the multimedia data, where a low texture value indicates small changes in the luminance values of neighboring pixels of the data, and a high texture value indicates large changes in the luminance values of neighboring pixels of the data. Once the texture and motion values are calculated, the content classification module 712 determines a content classification by considering both the motion and texture information. The content classification module 712 associates the texture for the video data being classified with a relative texture value, for example, “Low” texture, “Medium” texture, or “High” texture, which generally indicates the complexity of luminance values of the macroblocks. Also, the content classification module 712 associates the motion value calculated for the video data being classified with a relative motion value, for example, “Low” motion, “Medium” motion, or “High” motion, which generally indicates the amount of motion of the macroblocks. In alternative aspects, fewer or more categories for motion and texture can be used. A content classification metric is then determined by considering the associated texture and motion values. Further description of an illustrative aspect of content classification is disclosed in co-pending U.S. patent application Ser. No. 11/373,577 entitled “CONTENT CLASSIFICATION FOR MULTIMEDIA PROCESSING” filed on Mar. 10, 2006, assigned to the assignee hereof and hereby expressly incorporated by reference herein.
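For illustration only, a hypothetical Python sketch of combining relative texture and motion values into one of eight content classes; the thresholds and the lookup table below are invented placeholders, not values from this disclosure:
def classify_content(texture_value, motion_value):
    def bucket(value, low=0.33, high=0.66):
        # Map a normalized value to a relative "Low"/"Medium"/"High" label
        return 'Low' if value < low else ('Medium' if value < high else 'High')
    texture = bucket(texture_value)
    motion = bucket(motion_value)
    # Hypothetical mapping of (texture, motion) pairs to content classes 1-8
    table = {('Low', 'Low'): 1, ('Low', 'Medium'): 2, ('Low', 'High'): 3,
             ('Medium', 'Low'): 2, ('Medium', 'Medium'): 4, ('Medium', 'High'): 6,
             ('High', 'Low'): 3, ('High', 'Medium'): 6, ('High', 'High'): 8}
    return table[(texture, motion)]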
Rate Control Bit Allocation
As described herein, a multimedia data content classification can be used in encoding algorithms to effectively improve bit management while maintaining a constant perceptive quality of the video. For example, the classification metric can be used in algorithms for scene-change detection, encoding bit rate allocation control, and frame rate up conversion (FRUC). Compressor/decompressor (codec) systems and digital signal processing algorithms are commonly used in video data communications and can be configured to conserve bandwidth, but there is a trade-off between quality and bandwidth conservation. The best codecs provide the most bandwidth conservation while producing the least degradation of video quality.
In one illustrative example, the rate control bit allocation module 714 uses the content classification to determine a bit rate (e.g., the number of bits allocated for encoding the multimedia data) and stores the bit rate into memory for use by other processes and components of the encoder 228. A bit rate determined from the classification of the video data can help conserve bandwidth while providing multimedia data at a consistent quality level. In one aspect, a different bit rate can be associated with each of the eight different content classifications, and then that bit rate is used to encode the multimedia data. The resulting effect is that although the different content classifications of multimedia data are allocated a different number of bits for encoding, the perceived quality is similar or consistent when viewed on a display.
Generally, multimedia data with a higher content classification is indicative of a higher level of motion and/or texture and is allocated more bits when encoded. Multimedia data with a lower classification (indicative of less texture and motion) is allocated fewer bits. For multimedia data of a particular content classification, the bit rate can be determined based on a selected target perceived quality level for viewing the multimedia data. Multimedia data quality can be determined by humans viewing and grading the multimedia data. In some alternative aspects, estimates of the multimedia data quality can be made by automatic test systems using, for example, signal to noise ratio algorithms. In one aspect, a set of standard quality levels (e.g., five) and a corresponding bit rate needed to achieve each particular quality level are predetermined for multimedia data of each content classification. To determine a set of quality levels, multimedia data of a particular content classification can be evaluated by generating a Mean Opinion Score (MOS) that provides a numerical indication of a visually perceived quality of the multimedia data when it is encoded using a certain bit rate. The MOS can be expressed as a single number in the range 1 to 5, where 1 is the lowest perceived quality and 5 is the highest perceived quality. In other aspects, the MOS can have more than five or fewer than five quality levels, and different descriptions of each quality level can be used.
The bit rate for multimedia data of a certain content classification can be determined from the known relationship between the visually perceived quality level and the bit rate by selecting a target (e.g., desired) quality level. The target quality level used to determine the bit rate can be preselected, selected by a user, selected through an automatic process or a semi-automatic process requiring an input from a user or from another process, or selected dynamically by the encoding device or system based on predetermined criteria. A target quality level can be selected based on, for example, the type of encoding application or the type of client device that will be receiving the multimedia data.
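A hedged Python sketch of selecting a bit rate from a predetermined table indexed by content classification (1-8) and target quality level (1-5); the numbers in the table are illustrative placeholders, not predetermined values from the disclosure:
# Placeholder table: bit rate in kbps per (content classification, quality level)
BITRATE_TABLE = {cc: {q: 64 * cc * q for q in range(1, 6)} for cc in range(1, 9)}

def allocate_bit_rate(content_class, target_quality):
    # Look up the bit rate predetermined to reach the target perceived quality
    # for multimedia data of the given content classification.
    return BITRATE_TABLE[content_class][target_quality]
For example, allocate_bit_rate(6, 3) returns 1152 kbps with this placeholder table.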
In the illustrated example in
An exemplary flow diagram of the operation of the rate control module 714 is illustrated in
After the inputs at block 1002, process 1000 proceeds to block 1004 for initialization for encoding the bitstream. Concurrently, a buffer initialization 1006 is performed. Next, a GOP is initialized as shown in block 1008, with GOP bit allocation 1010 received as part of the initialization. After GOP initialization, flow proceeds to block 1012, where a slice is initialized. This initialization includes an update of the header bits as shown by block 1014. After the initializations of blocks 1004, 1008 and 1012 are performed, rate control (RC) for a basic unit or macroblock (MB) is carried out as shown by block 1016. As part of the rate control determination for a macroblock in block 1016, inputs are received via interfaces in the encoder 228. These inputs can include macroblock (MB) bit allocation 1018, an update of quadratic model parameters 1020, and an update of median absolute deviation from the median (“MAD,” a robust estimate of dispersion) parameters 1022. Next, process 1000 proceeds to block 1024 for execution of operations after encoding one picture. This procedure includes receiving an update of buffer parameters as shown by block 1026. Process 1000 then proceeds to output block 1028 where the rate control module 714 outputs quantization parameters QP for each macroblock MB to be used by a mode decision module 715 as shown in
Motion Estimation
Motion estimation module 720 receives inputs of metadata and raw video from the preprocessor 226, and provides output that can include block size, motion vectors, distortion metrics, and reference frame identifiers to a mode decision module 715.
Scalability R-D for Base and Enhancement Layer
Slice/Macroblock Ordering
The first pass portion 702 also includes a slice/macroblock ordering module 722, which receives an input from an error resilience module 740 in the second pass portion and provides slice alignment information to the mode decision module 715. Slices are chunks of independently decodable (entropy decoding) coded video data. Access units (AUs) are coded video frames, each comprising a set of NAL units always containing exactly one primary coded picture. In addition to the primary coded picture, an access unit may also contain one or more redundant coded pictures or other NAL units not containing slices or slice data partitions of a coded picture. The decoding of an access unit always results in a decoded picture.
Frames can be time division multiplexed blocks of physical layer packets (called a TDM capsule) that offer the highest time diversity. A superframe corresponds to one unit of time (e.g., 1 sec) and contains four frames. Aligning slice and AU boundaries to frame boundaries in the time domain results in the most efficient separation and localization of corrupted data. During a deep fade, most of the contiguous data in a TDM capsule is affected by errors. Due to time diversity, the remaining TDM capsules have a high probability of being intact. The uncorrupted data can be utilized to recover and conceal the lost data from the affected TDM capsule. Similar logic applies to frequency domain multiplexing (FDM), where frequency diversity is attained through separation in the frequency subcarriers that the data symbols modulate. Furthermore, similar logic applies to spatial diversity (through separation in transmitter and receiver antennas) and other forms of diversity often applied in wireless networks.
In order to align slices and AUs to frames, the outer code (FEC) block creation and MAC layer encapsulation should align as well.
The video bitstream comprises AUs as illustrated in
Since I-frames are typically large, for example, on the order of 10's of kbits, the overhead due to multiple slices is not a large proportion of the total I-frame size or total bitrate. Also, having more slices in an intra-coded AU enables better and more frequent resynchronization and more efficient spatial error concealment. Also, I-frames carry the most important information in the video bitstream since P and B-frames are predicted off of I-frames. I-frames also serve as random access points for channel acquisition.
Referring now to
Because P-frames are typically sized on the order of a few kbits, aligning the slices of a P-frame and an integer number of P-frames to frame boundaries enables error resilience without a detrimental loss of efficiency (for reasons similar to those for I-frames). Temporal error concealment can be employed in such aspects. Alternatively, dispersing consecutive P-frames such that they arrive in different frames provides added time diversity among P-frames, which is useful because temporal concealment is based on motion vectors and data from previously reconstructed I or P frames. B-frames can be extremely small (hundreds of bits) to moderately large (a few thousand bits). Hence, aligning an integer number of B-frames to frame boundaries is desirable to achieve error resiliency without a detrimental loss of efficiency.
Mode Decision Module
Encoder Second Pass Portion
Referring again to
Re-encoder
Referring again to
Error Resilience Module
The encoder 228 example illustrated in
Error Resilience
For a prediction based hybrid compression system, intra-coded frames are independently coded without any temporal prediction. Inter-coded frames can be temporally predicted from past frames (P-frames) and future frames (B-frames). The best predictor can be identified through a search process in the reference frame (one or more) and a distortion measure such as SAD is used to identify the best match. The predictive coded region of the current frame can be a block of varying size and shape (16×16, 32×32, 8×4 etc) or a group of pixels identified as an object through, for example, segmentation.
Temporal prediction typically extends over many frames (e.g., 10 to tens of frames) and is terminated when a frame is coded as an I-frame, the GOP typically being defined by the I-frame frequency. For maximum coding efficiency, a GOP is a scene; for example, GOP boundaries are aligned with scene boundaries and scene change frames are coded as I-frames. Low motion sequences comprise a relatively static background, and motion is generally restricted to the foreground object. Examples of content of such low motion sequences include news and weather forecast programs, where more than 30% of the most viewed content is of this nature. In low motion sequences, most of the regions are inter-coded and the predicted frames refer back to the I-frame through intermediate predicted frames.
Referring to
Prediction hierarchy is defined as the tree of blocks created based on this importance level or measure of persistence with the parent at the top (intra coded block 2205) and the children at the bottom. Note that the inter coded block 2215 in P1 is on the second level of the hierarchy and so on. Leaves are blocks that terminate a prediction chain.
Prediction hierarchy can be created for video sequences irrespective of content type (such as music and sports as well, and not just news) and is applicable to prediction based video (and data) compression in general (this applies to all the inventions described in this application). Once the prediction hierarchy is established, error resilience algorithms such as adaptive intra refresh, described below, can be applied more effectively. The importance measure can be based on the recoverability of a given block from errors, such as through concealment operations, and applying adaptive intra refresh to enhance the resilience of the coded bitstream to errors. An estimate of the importance measure can be based on the number of times a block is used as a predictor, also referred to as the persistence metric. The persistence metric is also used to improve coding efficiency by arresting prediction error propagation. The persistence metric also increases bit allocation for the blocks with higher importance.
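A hedged Python sketch of the persistence metric: it counts how many times each block of the intra-coded frame at the root of the prediction hierarchy is, directly or through intermediate predicted frames, used as a predictor. The representation of the prediction references is an assumption made for illustration:
def persistence_counts(predictor_refs, num_blocks):
    # predictor_refs[k][b] gives, for predicted frame k of the GOP, the index of
    # the block in the previous frame that predicts block b, or None if block b
    # is intra-coded and terminates its prediction chain (a leaf).
    counts = [0] * num_blocks              # times each I-frame block is (transitively) used as a predictor
    ancestors = list(range(num_blocks))    # each I-frame block descends from itself
    for refs in predictor_refs:
        new_ancestors = []
        for ref in refs:
            root = ancestors[ref] if ref is not None else None
            new_ancestors.append(root)
            if root is not None:
                counts[root] += 1          # the chain rooted at this I-frame block persists one frame further
        ancestors = new_ancestors
    return counts
Blocks with higher counts are better predictors and, per the text above, are candidates for larger bit allocations.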
Adaptive Intra Refresh
Adaptive intra refresh (AIR) is an error resilience technique that can be based on content information of the multimedia data. In an intra refresh process, some MBs are intra-coded even though standard R-D optimization would dictate that they should be inter-coded MBs. AIR employs motion-weighted intra refresh to introduce intra-coded MBs in P or B-frames. These intra-coded MBs, contained in the base layer, can be encoded with either QPb or QPe. If QPe is used for the base layer, then no refinement should be done at the enhancement layer. If QPb is used for the base layer, then refinement may be appropriate; otherwise the drop of quality at the enhancement layer will be noticeable. Since inter-coding is more efficient than intra-coding in the sense of coding efficiency, these refinements at the enhancement layer will be inter-coded. This way, the base layer coefficients will not be used for the enhancement layer, and the quality is improved at the enhancement layer without introducing new operations.
In some aspects, adaptive intra refresh can be based on content information of the multimedia data (e.g., a content classification) instead of, or in addition to, a motion-weighted basis. For example, if the content classification is relatively high (e.g., scenes having high spatial and temporal complexity), adaptive intra refresh can introduce relatively more intra-coded MBs into P or B-frames. Alternatively, if the content classification is relatively low (indicating a less dynamic scene with low spatial and/or temporal complexity), adaptive intra refresh can introduce fewer intra-coded MBs in the P and B-frames. Such metrics and methods for improving error resiliency can be applied not just in the context of wireless multimedia communications but toward data compression and multimedia processing in general (e.g., in graphics rendering).
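A minimal sketch, assuming a simple linear rule, of scaling the number of forced intra-coded MBs per P or B-frame with the content classification; the base fraction and the linear scaling are illustrative assumptions:
def intra_refresh_count(total_mbs, content_class, base_fraction=0.02):
    # content_class runs from 1 (low spatial/temporal complexity) to 8 (high);
    # higher classifications force relatively more intra-coded MBs per frame.
    return max(1, int(total_mbs * base_fraction * content_class))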
Channel Switch Frame
A channel switch frame (CSF) as defined herein is a broad term describing a random access frame inserted at an appropriate location in a broadcast stream for fast channel acquisition and thus fast channel change between streams in a broadcast multiplex. Channel switch frames also increase error robustness, as they provide redundant data that can be used if the primary frame is transmitted with an error. An I-frame, or a progressive I-frame such as the progressive decoder refresh frame in H.264, typically serves as a random access point. However, frequent I-frames (or short GOPs, shorter than scene durations) result in a significant reduction in compression efficiency. Since intra-coded blocks may be used for error resilience, random access and error resilience can be effectively combined through the prediction hierarchy to improve coding efficiency while increasing robustness to errors.
Improvement of random access switching and error robustness can be achieved in concert, and can be based on content information such as a content classification. For low motion sequences, prediction chains are long and a significant portion of the information required to reconstruct a superframe or scene is contained in the I-frame that occurred at the start of the scene. Channel errors tend to be bursty, and when a fade strikes and FEC and channel coding fail, there is heavy residual error that concealment fails to repair. This is particularly severe for low motion (and hence low bit rate) sequences, since the amount of coded data is not significant enough to provide good time diversity within the video bitstream, and because these are highly compressible sequences that render every bit useful for reconstruction. High motion sequences are more robust to errors due to the nature of the content: more new information in every frame increases the number of coded intra blocks, which are independently decodable and inherently more resilient to error. Adaptive intra refresh based on the prediction hierarchy achieves a high performance for high motion sequences, and the performance improvement is not significant for low motion sequences. Hence, a channel switch frame containing most of the I-frame is a good source of diversity for low motion sequences. When an error strikes a superframe, decoding in the consecutive frame starts from the CSF, which recovers the information lost due to prediction, and error resilience is achieved.
In the case of high motion sequences, such as sequences having a relatively high content classification (e.g., 6-8), the CSF can consist of the blocks that persist in the SF, i.e., those that are good predictors. All other regions of the CSF do not have to be coded, since these are blocks that have short prediction chains, which implies that they are terminated with intra blocks. Hence, the CSF still serves to recover from information lost due to prediction when an error occurs. CSFs for low motion sequences are on par with the size of I-frames and can be coded at a lower bit rate through heavier quantization, whereas CSFs for high motion sequences are much smaller than the corresponding I-frames.
Error resilience based on prediction hierarchy can work well with scalability and can achieve a highly efficient layered coding. Scalability to support hierarchical modulation in physical layer technologies may require data partitioning of the video bitstream with specific bandwidth ratios. These may not always be the ideal ratios for optimal scalability (for example, with the least overhead). In some aspects, a 2-layer scalability with 1:1 bandwidth ratio is used. Partitioning video bitstream to 2-layers of equal size may not be as efficient for low motion sequences. For low motion sequences, the base layer containing all header and metadata information is larger than the enhancement layer. However, since CSFs for low motion sequences are larger, they fit nicely in the remaining bandwidth in the enhancement layer.
High motion sequences have sufficient residual information that data partitioning to 1:1 can be achieved with the least overhead. Additionally, channel switch frames for such high motion sequences are much smaller. Hence, error resilience based on the prediction hierarchy can work well with scalability for high motion sequences as well. Extending the concepts discussed above to moderate motion clips is possible based on the descriptions of these algorithms, and the proposed concepts apply to video coding in general.
Multiplexer
In some encoder aspects, a multiplexer can be used for encoding multiple multimedia streams produced by the encoder and used to prepare encoded bits for broadcast. For example, in the illustrative aspect of encoder 228 shown I
The transmission medium 1808 can correspond to a variety of mediums, such as, but not limited to, digital satellite communication, such as DirecTV®, digital cable, wired and wireless Internet communications, optical networks, cell phone networks, and the like. The transmission medium 1808 can include, for example, modulation to radio frequency (RF). Typically, due to spectral constraints and the like, the transmission medium has a limited bandwidth and the data from the multiplexer 1806 to the transmission medium is maintained at a relatively constant bit rate (CBR).
In conventional systems, the use of constant bit rate (CBR) at the output of the multiplexer 1806 may require that the encoded multimedia or video streams that are inputted to the multiplexer 1806 are also CBR. As described in the background, the use of CBR when encoding video content can result in a variable visual quality, which is typically undesirable.
In the illustrated system, two or more of the encoders 1804 communicate an anticipated encoding complexity of input data. One or more of the encoders 1804 may receive adapted bit rate control from the multiplexer 1806 in response. This permits an encoder 1804 that expects to encode relatively complex video to receive a higher bit rate or higher bandwidth (more bits per frame) for those frames of video in a quasi-variable bit rate manner. This permits the multimedia stream 1802 to be encoded with a consistent visual quality. The extra bandwidth used by a particular encoder 1804 encoding relatively complex video comes from the bits that would otherwise have been used for encoding the other video streams if the encoders were implemented to operate at constant bit rates. This maintains the output of the multiplexer 1806 at the constant bit rate (CBR).
While an individual multimedia stream 1802 can be relatively “bursty,” that is, vary in used bandwidth, the cumulative sum of multiple video streams can be less bursty. The bit rate from channels that are encoding less complex video can be reallocated by, for example, the multiplexer 1806, to channels that are encoding relatively complex video, and this can enhance the visual quality of the combined video streams as a whole.
The encoders 1804 provide the multiplexer 1806 with an indication of the complexity of a set of video frames to be encoded and multiplexed together. The output of the multiplexer 1806 should be no higher than the bit rate specified for the transmission medium 1808. The indications of complexity can be based on the content classification, as discussed above, to provide a selected level of quality. The multiplexer 1806 analyzes the indications of complexity and provides the various encoders 1804 with an allocated number of bits or bandwidth, and the encoders 1804 use this information to encode the video frames in the set. This permits a set of video frames to individually be variable bit rate and yet achieve a constant bit rate as a group.
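A minimal Python sketch of the quasi-variable bit rate allocation, assuming the multiplexer simply divides the fixed multiplex budget among the encoders in proportion to their reported complexity indications:
def allocate_bandwidth(complexities, total_bits):
    # Proportional split keeps the multiplexed output at a constant bit rate
    # while each individual stream is effectively variable bit rate.
    total_complexity = sum(complexities)
    return [int(total_bits * c / total_complexity) for c in complexities]
For example, allocate_bandwidth([3, 1, 2], 600000) yields [300000, 100000, 200000] bits for the corresponding sets of frames.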
Content classification can also be used to enable quality based compression of multimedia in general for any generic compressor. The content classification and the methods and apparatuses described here may be used in quality based and/or content based multimedia processing of any multimedia data, for example in compression by any generic compressor, or in decompression or decoding by any decompressor, decoder or post-processor performing operations such as interpolation, resampling, enhancement, restoration and presentation.
Referring now to
Video compression reduces redundancy in the source video and increases the amount of information carried in each bit of the coded video data. This increases the impact in quality when even a small portion of the coded video is lost. Spatial and temporal prediction inherent in video compression systems aggravates the loss and causes errors to propagate resulting in visible artifacts in the reconstructed video. Error resilience algorithms at the video encoder and error recovery algorithms at the video decoder enhance the error robustness of the video compression system.
Typically, the video compression system is agnostic to the underlying network. However, in error prone networks, integrating or aligning error protection algorithms in the application layer with FEC and channel coding in the link/physical layers is highly desirable and provides the most efficiency in enhancing error performance of the system.
After an affirmative condition occurs for the conditions of block 1404, process 1400 proceeds to a preparation block 1414 where the rate R is set to the value R=Rqual, the desired target quality based on R-D curves. This setting is received from a data block 1416 comprising R-D curves. Process 1400 then proceeds to block 1418, where Rate Control Bit Allocation {Qpi} is performed based on image/video activity information (e.g., a content classification) from a content classification process at block 1420.
The rate control bit allocation block 1418 is used, in turn, for motion estimation in block 1422. The motion estimation 1422 can also receive inputs of metadata from the preprocessor 1412, motion vector smoothing (MPEG-2+History) from block 1424, and multiple reference frames (causal+non-causal macroblock MBs) from block 1426. Process 1400 then proceeds to block 1428, where rate calculations for intra-coded modes are determined for the rate control bit allocation (Qpi). Process 1400 next proceeds to block 1430, where mode and quantization parameters are determined. The mode decision of block 1430 is made based on the motion estimation of block 1422, the error resilience 1406 input, and the scalability R-D determined at block 1432. Once the mode is decided, flow proceeds to block 1432. It is noted that the flow from block 1430 to 1432 occurs when data is passed from the first pass to the second pass portions of the encoder.
At block 1432, transform and quantization is performed by the second pass of the encoder 228. The transform/quantization process is adjusted or fine tuned as indicated with block 1444. This transform/quantization process may be influenced by a rate control fine tuning module (
It should be noted that the methods described herein may be implemented on a variety of communication hardware, processors and systems known by one of ordinary skill in the art. For example, the general requirement for the client to operate as described herein is that the client has a display to display content and information, a processor to control the operation of the client and a memory for storing data and programs related to the operation of the client. In one aspect, the client is a cellular phone. In another aspect, the client is a handheld computer having communications capabilities. In yet another aspect, the client is a personal computer having communications capabilities. In addition, hardware such as a GPS receiver may be incorporated in the client to implement the various aspects. The various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The disclosed methods and apparatus provide transcoding of video data encoded in one format to video data encoded to another format where the encoding is based on the content of the video data and the encoding is resilient to error. The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, firmware, or in a combination of two or more of these. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
The examples described above are merely exemplary, and those skilled in the art may now make numerous uses of, and departures from, the above-described examples without departing from the inventive concepts disclosed herein. Various modifications to these examples may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other examples, e.g., in an instant messaging service or any general wireless data communication applications, without departing from the spirit or scope of the novel aspects described herein. Thus, the scope of the disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The word “exemplary” is used exclusively herein to mean “serving as an example, instance, or illustration.” Any example described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other examples. Accordingly, the novel aspects described herein are to be defined solely by the scope of the following claims.
The present application for patent is a continuation of patent application Ser. No. 11/528,139, filed Sep. 26, 2006, pending, which claims priority to Provisional Application No. 60/721,416, filed Sep. 27, 2005, Provisional Application No. 60/789,377, filed Apr. 4, 2006, Provisional Application No. 60/727,643, filed Oct. 17, 2005, Provisional Application No. 60/727,644, filed Oct. 17, 2005, Provisional Application No. 60/727,640, filed Oct. 17, 2005, Provisional Application No. 60/730,145, filed Oct. 24, 2005, Provisional Application No. 60/789,048, filed Apr. 3, 2006, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.
Publication Number | Date | Country
---|---|---
20130308707 A1 | Nov 2013 | US
Provisional Application Number | Date | Country
---|---|---
60721416 | Sep 2005 | US
60789377 | Apr 2006 | US
60727643 | Oct 2005 | US
60727644 | Oct 2005 | US
60727640 | Oct 2005 | US
60730145 | Oct 2005 | US
60789048 | Apr 2006 | US
Relation | Application Number | Date | Country
---|---|---|---
Parent | 11528139 | Sep 2006 | US
Child | 13760233 | | US