Complexity-based adaptive preprocessing for multiple-pass video compression

Information

  • Patent Grant
  • Patent Number
    8,238,424
  • Date Filed
    Friday, February 9, 2007
  • Date Issued
    Tuesday, August 7, 2012
Abstract
Multiple-pass video encoding systems and techniques are described which utilize statistics taken during a first-pass encoding to create complexity measurements for video data which is to be encoded. By analyzing these complexity measurements, preprocessing decisions, such as, for example, the determination of strength of denoise filters, can be made with greater accuracy. In one implementation, these complexity measurements take the form of calculation of temporal and spatial complexity parameters, which are then used to compute a unified complexity parameter for each group of pictures being encoded.
Description
BACKGROUND
Block Transform-Based Coding

Transform coding is a compression technique used in many audio, image, and video compression systems. Uncompressed digital image and video content is typically represented or captured as samples of picture elements or colors at locations in an image or video frame arranged in a two-dimensional (2D) grid. This is referred to as a spatial-domain representation of the image or video. For example, a typical format for images consists of a stream of 24-bit color picture element samples arranged as a grid. Each sample is a number representing color components at a pixel location in the grid within a color space, such as RGB or YIQ, among others. Various image and video systems may use different color, spatial, and temporal sampling resolutions. Similarly, digital audio is typically represented as a time-sampled audio signal stream. For example, a typical audio format consists of a stream of 16-bit amplitude samples of an audio signal taken at regular time intervals.


Uncompressed digital audio, image and video signals can consume considerable storage and transmission capacity. Transform coding reduces the size of digital audio, images and video by transforming the spatial-domain representation of the signal into a frequency-domain (or other like transform domain) representation, and then reducing resolution of certain generally less perceptible frequency components of the transform-domain representation. This generally produces much less perceptible degradation of the digital signal compared to reducing color or spatial resolution of images or video in the spatial domain, or of audio in the time domain.


Quantization


According to one possible definition, quantization is a term used in transform coding for an approximating non-reversible mapping function commonly used for lossy compression, in which there is a specified set of possible output values, and each member of the set of possible output values has an associated set of input values that result in the selection of that particular output value. A variety of quantization techniques have been developed, including scalar or vector, uniform or non-uniform, with or without dead zone, and adaptive or non-adaptive quantization.


The quantization operation is essentially a biased division by a quantization parameter, performed at the encoder. The inverse quantization operation is a corresponding multiplication by the quantization parameter, performed at the decoder.
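As an illustration of the quantization and inverse quantization operations described above, the following Python sketch shows a biased integer division at the encoder and the corresponding multiplication at the decoder. The particular bias term is illustrative only and is not taken from any specific codec.

    def quantize(coefficient, qp):
        # Biased division by the quantization parameter; the qp // 2 bias
        # rounds toward the nearest reconstruction level.
        return (coefficient + qp // 2) // qp

    def dequantize(level, qp):
        # Inverse quantization: multiplication by the quantization parameter.
        return level * qp

    # Example: a transform coefficient of 37 with qp = 8 reconstructs to 40.
    print(dequantize(quantize(37, 8), 8))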


Additional Techniques


In general, video compression techniques include intraframe compression and interframe compression. Intraframe compression techniques compress individual frames, typically called I-frames or key frames. Interframe compression techniques compress frames with reference to preceding and/or following frames, which are typically called predicted frames, P-frames, or B-frames.


In addition to the mechanisms described above, video encoding can also benefit from the use of preprocessing prior to encoding to provide for more efficient coding. In one example, denoise filters are used to remove extraneous noise from a video source, allowing a later encoding step to operate with greater efficiency.


However, with typical video encoding, it is difficult to know exactly how to perform preprocessing in order to create the most efficient encoding with the fewest visible artifacts. What is needed is a mechanism for gaining knowledge about a video source which can be used to facilitate preprocessing decisions.


SUMMARY

Multiple-pass video encoding systems and techniques are described. In various implementations, these systems and techniques utilize statistics taken during a first-pass encoding to create complexity measurements for video data to be encoded. In one implementation, through analyzing these complexity measurements, preprocessing decisions, such as the determination of strength of denoise filters, are made. In one implementation, temporal and spatial complexity parameters are calculated as the complexity measurements. These parameters are then used to compute a unified complexity parameter for each group of pictures being encoded.


In one example implementation, a method of determining parameters for pre-processing of a group of one or more pictures is described. The example method comprises determining one or more complexity parameters for the group of pictures and encoding the group of pictures in a video stream based at least in part on the one or more complexity parameters.


In another example implementation, a system for encoding video is described. The example system comprises a first-pass video encoding module which is configured to analyze one or more frames in a video sequence and to calculate one or more encoding parameters to be used in encoding the one or more frames in the video sequence. The example system also comprises a complexity-based adaptive preprocessing module which is configured to determine one or more complexity parameters for the one or more frames and to determine preprocessing filters to be used during encoding the one or more frames based on the one or more complexity parameters. The example system also comprises a second-pass video encoding module which is configured to apply preprocessing filters to the one or more frames based on the preprocessing filter parameters and to encode the filtered frames into encoded video stream data.


In another example implementation, one or more computer-readable media are described which contain instructions which, when executed by a computer, cause the computer to perform an example method for encoding video. The example method comprises performing a first-pass analysis on one or more frames in a video sequence in order to calculate one or more encoding parameters to be used in encoding the one or more frames in the video sequence. The example method also comprises determining one or more complexity parameters for the one or more frames based on the one or more encoding parameters, determining preprocessing filters to be used during encoding the one or more frames based on the one or more complexity parameters, applying preprocessing filters to the one or more frames based on the preprocessing filter parameters, and performing a second-pass analysis on the one or more frames to encode the filtered frames into encoded video stream data.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an example of the operation of a multiple-pass video encoding system.



FIG. 2 is a block diagram illustrating an example system for performing complexity-based adaptive preprocessing in a multiple-pass video encoding system.



FIG. 3 is a flowchart illustrating an example process performed by the system of FIG. 2 for encoding video data.



FIG. 4 is a flowchart illustrating an example process performed by the system of FIG. 2 as a part of the process of FIG. 3 for determining complexity parameters.



FIG. 5 is a block diagram illustrating examples of pictures demonstrating differences in temporal and spatial complexity.



FIG. 6 is a flow chart illustrating an example process performed as part of the process of FIG. 4 for determining a spatial complexity parameter.



FIG. 7 is a flow chart illustrating an example process performed as part of the process of FIG. 4 for determining a temporal complexity parameter.



FIG. 8 is a flow chart illustrating an example process performed as part of the process of FIG. 4 for determining a unified complexity parameter.



FIG. 9 is a flowchart illustrating an example process performed as part of the process of FIG. 3 for encoding video based on complexity parameters.



FIG. 10 is a flow chart illustrating an example process performed as part of the process of FIG. 9 for performing filtering preprocessing according to complexity parameters.



FIG. 11 is a block diagram illustrating an example computing environment for performing the complexity-based adaptive preprocessing techniques described herein.





DETAILED DESCRIPTION

The exemplary techniques and systems described herein allow for and perform additional preprocessing on video data in a multiple-pass video encoding system. After a first pass is performed, video statistics based on the first-pass encoding are analyzed to determine complexity parameters from the first-pass encoding process. These complexity parameters are then used to control preprocessing performed on the video data. In one example, the preprocessing performed is the application of a filter. The preprocessed video data is then encoded by a later pass of the encoding system. By determining and utilizing complexity data, the systems and techniques described herein can use the content of the encoded video to make more-informed decisions about what preprocessing should or should not be performed. This results in more efficient video encoding that is better matched to the characteristics of the video being encoded. Additionally, the techniques described herein add very little overhead to the encoding process and so do not overly complicate encoding.


Examples of Multiple-Pass Video Encoding Systems


Multiple-pass video encoders generally perform a first encoding on video data in order to determine statistics about the video data. These statistics are then used to create controls for later processing and encoding. By using information gained during a first-pass analysis, multiple-pass encoding systems are able to perform processing and encoding that is more accurately directed toward the particular nature of the video being encoded. This tuning of the process results in an eventual encoded video stream that either has a lower bit-rate, has fewer visible artifacts, or both.



FIG. 1 is a block diagram illustrating one example of a multiple-pass video encoding system 100. As FIG. 1 exists to explicate background qualities of multiple-pass encoding, the figure illustrates only particular entities and processes which take place in a multiple-pass video encoding process. The particular qualities of the illustration of FIG. 1 should not be viewed to imply any limitation on or requirements for the techniques and systems described herein.



FIG. 1 illustrates raw video data 110, which is used as input into the system. The term “raw video data” is used herein solely to refer to video data that has yet to be processed by the encoder, and should not be read to imply any particular limitation as to the format or type of the video data. Thus, the term “raw video data” may, in various implementations, refer to compressed or uncompressed video, whether recorded or computer-generated, from a variety of sources.



FIG. 1 illustrates a first-pass encoding process 120. It is in this process that the raw video data 110 is first analyzed using the techniques described herein, in order to determine video statistics, which are then sent to a preprocessing process 130. In the illustrated implementation, the first-pass encoding process utilizes an actual video encoder to perform the first-pass analysis, resulting in a first-pass encoded video stream 125. While the encoding system 100 may utilize this output in its entirety, or may store this output, in many implementations this output is not provided by the system as a final output encoded video. For example, a system which uses a two-pass system to generate a variable bit-rate encoded video stream may first use a simpler constant bit-rate encoding method at block 120 to determine statistics which may be useful during preprocessing and encoding. This constant bit-rate stream may later be discarded after it has been used for variable bit-rate encoding.



FIG. 1 also illustrates a preprocessing process 130. In various implementations, this block illustrates such processes as the determination of data to control bit-rate or the determination of filters which may be applied to the raw video data before final encoding. In one particular example, described in more detail below, the preprocessing involves the application of filters, such as low-pass or bilateral filters. These filters generally act to reduce source complexities before encoding. Thus, they may act to smooth pictures in the raw video data, or, in the case of a bilateral filter, to smooth while maintaining edges. While the use of these filters allows easier and more accurate encoding during later passes, using too strong a filter may cause unwanted side-effects in a particular picture, especially if the picture is less complex. Conversely, using too weak a filter could result in video streams which are not as efficient as they could be for a given video. Thus, as will be described below, a particularly complex picture in a video may make a higher rate of preprocessing filtering more desirable than a less-complex picture would. The techniques described below address this problem by determining complexity parameters which can be used to select a level of filter strength to apply during preprocessing.


In the particular illustrated example, the preprocessing 130 takes the raw video data as input and applies preprocessing filters or other techniques to it before passing it to a second-pass encoding 140, where the processed video data is then encoded into a final encoded video stream 150. In other implementations, the preprocessing 130 may simply analyze the statistics provided to it by the first-pass encoding 120 and give control data to the second-pass encoding 140, which would then take the raw video data 110 as input and encode the raw video data according to the control data. The result is a final encoded video stream which is then output from the video encoding system. Note also that in alternative implementations, more than two passes may be used before outputting a final encoded video stream.



FIG. 2 is a block diagram illustrating a video encoding system 200 which utilizes the complexity-based adaptive preprocessing techniques described herein to encode video. FIG. 2 illustrates modules 210-230 which perform various processes in the course of performing the complexity-based adaptive preprocessing techniques described herein. While the modules of FIG. 2 are illustrated separately, in various implementations the modules may be combined or split into more modules; additionally, the modules may represent hardware, software, or a combination thereof.


In the illustrated implementation, the video encoding system 200 comprises a first-pass video encoding module 210. In one implementation, this module is configured to accept raw video data and perform a first-pass encoding of the video data. As discussed above, this first pass is performed in order to acquire statistics about the video data that can then be used in later encoding. Additionally, in various implementations the first-pass video encoding module may also produce a first-pass encoded video stream which may or may not be used in later encoding.


The illustrated implementation also shows a complexity-based adaptive preprocessing module 230, which is configured to perform preprocessing on the raw video data (or the first-pass encoded data, in alternative implementations), before final encoding. Then, in the illustrated implementation, a final encoding is performed by the second-pass encoding module 220, which is configured in one implementation to accept preprocessed video data from the complexity-based adaptive preprocessing module and perform a final encoding on it. In alternative implementations, additional video encoding modules (not illustrated) may also be included in the system 200 and/or the two (or more) encoding modules may be combined into a single module.



FIG. 3 is a flowchart of an example process 300 performed by the video encoding system 200 for encoding video according to the techniques described herein. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 310, where the system receives raw video data. As discussed above, in various implementations, different raw video data formats may be supported by the system 200. Additionally, in alternative implementations the system 200 may receive video data that has already been encoded, either by a similar encoding method to that used in process 300 or by a different method.


Next, the process continues to block 320, where a first encoding is performed in order to generate encoding statistics. In some implementations, the encoding is performed according to the VC-1 video encoding standard. In other implementations other standards may be used, including, but not limited to, Windows Media Video 7, 8, and 9, H.264, MPEG-2, and MPEG-4. During the process of block 320, various statistics may be reported. However, for ease of description, the techniques described herein will be described with reference to only two statistics: the frame size and the quantization parameter for each frame encoded during the process of block 320. Thus, in one implementation only quantization parameters and frame sizes for each frame are recorded after this first pass. In another implementation, if variable quantization parameters are used during the first pass, an average over the frame is recorded for use during preprocessing. In alternative implementations, other statistics may be collected which provide additional information about complexity and can be used in preprocessing.
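As a rough illustration, the Python sketch below shows the kind of per-frame record that could hold the two statistics named above. The FrameStats container and average_qp helper are hypothetical names chosen for illustration, not part of any encoder's API; the averaging mirrors the implementation described above in which variable quantization parameters are collapsed to a per-frame average.

    from dataclasses import dataclass

    @dataclass
    class FrameStats:
        frame_type: str   # "I", "P", or "B"
        size: int         # encoded frame size from the first pass
        qp: float         # quantization parameter (frame average if variable)

    def average_qp(block_qps):
        # Collapse per-block quantization parameters to a single frame average.
        return sum(block_qps) / len(block_qps)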


Next, the process 300 continues to block 330, where the system 200 determines complexity parameters from the encoding statistics determined at block 320. Particular examples of processes to determine complexity parameters are described below. Next, at block 340, the system 200 encodes the video data based on the complexity parameters determined at the process of block 330. Particular examples of processes to encode video data using complexity parameters are described below as well.


Finally, in one implementation, the encoded video stream created at block 340 is output by the system 200. In alternative implementations, additional encoding or post-processing modifications may be made to the video stream before output, but for the sake of simplicity these implementations are not illustrated.



FIG. 4 is a flowchart of an example process 400 performed by the video encoding system 200 for determining complexity parameters from encoding statistics. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 410, where the system partitions frames encoded during the first-pass encoding process into groups of pictures. In a preferred implementation, the term “group of pictures” is a term of art, used to refer to a set of one or more frames containing at least a single I-frame as well as P- and B-frames when applicable. Thus, the process of block 410, in one such implementation, partitions the first-pass encoded frames into smaller sets of frames until each set contains only one I-frame. In alternative implementations different partitions may be used or the entire encoded video may be analyzed as a whole. Additionally, while the process of block 410 discusses the partitioning of “frames” for simplicity of description, in some implementations the process may be performed only with reference to video statistics. Thus, both the partitioning and further analysis may be done solely through manipulation of statistics associated with frames; the unneeded video data itself may be discarded in such implementations.
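A minimal sketch of the partitioning of block 410, assuming each frame's first-pass statistics are carried in a dict with at least a "type" key. A new group is started at each I-frame, so every group holds exactly one I-frame plus the P- and B-frames that follow it; this is an illustrative implementation, not the only one the description contemplates.

    def partition_into_gops(frames):
        gops, current = [], []
        for frame in frames:
            if frame["type"] == "I" and current:
                gops.append(current)   # close the previous group at each new I-frame
                current = []
            current.append(frame)
        if current:
            gops.append(current)
        return gops

    # Example: the sequence I P B P I P P splits into two groups of pictures.
    frames = [{"type": t} for t in ["I", "P", "B", "P", "I", "P", "P"]]
    assert [len(g) for g in partition_into_gops(frames)] == [4, 3]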


Next, at block 420, the process begins a loop to analyze each partitioned group of pictures. Then, at block 430, the system determines a spatial complexity parameter for the currently-analyzed group of pictures. This is followed, at block 440, by the system determining a temporal complexity parameter for the current group of pictures. Descriptions of temporal and spatial complexity will follow.


The last illustrated block in the loop is block 450, where a unified complexity parameter is determined for the group of pictures. While in some implementations, including ones described below, the unified complexity parameter is determined through manipulation of the previously-determined temporal and spatial complexity parameters, in other implementations the unified complexity parameter may be determined through other analysis. In yet other implementations a unified parameter may not be calculated at all; instead, individual parameters, such as the spatial and temporal complexity parameters computed in blocks 430 and 440, may be used for preprocessing. Finally, at block 460, the loop is repeated for the next group of pictures.


Examples of Determining Complexity



FIG. 5 is a block diagram illustrating examples of video pictures with differences in spatial and temporal complexity. FIG. 5 seeks to provide an abstract handle on the calculations described below, which will demonstrate how to determine parameters for spatial and temporal complexity for a group of pictures. The illustrations in FIG. 5 are chosen for their simplicity and serve only to represent the ideas of spatial and temporal complexity for a group of pictures, not to represent an actual group of pictures, or even particular pictures, themselves.


The illustrations in FIG. 5 capture the idea that different source data may be more or less complex, and that different raw video data may exhibit complexity in different ways. Thus, a video of a static room is generally easier to encode than a video capturing a busy street. By bifurcating complexity into temporal and spatial components, the calculations used to measure complexity are made simpler, both to understand and to implement.


Example images 510 and 520 illustrate differences in spatial complexity. In one implementation spatial complexity captures the idea of the number of details in a video frame. Thus, in the example shown, image 510, which contains many shapes, some of which are overlapped, contains a non-trivially greater amount of spatial complexity than does image 520, which has only a single circle in it.


By contrast, in one implementation temporal complexity captures the difficulty in predicting one frame from a previously-encoded frame. An example of this is illustrated in images 530 and 540. Please note that in each of the two images 530 and 540 movement within the image is illustrated through the use of arrows and dotted figures; this is merely an abstraction of movement that would take place over the course of various frames within a group of pictures. In the examples of images 530 and 540, image 530 shows a lower temporal complexity than does image 540. This is because, while image 530 has a high spatial complexity, its only movement, and thus the only part of the frame that needs to be predicted, is a simple sideways movement of the triangle 535. In contrast, image 540 shows a large movement of the circle 545, which provides a more difficult task of prediction, and therefore raises the level of temporal complexity of the group of pictures represented by image 540.



FIG. 6 is a flowchart of an example process 600 performed by the video encoding system 200 for determining a spatial complexity parameter for a group of pictures from encoding statistics. In one implementation, the system performs the process of FIG. 6 as an implementation of the process of block 430 of FIG. 4. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. In general, process 600 calculates a spatial complexity parameter by taking the quantization parameter of an I-frame within the group of pictures, whose value is related to the amount of detail in the I-frame, and combining it with the I-frame's frame size. Thus, generally, as quantization parameters and/or frame sizes increase for an I-frame in a group of pictures, the level of detail, and thus the spatial complexity, of the image shown by that group of pictures is assumed to increase.


The process begins at block 610, where an I-frame is located for the group of pictures being analyzed. As mentioned above, in a preferred implementation, there is only one I-frame within the group of pictures. Next, the quantization parameter and frame size are determined for this I-frame. In one implementation, this determination may consist solely of looking up the recorded values for the quantization parameter and the frame size for the I-frame. In another, when variable quantization parameters are used, an average quantization parameter is found for the I-frame to ease later computations.


Next, at block 630, the quantization parameter and frame size for the I-frame are multiplied and, at block 640, this product is set as the spatial complexity parameter for the group of pictures. Thus, for a quantization parameter and frame size for the I-frame of QP_I and Size_I, respectively, the spatial complexity parameter for every frame in the group of pictures is calculated by:

Cs = QP_I × Size_I

In alternative implementations, the calculation of the spatial complexity parameter may be modified by scaling either or both of the input statistics before combining them into the final parameter. Thus, one or both of the quantization parameter and frame size may be scaled exponentially, or may be multiplied by a scale before calculating a spatial complexity parameter.
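A minimal Python sketch of the spatial complexity calculation just described: the product of the I-frame's quantization parameter and its frame size. The frame representation (a dict with "type", "qp", and "size" keys) is a hypothetical stand-in for the first-pass statistics.

    def spatial_complexity(gop):
        # Cs = QP_I x Size_I for the single I-frame in the group of pictures.
        i_frame = next(f for f in gop if f["type"] == "I")
        return i_frame["qp"] * i_frame["size"]

    # Example: an I-frame with QP 8 and a size of 50,000 bits gives Cs = 400,000.
    gop = [{"type": "I", "qp": 8, "size": 50_000},
           {"type": "P", "qp": 10, "size": 12_000}]
    print(spatial_complexity(gop))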



FIG. 7 is a flowchart of an example process 700 performed by the video encoding system 200 for determining a temporal complexity parameter for a group of pictures from encoding statistics. In one implementation, the system performs the process of FIG. 7 as an implementation of the process of block 440 of FIG. 4. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. In general, process 700 calculates a temporal complexity parameter by taking the quantization parameters of P-frames within the group of pictures, whose values are related to the amount of change exhibited in the group of pictures over the group's reference I-frame, and combining them with the P-frames' frame sizes. Thus, generally, as quantization parameters and/or frame sizes increase for P-frames in a group of pictures, the amount of change that is predicted, and thus the temporal complexity of the image shown by that group of pictures, is assumed to increase.


The process begins at block 710, where one or more P-frames are located for the group of pictures being analyzed. As mentioned above, in a preferred implementation, there is only one I-frame and a collection of P-frames (as well as B-frames) within the group of pictures. Next, at block 720, a loop is performed to analyze each P-frame within the group of pictures.


At block 730, the quantization parameter and frame size are determined for the particular P-frame being analyzed. In one implementation, this determination may consist solely of looking up the recorded values for the quantization parameter and the frame size for the P-frame. In another, when variable quantization parameters are used, an average quantization parameter is found for the P-frame to ease later computations.


Next, at block 740, the quantization parameter and frame size for the P-frame are multiplied. Thus, for a quantization parameter and frame size for the P-frame of QP_p and Size_p, respectively, a first product is calculated for the P-frame by:

Ct′ = QP_p × Size_p

While this product does capture the general concept that lower temporal complexity should lead to a smaller frame size at a given QP, experimentation has shown that this measure is strongly correlated with spatial complexity. Thus, given the same amount of motion and the same QP, a scene with higher spatial complexity is likely to have a larger P-frame than a scene with low spatial complexity. In some encoder implementations, this is due to imperfections in the capture and motion-estimation processes.


To account for this correlation, at block 750, the product given above is divided by the spatial complexity parameter for the P-frame. As discussed above, in the illustrated implementation of FIG. 6, this spatial complexity parameter was calculated for every frame in the group of pictures by calculating it for the I-frame in the group. This gives a more-accurate measure for the temporal complexity of the P-frame as:







Ct = Ct′ / Cs







This process is then repeated for each P-frame in the group of pictures, at block 760.


Next, in order to have a single temporal complexity parameter for the group of pictures, an average of the temporal complexity parameters for the P-frames in the group of pictures is taken. This is performed by the system in block 770. Finally, at block 780 this average is set as the temporal complexity parameter for the group of pictures.
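A minimal Python sketch of the per-group temporal complexity of FIG. 7: each P-frame's QP-size product is divided by the group's spatial complexity, and the results are averaged. The same hypothetical frame dicts as in the spatial-complexity sketch are assumed.

    def temporal_complexity(gop, cs):
        # Ct' = QP_p x Size_p for each P-frame, divided by Cs, then averaged.
        per_frame = [f["qp"] * f["size"] / cs for f in gop if f["type"] == "P"]
        return sum(per_frame) / len(per_frame) if per_frame else 0.0

    # Example: two P-frames in a group whose I-frame gives Cs = 400,000.
    gop = [{"type": "I", "qp": 8, "size": 50_000},
           {"type": "P", "qp": 10, "size": 12_000},
           {"type": "P", "qp": 10, "size": 8_000}]
    print(temporal_complexity(gop, cs=400_000))   # (0.3 + 0.2) / 2 = 0.25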



FIG. 8 is a flowchart of an example process 800 performed by the video encoding system 200 for determining a unified complexity parameter for a group of pictures from the temporal and spatial complexity parameters. In one implementation, the system performs the process of FIG. 8 as an implementation of the process of block 450 of FIG. 4. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. In general, the process of FIG. 8 serves to create a combined, single complexity parameter which can serve as a shorthand for the general complexity of a group of pictures. In alternative implementations, a unified complexity parameter may be created from different complexity calculations or may be omitted altogether in favor of more-detailed complexity parameters.


The illustrated process begins at block 810, where the temporal and spatial complexity parameters are normalized. In one implementation, this normalization is performed according to the following two equations:







Ct = Int(Ct* / MAXCOMP_Temporal × 255)

Cs = Int(Cs* / MAXCOMP_Spatial × 255)







where Ct* and Cs* are the previously-calculated temporal and spatial complexity parameters, respectively, and MAXCOMP_Temporal and MAXCOMP_Spatial are numbers considered as the upper bounds of the complexities. In one implementation, used in the VC-1 encoder, MAXCOMP_Temporal and MAXCOMP_Spatial are chosen to be two numbers close to 2×10^8 and 2.0, respectively. In one implementation, if either of the above calculations results in a number greater than 255, that number is clipped to remain inside the interval [0, 255].


Next, at block 820, the normalized temporal complexity parameter is scaled according to a predetermined exponent. This is done to adjust the relative strength of the spatial and temporal complexities within the unified complexity parameter. In one implementation, a value of 0.5 is used as the exponent for the temporal complexity parameter. Next, at block 830, the scaled temporal complexity parameter and the spatial complexity parameter are multiplied, and at block 840 this product is set as the unified complexity parameter for the group of pictures. Thus, the unified complexity parameter is found as:

C = Cs × Ct^α

where α is the scaling exponent used in block 820. It should be noted that this equation can be written equivalently as:

C = Cs^(1−α) × (Ct′)^α

This alternative form demonstrates more clearly the capability of the α exponent as a relative strength control between the two particular complexity parameters.
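A minimal Python sketch of the unified-complexity computation of FIG. 8. The raw spatial and temporal parameters are normalized to integers in [0, 255] against the MAXCOMP upper bounds (left here as function arguments) and combined as C = Cs × Ct^α, with α defaulting to the 0.5 value mentioned above.

    def normalize(value, max_comp):
        # Int(value / MAXCOMP x 255), clipped to the interval [0, 255].
        return max(0, min(255, int(value / max_comp * 255)))

    def unified_complexity(cs_raw, ct_raw, max_comp_spatial, max_comp_temporal, alpha=0.5):
        cs = normalize(cs_raw, max_comp_spatial)
        ct = normalize(ct_raw, max_comp_temporal)
        return cs * (ct ** alpha)   # C = Cs x Ct^alpha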


Examples of Complexity-Based Adaptive Preprocessing



FIG. 9 is a flowchart of an example process 900 performed by the video encoding system 200 for encoding video data based on complexity parameters. In one implementation, the system performs the process of FIG. 9 as an implementation of the process of block 340 of FIG. 3. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 910, where the system performs preprocessing according to the complexity parameters calculated during implementations of the above-described processes. As discussed above, this preprocessing may be performed with reference to raw video data or to already-encoded video data in different implementations. Next, at block 920, the system 200 encodes the preprocessed video. This encoding may be performed by the same encoding engine utilized during earlier analyses or, in alternative implementations, by a different encoder or one configured according to different specifications than those used during an earlier encoding pass.



FIG. 10 is a flowchart of an example process 1000 performed by the video encoding system 200 for performing preprocessing on video data based on complexity parameters. In one implementation, the system performs the process of FIG. 10 as an implementation of the process of block 910 of FIG. 9. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. While the process illustrated in FIG. 10 performs only control of filtering strength according to complexity parameters, in alternative implementations other preprocessing techniques may be used. The process begins at block 1010, where the system loops over every group of pictures in the video to be encoded. In the case where the preprocessing is performed on raw video data, the preprocessing may take place on sections of the raw video data corresponding to the particular group of pictures at issue. Within the loop, at block 1020, the system first scales the unified complexity parameter according to a predetermined exponent. This is done to modulate the complexity in order to obtain a mapping between the complexity parameter and filter strength that is appropriate to the specific design of an encoder's preprocessing filters (such as, for example, a denoise filter). In experimental trials, a value of 1.2 was found to work well for a VC-1 encoding system.


Next, at block 1030, the scaled complexity parameter is normalized to form an appropriate filter strength value. In the case of VC-1 encoding, one implementation gives the scaling and normalization calculations according to the following equation:

FilterStrength = (C^β − 2048) >> 10

where β is the exponential scale of block 1020 (e.g., 1.2 in a VC-1 encoding system) and the operator >> represents a right bit-shift operation. Additionally, in some implementations, if the resulting FilterStrength value is outside of the proper range for the filters being used, the number is clipped. Thus, in an exemplary VC-1 implementation, FilterStrength is clipped to reside in the range [0, 8]. Next, at block 1040, the filters are applied to the group of pictures (or the raw video associated therewith) according to the calculated filter strength. The loop then repeats for additional groups of pictures at block 1050.
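A minimal Python sketch of the filter-strength mapping quoted above for the VC-1 case: the unified complexity is raised to the exponent β (1.2 in the text), offset, right-shifted by 10 bits, and clipped to the [0, 8] filter range.

    def filter_strength(c, beta=1.2, low=0, high=8):
        # FilterStrength = (C^beta - 2048) >> 10, clipped to [low, high].
        strength = (int(c ** beta) - 2048) >> 10
        return max(low, min(high, strength))

    # Example: a unified complexity of 2000 maps to a mid-range strength.
    print(filter_strength(2000.0))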


It should be noted that the estimated complexities Cs, Ct, and C may be used in alternative implementations to make better encoding decisions in other encoding and preprocessing modules. For example, and not by way of limitation, the system may make rate-control decisions as to what quantization parameter, second quantization parameter, or P- or B-frame delta quantization parameters to use if the system considers the three complexity parameters from multiple frames altogether. In another example, a quantization module of an encoding system may benefit from the use of complexity parameters, such as using a bigger dead zone for quantization in the case of a high value for C.


Computing Environment


The above complexity-based adaptive preprocessing techniques can be performed on any of a variety of computing devices. The techniques can be implemented in hardware circuitry, as well as in software executing within a computer or other computing environment, such as shown in FIG. 11.



FIG. 11 illustrates a generalized example of a suitable computing environment 1100 in which described embodiments may be implemented. The computing environment 1100 is not intended to suggest any limitation as to scope of use or functionality of the invention, as the present invention may be implemented in diverse general-purpose or special-purpose computing environments.


With reference to FIG. 11, the computing environment 1100 includes at least one processing unit 1110 and memory 1120. In FIG. 11, this most basic configuration 1130 is included within a dashed line. The processing unit 1110 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory 1120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 1120 stores software 1180 implementing the described techniques.


A computing environment may have additional features. For example, the computing environment 1100 includes storage 1140, one or more input devices 1150, one or more output devices 1160, and one or more communication connections 1170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 1100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1100, and coordinates activities of the components of the computing environment 1100.


The storage 1140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1100. The storage 1140 stores instructions for the software 1180 implementing the described techniques.


The input device(s) 1150 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1100. For audio, the input device(s) 1150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 1160 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1100.


The communication connection(s) 1170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.


The techniques described herein can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 1100, computer-readable media include memory 1120, storage 1140, communication media, and combinations of any of the above.


The techniques herein can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.


For the sake of presentation, the detailed description uses terms like “calculate,” “generate,” and “determine,” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.


In view of the many possible variations of the subject matter described herein, we claim as our invention all such embodiments as may come within the scope of the following claims and equivalents thereto.

Claims
  • 1. A method of determining parameters for pre-processing of a group of one or more pictures, the method comprising: determining one or more complexity parameters for the group of pictures; and encoding, using an encoder, the group of pictures in a video stream based at least in part on the one or more complexity parameters to transform the group of one or more pictures into an encoded group of pictures; wherein determining the one or more complexity parameters is based at least in part on spatial and temporal complexity of the group of pictures and wherein determining one or more complexity parameters comprises: determining a spatial complexity parameter for the group of pictures; determining a temporal complexity parameter for the group of pictures; determining a unified complexity parameter by combining the spatial and temporal complexity parameters; and adjusting one or both of the spatial and temporal complexity parameters for the group of pictures by a predetermined factor to adjust a relative strength of the parameters and wherein either or both of the spatial and temporal complexity parameters are normalized before calculating a unified complexity parameter.
  • 2. The method of claim 1, wherein determining a spatial complexity parameter for the group of pictures comprises multiplying a quantization parameter for at least one frame found in the group of pictures with a frame size for at least one frame found in the group of pictures.
  • 3. The method of claim 1, wherein the quantization parameter and frame size used in determining a spatial complexity parameter are calculated with reference to an I-frame in the group of pictures.
  • 4. The method of claim 1, wherein determining a temporal complexity parameter comprises: calculating a temporal complexity parameter for each of one or more P-frames in the group of pictures; and calculating an average of the temporal complexity parameters calculated for each of the one or more P-frames in the group of pictures.
  • 5. The method of claim 4, wherein calculating a temporal complexity parameter for a frame comprises multiplying a quantization parameter for each of the one or more frames with a frame size for each of the one or more frames.
  • 6. The method of claim 5, wherein calculating a temporal complexity parameter for a frame further comprises dividing the product of the quantization parameter and the frame size by the spatial complexity parameter for that frame.
  • 7. The method of claim 1, wherein determining a unified complexity parameter by combining the spatial and temporal complexity parameters comprises: after adjusting, multiplying the spatial and temporal complexity parameters together to calculate a unified complexity parameter.
  • 8. The method of claim 1, wherein encoding the group of pictures in a video stream based at least in part on the one or more complexity parameters comprises selecting a filter strength based on the value of the one or more complexity parameters.
  • 9. The method of claim 8, wherein selecting a filter strength based on the value of the one or more complexity parameters comprises: scaling a unified complexity parameter by a predetermined exponential value; and normalizing the scaled parameter to a predetermined set of filter-strength values.
  • 10. A system for encoding video, comprising: a first-pass video encoding module in an encoding device, configured to analyze one or more frames in a video sequence and to calculate one or more encoding parameters to be used in encoding the one or more frames in the video sequence; a complexity-based adaptive preprocessing module, configured to determine one or more complexity parameters for the one or more frames and to determine preprocessing filters to be used during encoding the one or more frames based on the one or more complexity parameters, the complexity parameters being based on a combined spatial and temporal complexity wherein one or both of the spatial or temporal complexity parameters are normalized for the one or more frames of the video sequence and scaled to adjust a relative strength thereof; and a second-pass video encoding module, configured to apply preprocessing filters to the one or more frames based on the preprocessing filter parameters and to encode the filtered frames into encoded video stream data.
  • 11. The system of claim 10, wherein the first-pass video encoding module is configured to calculate, for each of a plurality of frames in the video sequence, a quantization parameter and a frame size and the adaptive preprocessing module is configured to determine a spatial complexity parameter and a temporal complexity parameter for each group of pictures in the video sequence based on the calculated quantization parameters and frame sizes.
  • 12. The system of claim 11, wherein the adaptive preprocessing module is configured to determine a spatial complexity parameter for a group of pictures in the video sequence by multiplying a quantization parameter for an I-frame by a frame size for that I-frame; and wherein the adaptive preprocessing module is configured to determine a temporal complexity parameter for a group of pictures in the video sequence by: for each of a plurality of P-frames in the group of pictures, multiplying a quantization parameter for the P-frame by a frame size for that P-frame to determine a temporal complexity value for that P-frame; scaling each temporal complexity value for each P-frame by dividing the temporal complexity value by the spatial complexity value for the group of pictures; and taking the average of the scaled temporal complexity values for the plurality of P-frames to determine a temporal complexity parameter for the group of pictures.
  • 13. The system of claim 12, wherein the adaptive preprocessing module is further configured to calculate a unified complexity parameter for each group of pictures in the video sequence by: normalizing temporal and spatial complexity parameters for the group of pictures; scaling the temporal complexity parameter for the group of pictures by exponentiating it by a first pre-determined value; and multiplying the scaled temporal complexity for the group of pictures by the temporal complexity for the group of pictures to determine a unified complexity parameter for the group of pictures; and wherein the adaptive preprocessing module is further configured to determine preprocessing filters by: scaling the unified complexity parameter for the group of pictures by exponentiating it by a second pre-determined value; normalizing the scaled unified complexity parameter for the group of pictures to a filter value within a range of filter strength values; and selecting a filter strength according to the normalized filter value.
  • 14. One or more computer-readable storage devices containing instructions which, when executed by a computer, cause the computer to perform a method for encoding video, the method comprising: performing a first-pass analysis on one or more frames in a video sequence in order to calculate one or more encoding parameters to be used in encoding the one or more frames in a video sequence; determining one or more complexity parameters for the one or more frames based on the one or more encoding parameters, the complexity parameters being based on spatial and temporal complexity that are combined by multiplying the spatial and temporal complexity parameters together to calculate a unified complexity parameter, and wherein at least one of the complexity parameters are normalized; determining preprocessing filters to be used during encoding the one or more frames based on the one or more complexity parameters; applying preprocessing filters to the one or more frames based on the preprocessing filter parameters; and performing a second-pass analysis on the one or more frames to encoding the filtered frames into encoded video stream data.
  • 15. The computer readable media of claim 14, wherein performing a first-pass analysis on one or more frames in a video sequence in order to calculate one or more encoding parameters comprises calculating, for each of a plurality of frames in the video sequence, a quantization parameter and a frame size; and wherein determining one or more complexity parameters for the one or more frames comprises determining a spatial complexity parameter and a temporal complexity parameter for each group of pictures in the video sequence based on the calculated quantization parameters and frame sizes.
  • 16. The computer-readable media of claim 15, wherein determining one or more complexity parameters for the one or more frames based on the one or more encoding parameters comprises: determining a spatial complexity parameter for a group of pictures in the video sequence by multiplying a quantization parameter for an I-frame by a frame size for that I-frame; and determine a temporal complexity parameter for a group of pictures in the video sequence by: for each of a plurality of P-frames in the group of pictures, multiplying a quantization parameter for the P-frame by a frame size for that P-frame to determine a temporal complexity value for that P-frame; scaling each temporal complexity value for each P-frame by dividing the temporal complexity value by the spatial complexity value for the group of pictures; and taking the average of the scaled temporal complexity values for the plurality of P-frames to determine a temporal complexity parameter for the group of pictures.
  • 17. The computer-readable media of claim 16, wherein determining one or more complexity parameters for the one or more frames based on the one or more encoding parameters further comprises calculating a unified complexity parameter for each group of pictures in the video sequence by: normalizing temporal and spatial complexity parameters for the group of pictures; scaling the temporal complexity parameter for the group of pictures by exponentiating it by a first pre-determined value; and multiplying the scaled temporal complexity for the group of pictures by the temporal complexity for the group of pictures to determine a unified complexity parameter for the group of pictures; and wherein determining preprocessing filters to be used during encoding the one or more frames based on the one or more complexity parameters comprises: scaling the unified complexity parameter for the group of pictures by exponentiating it by a second pre-determined value; normalizing the scaled unified complexity parameter for the group of pictures to a filter value within a range of filter strength values; and selecting a filter strength according to the normalized filter value.
20030206582 Srinivasan et al. Nov 2003 A1
20030215011 Wang et al. Nov 2003 A1
20030219073 Lee et al. Nov 2003 A1
20030223493 Ye et al. Dec 2003 A1
20030235247 Wu et al. Dec 2003 A1
20040008901 Avinash Jan 2004 A1
20040022316 Ueda et al. Feb 2004 A1
20040036692 Alcorn et al. Feb 2004 A1
20040090397 Doyen et al. May 2004 A1
20040091168 Jones et al. May 2004 A1
20040151243 Bhaskaran et al. Aug 2004 A1
20040158719 Lee et al. Aug 2004 A1
20040170395 Filippini et al. Sep 2004 A1
20040174464 MacInnis et al. Sep 2004 A1
20040190610 Song et al. Sep 2004 A1
20040202376 Schwartz et al. Oct 2004 A1
20040228406 Song Nov 2004 A1
20040264568 Florencio Dec 2004 A1
20040264580 Chiang Wei Yin et al. Dec 2004 A1
20050002575 Joshi et al. Jan 2005 A1
20050008075 Chang et al. Jan 2005 A1
20050013365 Mukerjee et al. Jan 2005 A1
20050013497 Hsu et al. Jan 2005 A1
20050013498 Srinivasan et al. Jan 2005 A1
20050013500 Lee et al. Jan 2005 A1
20050015246 Thumpudi et al. Jan 2005 A1
20050015259 Thumpudi et al. Jan 2005 A1
20050021579 Bae et al. Jan 2005 A1
20050024487 Chen Feb 2005 A1
20050031034 Kamaci et al. Feb 2005 A1
20050036698 Beom Feb 2005 A1
20050036699 Holcomb et al. Feb 2005 A1
20050041738 Lin et al. Feb 2005 A1
20050052294 Liang et al. Mar 2005 A1
20050053151 Lin et al. Mar 2005 A1
20050053158 Regunathan et al. Mar 2005 A1
20050084009 Furukawa et al. Apr 2005 A1
20050084013 Wang et al. Apr 2005 A1
20050094731 Xu et al. May 2005 A1
20050105612 Sung et al. May 2005 A1
20050105622 Gokhale May 2005 A1
20050105889 Conklin May 2005 A1
20050123274 Crinon et al. Jun 2005 A1
20050135484 Lee et al. Jun 2005 A1
20050147163 Li et al. Jul 2005 A1
20050152448 Crinon et al. Jul 2005 A1
20050152451 Byun Jul 2005 A1
20050180500 Chiang et al. Aug 2005 A1
20050180502 Puri Aug 2005 A1
20050190836 Lu et al. Sep 2005 A1
20050207492 Pao Sep 2005 A1
20050232501 Mukerjee Oct 2005 A1
20050238096 Holcomb et al. Oct 2005 A1
20050254719 Sullivan Nov 2005 A1
20050259729 Sun Nov 2005 A1
20050276493 Xin et al. Dec 2005 A1
20060013307 Olivier et al. Jan 2006 A1
20060013309 Ha et al. Jan 2006 A1
20060018552 Malayath et al. Jan 2006 A1
20060034368 Klivington Feb 2006 A1
20060038826 Daly Feb 2006 A1
20060056508 Lafon et al. Mar 2006 A1
20060071825 Demos Apr 2006 A1
20060083300 Han et al. Apr 2006 A1
20060083308 Schwarz et al. Apr 2006 A1
20060088098 Vehvilainen Apr 2006 A1
20060098733 Matsumura et al. May 2006 A1
20060104350 Liu May 2006 A1
20060104527 Koto et al. May 2006 A1
20060126724 Cote Jun 2006 A1
20060126728 Yu et al. Jun 2006 A1
20060133478 Wen Jun 2006 A1
20060133479 Chen et al. Jun 2006 A1
20060133689 Andersson et al. Jun 2006 A1
20060140267 He et al. Jun 2006 A1
20060165176 Raveendran et al. Jul 2006 A1
20060188014 Civanlar et al. Aug 2006 A1
20060197777 Cha et al. Sep 2006 A1
20060227868 Chen et al. Oct 2006 A1
20060238444 Wang et al. Oct 2006 A1
20060239576 Mukherjee Oct 2006 A1
20060245506 Lin et al. Nov 2006 A1
20060256851 Wang et al. Nov 2006 A1
20060256867 Turaga et al. Nov 2006 A1
20060257037 Samadani Nov 2006 A1
20060268990 Lin et al. Nov 2006 A1
20060268991 Segall et al. Nov 2006 A1
20060274959 Piastowski Dec 2006 A1
20070002946 Bouton et al. Jan 2007 A1
20070009039 Ryu Jan 2007 A1
20070009042 Craig et al. Jan 2007 A1
20070053603 Monro Mar 2007 A1
20070081588 Raveendran et al. Apr 2007 A1
20070091997 Fogg et al. Apr 2007 A1
20070140333 Chono et al. Jun 2007 A1
20070140354 Sun Jun 2007 A1
20070147497 Bao et al. Jun 2007 A1
20070160126 Van Der Meer et al. Jul 2007 A1
20070160138 Wedi et al. Jul 2007 A1
20070160151 Bolton et al. Jul 2007 A1
20070189626 Tanizawa et al. Aug 2007 A1
20070201553 Shindo Aug 2007 A1
20070230565 Tourapis et al. Oct 2007 A1
20070237221 Hsu et al. Oct 2007 A1
20070237222 Xia et al. Oct 2007 A1
20070237236 Chang et al. Oct 2007 A1
20070237237 Chang et al. Oct 2007 A1
20070248163 Zuo et al. Oct 2007 A1
20070248164 Zuo et al. Oct 2007 A1
20070258518 Tu et al. Nov 2007 A1
20070258519 Srinivasan Nov 2007 A1
20070268964 Zhao Nov 2007 A1
20080008249 Yan Jan 2008 A1
20080008394 Segall Jan 2008 A1
20080013630 Li et al. Jan 2008 A1
20080024513 Raveendran Jan 2008 A1
20080031346 Segall Feb 2008 A1
20080068446 Barkley et al. Mar 2008 A1
20080080615 Tourapis et al. Apr 2008 A1
20080089410 Lu et al. Apr 2008 A1
20080089417 Bao et al. Apr 2008 A1
20080095235 Hsiang Apr 2008 A1
20080101465 Chono et al. May 2008 A1
20080165848 Ye et al. Jul 2008 A1
20080187042 Jasinschi Aug 2008 A1
20080240235 Holcomb et al. Oct 2008 A1
20080240250 Lin et al. Oct 2008 A1
20080240257 Chang et al. Oct 2008 A1
20080260278 Zuo et al. Oct 2008 A1
20080304562 Chang et al. Dec 2008 A1
20090003718 Liu et al. Jan 2009 A1
20090161756 Lin Jun 2009 A1
20090207912 Holcomb et al. Aug 2009 A1
20090207919 Yin et al. Aug 2009 A1
20090213930 Ye et al. Aug 2009 A1
20090219994 Tu et al. Sep 2009 A1
20090245587 Holcomb et al. Oct 2009 A1
20090262798 Chiu et al. Oct 2009 A1
20090290635 Kim et al. Nov 2009 A1
20090296808 Regunathan et al. Dec 2009 A1
20100177826 Bhaumik et al. Jul 2010 A1
Foreign Referenced Citations (35)
Number Date Country
1327074 Feb 1994 CA
0932306 Jul 1999 EP
1465349 Oct 2004 EP
1871113 Dec 2007 EP
897363 May 1962 GB
1218015 Jan 1971 GB
05-227525 Sep 1993 JP
07-222145 Aug 1995 JP
07-250327 Sep 1995 JP
08-336139 Dec 1996 JP
10-336656 Dec 1998 JP
11-041610 Feb 1999 JP
6-296275 Oct 2004 JP
2007-281949 Oct 2007 JP
132895 Oct 1998 KR
WO 9403988 Feb 1994 WO
WO 9721302 Jun 1997 WO
WO 9948300 Sep 1999 WO
WO 0021207 Apr 2000 WO
WO 0072599 Nov 2000 WO
WO 0207438 Jan 2002 WO
JP 2003061090 Feb 2003 WO
WO 2004100554 Nov 2004 WO
WO 2004100556 Nov 2004 WO
WO 2005065030 Jul 2005 WO
WO 2005076614 Aug 2005 WO
WO 2006075895 Jul 2006 WO
WO 2006079997 Aug 2006 WO
WO 2006112620 Oct 2006 WO
WO 2007008286 Jan 2007 WO
WO 2007009875 Jan 2007 WO
WO 2007015047 Feb 2007 WO
WO 2007018669 Feb 2007 WO
WO 2007042365 Apr 2007 WO
WO 2007130580 Nov 2007 WO
Related Publications (1)
Number Date Country
20080192822 A1 Aug 2008 US