Transform coding is a compression technique used in many audio, image and video compression systems. Uncompressed digital images and video are typically represented or captured as samples of picture elements or colors at locations in an image or video frame arranged in a two-dimensional (2D) grid. This is referred to as a spatial-domain representation of the image or video. For example, a typical format for images consists of a stream of 24-bit color picture element samples arranged as a grid. Each sample is a number representing color components at a pixel location in the grid within a color space, such as RGB or YIQ, among others. Various image and video systems may use different color, spatial and time resolutions of sampling. Similarly, digital audio is typically represented as a time-sampled audio signal stream. For example, a typical audio format consists of a stream of 16-bit amplitude samples of an audio signal taken at regular time intervals.
Uncompressed digital audio, image and video signals can consume considerable storage and transmission capacity. Transform coding reduces the size of digital audio, images and video by transforming the spatial-domain representation of the signal into a frequency-domain (or other like transform domain) representation, and then reducing resolution of certain generally less perceptible frequency components of the transform-domain representation. This generally produces much less perceptible degradation of the digital signal compared to reducing color or spatial resolution of images or video in the spatial domain, or of audio in the time domain.
Quantization
According to one possible definition, quantization is a term used in transform coding for an approximating non-reversible mapping function commonly used for lossy compression, in which there is a specified set of possible output values, and each member of the set of possible output values has an associated set of input values that result in the selection of that particular output value. A variety of quantization techniques have been developed, including scalar or vector, uniform or non-uniform, with or without dead zone, and adaptive or non-adaptive quantization.
The quantization operation is essentially a biased division by a quantization parameter, performed at the encoder. The inverse quantization operation is a corresponding multiplication by the quantization parameter, performed at the decoder.
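For illustration only, a minimal sketch of such a biased quantizer and its inverse might read as follows; the rounding bias of qp // 2 is an assumption for the example, not a property of any particular codec:

```python
def quantize(coeff: int, qp: int) -> int:
    """Biased division by the quantization parameter (encoder side).

    The bias of qp // 2 rounds to the nearest level; a smaller bias
    would widen the dead zone around zero.
    """
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) + qp // 2) // qp)


def dequantize(level: int, qp: int) -> int:
    """Inverse quantization: multiplication by the quantization parameter (decoder side)."""
    return level * qp


# A coefficient of 137 at qp = 10 quantizes to level 14 and reconstructs
# to 140; the rounding error is the information lost to compression.
assert quantize(137, 10) == 14
assert dequantize(quantize(137, 10), 10) == 140
```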
Additional Techniques
In general, video compression techniques include intraframe compression and interframe compression. Intraframe compression techniques compress individual frames, typically called I-frames or key frames. Interframe compression techniques compress frames with reference to preceding and/or following frames, which are typically called predicted frames, P-frames, or B-frames.
In addition to the mechanisms described above, video encoding can also benefit from the use of preprocessing prior to encoding to provide for more efficient coding. In one example, denoise filters are used to remove extraneous noise from a video source, allowing a later encoding step to operate with greater efficiency.
However, with typical video encoding, it is difficult to know exactly how to perform preprocessing in order to create the most efficient encoding with the fewest visible artifacts. What is needed is a mechanism for gaining knowledge about a video source which can be used to facilitate preprocessing decisions.
Multiple-pass video encoding systems and techniques are described. In various implementations, these systems and techniques utilize statistics taken during a first-pass encoding to create complexity measurements for video data to be encoded. In one implementation, through analyzing these complexity measurements, preprocessing decisions, such as determining the strength of denoise filters, are made. In one implementation, temporal and spatial complexity parameters are calculated as the complexity measurements. These parameters are then used to compute a unified complexity parameter for each group of pictures being encoded.
In one example implementation, a method of determining parameters for preprocessing of a group of one or more pictures is described. The example method comprises determining one or more complexity parameters for the group of pictures and encoding the group of pictures in a video stream based at least in part on the one or more complexity parameters.
In another example implementation, a system for encoding video is described. The example system comprises a first-pass video encoding module which is configured to analyze one or more frames in a video sequence and to calculate one or more encoding parameters to be used in encoding the one or more frames in the video sequence. The example system also comprises a complexity-based adaptive preprocessing module which is configured to determine one or more complexity parameters for the one or more frames and to determine parameters for preprocessing filters to be used during encoding of the one or more frames, based on the one or more complexity parameters. The example system also comprises a second-pass video encoding module which is configured to apply the preprocessing filters to the one or more frames based on the preprocessing filter parameters and to encode the filtered frames into encoded video stream data.
In another example implementation, one or more computer-readable media are described which contain instructions which, when executed by a computer, cause the computer to perform an example method for encoding video. The example method comprises performing a first-pass analysis on one or more frames in a video sequence in order to calculate one or more encoding parameters to be used in encoding the one or more frames in the video sequence. The example method also comprises determining one or more complexity parameters for the one or more frames based on the one or more encoding parameters, determining preprocessing filters to be used during encoding of the one or more frames based on the one or more complexity parameters, applying the preprocessing filters to the one or more frames based on the preprocessing filter parameters, and performing a second-pass analysis on the one or more frames to encode the filtered frames into encoded video stream data.
The exemplary techniques and systems described herein allow for and perform additional preprocessing on video data in a multiple-pass video encoding system. After a first pass is performed, video statistics from the first-pass encoding are analyzed to determine complexity parameters for the encoded content. These complexity parameters are then used to control preprocessing performed on the video data. In one example, the preprocessing performed is the application of a filter. The preprocessed video data is then encoded by a later pass of the encoding system. By determining and utilizing complexity data, the systems and techniques described herein can use the content of the encoded video to make more-informed decisions about what preprocessing should or should not be performed. This results in more efficient video encoding that is better matched to the qualities of the video being encoded. Additionally, the techniques described herein add very little overhead to the encoding process, and so do not overly complicate encoding.
Examples of Multiple-Pass Video Encoding Systems
Multiple-pass video encoders generally perform a first encoding on video data in order to determine statistics about the video data. These statistics are then used to create controls for later processing and encoding. By using information gained during a first-pass analysis, multiple-pass encoding systems are able to perform processing and encoding that is more accurately directed toward the particular nature of the video being encoded. This tuning of the process results in an eventual encoded video stream that either has a lower bit-rate, has fewer visible artifacts, or both.
In the particular illustrated example, the preprocessing 130 takes the raw video data as input and applies preprocessing filters or other techniques to it before passing it to a second-pass encoding 140, where the processed video data is then encoded into a final encoded video stream 150. In other implementations, the preprocessing 130 may simply analyze the statistics provided to it by the first-pass encoding 120 and give control data to the second-pass encoding 140, which would then take the raw video data 110 as input and encode the raw video data according to the control data. The result is a final encoded video stream which is then output from the video encoding system. Note also that in alternative implementations, more than two passes may be used before outputting a final encoded video stream.
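As an illustration of this control flow only, a skeleton of a two-pass pipeline might look like the following sketch, where the stage functions are placeholders supplied by the caller rather than components of the system described here:

```python
from typing import Callable, Sequence

def multipass_encode(
    raw_video: Sequence,
    first_pass: Callable,   # first-pass encoding 120: gathers statistics
    analyze: Callable,      # preprocessing analysis 130: statistics -> control data
    preprocess: Callable,   # applies filters to the raw video per the control data
    second_pass: Callable,  # second-pass encoding 140: produces the final stream 150
) -> bytes:
    stats = first_pass(raw_video)
    controls = analyze(stats)
    filtered = preprocess(raw_video, controls)
    return second_pass(filtered)
```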
In the illustrated implementation, the video encoding system 200 comprises a first-pass video encoding module 210. In one implementation, this module is configured to accept raw video data and perform a first-pass encoding of the video data. As discussed above, this first pass is performed in order to acquire statistics about the video data that can then be used in later encoding. Additionally, in various implementations the first-pass video encoding module may also produce a first-pass encoded video stream which may or may not be used in later encoding.
The illustrated implementation also shows a complexity-based adaptive preprocessing module 230, which is configured to perform preprocessing on the raw video data (or the first-pass encoded data, in alternative implementations), before final encoding. Then, in the illustrated implementation, a final encoding is performed by the second-pass encoding module 220, which is configured in one implementation to accept preprocessed video data from the complexity-based adaptive preprocessing module and perform a final encoding on it. In alternative implementations, additional video encoding modules (not illustrated) may also be included in the system 200 and/or the two (or more) encoding modules may be combined into a single module.
Next, the process continues to block 320, where a first encoding is performed in order to generate encoding statistics. In some implementations, the encoding is performed according to the VC-1 video encoding standard. In other implementations other standards may be used, including, but not limited to, Windows Media Video versions 7, 8 and 9, H.264, MPEG-2, and MPEG-4. During the process of block 320, various statistics may be reported. However, for ease of description, the techniques described herein are presented with reference to only two statistics: the frame size and the quantization parameter for each frame encoded during the process of block 320. Thus, in one implementation only quantization parameters and frame sizes for each frame are recorded after this first pass. In another implementation, if variable quantization parameters are used during the first pass, an average over the frame is recorded for use during preprocessing. In alternative implementations, other statistics may be collected which provide additional information about complexity and can be used in preprocessing.
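For concreteness, the per-frame record that such a first pass might emit could be sketched as follows; the field and function names are assumptions, and only the frame type, frame size, and quantization parameter come from the description above:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FrameStats:
    frame_type: str  # "I", "P", or "B"
    size: int        # encoded frame size from the first pass
    qp: float        # quantization parameter (averaged over the frame if variable)

def record_frame(frame_type: str, size: int, block_qps: list) -> FrameStats:
    # When variable quantization is used within a frame, record the
    # average QP over the frame for use during preprocessing.
    return FrameStats(frame_type, size, mean(block_qps))
```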
Next, the process 300 continues to block 330, where the system 200 determines complexity parameters from the encoding statistics determined at block 320. Particular examples of processes to determine complexity parameters are described below. Next, at block 340, the system 200 encodes the video data based on the complexity parameters determined at the process of block 330. Particular examples of processes to encode video data using complexity parameters are described below as well.
Finally, in one implementation, the encoded video stream created at block 340 is output by the system 200. In alternative implementations, additional encoding or post-processing modifications may be made to the video stream before output, but for the sake of simplicity these implementations are not illustrated.
Next, at block 420, the process begins a loop to analyze each partitioned group of pictures. Then, at block 430, the system determines a spatial complexity parameter for the currently-analyzed group of pictures. This is followed, at block 440, by the system determining a temporal complexity parameter for the current group of pictures. Descriptions of temporal and spatial complexity will follow.
The last illustrated block in the loop is block 450, where a unified complexity parameter is determined for the group of pictures. In some implementations, including ones described below, the unified complexity parameter is determined by manipulating the previously-determined temporal and spatial complexity parameters; in other implementations, the unified complexity parameter may be determined through other analysis. In yet other implementations a unified parameter may not be calculated at all; instead, individual parameters, such as the spatial and temporal complexity parameters computed in blocks 430 and 440, may be used for preprocessing. Finally, at block 460, the loop is repeated for the next group of pictures.
Examples of Determining Complexity
The ideas of spatial and temporal complexity are illustrated in the example images discussed below.
Example images 510 and 520 illustrate differences in spatial complexity. In one implementation, spatial complexity captures the amount of detail in a video frame. Thus, in the example shown, image 510, which contains many shapes, some of which overlap, exhibits substantially greater spatial complexity than image 520, which contains only a single circle.
By contrast, in one implementation temporal complexity captures the difficulty in predicting one frame from a previously-encoded frame. An example of this is illustrated in images 530 and 540. Please note that in each of the two images 530 and 540 movement within the image is illustrated through the use of arrows and dotted figures; this is merely an abstraction of movement that would take place over the course of various frames within a group of pictures. In the examples of images 530 and 540, image 530 shows a lower temporal complexity than does image 540. This is because, while image 530 has a high spatial complexity, its only movement, and thus the only part of the frame that needs to be predicted, is a simple sideways movement of the triangle 535. In contrast, image 540 shows a large movement of the circle 545, which provides a more difficult task of prediction, and therefore raises the level of temporal complexity of the group of pictures represented by image 540.
The process begins at block 610, where an I-frame is located for the group of pictures being analyzed. As mentioned above, in a preferred implementation, there is only one I-frame within the group of pictures. Next, the quantization parameter and frame size are determined for this I-frame. In one implementation, this determination may consist solely of looking up the recorded values for the quantization parameter and the frame size for the I-frame. In another implementation, when variable quantization parameters are used, an average quantization parameter is computed for the I-frame to ease later computations.
Next, at block 630, the quantization parameter and frame size for the I-frame are multiplied and, at block 640, this product is set as the spatial complexity parameter for the group of pictures. Thus, for a quantization parameter and frame size for the I-frame of QP_I and Size_I, respectively, the spatial complexity parameter for every frame in the group of pictures is calculated by:

C_s = QP_I × Size_I
In alternative implementations, the calculation of the spatial complexity parameter may be modified by scaling either or both of the input statistics before combining them into the final parameter. Thus, one or both of the quantization parameter and frame size may be scaled exponentially, or may be multiplied by a scale before calculating a spatial complexity parameter.
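Blocks 610-640 amount to a one-line computation; a sketch follows, with hypothetical scaling hooks standing in for the alternative implementations just mentioned:

```python
def spatial_complexity(qp_i: float, size_i: int,
                       qp_scale: float = 1.0, size_scale: float = 1.0) -> float:
    """C_s = QP_I x Size_I for the group's single I-frame (blocks 630-640).

    qp_scale and size_scale are hypothetical hooks for the alternative
    implementations that scale the inputs before combining them.
    """
    return (qp_scale * qp_i) * (size_scale * size_i)
```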
The process begins at block 710, where one or more P-frames are located for the group of pictures being analyzed. As mentioned above, in a preferred implementation, there is only one I-frame and a collection of P-frames (as well as B-frames) within the group of pictures. Next, at block 720, a loop is performed to analyze each P-frame within the group of pictures.
At block 730, the quantization parameter and frame size are determined for the particular P-frame being analyzed. In one implementation, this determination may consist solely of looking up the recorded values for the quantization parameter and the frame size for the P-frame. In another implementation, when variable quantization parameters are used, an average quantization parameter is computed for the P-frame to ease later computations.
Next, at block 740, the quantization parameter and frame size for the P-frame are multiplied. Thus, for a quantization parameter and frame size for the P-frame of QP_P and Size_P, respectively, a first product is calculated for the P-frame by:

C_t′ = QP_P × Size_P
While this product does capture the general notion that lower temporal complexity should lead to a smaller frame size at a given QP, experimentation has shown that the above measure is strongly correlated with spatial complexity. Thus, given the same amount of motion and the same QP, a scene with higher spatial complexity is likely to produce a larger P-frame than a scene with low spatial complexity. In some encoder implementations, this is due to imperfections in the capture and motion-estimation processes.
To account for this correlation, at block 750, the product given above is divided by the spatial complexity parameter. As discussed above, in the illustrated implementation this is the spatial complexity parameter determined for the group of pictures at block 640, so the temporal complexity parameter for each P-frame is C_t = C_t′ / C_s.
This process is then repeated for each P-frame in the group of pictures, at block 760.
Next, in order to have a single temporal complexity parameter for the group of pictures, an average of the temporal complexity parameters for the P-frames in the group of pictures is taken. This is performed by the system in block 770. Finally, at block 780 this average is set as the temporal complexity parameter for the group of pictures.
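Taken together, blocks 710-780 might be sketched as follows; the (qp, size) pairs are assumed to come from the first-pass statistics:

```python
from statistics import mean

def temporal_complexity(p_frame_stats: list, c_s: float) -> float:
    """Average of (QP_P x Size_P) / C_s over the group's P-frames.

    p_frame_stats holds assumed (qp, size) pairs for each P-frame; c_s is
    the group's spatial complexity parameter (blocks 740-780).
    """
    per_frame = [(qp * size) / c_s for qp, size in p_frame_stats]  # blocks 740-750
    return mean(per_frame)                                          # blocks 770-780
```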
The illustrated process begins at block 810, where the temporal and spatial complexity parameters are normalized. In one implementation, this normalization scales each parameter against an upper bound, according to the following two equations:

C_t = 255 × C_t* / MAXCOMP_Temporal

C_s = 255 × C_s* / MAXCOMP_Spatial

where C_t* and C_s* are the previously-calculated temporal and spatial complexity parameters, respectively, and MAXCOMP_Temporal and MAXCOMP_Spatial are numbers considered as the upper bounds of the complexities. In one implementation, used with the VC-1 encoder, MAXCOMP_Temporal and MAXCOMP_Spatial are chosen to be two numbers close to 2.0 and 2×10^8, respectively. In one implementation, if either of the above calculations results in a number greater than 255, that number is clipped so as to remain inside the interval [0, 255].
Next, at block 820, the normalized temporal complexity parameter is scaled according to a predetermined exponent. This is done to adjust the relative strength of the spatial and temporal complexities within the unified complexity parameter. In one implementation, a value of 0.5 is used as the exponent for the temporal complexity parameter. Next, at block 830 the scaled temporal complexity parameter and the spatial complexity parameter are multiplied, and at block 840 this product is set as the unified complexity parameter for the group of pictures. Thus, the unified complexity parameter is found as:
C = C_s × C_t^α

where α is the scaling exponent used in block 820. It should be noted that this equation can be written equivalently as:

C = C_s^(1−α) × (C_t′)^α

This alternative form shows more clearly how the α exponent acts as a relative strength control between the two complexity parameters.
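Putting blocks 810-840 together, a minimal sketch might read as follows; the normalization form (scaling into [0, 255] against the upper bounds) follows the equations above as reconstructed, and the function and constant names are assumptions:

```python
MAXCOMP_TEMPORAL = 2.0  # approximate upper bound on the temporal complexity
MAXCOMP_SPATIAL = 2e8   # approximate upper bound on the spatial complexity

def normalize(value: float, upper_bound: float) -> float:
    """Scale a raw complexity into [0, 255], clipping any overshoot (block 810)."""
    return min(255.0, 255.0 * value / upper_bound)

def unified_complexity(c_s: float, c_t: float, alpha: float = 0.5) -> float:
    """C = C_s x C_t^alpha on the normalized parameters (blocks 820-840)."""
    norm_s = normalize(c_s, MAXCOMP_SPATIAL)
    norm_t = normalize(c_t, MAXCOMP_TEMPORAL)
    return norm_s * (norm_t ** alpha)
```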
Examples of Complexity-Based Adaptive Preprocessing
Next, at block 1030, the scaled complexity parameter is normalized to form an appropriate filter strength value. In the case of VC-1 encoding, one implementation performs the scaling and normalization according to the following equation:
FilterStrength = (C^β − 2048) >> 10
where β is the exponential scale of block 1020 (e.g., 1.2 in a VC-1 encoding system), and the operator >> represents a right bit-shift operation. Additionally, in some implementations, if the resulting FilterStrength value falls outside the proper range for the filters being used, the value is clipped. Thus, in an exemplary VC-1 implementation, FilterStrength is clipped to the range [0, 8]. Next, at block 1040, the filters are applied to the group of pictures (or the raw video associated therewith) according to the calculated filter strength. The loop then repeats for additional groups of pictures at block 1050.
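A direct transcription of this calculation, under the VC-1 example values given above (β = 1.2, clipping range [0, 8]):

```python
def filter_strength(c: float, beta: float = 1.2) -> int:
    """FilterStrength = (C^beta - 2048) >> 10, clipped to [0, 8] (VC-1 example)."""
    raw = (int(c ** beta) - 2048) >> 10  # right bit-shift by 10
    return max(0, min(8, raw))

# For example, C = 1000 gives int(1000 ** 1.2) = 3981, and
# (3981 - 2048) >> 10 = 1, i.e. a mild filter strength.
assert filter_strength(1000.0) == 1
```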
It should be noted that the estimated complexities C_s, C_t, and C may be used in alternative implementations to make better encoding decisions in other encoding and preprocessing modules. For example, and not by way of limitation, the system may make rate control decisions as to what quantization parameter, second quantization parameter, or P- or B-frame delta quantization parameters to use, if the system considers the three complexity parameters from multiple frames together. In another example, a quantization module of an encoding system may benefit from the use of complexity parameters, such as by using a larger dead zone for quantization when the value of C is high.
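As a hedged sketch of the dead-zone idea just mentioned (the complexity threshold and bias values below are invented for illustration, not taken from any encoder):

```python
def quantize_with_deadzone(coeff: int, qp: int, c: float) -> int:
    """Scalar quantizer whose dead zone widens when the unified complexity C is high.

    A smaller rounding bias zeroes more near-zero coefficients, saving bits on
    high-complexity content where the loss is least perceptible. The threshold
    (128) and bias choices below are illustrative assumptions.
    """
    bias = qp // 4 if c > 128 else qp // 2  # smaller bias => wider dead zone
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) + bias) // qp)
```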
Computing Environment
The above video encoding techniques can be performed on any of a variety of computing devices. The techniques can be implemented in hardware circuitry, as well as in software executing within a computer or other computing environment.
The computing environment 1100 includes at least one processing unit and memory 1120.
A computing environment may have additional features. For example, the computing environment 1100 includes storage 1140, one or more input devices 1150, one or more output devices 1160, and one or more communication connections 1170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 1100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1100, and coordinates activities of the components of the computing environment 1100.
The storage 1140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1100. The storage 1140 stores instructions for the software 1180 implementing the described techniques.
The input device(s) 1150 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1100. For audio, the input device(s) 1150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 1160 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1100.
The communication connection(s) 1170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The techniques described herein can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 1100, computer-readable media include memory 1120, storage 1140, communication media, and combinations of any of the above.
The techniques herein can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “calculate,” “generate,” and “determine,” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
In view of the many possible variations of the subject matter described herein, we claim as our invention all such embodiments as may come within the scope of the following claims and equivalents thereto.