Graphics layer reduction for video composition

Information

  • Patent Grant
  • 8063916
  • Patent Number
    8,063,916
  • Date Filed
    Friday, October 8, 2004
  • Date Issued
    Tuesday, November 22, 2011
Abstract
A method and system that blend graphics layers and a video layer. The graphics layers may be above and below the video layer, which may be a streaming video. The graphics layers may be stored in memory, blended and stored back in memory. The blended graphics layers may be combined with streaming video and output on a display. Blending the graphics in memory may be done offline and may save processing time and improve real-time combining with streaming video. In an embodiment of the present invention, there may be several layers of graphics below the video layer, and several graphics layers above the video layer. The top graphics layers may be blended into one top graphics layer, and the bottom graphics layers may be blended into one bottom graphics layer. The top and bottom graphics layers may then be blended into one graphics layer and combined with the video layer.
Description
FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[Not Applicable]


MICROFICHE/COPYRIGHT REFERENCE

[Not Applicable]


BACKGROUND OF THE INVENTION

In video systems where graphics are added to video content, there may be a video layer and a graphics layer on top of the video layer. The video layer may come from a television (TV) source, and the graphics layer may be used to add information such as, for example, a user guide or some kind of graphical user interface such as a TV menu. In more sophisticated video systems, there may be graphics added on top of the video as well as graphics below the video, for example, a user guide on top of the video and a still background behind it.


Outputting the layers as one output onto a display such as, for example, a monitor, requires blending the layers into one stream of data. The process of blending the layers together is known as graphics blending or graphics and video compositing.


A system with graphics and video layers may be viewed as a planar system with planes composed of the different layers. A low-end system consists of only one graphics layer and one video layer. A higher-end system consists of at least one graphics layer above the video layer, and at least one graphics layer below the video layer.


Blending the layers is generally done from the bottom up. In a system with three layers, for example, the bottom graphics layer is first blended with the video layer, which results in a new blended layer; that layer may then be blended with the top graphics layer to get a composite image to output to the display.


Each graphics or video layer is composed of a buffer and an alpha. The buffer is a region of the memory in the system that contains the pixels of the layer. Alpha is the blend factor of a layer; it indicates how much of that layer to blend with the layer below it. The value of alpha ranges from 0 to 1, inclusive. For video, the alpha value can be the same for the whole layer or different per pixel, whereas with graphics, each pixel may have a different alpha value.


For example, if a system has two layers, a graphics layer on the top and a video layer below it, the buffers and alphas for the graphics and video layers would be Bt, At, Bb, and Ab, respectively. Blending the two layers together yields the following:

Btb=AtBt+(1−At)Bb  (1)

Where Btb is the buffer for the blended layer. If At is 1, then when blending the graphics with the video below it, all that is seen is entirely graphics, so an alpha value of 1 implies complete opaqueness. If At is 0, then when blending the graphics with the video below it, all that is seen is entirely video, so an alpha value of 0 implies complete transparency.
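As a concrete illustration (not part of the patent text), equation (1) can be sketched for a single pixel in Python; the function name and sample values are hypothetical:

```python
def blend(alpha_top, buf_top, buf_bottom):
    """Blend one pixel of a top layer over a bottom layer per equation (1):
    Btb = At*Bt + (1 - At)*Bb, with alpha in the range [0.0, 1.0]."""
    return alpha_top * buf_top + (1.0 - alpha_top) * buf_bottom

# At = 1 shows only the top (graphics) pixel; At = 0 shows only the
# bottom (video) pixel; intermediate values mix the two linearly.
assert blend(1.0, 200, 50) == 200
assert blend(0.0, 200, 50) == 50
assert blend(0.5, 200, 50) == 125.0
```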


In most systems, alpha is stored as an 8-bit number, giving 256 levels of transparency ranging from complete transparency (0) to complete opaqueness (255, representing an alpha of 1).
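To make the 8-bit representation concrete, a small hypothetical helper can map the stored 8-bit alpha to the [0, 1] blend factor used in equation (1); note that some hardware scales by 256 rather than 255, so the divisor here is an assumption:

```python
def alpha_from_8bit(a8):
    """Map an 8-bit alpha (0..255) to a blend factor in [0.0, 1.0],
    giving 256 distinct levels of transparency."""
    assert 0 <= a8 <= 255
    return a8 / 255.0

assert alpha_from_8bit(0) == 0.0      # complete transparency
assert alpha_from_8bit(255) == 1.0    # complete opaqueness
assert len({alpha_from_8bit(i) for i in range(256)}) == 256
```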


In a more complex system, with graphics layers above and below the video layer, the blending calculation grows accordingly. For example, a system may have a graphics layer on top of the video layer, with buffer B1 and alpha A1, the video layer with buffer BV and alpha AV, and a graphics layer below the video layer, with buffer B2 and alpha A2. Applying equation (1) above, blending the video and the graphics layer below it yields:

BV2=AVBV+(1−AV)B2  (2)

Where BV2 is the buffer for the blended bottom layer. Then blending the top graphics layer with the blended bottom layer yields:

BV3=A1B1+(1−A1)BV2  (3)

Where BV3 is the buffer for the three blended layers. Expanding and re-arranging equation (3) after applying equation (2) yields:

BV3=AV(1−A1)BV+A1B1+(1−A1)(1−AV)B2  (4)


Equation (4) above illustrates the calculation required to blend two layers of graphics with one layer of video. In more complex systems, there may be several layers of graphics above a layer of video, and several layers of graphics below the layer of video. In such systems, the graphics layers on top may be blended together into one top graphics layer, the graphics layers below may be blended together into one bottom graphics layer, then the top layer, video layer, and bottom layer, may be blended together according to equation (4).
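The algebra behind equation (4) can be checked numerically: blending bottom-up per equations (2) and (3) gives the same pixel value as the single expression in equation (4). A sketch with hypothetical values:

```python
def blend(a_top, b_top, b_bottom):
    # Equation (1): result = At*Bt + (1 - At)*Bb
    return a_top * b_top + (1.0 - a_top) * b_bottom

def composite_bottom_up(a1, b1, av, bv, b2):
    # Equations (2) and (3): video over bottom graphics, then top graphics over that.
    bv2 = blend(av, bv, b2)
    return blend(a1, b1, bv2)

def composite_direct(a1, b1, av, bv, b2):
    # Equation (4): BV3 = AV*(1-A1)*BV + A1*B1 + (1-A1)*(1-AV)*B2
    return av * (1.0 - a1) * bv + a1 * b1 + (1.0 - a1) * (1.0 - av) * b2

a1, b1, av, bv, b2 = 0.25, 100.0, 0.5, 180.0, 60.0
assert abs(composite_bottom_up(a1, b1, av, bv, b2)
           - composite_direct(a1, b1, av, bv, b2)) < 1e-9
```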


In video systems, the hardware that performs the compositing calculations needs to read all three layers simultaneously along with the video, which is streaming data, and output the result onto the monitor in real time, which can get bandwidth-expensive. The compositing process can also be hardware-expensive when two graphics layers are read out of memory and calculations are made to accommodate streaming video data. Such problems may sometimes be seen on a personal computer (PC): when moving a window, for example, a portion of the screen that was covered by the window may remain blank for a few seconds, because the graphics engine may take time to respond and do all the blending to accommodate the new graphics layers.


Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of ordinary skill in the art through comparison of such systems with the present invention.


BRIEF SUMMARY OF THE INVENTION

Aspects of the present invention may be seen in a system and method that blends graphics layers and a video layer, wherein the graphics layers are stored in a memory. The method may comprise retrieving the graphics layers from the memory; blending the graphics layers; storing the blended graphics layers in the memory; reading the stored blended graphics layers from the memory; and combining the blended graphics layers with a streaming video layer. In an embodiment of the present invention, the blended graphics layers and the streaming video may be combined in raster format. The combined graphics and video may then be output onto a display device.


In an embodiment of the present invention, at least a portion of the graphics layers may be above the video layer and at least a portion of the graphics layers may be below the video layer. Blending the graphics layers may comprise blending the at least a portion of the graphics layers above the video layer into a top graphics layer; blending the at least a portion of the graphics layers below the video layer into a bottom graphics layer; and blending the top graphics layer and the bottom graphics layer into one graphics layer.


The system comprises a memory and at least one processor capable of performing the method that blends graphics layers and a video layer, wherein the graphics layers are stored in a memory.


These and other features and advantages of the present invention may be appreciated from a review of the following detailed description of the present invention, along with the accompanying figures in which like reference numerals refer to like parts throughout.





BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS


FIG. 1 illustrates a block diagram of an exemplary system for compositing graphics and video, in accordance with an embodiment of the present invention.



FIG. 2A illustrates three exemplary layers of graphics and video to be blended, in accordance with an embodiment of the present invention.



FIG. 2B illustrates two exemplary layers of graphics and video to be blended, in accordance with an embodiment of the present invention.



FIG. 3A illustrates an exemplary graphics layer, in accordance with an embodiment of the present invention.



FIG. 3B illustrates another exemplary graphics layer, in accordance with an embodiment of the present invention.



FIG. 3C illustrates an exemplary video layer, in accordance with an embodiment of the present invention.



FIG. 3D illustrates an exemplary blended alpha plane, in accordance with an embodiment of the present invention.



FIG. 3E illustrates an exemplary blended graphics color plane, in accordance with an embodiment of the present invention.



FIG. 3F illustrates exemplary composited graphics and video layers, in accordance with an embodiment of the present invention.



FIG. 4 illustrates a flow diagram of an exemplary method of compositing graphics layers and a video layer, in accordance with an embodiment of the present invention.



FIG. 5 illustrates an exemplary computer system, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Aspects of the present invention generally relate to the field of graphics and video compositing. Specifically, the present invention relates to the compositing of graphics layers appearing both above and below a video layer, and to the reduction of graphics layers in video compositing.



FIG. 1 illustrates a block diagram of an exemplary system 100 for compositing graphics and video, in accordance with an embodiment of the present invention. In an embodiment of the present invention, the system 100 may comprise a central processing unit (CPU) 101, a memory-to-memory compositor (M2MC) 103, and a graphics-video compositor 105. The CPU 101 may generate graphics objects 111 and store them in a memory unit. The memory-to-memory compositor 103 may then read the graphics objects 111 and combine them into one graphics object 113. The memory-to-memory compositor 103 may combine two or more graphics objects 111.


The combined graphics object 113 may then be fed into the graphics-video compositor 105, which may also have a streaming video 115 coming into it. The graphics-video compositor 105 may be responsible for combining video with graphics that are being fed from a graphics feeder. The graphics-video compositor 105 may then mix the graphics object 113 and the streaming video 115, and output the result 117. In an embodiment of the present invention, the graphics object 113 and the streaming video 115 may be mixed in a raster format, i.e. pixel by pixel, as the video is output onto a display device.



FIG. 2A illustrates three exemplary layers of graphics and video to be blended, in accordance with an embodiment of the present invention. Two layers of graphics, 211 and 215, one above and one below a video layer 213, may be combined together prior to blending with the video layer 213.



FIG. 2B illustrates two exemplary layers of graphics and video to be blended, in accordance with an embodiment of the present invention. The graphics layers, 211 and 215 of FIG. 2A, above and below the video layer 213, may be combined into one graphics layer 221, which may then be blended with the video layer 213 according to equation (1) above. As a result, only one graphics layer 221 and one video layer 213 may be read out and output onto a display such as, for example, a monitor. In an embodiment of the present invention, the graphics layers may change with each video frame.


In an embodiment of the present invention, the graphics layers may be combined and treated as if they were one graphics layer above the video layer. In an embodiment of the present invention, the graphics layers may have different alpha for each pixel of a layer, but since the blending of the graphics layers may be done offline, there may be no effect on real-time processing and no delays in video display.


In an embodiment of the present invention, a combined graphics layer 221 may have a buffer BG and an alpha AG, and a video layer 213 may have a buffer BV and alpha AV. Combining the combined graphics layer 221 on top with the video layer 213 on bottom according to equation (1) yields:

BGV=AGBG+(1−AG)BV  (5)

Matching equation (5) against equation (4), BV must be multiplied by AV(1−A1), so (1−AG)=AV(1−A1), which yields:

AG=1−AV(1−A1)  (6)

Applying equation (6) to equation (5) yields:

BGV=(1−AV(1−A1))BG+AV(1−A1)BV  (7)

Matching the remaining terms of equation (7) against equation (4) yields:

BG=(A1B1+(1−A1)(1−AV)B2)/(1−AV(1−A1))  (8)


As a result, equations (6) and (8) may give the alpha and the buffer of the combined graphics layer 221, respectively. In an embodiment of the present invention, the process of combining the graphics layers may be performed in the background, which may consume less bandwidth than blending a bottom graphics layer with the video layer above it and then blending the result with the top graphics layer. In such an embodiment, the real-time processing may also be reduced.
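One way to sanity-check equations (6) and (8) is to fold the two graphics layers into a single (AG, BG) pair and verify that blending it over the video per equation (5) reproduces the three-layer result of equation (4). A sketch with hypothetical per-layer values:

```python
def combined_graphics(a1, b1, av, b2):
    """Fold top and bottom graphics layers into one layer per equations (6) and (8)."""
    ag = 1.0 - av * (1.0 - a1)                            # equation (6)
    bg = (a1 * b1 + (1.0 - a1) * (1.0 - av) * b2) / ag    # equation (8)
    return ag, bg

a1, b1, av, bv, b2 = 0.25, 100.0, 0.5, 180.0, 60.0
ag, bg = combined_graphics(a1, b1, av, b2)

# Equation (5) with the combined layer ...
bgv = ag * bg + (1.0 - ag) * bv
# ... must equal the three-layer composite of equation (4).
bv3 = av * (1.0 - a1) * bv + a1 * b1 + (1.0 - a1) * (1.0 - av) * b2
assert abs(bgv - bv3) < 1e-9
```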


In an embodiment of the present invention, equation (6) may be computed by the M2MC in one pass. In another embodiment of the present invention, A1 and AV may be constant, and the CPU may compute equation (6).


In an embodiment of the present invention, the combined graphics layer buffer (BG) may depend on the alpha of the video layer (AV), which may change from one frame to another. In such an embodiment, the combined graphics layer buffer may be updated. In another embodiment of the present invention, the alpha of the video layer may stay constant over several frames. In such an embodiment, the computation for the combined graphics layer may be done once and re-used as long as the graphics layers stay the same. In yet another embodiment of the present invention, the alpha of the video layer may be different from pixel to pixel within a frame. In such an embodiment, the additional computation may not have an effect on the real-time system, since the equations involving the alpha of the video layer may be done offline as part of the graphics operations in the CPU or the M2MC.


If the alpha factors for all the layers remain constant, equation (8) becomes of the form K1*B1+K2*B2, where K1 and K2 need to be computed only once, and:

K1=A1/(1−AV(1−A1))  (9)
K2=(1−A1)(1−AV)/(1−AV(1−A1))  (10)
BG=K1*B1+K2*B2  (11)
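When all alphas are constant, K1 and K2 can be computed once and reused for every frame; a brief sketch (function name and values hypothetical):

```python
def precompute_k(a1, av):
    """Equations (9) and (10): constants for BG = K1*B1 + K2*B2 (equation (11))."""
    denom = 1.0 - av * (1.0 - a1)
    return a1 / denom, (1.0 - a1) * (1.0 - av) / denom

a1, av = 0.25, 0.5
k1, k2 = precompute_k(a1, av)
b1, b2 = 100.0, 60.0

# Equation (11) matches equation (8) evaluated directly.
bg_direct = (a1 * b1 + (1.0 - a1) * (1.0 - av) * b2) / (1.0 - av * (1.0 - a1))
assert abs((k1 * b1 + k2 * b2) - bg_direct) < 1e-9
```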


However, if the alpha values are per pixel, then the values of K1 and K2 may change per pixel, and the division operation may be required per pixel. In an embodiment of the present invention, an alternative compositing equation may be used, where equation (5) may be changed to the following:

BGV=BG+(1−AG)BV  (12)

Then equation (8) becomes:

BG=A1B1+(1−A1)(1−AV)B2  (13)


In such an embodiment, a division operation may not be required, and the processing cost may be further reduced compared to the processing cost in an embodiment that may have a division operation.
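The division-free, pre-multiplied form of equations (12) and (13) can be sketched the same way; it yields the same composited pixel as the normalized form (hypothetical values):

```python
def combined_graphics_premul(a1, b1, av, b2):
    """Equations (6) and (13): combined alpha and pre-multiplied color, no division."""
    ag = 1.0 - av * (1.0 - a1)                     # equation (6)
    bg = a1 * b1 + (1.0 - a1) * (1.0 - av) * b2    # equation (13)
    return ag, bg

a1, b1, av, bv, b2 = 0.25, 100.0, 0.5, 180.0, 60.0
ag, bg = combined_graphics_premul(a1, b1, av, b2)

# Equation (12): BGV = BG + (1 - AG)*BV, equal to the result of equation (4).
bgv = bg + (1.0 - ag) * bv
bv3 = av * (1.0 - a1) * bv + a1 * b1 + (1.0 - a1) * (1.0 - av) * b2
assert abs(bgv - bv3) < 1e-9
```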


The video compositor may be set up either to compute equation (12) (when the graphics feeder is in alpha pre-multiply mode) or equation (5) (when it is not).


Equation (13) may be calculated by the M2MC. In an embodiment of the present invention, if A1 and AV are constant, then (1−A1)(1−AV) may be computed once and turned into one constant value AK. As a result, equation (13) becomes:

BG=A1B1+AKB2  (14)


In another embodiment of the present invention, if A1 and AV are not both constant, then equation (13) may need to be done in two passes, where the first pass may compute an intermediate value BC:

BC=(1−AV)B2  (15)

Then, the second pass may compute BG:

BG=A1B1+(1−A1)BC  (16)
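The two-pass computation of equations (15) and (16) can be sketched per pixel; here AV varies per pixel while A1 is held constant for brevity (an assumption for illustration, since A1 could vary as well):

```python
def two_pass_bg(a1, b1, av, b2):
    """Pass 1: BC = (1-AV)*B2 (equation (15)).
    Pass 2: BG = A1*B1 + (1-A1)*BC (equation (16))."""
    bc = [(1.0 - a) * p for a, p in zip(av, b2)]              # first M2MC pass
    return [a1 * t + (1.0 - a1) * c for t, c in zip(b1, bc)]  # second M2MC pass

# Two pixels: video fully transparent (AV=0), then fully opaque (AV=1).
bg = two_pass_bg(0.5, [100.0, 200.0], [0.0, 1.0], [80.0, 80.0])
assert bg == [90.0, 100.0]
```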


When equation (12) is calculated, the graphics feeder may be set up in “alpha pre-multiply” mode, which may allow having BG rather than AGBG as found in equation (5). AG may still be fed out by the graphics feeder, so that (1−AG)*BV can be computed in the video compositor.


In an embodiment of the present invention, results of computations involving values that may not change may be kept and stored so that they are not re-computed. Stored values may be re-computed when inputs to the equations change.



FIG. 3A illustrates an exemplary graphics layer, in accordance with an embodiment of the present invention. The graphics layer 311 may be an upper graphics layer such as, for example, graphics layer 211 of FIG. 2A. The graphics layer 311 may comprise portions with different alpha values. For example, the entire graphics layer 311 may have an alpha value of 0 (completely transparent) except for an area in the shape of a circle having an alpha value of 1 (completely opaque).



FIG. 3B illustrates another exemplary graphics layer, in accordance with an embodiment of the present invention. The graphics layer 315 may be a lower graphics layer such as, for example, graphics layer 215 of FIG. 2A. The graphics layer 315 may comprise portions with different alpha values. For example, the entire graphics layer 315 may have an alpha value of 0 (completely transparent) except for an area in the shape of a circle having an alpha value of 1 (completely opaque), but at a different position than the opaque circle of the upper graphics layer 311.



FIG. 3C illustrates an exemplary video layer, in accordance with an embodiment of the present invention. The video layer 313 may be a video layer such as, for example, the video layer 213 of FIG. 2A. The video layer 313 may comprise portions with different alpha values. For example, the entire video layer 313 may have an alpha value of 0 (completely transparent) except for an area in the shape of a square having an alpha value of 1 (completely opaque).


In an embodiment of the present invention, the graphic layers 311 and 315 may be blended together and composited with the video layer 313. The alphas of the top graphics layer and the video layer may be used to compute the alpha plane AG. FIG. 3D illustrates an exemplary blended alpha plane, in accordance with an embodiment of the present invention. The alphas of the graphics layer 311 and the video layer 313 may be blended to determine the alpha AG 317 of the blended layers.


The colors of the graphics layers may then be blended together to compute the color plane BG. The computation of BG may be done according to equation (16) above, which may also utilize the alpha of the video layer in the computation. FIG. 3E illustrates an exemplary blended graphics color plane, in accordance with an embodiment of the present invention. The blended graphics color plane BG 319 may be the result of blending the graphics layers 311 and 315 of FIG. 3A and FIG. 3B, respectively, together with the alpha of the video layer 313.


The blended graphics layer 319 may then be combined together with the video layer 313 of FIG. 3C, using the alpha AG 317, to get a graphics-video composition 321. FIG. 3F illustrates exemplary composited graphics and video layers, in accordance with an embodiment of the present invention. The output on a display may be for example the graphics-video composition 321, where the elements of the upper graphics layer 311 may appear above the elements of the video layer 313, and the elements of the lower graphics layer 315 may appear below the elements of the video layer 313.



FIG. 4 illustrates a flow diagram of an exemplary method 400 of compositing graphics layers and a video layer, in accordance with an embodiment of the present invention. In an embodiment of the present invention, the graphics layers may appear both above and below the video layer. The method may start at a starting block 401, and at a next block 403 graphics may be retrieved from a memory. At a next block 405, it may be determined whether there are multiple graphics layers above the video layer. If it is determined that there is only one graphics layer above the video layer, the method may proceed to a next block 409, where it may be determined whether there are multiple graphics layers below the video layer.


If there are multiple graphics layers above the video layer, then at a block 407, the graphics layers above the video layer may be blended together into one top graphics layer. At a next block 409, it may be determined whether there are multiple graphics layers below the video layer.


If at the block 409, it is determined that there is only one graphics layer below the video layer, the method may proceed to a next block 413. If there are multiple graphics layers below the video layer, then at a block 411, the graphics layers below the video layer may be blended together into one bottom graphics layer, and the method may then proceed to the next block 413. At the block 413, the top graphics layer and the bottom graphics layer may be blended together into one graphics layer, which may then be stored back in memory. As the video streams in, for each image, the video layer may then be combined with the appropriate blended graphics layer at a next block 415. Then at a next block 417, the combined graphics and video layers may be output on a display device.
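The control flow of method 400 can be sketched as follows; blend_layers and combine_with_video are hypothetical helpers standing in for the M2MC (equations (13)-(16)) and the graphics-video compositor, respectively:

```python
def composite_frame(layers_above, layers_below, video, blend_layers, combine_with_video):
    # Blocks 405/407: fold multiple upper layers into one top graphics layer.
    top = blend_layers(layers_above) if len(layers_above) > 1 else layers_above[0]
    # Blocks 409/411: fold multiple lower layers into one bottom graphics layer.
    bottom = blend_layers(layers_below) if len(layers_below) > 1 else layers_below[0]
    # Block 413: blend top and bottom into a single graphics layer (stored to memory).
    combined = blend_layers([top, bottom])
    # Blocks 415/417: combine with the streaming video frame for display output.
    return combine_with_video(combined, video)

# Toy helpers just to exercise the flow; real blending follows the equations above.
avg = lambda layers: sum(layers) / len(layers)
out = composite_frame([1.0, 3.0], [5.0], "frame", avg, lambda g, v: (g, v))
assert out == (3.5, "frame")
```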


In an embodiment of the present invention, the method of the flow diagram of FIG. 4 may be performed utilizing a system such as, for example, the system 100 of FIG. 1. The system 100 may be a portion of a system such as, for example, a video decoder system.



FIG. 5 illustrates an exemplary computer system 500, in accordance with an embodiment of the present invention. A central processing unit (CPU) 511 may be interconnected via a system bus 540 to a random access memory (RAM) 531, a read only memory (ROM) 521, an input/output (I/O) adapter 512, a user interface adapter 501, a communications adapter 591, and a display adapter 530. The I/O adapter 512 may connect to the bus 540 peripheral devices such as hard disc drives 541, floppy disc drives 553 for reading removable floppy discs 561, and optical disc drives 510 for reading removable optical discs 571 (such as a compact disc or a digital versatile disc). The user interface adapter 501 may connect to the bus 540 devices such as a keyboard 550, a mouse 580 having a plurality of buttons 590, a speaker 570, a microphone 560, and/or other user interface devices such as a touch screen device (not shown). The communications adapter 591 may connect the computer system to a data processing network 581. The display adapter 530 may connect a monitor 520 to the bus 540.


An alternative embodiment of the present invention may be implemented as sets of instructions resident in the RAM 531 of one or more computer systems 500 configured generally as described in FIG. 5. Until required by the computer system 500, the sets of instructions may be stored in another computer readable memory, for example in a hard disc drive 541, or in removable memory such as an optical disc 571 for eventual use in an optical disc drive 510, or in a floppy disc 561 for eventual use in a floppy disc drive 553. The physical storage of the sets of instructions may physically change the medium upon which they are stored electrically, magnetically, or chemically so that the medium carries computer readable information.


The present invention may be realized in hardware, software, firmware and/or a combination thereof. The present invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein may be suitable. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system to carry out the methods described herein.


The present invention may also be embedded in a computer program product comprising all of the features enabling implementation of the methods described herein which when loaded in a computer system is adapted to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; and b) reproduction in a different material form.


While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

Claims
  • 1. A method that blends graphics layers and a video layer, wherein the graphics layers are stored in a memory, the method comprising: retrieving a plurality of graphics layers from the memory; blending the plurality of graphics layers, using an alpha value of only one of the graphics layers, wherein the blending comprises: blending the at least a portion of the graphics layers above the video layer into a top graphics layer; blending the at least a portion of the graphics layers below the video layer into a bottom graphics layer; and blending the top graphics layer and the bottom graphics layer into one graphics layer; storing the blended graphics layers in the memory; reading the stored blended graphics layers from the memory; and combining the blended graphics layers with a streaming video layer.
  • 2. The method according to claim 1 further comprising outputting the combined graphics and video onto a display device.
  • 3. The method according to claim 1 further comprising combining the blended graphics layers and the streaming video in raster format.
  • 4. The method according to claim 1 wherein each layer comprises a buffer and an alpha.
  • 5. The method according to claim 4 wherein the buffer contains pixel values for the layer.
  • 6. The method according to claim 4 wherein the alpha is constant for the whole layer.
  • 7. The method according to claim 4 wherein the alpha is different for each pixel.
  • 8. A system that blends graphics layers and a video layer, wherein the graphics layers are stored in a memory, the system comprising: a memory; at least one processor capable of retrieving graphics layers from the memory; the at least one processor capable of blending the graphics layers, wherein the blending the graphics layers comprises: the at least one processor capable of blending the at least a portion of the graphics layers above the video layer into a top graphics layer; the at least one processor capable of blending the at least a portion of the graphics layers below the video layer into a bottom graphics layer; and the at least one processor capable of blending the top graphics layer and the bottom graphics layer into one graphics layer; the at least one processor capable of storing the blended graphics layers in the memory; the at least one processor capable of reading the stored blended graphics layers from the memory; the at least one processor capable of combining the blended graphics layers with a streaming video layer; and the at least one processor capable of combining the blended graphics layers and the streaming video in raster format; wherein at least a portion of the graphics layers is above the video layer and at least a portion of the graphics layers is below the video layer.
  • 9. The system according to claim 8 further comprising the at least one processor capable of outputting the combined graphics and video onto a display device.
  • 10. The system according to claim 8 wherein each layer comprises a buffer and an alpha.
  • 11. The system according to claim 10 wherein the buffer contains pixel values for the layer.
  • 12. The system according to claim 10 wherein the alpha is constant for the whole layer.
  • 13. The system according to claim 10 wherein the alpha is different for each pixel.
  • 14. A computer-readable memory having stored thereon, a computer program having at least one code section that blends graphics layers and a video layer, wherein the graphics layers are stored in a memory, the at least one code section being executable by a computer for causing the computer to perform steps comprising: retrieving graphics layers from the memory; blending the graphics layers, wherein the code for blending the graphics layers comprises: code for blending the at least a portion of the graphics layers above the video layer into a top graphics layer;code for blending the at least a portion of the graphics layers below the video layer into a bottom graphics layer; andcode for blending the top graphics layer and the bottom graphics layer into one graphics layer;storing the blended graphics layers in the memory; reading the stored blended graphics layers from the memory; andcombining the blended graphics layers with a streaming video layer; and wherein at least a portion of the graphics layers is above the video layer and at least a portion of the graphics layers is below the video layer.
  • 15. The computer-readable memory according to claim 14 further comprising code for outputting the combined graphics and video onto a display device.
  • 16. The computer-readable memory according to claim 14 further comprising code for combining the blended graphics layers and the streaming video in raster format.
  • 17. The computer-readable memory according to claim 14 wherein each layer comprises a buffer and an alpha.
  • 18. The computer-readable memory according to claim 17 wherein the buffer contains pixel values for the layer.
  • 19. The computer-readable memory according to claim 17 wherein the alpha is constant for the whole layer.
  • 20. The computer-readable memory according to claim 17 wherein the alpha is different for each pixel.
  • 21. The method of claim 1, wherein the blending the graphics layers comprises blending only graphics layers.
  • 22. The system of claim 8, wherein the blending the graphics layers comprises blending only graphics layers.
  • 23. The computer-readable memory of claim 14, wherein the blending the graphics layers comprises blending only graphics layers.
RELATED APPLICATIONS

This patent application makes reference to, claims priority to and claims benefit from U.S. Provisional Patent Application Ser. No. 60/513,276, entitled “Graphics Layer Reduction for Video Composition,” filed on Oct. 22, 2003, the complete subject matter of which is hereby incorporated herein by reference, in its entirety.

5623311 Phillips et al. Apr 1997 A
5625379 Reinert et al. Apr 1997 A
5625611 Yokota et al. Apr 1997 A
5625764 Tsujimoto et al. Apr 1997 A
5631668 Katsura et al. May 1997 A
5635985 Boyce et al. Jun 1997 A
5638499 O'Connor et al. Jun 1997 A
5638501 Gough et al. Jun 1997 A
5640543 Farrell et al. Jun 1997 A
5664162 Dye Sep 1997 A
5673401 Volk et al. Sep 1997 A
5694143 Fielder et al. Dec 1997 A
5696527 King et al. Dec 1997 A
5701365 Harrington et al. Dec 1997 A
5706415 Kelley et al. Jan 1998 A
5706478 Dye Jan 1998 A
5706482 Matsushima et al. Jan 1998 A
5708764 Borrel et al. Jan 1998 A
5719593 De Lange Feb 1998 A
5727084 Pan et al. Mar 1998 A
5727192 Baldwin Mar 1998 A
5737455 Harrington et al. Apr 1998 A
5742779 Steele et al. Apr 1998 A
5742796 Huxley Apr 1998 A
5745095 Parchem et al. Apr 1998 A
5745645 Nakamura et al. Apr 1998 A
5748178 Drewry May 1998 A
5748983 Gulick et al. May 1998 A
5751979 McCrory May 1998 A
5754185 Hsao et al. May 1998 A
5754186 Tam et al. May 1998 A
5757377 Lee et al. May 1998 A
5758177 Gulick et al. May 1998 A
5761516 Rostoker et al. Jun 1998 A
5764238 Lum et al. Jun 1998 A
5764243 Baldwin Jun 1998 A
5765010 Chung et al. Jun 1998 A
5774110 Edelson Jun 1998 A
5777629 Baldwin Jul 1998 A
5790134 Lentz Aug 1998 A
5790136 Hoffert et al. Aug 1998 A
5790795 Hough Aug 1998 A
5790842 Charles et al. Aug 1998 A
5793384 Okitsu Aug 1998 A
5793445 Lum et al. Aug 1998 A
5802579 Crary Sep 1998 A
5812210 Arai et al. Sep 1998 A
5815137 Weatherford et al. Sep 1998 A
5818533 Auld et al. Oct 1998 A
5828383 May et al. Oct 1998 A
5831615 Drews et al. Nov 1998 A
5831637 Young et al. Nov 1998 A
5838296 Butler et al. Nov 1998 A
5838389 Mical et al. Nov 1998 A
5844608 Yu et al. Dec 1998 A
5847717 Berry Dec 1998 A
5850232 Engstrom et al. Dec 1998 A
5854761 Patel et al. Dec 1998 A
5864345 Wickstrom et al. Jan 1999 A
5867166 Myhrvold et al. Feb 1999 A
5870622 Gulick et al. Feb 1999 A
5874967 West et al. Feb 1999 A
5877754 Keith et al. Mar 1999 A
5883670 Sporer et al. Mar 1999 A
5889949 Charles Mar 1999 A
5894300 Takizawa Apr 1999 A
5894526 Watanabe et al. Apr 1999 A
5896136 Augustine et al. Apr 1999 A
5903261 Walsh et al. May 1999 A
5903277 Sutherland et al. May 1999 A
5903281 Chen et al. May 1999 A
5907295 Lin May 1999 A
5907635 Numata May 1999 A
5909559 So Jun 1999 A
5912710 Fujimoto Jun 1999 A
5914725 MacInnis et al. Jun 1999 A
5914728 Yamagishi et al. Jun 1999 A
5917502 Kirkland et al. Jun 1999 A
5920495 Hicok et al. Jul 1999 A
5920572 Washington et al. Jul 1999 A
5920682 Shu et al. Jul 1999 A
5920842 Cooper et al. Jul 1999 A
5923316 Kitamura et al. Jul 1999 A
5923385 Mills et al. Jul 1999 A
5926647 Adams et al. Jul 1999 A
5929872 Greene Jul 1999 A
5936677 Fries et al. Aug 1999 A
5940080 Ruehle et al. Aug 1999 A
5940089 Dilliplane et al. Aug 1999 A
5941968 Mergard et al. Aug 1999 A
5948082 Ichikawa Sep 1999 A
5949432 Gough et al. Sep 1999 A
5949439 Ben-Yoseph et al. Sep 1999 A
5951664 Lambrecht et al. Sep 1999 A
5953691 Mills Sep 1999 A
5956041 Koyamada et al. Sep 1999 A
5959626 Garrison et al. Sep 1999 A
5959637 Mills et al. Sep 1999 A
5960464 Lam Sep 1999 A
5961603 Kunkel et al. Oct 1999 A
5963201 McGreggor et al. Oct 1999 A
5963222 Cheney et al. Oct 1999 A
5963262 Ke et al. Oct 1999 A
5973955 Nogle et al. Oct 1999 A
5977933 Wicher et al. Nov 1999 A
5977989 Lee et al. Nov 1999 A
5978509 Nachtergaele et al. Nov 1999 A
5982305 Taylor Nov 1999 A
5982381 Joshi et al. Nov 1999 A
5982425 Allen et al. Nov 1999 A
5982459 Fandrianto et al. Nov 1999 A
5987555 Alzien et al. Nov 1999 A
6002411 Dye Dec 1999 A
6002882 Garde Dec 1999 A
6005546 Keene Dec 1999 A
6006286 Baker et al. Dec 1999 A
6006303 Barnaby et al. Dec 1999 A
6008820 Chauvin et al. Dec 1999 A
6018803 Kardach Jan 2000 A
6023302 MacInnis et al. Feb 2000 A
6023738 Priem et al. Feb 2000 A
6028583 Hamburg Feb 2000 A
6038031 Murphy Mar 2000 A
6046740 LaRoche et al. Apr 2000 A
6057850 Kichury May 2000 A
6061094 Maietta May 2000 A
6061402 Boyce et al. May 2000 A
6064676 Slattery et al. May 2000 A
6067098 Dye May 2000 A
6067322 Wang May 2000 A
6077084 Mino et al. Jun 2000 A
6078305 Mizutani Jun 2000 A
6081297 Lee Jun 2000 A
6081854 Priem et al. Jun 2000 A
6085273 Ball et al. Jul 2000 A
6088045 Lumelsky et al. Jul 2000 A
6088046 Larson et al. Jul 2000 A
6088355 Mills et al. Jul 2000 A
6092124 Priem et al. Jul 2000 A
6094226 Ke et al. Jul 2000 A
6098046 Cooper et al. Aug 2000 A
6100826 Jeon et al. Aug 2000 A
6100899 Ameline et al. Aug 2000 A
6105048 He Aug 2000 A
6108014 Dye Aug 2000 A
6111896 Slattery et al. Aug 2000 A
6115422 Anderson et al. Sep 2000 A
6121978 Miler Sep 2000 A
6124865 Meinerth et al. Sep 2000 A
6124878 Adams et al. Sep 2000 A
6125410 Salbaum et al. Sep 2000 A
6133901 Law Oct 2000 A
6134378 Abe et al. Oct 2000 A
6144392 Rogers Nov 2000 A
6151030 DeLeeuw et al. Nov 2000 A
6151074 Werner Nov 2000 A
6157398 Jeddeloh Dec 2000 A
6157415 Glen Dec 2000 A
6157978 Ng et al. Dec 2000 A
6160989 Hendricks et al. Dec 2000 A
6167498 Larson et al. Dec 2000 A
6169843 Lenihan et al. Jan 2001 B1
6178486 Gill et al. Jan 2001 B1
6184908 Chan et al. Feb 2001 B1
6189064 MacInnis et al. Feb 2001 B1
6189073 Pawlowski Feb 2001 B1
6199131 Melo et al. Mar 2001 B1
6204859 Jouppi et al. Mar 2001 B1
6205260 Crinon et al. Mar 2001 B1
6208350 Herrera Mar 2001 B1
6208354 Porter Mar 2001 B1
6208671 Paulos et al. Mar 2001 B1
6208691 Balakrishnan et al. Mar 2001 B1
6212590 Melo et al. Apr 2001 B1
6215703 Bogin et al. Apr 2001 B1
6226794 Anderson et al. May 2001 B1
6229550 Gloudemans et al. May 2001 B1
6229853 Gebler et al. May 2001 B1
6233634 Clark et al. May 2001 B1
6236727 Ciacelli et al. May 2001 B1
6239810 Van Hook et al. May 2001 B1
6252608 Snyder et al. Jun 2001 B1
6256348 Laczko et al. Jul 2001 B1
6263019 Ryan Jul 2001 B1
6263023 Ngai Jul 2001 B1
6263396 Cottle et al. Jul 2001 B1
6266072 Koga et al. Jul 2001 B1
6266753 Hicok et al. Jul 2001 B1
6269107 Jong Jul 2001 B1
6271826 Pol et al. Aug 2001 B1
6271847 Shum et al. Aug 2001 B1
6275507 Anderson et al. Aug 2001 B1
6281873 Oakley Aug 2001 B1
6286103 Maillard et al. Sep 2001 B1
6301299 Sita et al. Oct 2001 B1
6311204 Mills Oct 2001 B1
6313822 McKay et al. Nov 2001 B1
6320619 Jiang Nov 2001 B1
6326963 Meehan Dec 2001 B1
6326984 Chow et al. Dec 2001 B1
6327000 Auld et al. Dec 2001 B1
6327002 Rinaldi et al. Dec 2001 B1
6327005 Han Dec 2001 B1
6335746 Enokida et al. Jan 2002 B1
6337703 Konar et al. Jan 2002 B1
6339434 West et al. Jan 2002 B1
6342892 Van Hook et al. Jan 2002 B1
6351471 Robinett et al. Feb 2002 B1
6351474 Robinett et al. Feb 2002 B1
6353460 Sokawa et al. Mar 2002 B1
6357045 Devaney Mar 2002 B1
6362827 Ohba Mar 2002 B1
6369826 Shimotono et al. Apr 2002 B1
6369855 Chauvel et al. Apr 2002 B1
6373497 McKnight et al. Apr 2002 B1
6374244 Shibata Apr 2002 B1
6380945 MacInnis et al. Apr 2002 B1
6384831 Nakamura et al. May 2002 B1
6384840 Frank et al. May 2002 B1
6393021 Chow et al. May 2002 B1
6400832 Sevigny Jun 2002 B1
6408436 De Haas Jun 2002 B1
6411333 Auld et al. Jun 2002 B1
6421460 Hamburg Jul 2002 B1
6426755 Deering Jul 2002 B1
6434319 Wine Aug 2002 B1
6442201 Choi Aug 2002 B2
6448966 Yet Sep 2002 B1
6452641 Chauvel et al. Sep 2002 B1
6456335 Miura et al. Sep 2002 B1
6459456 Oh Oct 2002 B1
6466206 Deering Oct 2002 B1
6466210 Carlsen et al. Oct 2002 B1
6466220 Cesana et al. Oct 2002 B1
6466581 Yee et al. Oct 2002 B1
6466624 Fog Oct 2002 B1
6467093 Inoue et al. Oct 2002 B1
6470100 Horiuchi Oct 2002 B2
6496186 Deering Dec 2002 B1
6496228 McGee et al. Dec 2002 B1
6501480 MacInnis et al. Dec 2002 B1
6510554 Gordon et al. Jan 2003 B1
6518965 Dye et al. Feb 2003 B2
6519283 Cheney et al. Feb 2003 B1
6529284 Ganapathy et al. Mar 2003 B1
6538656 Cheung et al. Mar 2003 B1
6538658 Herrera Mar 2003 B1
6570579 MacInnis et al. May 2003 B1
6570922 Wang et al. May 2003 B1
6573905 MacInnis et al. Jun 2003 B1
6636222 Valmiki et al. Oct 2003 B1
6661422 Valmiki et al. Dec 2003 B1
6662329 Foster et al. Dec 2003 B1
6687302 Nakaya Feb 2004 B2
6720976 Shimizu et al. Apr 2004 B1
6738072 MacInnis et al. May 2004 B1
6771274 Dawson Aug 2004 B2
6798420 Xie Sep 2004 B1
6853385 MacInnis et al. Feb 2005 B1
6879330 MacInnis et al. Apr 2005 B2
6947050 Jeddeloh Sep 2005 B2
6987518 Dawson Jan 2006 B2
7039245 Hamery May 2006 B1
7098930 MacInnis et al. Aug 2006 B2
7110006 MacInnis et al. Sep 2006 B2
7310104 MacInnis et al. Dec 2007 B2
7483042 Glen Jan 2009 B1
7530027 MacInnis et al. May 2009 B2
7538783 MacInnis et al. May 2009 B2
7545438 MacInnis et al. Jun 2009 B2
7554562 MacInnis et al. Jun 2009 B2
20010005218 Gloudemans et al. Jun 2001 A1
20020176506 Ferreira Florencio et al. Nov 2002 A1
20030085903 Hrusecky et al. May 2003 A1
20030133441 Watanabe et al. Jul 2003 A1
20030184553 Dawson Oct 2003 A1
20030190952 Smith et al. Oct 2003 A1
20040017383 Baer et al. Jan 2004 A1
20040017398 MacInnis et al. Jan 2004 A1
20040034874 Hord et al. Feb 2004 A1
20040049781 Flesch et al. Mar 2004 A1
20040056874 MacInnis et al. Mar 2004 A1
20040071453 Valderas Apr 2004 A1
20040136698 Mock Jul 2004 A1
20040189676 Dischert Sep 2004 A1
20040189868 Molaro et al. Sep 2004 A1
20040207723 Davis et al. Oct 2004 A1
20040257369 Fang Dec 2004 A1
20050012759 Valmiki et al. Jan 2005 A1
20050086702 Cormack et al. Apr 2005 A1
Foreign Referenced Citations (21)
Number Date Country
0 746 116 Dec 1996 EP
0746116 Dec 1996 EP
0 752 695 Jan 1997 EP
0 752 695 Jan 1997 EP
752695 Jan 1997 EP
0 840 276 May 1998 EP
0 840 277 May 1998 EP
0 840 505 May 1998 EP
0 840 505 May 1998 EP
840276 May 1998 EP
0840276 May 1998 EP
0840277 May 1998 EP
840505 May 1998 EP
0850462 Apr 2006 EP
2 287 627 Mar 1995 GB
2287627 Mar 1995 GB
2000-196586 Jul 2000 JP
WO 94-10641 May 1994 WO
WO 9410641 May 1994 WO
WO 00-28518 May 2000 WO
WO 0028518 May 2000 WO
Related Publications (1)
Number Date Country
20050088446 A1 Apr 2005 US
Provisional Applications (1)
Number Date Country
60513276 Oct 2003 US