Method and system for implementing compression across a graphics bus interconnect

Information

  • Patent Grant
  • 8427496
  • Patent Number
    8,427,496
  • Date Filed
    Friday, May 13, 2005
  • Date Issued
    Tuesday, April 23, 2013
Abstract
A system for compressed data transfer across a graphics bus in a computer system. The system includes a bridge, a system memory coupled to the bridge, and a graphics bus coupled to the bridge. A graphics processor is coupled to the graphics bus. The graphics processor is configured to compress graphics data and transfer compressed graphics data across the graphics bus to the bridge for subsequent storage in the system memory.
Description
FIELD OF THE INVENTION

The present invention is generally related to graphics computer systems.


BACKGROUND OF THE INVENTION

Generally, a computer system suited to handle 3D image data includes a specialized graphics processor unit, or GPU, in addition to a traditional CPU (central processing unit). The GPU includes specialized hardware configured to handle 3D computer-generated objects. The GPU is configured to operate on a set of data models and their constituent “primitives” (usually mathematically described polygons) that define the shapes, positions, and attributes of the objects. The hardware of the GPU processes the objects, implementing the calculations required to produce realistic 3D images on a display of the computer system.


The performance of a typical graphics rendering process is highly dependent upon the performance of the system's underlying hardware. High performance real-time graphics rendering requires high data transfer bandwidth to the memory storing the 3D object data and the constituent primitives. Thus, more expensive prior art GPU subsystems (e.g., GPU equipped graphics cards) typically include larger (e.g., 128 MB or larger) specialized, expensive, high bandwidth local graphics memories for feeding the required data to the GPU. Less expensive prior art GPU subsystems include smaller (e.g., 64 MB or less) such local graphics memories, and some of the least expensive GPU subsystems have no local graphics memory.


A problem with the prior art low-cost GPU subsystems (e.g., having smaller amounts of local graphics memory) is the fact that the data transfer bandwidth to the system memory, or main memory, of a computer system is much less than the data transfer bandwidth to the local graphics memory. Typical GPUs with any amount of local graphics memory need to read command streams and scene descriptions from system memory. A GPU subsystem with a small or absent local graphics memory also needs to communicate with system memory in order to access and update pixel data including pixels representing images which the GPU is constructing. This communication occurs across a graphics bus, or the bus that connects the graphics subsystem to the CPU and system memory.


In one example, per-pixel Z-depth data is read across the system bus and compared with a computed value for each pixel to be rendered. For all pixels which have a computed Z value less than the Z value read from system memory, the computed Z value and the computed pixel color value are written to system memory. In another example, pixel colors are read from system memory and blended with computed pixel colors to produce translucency effects before being written to system memory. Higher resolution images (images with a greater number of pixels) require more system memory bandwidth to render. Images representing larger numbers of 3D objects require more system memory bandwidth to render. The low data transfer bandwidth of the graphics bus acts as a bottleneck on overall graphics rendering performance.


Thus, what is required is a solution capable of reducing the limitations imposed by the limited data transfer bandwidth of a graphics bus of a computer system. What is required is a solution that ameliorates the bottleneck imposed by the much smaller data transfer bandwidth of the graphics bus in comparison to the data transfer bandwidth of the GPU to local graphics memory. The present invention provides a novel solution to the above requirement.


SUMMARY OF THE INVENTION

Embodiments of the present invention ameliorate the bottleneck imposed by the much smaller data transfer bandwidth of the graphics bus in comparison to the data transfer bandwidth of the GPU to local graphics memory.


In one embodiment, the present invention is implemented as a system for compressed data transfer across a graphics bus in a computer system. The system includes a bridge, a system memory coupled to the bridge, and a graphics bus coupled to the bridge. A graphics processor (e.g., GPU) is coupled to the graphics bus. The GPU is configured to compress graphics data and transfer compressed graphics data across the graphics bus to the bridge for subsequent storage in the system memory.


In one embodiment, the bridge can be configured to store the compressed graphics data directly into the system memory (e.g., in compressed form). Alternatively, the bridge can be configured to decompress the compressed graphics data and store the resulting decompressed graphics data into the system memory (e.g., in uncompressed form), in accordance with any specific requirements of a system memory management system (e.g., minimum block access size, latency, etc.).


In one embodiment, a transfer logic unit within the bridge performs an efficient data merge operation with pre-existing, compressed graphics data stored in the system memory. The transfer logic unit is configured to fetch and decompress the pre-existing graphics data from the system memory, decompress the compressed graphics data from the GPU, and generate merged data therefrom. The merged data is then compressed and stored in the system memory.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.



FIG. 1 shows a computer system in accordance with one embodiment of the present invention.



FIG. 2 shows a diagram depicting an efficient compressed graphics data transfer process as implemented by a computer system in accordance with one embodiment of the present invention.



FIG. 3 shows a diagram depicting a data transfer across system memory bus in accordance with a plurality of 128 byte system memory tile sizes in accordance with one embodiment of the present invention.



FIG. 4 shows a diagram depicting a four to one compression ratio of a graphics data compression process in accordance with one embodiment of the present invention.



FIG. 5 shows a flowchart of the steps of a graphics data compression and transfer process in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments of the present invention.


Notation and Nomenclature:


Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of a computer system (e.g., computer system 100 of FIG. 1), or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


Computer System Platform:



FIG. 1 shows a computer system 100 in accordance with one embodiment of the present invention. Computer system 100 depicts the components of a basic computer system in accordance with embodiments of the present invention providing the execution platform for certain hardware-based and software-based functionality. In general, computer system 100 comprises at least one CPU 101, a system memory 115, and at least one graphics processor unit (GPU) 110. The CPU 101 can be coupled to the system memory 115 via the bridge component 105 or can be directly coupled to the system memory 115 via a memory controller internal to the CPU 101. The GPU 110 is coupled to a display 112. System 100 can be implemented as, for example, a desktop computer system or server computer system, having a powerful general-purpose CPU 101 coupled to a dedicated graphics rendering GPU 110. In such an embodiment, components would be included that are designed to add peripheral buses, specialized graphics memory, IO devices (e.g., a disk drive), and the like. The bridge component 105 also supports expansion buses coupling such IO devices.


It should be appreciated that although the GPU 110 is depicted in FIG. 1 as a discrete component, the GPU 110 can be implemented as a discrete graphics card designed to couple to the computer system via a graphics bus connection (e.g., AGP, PCI Express, etc.), as a discrete integrated circuit die (e.g., mounted directly on the motherboard), or as an integrated GPU included within the integrated circuit die of a computer system chipset component (e.g., integrated within the bridge chip 105). Additionally, a local graphics memory 111 can optionally be included for the GPU 110 for high bandwidth graphics data storage. It also should be noted that although the bridge component 105 is depicted as a discrete component, the bridge component 105 can be implemented as an integrated controller within a different component (e.g., within the CPU 101, GPU 110, etc.) of the computer system 100. Similarly, system 100 can be implemented as a set-top video game console device such as, for example, the Xbox®, available from Microsoft Corporation of Redmond, Wash.


Embodiments of the Present Invention


Referring still to FIG. 1, embodiments of the present invention reduce constraints imposed by the limited data transfer bandwidth of a graphics bus (e.g., graphics bus 120) of a computer system. Embodiments of the present invention ameliorate the bottleneck imposed by the much smaller data transfer bandwidth of the graphics bus 120 in comparison to the data transfer bandwidth of the system memory bus 121 to system memory 115. This is accomplished in part by the GPU 110 compressing graphics data and transferring the compressed graphics data across the graphics bus 120 to the bridge 105 for subsequent storage in the system memory 115.


The compression reduces the total amount of data that must be transferred across the bandwidth constrained graphics bus 120. The resulting reduction in access latency, and increase in transfer speed, allows the GPU 110 to more efficiently access graphics data 116 stored within the system memory 115, thereby increasing the performance of bandwidth-limited 3D rendering applications. This data transfer process is described in further detail in FIG. 2 below.



FIG. 2 shows a diagram depicting an efficient compressed graphics data transfer process in accordance with one embodiment of the present invention. As depicted in FIG. 2, the GPU 110 is coupled to the bridge 105 via the low bandwidth graphics bus 120. The bridge 105 is further coupled to the system memory 115 via the high bandwidth system memory bus 121.


In one embodiment, the bridge 105 is configured to store the compressed graphics data received from the GPU 110 via the graphics bus 120 directly into the system memory 115 (e.g., in compressed form). In such an embodiment, the graphics processor 110 executes a compression algorithm (e.g., codec) and compresses graphics data prior to transmission across the graphics bus 120 to the bridge 105. As described above, the compression reduces the total number of bits that must be sent across the bandwidth constrained graphics bus 120. A typical compression ratio can yield a four to one reduction (e.g., 128 bytes being compressed to 32 bytes) which would yield a fourfold effective increase in the data transfer bandwidth of the graphics bus 120. The resulting compressed graphics data is then stored by the bridge 105 directly into the system memory 115 (e.g., as graphics data 116). When the graphics data is subsequently needed by the GPU 110, it is fetched from the system memory 115, across the system memory bus 121 and the graphics bus 120 in compressed form, and decompressed within the GPU 110.
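For illustration only, the following C fragment sketches the GPU-side path just described, assuming a hypothetical fixed-ratio (four to one) codec. The stand-in routines gpu_compress_block() and graphics_bus_write() are placeholders invented for this sketch (the "codec" simply keeps every fourth byte so the sizes work out) and are not the compression scheme or bus interface of the patent.

    /* Illustrative sketch only: compress a 128 byte block to 32 bytes on the
     * GPU side, then push only the compressed bytes across the graphics bus. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TILE_BYTES       128u   /* one system memory tile                 */
    #define COMPRESSED_BYTES  32u   /* four to one ratio, as in the example   */

    /* Stand-in for the GPU's codec (trivial subsampling, illustrative only). */
    static void gpu_compress_block(const uint8_t in[TILE_BYTES],
                                   uint8_t out[COMPRESSED_BYTES])
    {
        for (unsigned i = 0; i < COMPRESSED_BYTES; ++i)
            out[i] = in[i * 4];
    }

    /* Stand-in for pushing bytes across the graphics bus to the bridge. */
    static void graphics_bus_write(uint64_t dest, const uint8_t *data, size_t len)
    {
        printf("bus write: %zu bytes to 0x%llx\n", len, (unsigned long long)dest);
        (void)data;
    }

    int main(void)
    {
        uint8_t tile[TILE_BYTES] = {0};
        uint8_t packed[COMPRESSED_BYTES];

        gpu_compress_block(tile, packed);                  /* 128 -> 32 bytes */
        graphics_bus_write(0x1000, packed, sizeof packed); /* 4x fewer bytes  */
        return 0;
    }

Because only 32 of every 128 bytes cross the graphics bus 120 in this example, the effective transfer bandwidth is roughly quadrupled for data that compresses at the stated ratio.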


It should be noted that, in some memory management systems, the direct storage of compressed graphics data within the system memory 115 can generate undesirable complications for the system memory management system. For example, many systems have a minimum data access size. For maximum efficiency, it is desirable to match data writes and data reads to the system memory 115 with this minimum data access size. Industry-standard x86 machines typically have a 128 byte minimum data access size (e.g., corresponding to a single CPU cache line), or “tile” size (e.g., tiles 230). Thus, in some applications, it may be desirable to decompress the compressed graphics data received from the GPU 110 prior to storage in the system memory 115 to align properly with the 128 byte tile size.
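As a concrete illustration of this alignment constraint, and assuming the 128 byte tile size cited above, the following helpers show how an address or transfer length might be checked against, or rounded to, tile boundaries. They are illustrative only and are not drawn from the patent.

    /* Illustrative 128 byte tile alignment helpers (assumed tile size). */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TILE_BYTES 128u

    /* Round an address down/up to the nearest tile boundary. */
    static inline uint64_t tile_align_down(uint64_t addr)
    {
        return addr & ~(uint64_t)(TILE_BYTES - 1);
    }

    static inline uint64_t tile_align_up(uint64_t addr)
    {
        return tile_align_down(addr + TILE_BYTES - 1);
    }

    /* True if a transfer starts on a tile boundary and is a whole number of tiles. */
    static inline bool tile_aligned(uint64_t addr, size_t len)
    {
        return (addr % TILE_BYTES == 0) && (len % TILE_BYTES == 0);
    }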


Accordingly, in one embodiment, the bridge 105 is configured to decompress the compressed graphics data received from the GPU 110 via the graphics bus 120. A transfer logic unit 210 is included within the bridge 105 to execute the decompression algorithm (e.g., the codec). A RAM 215 within the bridge 105 can be used for temporary storage. The resulting decompressed graphics data is then stored into the system memory 115 (e.g., in uncompressed form), in accordance with any specific requirements of a system memory management system (e.g., 128 byte tile size).
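A minimal sketch of this decompress-then-store path follows, assuming hypothetical bridge_decompress_block() and sysmem_write() primitives for the bridge's codec and its system memory interface; neither name comes from the patent, and the local buffer merely models staging in a RAM such as RAM 215.

    /* Illustrative sketch: expand one compressed block from the GPU into a
     * full, uncompressed 128 byte tile and store it with a single aligned
     * system memory access. The primitives below are hypothetical. */
    #include <stddef.h>
    #include <stdint.h>

    #define TILE_BYTES        128u
    #define COMPRESSED_BYTES   32u

    void bridge_decompress_block(const uint8_t *in, uint8_t out[TILE_BYTES]);
    void sysmem_write(uint64_t addr, const uint8_t *buf, size_t len);

    void store_tile_uncompressed(uint64_t tile_addr,
                                 const uint8_t from_gpu[COMPRESSED_BYTES])
    {
        uint8_t tile[TILE_BYTES];                   /* staging buffer (cf. RAM 215) */
        bridge_decompress_block(from_gpu, tile);    /* 32 bytes -> 128 bytes        */
        sysmem_write(tile_addr, tile, TILE_BYTES);  /* one full 128 byte access     */
    }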



FIG. 3 shows a diagram depicting a data transfer across the system memory bus 121 in accordance with a plurality of 128 byte system memory tile sizes (e.g., tiles 230 of FIG. 2). In one embodiment, a transfer logic unit 210 of the bridge 105 performs an efficient data merge operation with pre-existing, compressed graphics data 116 stored in the system memory 115. As described above, it is desirable that graphics data be stored within system memory 115 in alignment with the minimum block access size/tile size of the system memory 115. However, in this embodiment, compressed graphics data is stored in the system memory 115, in proper alignment with the minimum tile size, as opposed to uncompressed graphics data. To accomplish the alignment, the transfer logic unit 210 must execute a data merge operation with the pre-existing compressed graphics data already stored within the tiles 230 of the graphics data 116.


In one embodiment, the data merge is performed by the transfer logic unit 210 fetching and decompressing the pre-existing graphics data from the tiles 230, decompressing the compressed graphics data from the GPU 110, and generating merged data therefrom. As described above, the RAM 215 can be used as temporary storage. The merged data is then compressed by the transfer logic unit 210 and stored, in alignment, in the tiles 230 of the system memory 115.
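The read-modify-write sequence can be sketched as follows, again with hypothetical codec and memory primitives standing in for the transfer logic unit 210's actual hardware. The fragment only illustrates the order of operations described above (fetch, decompress both sources, merge, recompress, store in alignment); the byte_offset/byte_count parameters are an assumption about how the touched portion of a tile might be identified.

    /* Illustrative read-modify-write merge of a compressed GPU update into a
     * compressed tile resident in system memory. All primitives are
     * hypothetical placeholders, not the patent's implementation. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define TILE_BYTES        128u
    #define COMPRESSED_BYTES   32u

    void bridge_decompress_block(const uint8_t *in, uint8_t out[TILE_BYTES]);
    void bridge_compress_block(const uint8_t in[TILE_BYTES],
                               uint8_t out[COMPRESSED_BYTES]);
    void sysmem_read(uint64_t addr, uint8_t *buf, size_t len);
    void sysmem_write(uint64_t addr, const uint8_t *buf, size_t len);

    void merge_compressed_update(uint64_t tile_addr,
                                 const uint8_t from_gpu[COMPRESSED_BYTES],
                                 size_t byte_offset, size_t byte_count)
    {
        uint8_t old_packed[COMPRESSED_BYTES], new_packed[COMPRESSED_BYTES];
        uint8_t tile[TILE_BYTES], update[TILE_BYTES];

        assert(byte_offset + byte_count <= TILE_BYTES);

        sysmem_read(tile_addr, old_packed, COMPRESSED_BYTES); /* fetch pre-existing data  */
        bridge_decompress_block(old_packed, tile);            /* decompress resident tile */
        bridge_decompress_block(from_gpu, update);            /* decompress GPU data      */

        memcpy(tile + byte_offset, update + byte_offset, byte_count);  /* merge */

        bridge_compress_block(tile, new_packed);               /* recompress merged tile  */
        sysmem_write(tile_addr, new_packed, COMPRESSED_BYTES); /* store, in alignment     */
    }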


In this manner, embodiments of the present invention greatly improve on the efficiency of data transfers and accesses to/from the system memory 115 in comparison to prior art solutions. For example, compressed graphics data is transferred across the bandwidth constrained graphics bus 120 and is merged and stored within the system memory 115 in compressed form. This minimizes the latency penalties of the graphics bus 120 and maximizes the available space set aside in the system memory 115 for graphics data (e.g., frame buffer, etc.). The benefits provided by the compressed graphics data are applicable in both directions, from the GPU 110 to the system memory 115, and from the system memory 115 back to the GPU 110.



FIG. 4 shows a diagram depicting a four to one compression ratio of a graphics data compression process in accordance with one embodiment of the present invention. As depicted in FIG. 4, a four pixel block 401 of graphics data, comprising pixels having 128 bits of information each, is compressed by compression process 410. This yields a resulting compressed four pixel block 402, comprising pixels having 32 bits of information each. Thus, only the compressed pixel data must be pushed through the graphics bus 120, both from the GPU 110 to the bridge 105 and from the bridge 105 to the GPU 110.
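The arithmetic of FIG. 4 can be spelled out as compile-time checks. The constants below simply restate the figure (four pixels, 128 bits each before compression, 32 bits each after) and are not drawn from any particular pixel format.

    /* The FIG. 4 arithmetic, restated as compile-time checks (C11). */
    #include <assert.h>

    #define PIXELS_PER_BLOCK        4
    #define UNCOMPRESSED_PIXEL_BITS 128
    #define COMPRESSED_PIXEL_BITS   32

    static_assert(PIXELS_PER_BLOCK * UNCOMPRESSED_PIXEL_BITS / 8 == 64,
                  "uncompressed block 401 occupies 64 bytes");
    static_assert(PIXELS_PER_BLOCK * COMPRESSED_PIXEL_BITS / 8 == 16,
                  "compressed block 402 occupies 16 bytes");
    static_assert(UNCOMPRESSED_PIXEL_BITS / COMPRESSED_PIXEL_BITS == 4,
                  "four to one compression ratio");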


The above graphics data transfer efficiency benefits enable a computer system in accordance with embodiments of the present invention to perform on a level equal to prior art computer systems having an expensive dedicated local graphics memory (e.g., coupled to the GPU directly). Alternatively, a computer system in accordance with embodiments of the present invention can greatly outperform a similar prior art computer system that uses system memory for graphics data storage (e.g., frame buffer storage, etc.), as opposed to a large expensive local graphics memory.



FIG. 5 shows a flowchart of the steps of a graphics data compression and transfer process 500 in accordance with one embodiment of the present invention. As depicted in FIG. 5, process 500 shows the basic steps involved in a graphics data compression and merge operation as implemented by a bridge (e.g., bridge 105) of a computer system (e.g., computer system 100 of FIG. 1).


Process 500 begins in step 501, where compressed graphics data is received from the GPU 110 by the bridge 105. As described above, the GPU 110 compresses its graphics data prior to pushing it across the bandwidth constrained graphics bus 120. The bridge 105 temporarily stores the compressed graphics data within an internal RAM 215. In step 502, pre-existing graphics data is fetched from the system memory 115 into the internal RAM 215. In step 503, the pre-existing graphics data is decompressed by a transfer logic unit 210 within the bridge 105. In step 504, the graphics data from the GPU 110 is decompressed by the transfer logic unit 210. In step 505, the transfer logic unit 210 performs a merge operation on the uncompressed data. In step 506, the resulting merged data is then recompressed by the transfer logic unit 210. Subsequently, in step 507, the compressed merged data is stored into the system memory 115 in alignment with the system memory tiles 230 (e.g., the 128 byte tile size).


The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims
  • 1. A system for data transfer across a graphics bus in a computer system, comprising: a bridge; a system memory coupled to the bridge; a graphics bus coupled to the bridge; and a graphics processor coupled to the graphics bus, wherein the graphics processor is configured to compress graphics data and transfer compressed graphics data across the graphics bus to the bridge, wherein the bridge is configured to divide the graphics data into a plurality of system memory aligned tiles and store the tiles into the system memory, wherein the system memory aligned tiles are aligned with a minimum data access size, and wherein the graphics data is compressed to at least a four to one ratio; wherein the graphics processor includes a transfer logic unit for merging the compressed graphics data into tiles of pre-existing graphics data stored in the system memory; and wherein a data merge is performed by a transfer logic unit fetching and decompressing pre-existing graphics data from tiles, decompressing the compressed graphics data from the GPU, and generating merged data therefrom, wherein the system memory is used as temporary storage, and wherein the merged data is then compressed by the transfer logic unit and stored, in alignment, in the tiles of the system memory.
  • 2. The system of claim 1, wherein the bridge includes a memory controller and the memory controller is configured to store the compressed graphics data into the system memory.
  • 3. The system of claim 1, wherein the bridge is configured to decompress the compressed graphics data and store decompressed graphics data into the system memory.
  • 4. The system of claim 1, wherein the tiles are sized to align with a plurality of 128 byte boundaries of the system memory.
  • 5. The system of claim 1, wherein the bridge is a North bridge of the computer system.
  • 6. The system of claim 1, wherein the graphics processor is configured to use a portion of the system memory for frame buffer memory.
  • 7. The system of claim 1, wherein the graphics processor is detachably coupled to the graphics bus by a connector.
  • 8. The system of claim 1, wherein the graphics bus is an AGP graphics bus.
  • 9. The system of claim 1, wherein the graphics bus is a SATA graphics bus.
  • 10. A bridge for implementing data transfer across a graphics bus in a computer system, comprising: a system memory bus interface; a graphics bus interface; a RAM; a transfer logic unit, wherein the transfer logic unit is configured to receive compressed graphics data from a graphics processor via the graphics bus interface for storage in a system memory, and wherein the transfer logic unit is configured to divide the graphics data into a plurality of system memory aligned tiles and store the tiles into the system memory, wherein the memory aligned tiles are aligned with a minimum data access size, and wherein the graphics data is compressed to at least a four to one ratio; and wherein a data merge is performed by a transfer logic unit fetching and decompressing pre-existing graphics data from tiles, decompressing the compressed graphics data from the GPU, and generating merged data therefrom, wherein the system memory is used as temporary storage, and wherein the merged data is then compressed by the transfer logic unit and stored, in alignment, in the tiles of the system memory.
  • 11. The bridge of claim 10, wherein the tiles are sized to align with a plurality of 128 byte boundaries of the system memory.
  • 12. The bridge of claim 10, wherein the bridge is a North bridge of the computer system.
  • 13. The bridge of claim 10, wherein the graphics bus interface is an AGP interface.
  • 14. The bridge of claim 10, wherein the graphics bus interface is a SATA interface.
  • 15. In a bridge of a computer system, a method for implementing data transfer across a graphics bus in a computer system, comprising: fetching pre-existing compressed graphics data from a system memory to a RAM of the bridge; decompressing the pre-existing compressed graphics data; decompressing compressed graphics data received from a graphics processor; generating merged data; compressing the merged data; storing the merged data in the system memory, wherein the merged data is divided into a plurality of system memory aligned tiles and stored as tiles in the system memory, wherein the memory aligned tiles are aligned with a minimum data access size, and wherein the graphics data is compressed to at least a four to one ratio; and wherein a data merge is performed by a transfer logic unit fetching and decompressing pre-existing graphics data from tiles, decompressing the compressed graphics data from the GPU, and generating merged data therefrom, wherein the system memory is used as temporary storage, and wherein the merged data is then compressed by the transfer logic unit and stored, in alignment, in the tiles of the system memory.
  • 16. The method of claim 15, wherein the tiles are sized to align with a plurality of 128 byte boundaries of the system memory.
  • 17. The method of claim 15, wherein the bridge is a North bridge of the computer system.
  • 18. The method of claim 15, wherein the graphics processor is configured to use a portion of the system memory for frame buffer memory.
  • 19. The method of claim 15, wherein a transfer logic unit executes a data merge operation with pre-existing compressed graphics data to implement a memory alignment operation.
US Referenced Citations (166)
Number Name Date Kind
4208810 Rohner et al. Jun 1980 A
4918626 Watkins et al. Apr 1990 A
5081594 Horsley Jan 1992 A
5212633 Franzmeier May 1993 A
5237460 Miller et al. Aug 1993 A
5287438 Kelleher Feb 1994 A
5313287 Barton May 1994 A
5432898 Curb et al. Jul 1995 A
5446836 Lentz et al. Aug 1995 A
5452104 Lee Sep 1995 A
5452412 Johnson, Jr. et al. Sep 1995 A
5483258 Cornett et al. Jan 1996 A
5543935 Harrington Aug 1996 A
5570463 Dao Oct 1996 A
5594854 Baldwin et al. Jan 1997 A
5623692 Priem et al. Apr 1997 A
5633297 Valko et al. May 1997 A
5664162 Dye Sep 1997 A
5815162 Levine Sep 1998 A
5854631 Akeley et al. Dec 1998 A
5854637 Sturges Dec 1998 A
5872902 Kuchkuda et al. Feb 1999 A
5977987 Duluk, Jr. Nov 1999 A
6028608 Jenkins Feb 2000 A
6034699 Wong et al. Mar 2000 A
6072500 Foran et al. Jun 2000 A
6104407 Aleksic et al. Aug 2000 A
6104417 Nielsen et al. Aug 2000 A
6115049 Winner et al. Sep 2000 A
6118394 Onaya Sep 2000 A
6128000 Jouppi et al. Oct 2000 A
6137918 Harrington et al. Oct 2000 A
6160557 Narayanaswami Dec 2000 A
6160559 Omtzigt Dec 2000 A
6188394 Morein et al. Feb 2001 B1
6201545 Wong et al. Mar 2001 B1
6204859 Jouppi et al. Mar 2001 B1
6219070 Baker et al. Apr 2001 B1
6249853 Porterfield Jun 2001 B1
6259460 Gossett et al. Jul 2001 B1
6323874 Gossett Nov 2001 B1
6359623 Larson Mar 2002 B1
6362819 Dalal et al. Mar 2002 B1
6366289 Johns Apr 2002 B1
6429877 Stroyan Aug 2002 B1
6437780 Baltaretu et al. Aug 2002 B1
6452595 Montrym et al. Sep 2002 B1
6469707 Voorhies Oct 2002 B1
6480205 Greene et al. Nov 2002 B1
6501564 Schramm et al. Dec 2002 B1
6504542 Voorhies et al. Jan 2003 B1
6522329 Ihara et al. Feb 2003 B1
6523102 Dye et al. Feb 2003 B1
6525737 Duluk, Jr. et al. Feb 2003 B1
6529207 Landau et al. Mar 2003 B1
6545684 Dragony et al. Apr 2003 B1
6606093 Gossett et al. Aug 2003 B1
6611272 Hussain et al. Aug 2003 B1
6614444 Duluk, Jr. et al. Sep 2003 B1
6614448 Garlick et al. Sep 2003 B1
6624823 Deering Sep 2003 B2
6633197 Sutardja Oct 2003 B1
6633297 McCormack et al. Oct 2003 B2
6646639 Greene et al. Nov 2003 B1
6671000 Cloutier Dec 2003 B1
6693637 Koneru et al. Feb 2004 B2
6693639 Duluk, Jr. et al. Feb 2004 B2
6697063 Zhu Feb 2004 B1
6704022 Aleksic Mar 2004 B1
6717576 Duluk, Jr. et al. Apr 2004 B1
6717578 Deering Apr 2004 B1
6734861 Van Dyke et al. May 2004 B1
6741247 Fenney May 2004 B1
6747057 Ruzafa et al. Jun 2004 B2
6765575 Voorhies et al. Jul 2004 B1
6778177 Furtner Aug 2004 B1
6788301 Thrasher Sep 2004 B2
6798410 Redshaw et al. Sep 2004 B1
6803916 Ramani et al. Oct 2004 B2
6819332 Baldwin Nov 2004 B2
6833835 van Vugt Dec 2004 B1
6901497 Tashiro et al. May 2005 B2
6906716 Moreton et al. Jun 2005 B2
6938176 Alben et al. Aug 2005 B1
6940514 Wasserman et al. Sep 2005 B1
6947057 Nelson et al. Sep 2005 B2
6956579 Diard et al. Oct 2005 B1
6961057 Van Dyke et al. Nov 2005 B1
6978317 Anantha et al. Dec 2005 B2
7002591 Leather et al. Feb 2006 B1
7009607 Lindholm et al. Mar 2006 B2
7009615 Kilgard et al. Mar 2006 B1
7061495 Leather Jun 2006 B1
7061640 Maeda Jun 2006 B1
7064771 Jouppi et al. Jun 2006 B1
7075542 Leather Jul 2006 B1
7081902 Crow et al. Jul 2006 B1
7119809 McCabe Oct 2006 B1
7126600 Fowler et al. Oct 2006 B1
7154066 Talwar et al. Dec 2006 B2
7158148 Toji et al. Jan 2007 B2
7167259 Varga Jan 2007 B2
7170515 Zhu Jan 2007 B1
7184040 Tzvetkov Feb 2007 B1
7224364 Yue et al. May 2007 B1
7243191 Ying et al. Jul 2007 B2
7307628 Goodman et al. Dec 2007 B1
7307638 Leather et al. Dec 2007 B2
7317459 Fouladi et al. Jan 2008 B2
7382368 Molnar et al. Jun 2008 B1
7453466 Hux et al. Nov 2008 B2
7483029 Crow et al. Jan 2009 B2
7548996 Baker et al. Jun 2009 B2
7551174 Iourcha et al. Jun 2009 B2
7633506 Leather et al. Dec 2009 B1
7634637 Lindholm et al. Dec 2009 B1
7791617 Crow et al. Sep 2010 B2
7965902 Zelinka et al. Jun 2011 B1
8063903 Vignon et al. Nov 2011 B2
20010005209 Lindholm et al. Jun 2001 A1
20020050979 Oberoi et al. May 2002 A1
20020097241 McCormack et al. Jul 2002 A1
20020130863 Baldwin Sep 2002 A1
20020140655 Liang et al. Oct 2002 A1
20020158885 Brokenshire et al. Oct 2002 A1
20020196251 Duluk, Jr. et al. Dec 2002 A1
20030067468 Duluk, Jr. et al. Apr 2003 A1
20030076325 Thrasher Apr 2003 A1
20030122815 Deering Jul 2003 A1
20030163589 Bunce et al. Aug 2003 A1
20030194116 Wong et al. Oct 2003 A1
20030201994 Taylor et al. Oct 2003 A1
20040085313 Moreton et al. May 2004 A1
20040130552 Duluk, Jr. et al. Jul 2004 A1
20040183801 Deering Sep 2004 A1
20040196285 Rice et al. Oct 2004 A1
20040207642 Crisu et al. Oct 2004 A1
20040246251 Fenney et al. Dec 2004 A1
20050030314 Dawson Feb 2005 A1
20050041037 Dawson Feb 2005 A1
20050066148 Luick Mar 2005 A1
20050122338 Hong et al. Jun 2005 A1
20050134588 Aila et al. Jun 2005 A1
20050134603 Iourcha et al. Jun 2005 A1
20050179698 Vijayakumar et al. Aug 2005 A1
20050259100 Teruyama Nov 2005 A1
20060044317 Bourd et al. Mar 2006 A1
20060170690 Leather Aug 2006 A1
20060203005 Hunter Sep 2006 A1
20060245001 Lee et al. Nov 2006 A1
20060267981 Naoi Nov 2006 A1
20060282604 Temkine et al. Dec 2006 A1
20070008324 Green Jan 2007 A1
20070129990 Tzruya et al. Jun 2007 A1
20070139440 Crow et al. Jun 2007 A1
20070268298 Alben et al. Nov 2007 A1
20070273689 Tsao Nov 2007 A1
20070296725 Steiner et al. Dec 2007 A1
20080024497 Crow et al. Jan 2008 A1
20080024522 Crow et al. Jan 2008 A1
20080034238 Hendry et al. Feb 2008 A1
20080100618 Woo et al. May 2008 A1
20080158233 Shah et al. Jul 2008 A1
20080273218 Kitora et al. Nov 2008 A1
20090153540 Blinzer et al. Jun 2009 A1
20100226441 Tung et al. Sep 2010 A1
Foreign Referenced Citations (6)
Number Date Country
101093578 Dec 2007 CN
06180758 Jun 1994 JP
10134198 May 1998 JP
11195132 Jul 1999 JP
2005182547 Jul 2005 JP
0013145 Mar 2000 WO
Non-Patent Literature Citations (8)
Entry
A Hardware Assisted Design Rule Check Architecture Larry Seiler Jan. 1982 Proceedings of the 19th Conference on Design Automation DAC '82 Publisher: IEEE Press.
A Parallel Algorithm for Polygon Rasterization Juan Pineda Jun. 1988 ACM.
A VLSI Architecture for Updating Raster-Scan Displays Satish Gupta, Robert F. Sproull, Ivan E. Sutherland Aug. 1981 ACM SIGGRAPH Computer Graphics, Proceedings of the 8th Annual Conference on Computer Graphics and Interactive Techniques SIGGRAPH '81, vol. 15 Issue Publisher: ACM Press.
Blythe, OpenGL section 3.4.1, Basic Line Segment Rasterization, Mar. 29, 1997, pp. 1-3.
Boyer, et al.; “Discrete Analysis for Antialiased Lines;” Eurographics 2000; 3 Pages.
Crow; “The Use of Grayscale for Improved Raster Display of Vectors and Characters;” University of Texas, Austin, Texas; Work supported by the National Science Foundation under Grants MCS 76-83889; pp. 1-5: ACM Press.
Foley, J. “Computer Graphics: Principles and Practice”, 1987, Addison-Wesley Publishing, 2nd Edition, p. 545-546.
Fuchs; “Fast Spheres Shadow, Textures, Transparencies, and Image Enhancements in Pixel-Planes”; ACM; 1985; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514.