The present invention is generally related to hardware accelerated graphics computer systems.
Recent advances in computer performance have enabled graphic systems to provide more realistic graphical images using personal computers, home video game computers, handheld devices, and the like. In such graphic systems, a number of procedures are executed to “render” or draw graphic primitives to the screen of the system. A “graphic primitive” is a basic component of a graphic picture, such as a point, line, polygon, or the like. Rendered images are formed with combinations of these graphic primitives. Many procedures may be utilized to perform 3-D graphics rendering.
Specialized graphics processing units (e.g., GPUs, etc.) have been developed to optimize the computations required in executing the graphics rendering procedures. The GPUs are configured for high-speed operation and typically incorporate one or more rendering pipelines. Each pipeline includes a number of hardware-based functional units that are optimized for high-speed execution of graphics instructions/data, where the instructions/data are fed into the front end of the pipeline and the computed results emerge at the back end of the pipeline. The hardware-based functional units, cache memories, firmware, and the like, of the GPU are optimized to operate on the low-level graphics primitives (e.g., comprising “points”, “lines”, “triangles”, etc.) and produce real-time rendered 3-D images.
The real-time rendered 3-D images are generated using raster display technology. Raster display technology is widely used in computer graphics systems, and generally refers to the mechanism by which the grid of multiple pixels comprising an image is influenced by the graphics primitives. For each primitive, a typical rasterization system steps from pixel to pixel and determines whether or not to "render," or write, a given pixel into a frame buffer or pixel map, as per the contribution of the primitive. This, in turn, determines how the data representing each pixel is written to the display buffer.
Various traversal algorithms and various rasterization methods have been developed for converting a graphics-primitive-based description into a pixel-based description (e.g., rasterizing pixel to pixel, primitive by primitive) such that all pixels within the primitives comprising a given 3-D scene are covered. For example, some solutions involve generating the pixels in a unidirectional manner. Such traditional unidirectional solutions involve generating the pixels row by row in a constant direction. This requires that the sequence shift across the primitive to a starting location on a first side of the primitive upon finishing at a location on an opposite side of the primitive.
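The drawback of constant-direction traversal can be seen in a minimal sketch (an illustrative Python generator, not code from the patent): every row restarts at the same side, so the visit sequence must jump across the primitive between rows.

```python
def unidirectional_traversal(width, height):
    """Yield pixel coordinates row by row, always left to right.

    Because each row restarts at the left edge, the sequence jumps
    from the right side of the primitive back to the left side at
    every row boundary -- the shift described above.
    """
    for y in range(height):
        for x in range(width):
            yield (x, y)
```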
Other traditional methods involve utilizing per pixel evaluation techniques to closely evaluate each of the pixels comprising a display and determine which pixels are covered by which primitives. The per pixel evaluation involves scanning across the pixels of a display to determine which pixels are touched/covered by the edges of a graphics primitive.
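Per pixel evaluation of this kind is commonly implemented with signed edge functions. The following Python sketch uses the standard edge-function formulation (an illustration, not code from the patent; counter-clockwise winding is assumed) to decide whether a pixel is covered by a triangle:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area of the triangle (a, b, p): positive when the point
    # (px, py) lies to the left of the directed edge a -> b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covers(tri, px, py):
    """True if pixel (px, py) is inside all three edges of a
    counter-clockwise triangle tri = ((ax, ay), (bx, by), (cx, cy))."""
    (ax, ay), (bx, by), (cx, cy) = tri
    return (edge(ax, ay, bx, by, px, py) >= 0 and
            edge(bx, by, cx, cy, px, py) >= 0 and
            edge(cx, cy, ax, ay, px, py) >= 0)
```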
Once the primitives are rasterized into their constituent pixels, these pixels are then processed in pipeline stages subsequent to the rasterization stage where the rendering operations are performed. Generally, these rendering operations assign a color to each of the pixels of a display in accordance with the degree of coverage of the primitives comprising a scene. The per pixel color is also determined in accordance with texture map information that is assigned to the primitives, lighting information, and the like.
A problem exists however with the ability of prior art 3-D rendering architectures to scale to handle the increasingly complex 3-D scenes of today's applications. Computer screens now commonly have screen resolutions of 1920×1200 pixels or larger. Traditional methods of increasing 3-D rendering performance, such as, for example, increasing clock speed, have negative side effects such as increasing power consumption and increasing the heat produced by the GPU integrated circuit die. Other methods for increasing performance, such as incorporating large numbers of parallel execution units for parallel execution of GPU operations have negative side effects such as increasing integrated circuit die size, decreasing yield of the GPU manufacturing process, increasing power requirements, and the like.
Thus, a need exists for a rasterization process that can scale as graphics application needs require and provide added performance without incurring penalties such as increased power consumption and/or reduced fabrication yield.
Embodiments of the present invention provide a method and system for a rasterization process that can scale as graphics application needs require and provide added performance without incurring penalties such as increased power consumption and/or reduced fabrication yield.
In one embodiment, the present invention is implemented as a method for interface compression in a raster stage of a graphics processor (e.g., GPU). The method includes receiving a graphics primitive (e.g., a triangle polygon) for rasterization in a raster stage of the GPU and rasterizing the graphics primitive at a first level in a coarse raster component to generate a plurality of tiles related to the graphics primitive. The method further includes determining whether a window ID operation is required for the plurality of tiles. If a window ID operation is required, a respective plurality of uncompressed coverage masks for the plurality of tiles are output from the coarse raster component to a fine raster component on a one coverage mask per clock cycle basis (e.g., one 64-bit coverage mask per tile, etc.). If a window ID operation is not required, a compressed coverage mask for the plurality of tiles is output in a single clock cycle (e.g., a single 64-bit compressed coverage mask for all of the tiles). The plurality of tiles are subsequently rasterized at a second level in the fine raster component to generate a plurality of pixels related to the graphics primitive. In one embodiment, the compressed coverage mask includes compressed depth cull information for the plurality of tiles.
In one embodiment, the method includes determining whether a polygon stipple operation or a window ID operation is required for the plurality of tiles. If a polygon stipple or window ID operation is required, a respective plurality of uncompressed coverage masks for the plurality of tiles are output from the coarse raster component to a fine raster component on a one coverage mask per clock cycle basis (e.g., one 64-bit coverage mask per tile, etc.). If a polygon stipple or window ID operation is not required, a compressed coverage mask for the plurality of tiles is output in a single clock cycle (e.g., a single 64-bit compressed coverage mask for all of the tiles).
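The cycle accounting implied by the two paths above can be summarized in a few lines (a hypothetical sketch, assuming a tile group of a given size and one mask transfer across the interface per clock):

```python
def transfer_cycles(num_tiles, window_id_needed, stipple_needed):
    """Clock cycles needed to move coverage masks from the coarse
    raster component to the fine raster component."""
    if window_id_needed or stipple_needed:
        # Uncompressed path: one 64-bit mask per tile, one per clock.
        return num_tiles
    # Compressed path: a single mask covers the whole tile group.
    return 1
```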
In one embodiment, a transfer interface is coupled between the coarse raster component and the fine raster component and is configured for transferring coverage masks from the coarse raster component to the fine raster component. The transfer interface has a size configured to accept one compressed coverage mask per clock cycle, or alternatively accept a plurality of uncompressed coverage masks on a one coverage mask per clock cycle basis.
In this manner, by transferring a single compressed coverage mask for the plurality of tiles generated by the coarse raster component in a single clock cycle, the interface compression method significantly reduces the amount of silicon die area that must be dedicated to a transfer interface between the coarse raster component and the fine raster component. The reduced silicon die area allows the performance of a raster stage to scale dramatically (e.g., with increased parallelism) without causing unnecessary bloat in the interfaces between components.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the present invention.
Notation and Nomenclature:
Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of a computer system (e.g., computer system 100 of
Computer System Platform:
The CPU 101 and the GPU 110 can also be integrated into a single integrated circuit die and the CPU and GPU may share various resources, such as instruction logic, buffers, functional units and so on, or separate resources may be provided for graphics and general-purpose operations. Accordingly, any or all the circuits and/or functionality described herein as being associated with the GPU 110 can also be implemented in, and performed by, a suitably equipped CPU 101. Additionally, while embodiments herein may make reference to a GPU, it should be noted that the described circuits and/or functionality can also be implemented in other types of processors (e.g., general purpose or other special-purpose coprocessors) or within a CPU.
System 100 can be implemented as, for example, a desktop computer system or server computer system having a powerful general-purpose CPU 101 coupled to a dedicated graphics rendering GPU 110. In such an embodiment, components can be included that add peripheral buses, specialized audio/video components, I/O devices, and the like. Similarly, system 100 can be implemented as a handheld device (e.g., cellphone, etc.) or a set-top video game console device such as, for example, the Xbox®, available from Microsoft Corporation of Redmond, Wash., or the PlayStation3®, available from Sony Computer Entertainment Corporation of Tokyo, Japan. System 100 can also be implemented as a "system on a chip", where the electronics (e.g., the components 101, 115, 110, 114, and the like) of a computing device are wholly contained within a single integrated circuit die. Examples include a hand-held instrument with a display, a car navigation system, a portable entertainment system, and the like.
Embodiments of the present invention implement a method and system for interface compression in a raster stage of a graphics processor (e.g., GPU 110 of
In one embodiment, as depicted in
Thus, as depicted in
Referring still to
Additional details regarding boustrophedonic pattern rasterization can be found in U.S. patent application “A GPU HAVING RASTER COMPONENTS CONFIGURED FOR USING NESTED BOUSTROPHEDONIC PATTERNS TO TRAVERSE SCREEN AREAS” by Franklin C. Crow et al., Ser. No. 11/304,904, filed on Dec. 15, 2005, which is incorporated herein in its entirety.
It should be noted that although embodiments of the present invention are described in the context of boustrophedonic rasterization, other types of rasterization patterns can be used. For example, the algorithms and GPU stages described herein for rasterizing tile groups can be readily applied to traditional left-to-right, line-by-line rasterization patterns.
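For concreteness, the serpentine (boustrophedonic) visit order can be sketched as follows (an illustrative Python generator, not code from the patent); unlike the constant-direction pattern, consecutive pixels are always neighbors, which improves locality:

```python
def boustrophedonic_traversal(width, height):
    """Serpentine scan: left-to-right on even rows, right-to-left on
    odd rows, so the sequence never jumps across the screen area."""
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            yield (x, y)
```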
As described above, the line 321 shows a boustrophedonic pattern of traversal, where the raster unit visits all pixels on a 2D area of the triangle 301 by scanning along one axis as each pass moves farther along on the orthogonal axis. In the
As described above, in one embodiment, the first level rasterization generates a tile (e.g., tile 401) comprising a set of pixels related to the graphics primitive (e.g., a tile that has at least some coverage with respect to the primitive). Generally, the first level rasterization is intended to quickly determine which pixels of the screen area relate to a given graphics primitive. Accordingly, relatively large groups of pixels (e.g., tiles) are examined at a time in order to quickly find those pixels that relate to the primitive. The process can be compared to a reconnaissance, whereby the coarse raster unit quickly scans a screen area and finds tiles that cover the triangle 301. Thus the pixels that relate to the triangle 301 can be discovered much more quickly than with the traditional prior art process, which utilizes a single level of rasterization and examines much smaller numbers of pixels at a time, in a more fine-grained manner.
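A highly simplified model of this first-level "reconnaissance" pass might look like the following (a sketch only: the primitive's bounding box stands in for the real conservative tile-versus-edge test, and the 8x8 tile size is an assumption):

```python
def coarse_raster(bbox, tile=8):
    """Enumerate the 8x8 tiles overlapping a primitive's bounding box.

    bbox is (x0, y0, x1, y1) in pixels, inclusive.  A real coarse
    rasterizer would test each tile against the primitive's edges;
    the bounding box keeps the sketch short and conservative.
    """
    x0, y0, x1, y1 = bbox
    return [(tx, ty)
            for ty in range(y0 // tile, y1 // tile + 1)
            for tx in range(x0 // tile, x1 // tile + 1)]
```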
In the
Referring still to
In one embodiment, the hardware comprising the raster unit 502 is optimized for operations on a per clock basis. For example, to provide high throughput and thereby maintain high rendering frame rates, the coarse raster component 503 and the fine raster component 504 comprise hardware designed to implement the first level rasterization and the second level rasterization on a per-clock cycle basis. The rasterizer unit 502 can be implemented such that the first level rasterization is implemented in the coarse raster component 503 that “stamps out” tiles covering a given primitive within a single clock cycle. Subsequently, the rasterization at the second level can be implemented in the fine raster component 504 that stamps out the covered pixels of a tile in a single clock cycle.
In the
The window ID component 506 examines the tiles identified by the coarse raster component 503 and functions by turning off those pixels that are not associated with a given window of interest. Such a window could comprise, for example, one of several windows on a computer screen as displayed by one or more applications, where each window is associated with a designated window identifier (e.g., window ID) as described in, for example, the OpenGL specification.
The polygon stipple component 507 examines the tiles identified by the coarse raster component and functions by turning off those pixels that are impacted by a polygon stipple operation. As with the window ID component 506, the polygon stipple component 507 compiles its information into a polygon stipple coverage mask.
Thus, for example, once the related tiles are identified by the coarse raster component 503, those pixels of each tile that are turned off as a result of the depth cull operation, window ID operation, and polygon stipple operation are identified by combining the respective coverage masks into a combined coverage mask, typically one combined coverage mask per tile, that indicates which pixels of the tile are turned off/on. This combined coverage mask is transferred to the fine raster component 504 and is used by the fine raster component 504 in its second level fine rasterization process as it stamps out individual covered pixels of the tile that have not been killed by depth culling, window ID, or polygon stipple.
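Combining the per-stage masks reduces, in effect, to a bitwise AND per tile. The sketch below illustrates this (mask width and the treatment of disabled stages are assumptions, not details taken from the patent):

```python
def combined_mask(coarse, depth, window_id=None, stipple=None):
    """AND the per-tile 64-bit coverage masks together: a pixel's bit
    survives only if no stage (depth cull, window ID, polygon
    stipple) kills it.  Stages that are disabled contribute no mask."""
    m = coarse & depth
    if window_id is not None:
        m &= window_id
    if stipple is not None:
        m &= stipple
    return m
```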
In one embodiment, the combined coverage mask indicates which tiles identified by the coarse raster component 503 have all of their constituent pixels turned off and can therefore be discarded. Discarding such dead tiles reduces the amount of work that must be performed by the fine raster component 504. For example, in a case where the coarse raster component 503 works with tile groups comprising 1024 pixels (e.g., a 32×32 block of pixels with 16 tiles of 64 pixels each), those tiles having all of their constituent pixels turned off as indicated by the combined coverage mask can be completely discarded. Those tiles having at least some coverage from the graphics primitive and have at least one pixel that has not been turned off are passed on to the fine raster component 504 for further processing.
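Discarding dead tiles then amounts to filtering the tile group on its combined masks (illustrative only):

```python
def live_tiles(tile_masks):
    """Keep only tiles with at least one surviving pixel; tiles whose
    combined coverage mask is all zeros never reach the fine raster
    component and cost it no work."""
    return [(tile, mask) for tile, mask in tile_masks if mask != 0]
```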
In one embodiment, the rasterizer unit 502 can preferably take advantage of those cases where rendering operations do not require either window ID operations or polygon stipple operations. In those cases, there are no respective coverage masks resulting from the window ID operation from the window ID component 506, or from the polygon stipple operation from the polygon stipple component 507. However, there would still be a coverage mask from the depth cull component 505. The combined coverage mask would thus be essentially the coverage mask from the depth cull component 505.
In situations where no window ID operations or polygon stipple operations occur, the information comprising the coverage mask generated by depth cull component 505 can be compressed such that the depth cull information for a multiple tile group (e.g., a four tile group) can be captured using a single coverage mask. For example, normally each tile is associated with a coverage mask having one bit per pixel of the tile (e.g., a 64-bit coverage mask for each of the pixels of an 8×8 tile). The depth information can be compressed by utilizing one bit of the coverage mask to represent more than one pixel. For example, in one embodiment, each bit of a coverage mask can be used to present an 8×2 region of pixels. In such an embodiment, a single coverage mask can contain the depth information for four 8×8 tiles. This allows the transmission of a single coverage mask representing four tiles to the fine raster unit 504 in a single clock cycle.
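One plausible encoding of this compression is sketched below. The exact bit layout is not specified above, so this sketch assumes 16-pixel (8×2) regions, a conservative fold (a region bit stays set if any of its pixels survived the depth cull), and 4 region bits per tile packed into one word for a four-tile group:

```python
def compress_tile(mask64, region_px=16):
    """Fold a tile's 64-bit per-pixel depth mask down to one bit per
    8x2 region (16 pixels).  The fold is conservative: a region bit
    is set if any pixel in that region survived the depth cull, so
    no live pixel is ever dropped."""
    bits = 0
    for r in range(64 // region_px):
        region = (mask64 >> (r * region_px)) & ((1 << region_px) - 1)
        if region:
            bits |= 1 << r
    return bits

def compress_group(tile_masks):
    """Pack the region bits of a four-tile group into a single word
    that can cross the transfer interface in one clock cycle."""
    word = 0
    for i, mask in enumerate(tile_masks):
        word |= compress_tile(mask) << (i * 4)
    return word
```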
Thus, as shown in
Additionally, it should be noted that compression can also be implemented in those cases where both window ID and polygon stipple operations result in zero killed pixels. This would generate a combined coverage mask with full coverage. Similarly, compression can be implemented when window ID provides full coverage and there is no polygon stipple operation, or vice-versa when polygon stipple provides full coverage and there is no window ID operation.
Alternatively, if either window ID operations or polygon stipple operations are used (e.g., where there is at least one killed pixel), the combined coverage mask must be transmitted in uncompressed form. As described above, in the uncompressed form, one combined coverage mask is transmitted per clock cycle, per tile.
Thus, for example, in a case where window ID operations and polygon stipple operations are relatively rare, the raster unit 502 can have a transfer interface 532 that is optimized for the transfer of compressed combined coverage masks. Continuing the above example, the transfer interface 532 can be sized to handle 64-bit coverage masks, and the pipeline 531 can transfer four tiles per cycle and a corresponding 64-bit coverage mask representing all four tiles per cycle. This allows the raster unit 502 to quickly crunch through large screen areas, efficiently stamping out pixels for further processing by subsequent stages of the graphics pipeline. For those rare occasions when window ID operations and/or polygon stippling operations are required, the raster unit 502 can slow down, and send the four tiles down one per clock cycle with a corresponding respective combined coverage mask one per clock cycle. The combined coverage masks would use the same 64-bit transfer interface 532.
For example, in one embodiment, the coarse raster component (e.g., coarse raster component 503 of
In this manner, the multi-tile interface compression method of embodiments of the present invention can transfer a single compressed coverage mask for a plurality of tiles generated by the coarse raster component in a single clock cycle, thereby significantly reducing the amount of silicon die area that must be dedicated to a transfer interface (e.g., transfer interface 532) between the coarse raster component and the fine raster component. The reduced silicon die area allows the performance of a raster stage to scale dramatically with increased parallelism without causing unnecessary bloat in the interfaces between components.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
| Number | Name | Date | Kind |
|---|---|---|---|
| 4208810 | Rohner et al. | Jun 1980 | A |
| 4918626 | Watkins et al. | Apr 1990 | A |
| 5081594 | Horsley | Jan 1992 | A |
| 5212633 | Franzmeier | May 1993 | A |
| 5237460 | Miller et al. | Aug 1993 | A |
| 5287438 | Kelleher | Feb 1994 | A |
| 5313287 | Barton | May 1994 | A |
| 5432898 | Curb et al. | Jul 1995 | A |
| 5446836 | Lentz et al. | Aug 1995 | A |
| 5452104 | Lee | Sep 1995 | A |
| 5452412 | Johnson, Jr. et al. | Sep 1995 | A |
| 5483258 | Cornett et al. | Jan 1996 | A |
| 5543935 | Harrington | Aug 1996 | A |
| 5570463 | Dao | Oct 1996 | A |
| 5594854 | Baldwin et al. | Jan 1997 | A |
| 5623692 | Priem et al. | Apr 1997 | A |
| 5633297 | Valko et al. | May 1997 | A |
| 5664162 | Dye | Sep 1997 | A |
| 5815162 | Levine | Sep 1998 | A |
| 5854631 | Akeley et al. | Dec 1998 | A |
| 5854637 | Sturges | Dec 1998 | A |
| 5872902 | Kuchkuda et al. | Feb 1999 | A |
| 5977987 | Duluk, Jr. | Nov 1999 | A |
| 6028608 | Jenkins | Feb 2000 | A |
| 6034699 | Wong et al. | Mar 2000 | A |
| 6072500 | Foran et al. | Jun 2000 | A |
| 6104407 | Aleksic et al. | Aug 2000 | A |
| 6104417 | Nielsen et al. | Aug 2000 | A |
| 6115049 | Winner et al. | Sep 2000 | A |
| 6118394 | Onaya | Sep 2000 | A |
| 6128000 | Jouppi et al. | Oct 2000 | A |
| 6137918 | Harrington et al. | Oct 2000 | A |
| 6160557 | Narayanaswami | Dec 2000 | A |
| 6160559 | Omtzigt | Dec 2000 | A |
| 6188394 | Morein et al. | Feb 2001 | B1 |
| 6201545 | Wong et al. | Mar 2001 | B1 |
| 6204859 | Jouppi et al. | Mar 2001 | B1 |
| 6219070 | Baker et al. | Apr 2001 | B1 |
| 6249853 | Porterfield | Jun 2001 | B1 |
| 6259460 | Gossett et al. | Jul 2001 | B1 |
| 6323874 | Gossett | Nov 2001 | B1 |
| 6359623 | Larson | Mar 2002 | B1 |
| 6362819 | Dalal et al. | Mar 2002 | B1 |
| 6366289 | Johns | Apr 2002 | B1 |
| 6429877 | Stroyan | Aug 2002 | B1 |
| 6437780 | Baltaretu et al. | Aug 2002 | B1 |
| 6452595 | Montrym et al. | Sep 2002 | B1 |
| 6469707 | Voorhies | Oct 2002 | B1 |
| 6480205 | Greene et al. | Nov 2002 | B1 |
| 6501564 | Schramm et al. | Dec 2002 | B1 |
| 6504542 | Voorhies et al. | Jan 2003 | B1 |
| 6522329 | Ihara et al. | Feb 2003 | B1 |
| 6523102 | Dye et al. | Feb 2003 | B1 |
| 6525737 | Duluk, Jr. et al. | Feb 2003 | B1 |
| 6529207 | Landau et al. | Mar 2003 | B1 |
| 6545684 | Dragony et al. | Apr 2003 | B1 |
| 6606093 | Gossett et al. | Aug 2003 | B1 |
| 6611272 | Hussain et al. | Aug 2003 | B1 |
| 6614444 | Duluk, Jr. et al. | Sep 2003 | B1 |
| 6614448 | Garlick et al. | Sep 2003 | B1 |
| 6624823 | Deering | Sep 2003 | B2 |
| 6633197 | Sutardja | Oct 2003 | B1 |
| 6633297 | McCormack et al. | Oct 2003 | B2 |
| 6646639 | Greene et al. | Nov 2003 | B1 |
| 6671000 | Cloutier | Dec 2003 | B1 |
| 6693637 | Koneru et al. | Feb 2004 | B2 |
| 6693639 | Duluk, Jr. et al. | Feb 2004 | B2 |
| 6697063 | Zhu | Feb 2004 | B1 |
| 6704022 | Aleksic | Mar 2004 | B1 |
| 6717576 | Duluk, Jr. et al. | Apr 2004 | B1 |
| 6717578 | Deering | Apr 2004 | B1 |
| 6734861 | Van Dyke et al. | May 2004 | B1 |
| 6741247 | Fenney | May 2004 | B1 |
| 6747057 | Ruzafa et al. | Jun 2004 | B2 |
| 6765575 | Voorhies et al. | Jul 2004 | B1 |
| 6778177 | Furtner | Aug 2004 | B1 |
| 6788301 | Thrasher | Sep 2004 | B2 |
| 6798410 | Redshaw et al. | Sep 2004 | B1 |
| 6803916 | Ramani et al. | Oct 2004 | B2 |
| 6819332 | Baldwin | Nov 2004 | B2 |
| 6833835 | van Vugt | Dec 2004 | B1 |
| 6901497 | Tashiro et al. | May 2005 | B2 |
| 6906716 | Moreton et al. | Jun 2005 | B2 |
| 6938176 | Alben et al. | Aug 2005 | B1 |
| 6940514 | Wasserman et al. | Sep 2005 | B1 |
| 6947057 | Nelson et al. | Sep 2005 | B2 |
| 6956579 | Diard et al. | Oct 2005 | B1 |
| 6961057 | Van Dyke et al. | Nov 2005 | B1 |
| 6978317 | Anantha et al. | Dec 2005 | B2 |
| 7002591 | Leather et al. | Feb 2006 | B1 |
| 7009607 | Lindholm et al. | Mar 2006 | B2 |
| 7009615 | Kilgard et al. | Mar 2006 | B1 |
| 7061495 | Leather | Jun 2006 | B1 |
| 7061640 | Maeda | Jun 2006 | B1 |
| 7064771 | Jouppi et al. | Jun 2006 | B1 |
| 7075542 | Leather | Jul 2006 | B1 |
| 7081902 | Crow et al. | Jul 2006 | B1 |
| 7119809 | McCabe | Oct 2006 | B1 |
| 7126600 | Fowler et al. | Oct 2006 | B1 |
| 7154066 | Talwar et al. | Dec 2006 | B2 |
| 7158148 | Toji et al. | Jan 2007 | B2 |
| 7167259 | Varga | Jan 2007 | B2 |
| 7170515 | Zhu | Jan 2007 | B1 |
| 7184040 | Tzvetkov | Feb 2007 | B1 |
| 7224364 | Yue et al. | May 2007 | B1 |
| 7307628 | Goodman et al. | Dec 2007 | B1 |
| 7307638 | Leather et al. | Dec 2007 | B2 |
| 7382368 | Molnar et al. | Jun 2008 | B1 |
| 7453466 | Hux et al. | Nov 2008 | B2 |
| 7483029 | Crow et al. | Jan 2009 | B2 |
| 7548996 | Baker et al. | Jun 2009 | B2 |
| 7551174 | Iourcha et al. | Jun 2009 | B2 |
| 7633506 | Leather et al. | Dec 2009 | B1 |
| 7634637 | Lindholm et al. | Dec 2009 | B1 |
| 7791617 | Crow et al. | Sep 2010 | B2 |
| 7965902 | Zelinka et al. | Jun 2011 | B1 |
| 8063903 | Vignon et al. | Nov 2011 | B2 |
| 20010005209 | Lindholm et al. | Jun 2001 | A1 |
| 20020050979 | Oberoi et al. | May 2002 | A1 |
| 20020097241 | McCormack et al. | Jul 2002 | A1 |
| 20020130863 | Baldwin | Sep 2002 | A1 |
| 20020140655 | Liang et al. | Oct 2002 | A1 |
| 20020158885 | Brokenshire et al. | Oct 2002 | A1 |
| 20020196251 | Duluk, Jr. et al. | Dec 2002 | A1 |
| 20030067468 | Duluk, Jr. et al. | Apr 2003 | A1 |
| 20030076325 | Thrasher | Apr 2003 | A1 |
| 20030122815 | Deering | Jul 2003 | A1 |
| 20030163589 | Bunce et al. | Aug 2003 | A1 |
| 20030194116 | Wong et al. | Oct 2003 | A1 |
| 20030201994 | Taylor et al. | Oct 2003 | A1 |
| 20040085313 | Moreton et al. | May 2004 | A1 |
| 20040130552 | Duluk et al. | Jul 2004 | A1 |
| 20040183801 | Deering | Sep 2004 | A1 |
| 20040196285 | Rice et al. | Oct 2004 | A1 |
| 20040207642 | Crisu et al. | Oct 2004 | A1 |
| 20040246251 | Fenney et al. | Dec 2004 | A1 |
| 20050030314 | Dawson | Feb 2005 | A1 |
| 20050041037 | Dawson | Feb 2005 | A1 |
| 20050066148 | Luick | Mar 2005 | A1 |
| 20050122338 | Hong et al. | Jun 2005 | A1 |
| 20050134588 | Aila et al. | Jun 2005 | A1 |
| 20050134603 | Iourcha et al. | Jun 2005 | A1 |
| 20050179698 | Vijayakumar et al. | Aug 2005 | A1 |
| 20050259100 | Teruyama | Nov 2005 | A1 |
| 20060044317 | Bourd et al. | Mar 2006 | A1 |
| 20060170690 | Leather | Aug 2006 | A1 |
| 20060203005 | Hunter | Sep 2006 | A1 |
| 20060245001 | Lee et al. | Nov 2006 | A1 |
| 20060267981 | Naoi | Nov 2006 | A1 |
| 20070139440 | Crow et al. | Jun 2007 | A1 |
| 20070268298 | Alben et al. | Nov 2007 | A1 |
| 20070273689 | Tsao | Nov 2007 | A1 |
| 20070296725 | Steiner et al. | Dec 2007 | A1 |
| 20080024497 | Crow et al. | Jan 2008 | A1 |
| 20080024522 | Crow et al. | Jan 2008 | A1 |
| 20080100618 | Woo et al. | May 2008 | A1 |
| 20080273218 | Kitora et al. | Nov 2008 | A1 |
| Number | Date | Country |
|---|---|---|
| 101093578 | Dec 2007 | CN |
| 06180758 | Jun 1994 | JP |
| 10-134198 | May 1998 | JP |
| 11195132 | Jul 1999 | JP |
| 2005182547 | Jul 2005 | JP |
| 0013145 | Mar 2000 | WO |
| Entry |
|---|
| A Hardware Assisted Design Rule Check Architecture Larry Seiler Jan. 1982 Proceedings of the 19th Conference on Design Automation DAC '82 Publisher: IEEE Press. |
| A Parallel Algorithm for Polygon Rasterization Juan Pineda Jun. 1988 ACM. |
| A VLSI Architecture for Updating Raster-Scan Displays Satish Gupta, Robert F. Sproull, Ivan E. Sutherland Aug. 1981 ACM SIGGRAPH Computer Graphics, Proceedings of the 8th Annual Conference on Computer Graphics and Interactive Techniques SIGGRAPH '81, vol. 15 Issue Publisher: ACM Press. |
| Blythe, OpenGL section 3.4.1, Basic Line Segment Rasterization, Mar. 29, 1997, pp. 1-3. |
| Boyer, et al.; “Discrete Analysis for Antialiased Lines;” Eurographics 2000; 3 Pages. |
| Crow; "The Use of Grayscale for Improved Raster Display of Vectors and Characters;" University of Texas, Austin, Texas; Work supported by the National Science Foundation under Grants MCS 76-83889; pp. 1-5: ACM Press. |
| Foley, J. “Computer Graphics: Principles and Practice”, 1987, Addison-Wesley Publishing, 2nd Edition, p. 545-546. |
| Fuchs; “Fast Spheres Shadow, Textures, Transparencies, and Image Enhancements in Pixel-Planes”; ACM; 1985; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514. |