The present invention relates to the field of graphics processing.
Electronic systems and circuits have made a significant contribution towards the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems facilitate increased productivity and cost reduction in analyzing and communicating data, ideas and trends in most areas of business, science, education and entertainment. Electronic systems designed to produce these results usually involve interfacing with a user and the interfacing often involves presentation of graphical images to the user. Displaying graphics images traditionally involves intensive data processing and coordination requiring considerable resources and often consuming significant power.
An image is typically represented as a raster (an array) of logical picture elements (pixels). Pixel data corresponding to certain surface attributes of an image (e.g., color, depth, texture, etc.) are assigned to each pixel, and the pixel data determines the nature of the projection on the display screen area associated with the logical pixel. Conventional three dimensional graphics processors typically involve extensive and numerous sequential stages or “pipeline” type processes that manipulate the pixel data in accordance with various vertex parameter values and instructions to map a three dimensional scene in the world coordinate system to a two dimensional projection (e.g., on a display screen) of an image. A relatively significant amount of processing and memory resources is usually required to implement the numerous stages of a traditional pipeline.
A number of new categories of devices (e.g., portable game consoles, portable wireless communication devices, portable computer systems, etc.) are emerging where size and power consumption are a significant concern. Many of these devices are small enough to be held in the hands of a user, making them very convenient, and the display capabilities of the devices are becoming increasingly important as the underlying potential of other activities (e.g., communications, game applications, internet applications, etc.) increases. However, the resources (e.g., processing capability, storage resources, etc.) of a number of these devices and systems are usually relatively limited. These limitations can make retrieving, coordinating and manipulating the information associated with a final image rendered or presented on a display very difficult or even impossible. In addition, traditional graphics information processing can consume significant power and be a significant drain on limited power supplies, such as a battery.
Early z scoreboard tracking systems and methods in accordance with the present invention are described herein. In one embodiment, multiple pixels are received and a pixel depth raster operation is performed on the pixels. The pixel depth raster operation comprises discarding a pixel that is occluded. In one exemplary implementation, the depth raster operation is done at a faster rate than a color raster operation. Pixels that pass the depth raster operation are checked for screen coincidence. Pixels with screen coincidence are stalled and pixels without screen coincidence are forwarded to lower stages of the pipeline. The lower stages of the pipeline are programmable and pixel flight time can vary (e.g., can include multiple passes through the lower stages). Execution through the lower stages is directed by a program sequencer which also directs notification to the pixel flight tracking when a pixel is done processing.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention by way of example and not by way of limitation. The drawings referred to in this specification should be understood as not being drawn to scale except if specifically noted.
Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present invention.
Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means generally used by those skilled in data processing arts to effectively convey the substance of their work to others skilled in the art. A procedure, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, optical, or quantum signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying” or the like, refer to the action and processes of a computer system, or similar processing device (e.g., an electrical, optical, or quantum, computing device), that manipulates and transforms data represented as physical (e.g., electronic) quantities. The terms refer to actions and processes of the processing devices that manipulate or transform physical quantities within a computer system's component (e.g., registers, memories, logic, other such information storage, transmission or display devices, etc.) into other data similarly represented as physical quantities within other components.
The present invention provides efficient and convenient graphics data organization and processing. A present invention graphics system and method can facilitate presentation of graphics images with a reduced amount of resources dedicated to graphics information processing and can also facilitate increased power conservation. In one embodiment of the present invention, processing of graphics information is simplified and coordination of graphics information between different pixels is facilitated. For example, if pixel data does not impact (e.g., contribute to, modify, etc.) the image display presentation, the power dissipated in processing the information is minimized by “killing” the pixel (e.g., not clocking the pixel packet payload through the graphics pipeline). Alternatively, the pixel packet can be removed from the graphics pipeline altogether. Information retrieval can also be coordinated to ensure information is being retrieved and forwarded in the proper sequence (e.g., to avoid improper screen coincidence, multiple pass issues, read-modify-write problems, etc.). In addition, embodiments of the present invention can provide flexible organization of graphics information and facilitate programmable multiple pipeline passes.
Graphics pipeline 100 includes setup stage 105, raster stage 110, gatekeeper stage 120, program sequencer stage 130, arithmetic logic unit stage 140 and data write stage 150. In one embodiment of the present invention, a host provides graphics pipeline 100 with vertex data (e.g., points in three dimensional space that are being rendered), commands for rendering particular triangles given the vertex data, and programming information for the pipeline (e.g., register writes for loading instructions into different graphics pipeline 100 stages). The stages of graphics pipeline 100 cooperatively operate to process graphics information.
Setup stage 105 receives vertex data and prepares information for processing in graphics pipeline 100. Setup stage 105 can perform geometrical transformation of coordinates, perform viewport transforms, perform clipping and prepare perspective correct parameters for use in raster stage 110, including parameter coefficients. In one embodiment, the setup unit applies a user defined view transform to vertex information (e.g., x, y, z, color and/or texture attributes, etc.) and determines screen space coordinates for each triangle. Setup stage 105 can also support guard-band clipping, culling of back facing triangles (e.g., triangles facing away from a viewer), and determining interpolated texture level of detail (e.g., level of detail based upon triangle level rather than pixel level). In addition, setup stage 105 can collect statistics and debug information from other graphics processing blocks.
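As an illustration of the viewport transform mentioned above, the following C sketch maps a post-transform vertex from normalized device coordinates to screen space. It is not taken from the specification; the names and the [-1, 1] coordinate convention are assumptions made only for the example.

```c
/* Hypothetical viewport transform: map normalized device coordinates in
 * [-1, 1] to screen-space pixel coordinates for a viewport with the given
 * origin and size.  Names and conventions are illustrative only. */
typedef struct { float x, y, z; } ndc_vertex_t;
typedef struct { float x, y, z; } screen_vertex_t;

static screen_vertex_t viewport_transform(ndc_vertex_t v,
                                          float vp_x, float vp_y,
                                          float vp_width, float vp_height)
{
    screen_vertex_t s;
    s.x = vp_x + (v.x * 0.5f + 0.5f) * vp_width;   /* [-1,1] -> [vp_x, vp_x + vp_width]  */
    s.y = vp_y + (v.y * 0.5f + 0.5f) * vp_height;  /* [-1,1] -> [vp_y, vp_y + vp_height] */
    s.z = v.z * 0.5f + 0.5f;                       /* depth mapped to [0,1]              */
    return s;
}
```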
Setup stage 105 can include a vertex buffer (e.g., vertex cache) that can be programmably controlled (e.g., by software, a driver, etc.) to efficiently utilize resources (e.g., for vertex formats with different bit-size words). For example, transformed vertex data can be tracked and saved in the vertex buffer for future use without having to perform transform operations for the same vertex again. In one embodiment, setup stage 105 sets up barycentric coefficients for raster stage 110. In one exemplary implementation, setup stage 105 is a floating point Very Long Instruction Word (VLIW) machine that supports 32-bit IEEE float, S15.16 fixed point and packed 0.8 fixed point formats.
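For reference, S15.16 denotes a signed fixed-point format with one sign bit, 15 integer bits and 16 fraction bits. The C sketch below is purely illustrative of how such a format behaves; the conversion helpers are hypothetical and not part of the specification.

```c
#include <stdint.h>
#include <stdio.h>

/* S15.16: signed 32-bit value with 16 fractional bits (1 sign, 15 integer, 16 fraction). */
typedef int32_t s15_16_t;

static s15_16_t float_to_s15_16(float f)    { return (s15_16_t)(f * 65536.0f); }
static float    s15_16_to_float(s15_16_t x) { return (float)x / 65536.0f; }

int main(void)
{
    s15_16_t a = float_to_s15_16(1.5f);    /* 0x00018000 */
    s15_16_t b = float_to_s15_16(-0.25f);  /* 0xFFFFC000 */
    /* Fixed-point multiply: 32x32 -> 64-bit product, then shift out the extra 16 fraction bits. */
    s15_16_t prod = (s15_16_t)(((int64_t)a * b) >> 16);
    printf("1.5 * -0.25 = %f\n", s15_16_to_float(prod));  /* prints -0.375000 */
    return 0;
}
```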
Raster stage 110 determines which pixels correspond to a particular triangle and interpolates parameters from setup stage 105 associated with the triangle to provide a set of interpolated parameter variables and instruction pointers or sequence numbers associated with (e.g., describing) each pixel. For example, raster stage 110 can provide a “translation” or rasterization from a triangle view to a pixel view of an image. In one embodiment, raster stage 110 scans or iterates each pixel in the intersection of a triangle and a scissor rectangle. For example, raster stage 110 can process the pixels of a given triangle and determine which processing operations are appropriate for pixel rendering (e.g., operations related to color, texture, depth, fog, etc.). Raster stage 110 can support guard-band (e.g., +/−1K) coordinates, providing efficient guard-band rasterization of on-screen pixels and facilitating a reduction of clipping operations. In one exemplary implementation, raster stage 110 is compatible with OpenGL ES and D3DM rasterization rules. Raster stage 110 is also programmable, facilitating reduction of power that would otherwise be consumed by unused features and faster rendering of simple drawing tasks, as compared to a hard-coded rasterizer unit in which features consume time or power (or both) whether or not they are being used.
In one embodiment, raster stage 110 also generates pixel packets utilized in graphics pipeline 100. Each pixel packet includes one or more rows and each row includes a payload portion and a sideband portion. A payload portion includes fields for various values including interpolated parameter values (e.g., values that are the result of raster interpolation operations). For example, the fields can be created to hold values associated with pixel surface attributes (e.g., color, texture, depth, fog, (x,y) location, etc.). Instruction sequence numbers associated with the pixel processing are assigned to the pixel packets and placed in an instruction sequence field of the sideband portion. The sideband information also includes a status field (e.g., kill field).
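The row organization described above can be pictured roughly as follows. The field names, types and widths in this C sketch are hypothetical, chosen only to illustrate the payload/sideband split, the instruction sequence field and the kill (status) field.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical pixel packet row: a payload of interpolated parameter values
 * plus sideband control information that travels alongside it. */
typedef struct {
    /* Payload portion: interpolated surface attribute values from the raster stage. */
    struct {
        float    r, g, b, a;   /* color                  */
        float    z;            /* depth                  */
        float    s, t;         /* texture coordinates    */
        uint16_t x, y;         /* screen (x, y) location */
    } payload;

    /* Sideband portion: control fields that steer downstream processing. */
    struct {
        uint8_t instruction_sequence; /* which instruction sequence applies to this row          */
        bool    kill;                 /* status: row no longer contributes; do not clock payload */
    } sideband;
} pixel_packet_row_t;
```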
In one embodiment, raster stage 110 calculates barycentric coordinates for pixel packets. In a barycentric coordinate system, distances in a triangle are measured with respect to its vertices. The use of barycentric coordinates reduces the required dynamic range, which permits using fixed point calculations that require less power than floating point calculations. In one embodiment, raster stage 110 can also interleave even number pixel rows and odd number pixel rows to account for multi-clock-cycle latencies of downstream pipestages.
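For illustration, the following C sketch (assumed conventions, not drawn from the specification) computes barycentric weights for a point against a triangle's three vertices. Because the weights of points inside the triangle lie between 0 and 1, the dynamic range needed to represent them is small, which is what makes fixed point attractive.

```c
/* Barycentric weights (w[0], w[1], w[2]) of point p with respect to triangle (v0, v1, v2).
 * Each weight is the signed area of the sub-triangle opposite a vertex divided by the
 * full triangle area, so inside the triangle all weights lie in [0, 1] and sum to 1. */
typedef struct { float x, y; } vec2;

static float edge(vec2 a, vec2 b, vec2 p)
{
    /* Twice the signed area of triangle (a, b, p). */
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

static int barycentric(vec2 v0, vec2 v1, vec2 v2, vec2 p, float w[3])
{
    float area = edge(v0, v1, v2);
    if (area == 0.0f)
        return 0;                   /* degenerate triangle */
    w[0] = edge(v1, v2, p) / area;  /* weight of v0 */
    w[1] = edge(v2, v0, p) / area;  /* weight of v1 */
    w[2] = edge(v0, v1, p) / area;  /* weight of v2 */
    return 1;
}
```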
A present invention graphics pipeline system and method can facilitate efficient utilization of resources by limiting processing on pixels that do not contribute to an image display presentation. Z raster stage 111 performs an analysis to determine, relatively “early” in the graphics pipeline, if a pixel contributes to the image display presentation. For example, an analysis of whether a pixel is occluded (e.g., has values associated with “hidden” surfaces that do not contribute to an image display presentation) is performed. In one embodiment, a pixel packet row is not clocked through (e.g., CMOS components for the payload portion do not switch) for killed pixels. The present invention can prevent power from being consumed on processing pixels that would otherwise be discarded at the end of the pipeline. The raster stage removes pixel information (e.g., pixel packet rows) associated with the pixel from the pipeline if the information does not contribute to the image display presentation and notifies gatekeeper stage 120. Color raster stage 112 performs color raster operations.
In one embodiment, Z rasterizing is done at a faster rate than color rasterizing. In one exemplary implementation, Z raster operations are performed on four pixels at a time, and the pixels that are discarded are “finished” faster than the pixels that go through color rasterizing. The discarding of some pixels while others rasterized at the same time proceed to the lower stages of the pipeline introduces timing issues that are handled by the scoreboarding and program sequencing described below. The scoreboarding and program sequencing also handle timing issues associated with variable length programmable shader operations, which can include re-circulating a pixel through the pipeline stages for multiple passes.
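A minimal sketch of the early Z idea is given below, assuming a conventional less-than depth test, a simple linear depth buffer, and a quad of four pixels per call; none of these particulars are mandated by the specification, and real hardware may defer the depth-buffer write. Pixels that fail are marked killed so no further work is clocked for them, while survivors continue on to color rasterizing.

```c
#include <stdbool.h>
#include <stdint.h>

#define SCREEN_W 320  /* assumed screen width for linear depth-buffer addressing */

/* Hypothetical early-Z test over a quad of four pixels.  Pixels whose depth is
 * not in front of the value already stored in the depth buffer are killed;
 * only the survivors are forwarded for (slower) color rasterization. */
typedef struct {
    uint16_t x, y;
    float    z;
    bool     kill;
} quad_pixel_t;

static void early_z_quad(quad_pixel_t quad[4], float *depth_buffer)
{
    for (int i = 0; i < 4; i++) {
        float stored = depth_buffer[quad[i].y * SCREEN_W + quad[i].x];
        if (quad[i].z >= stored) {
            quad[i].kill = true;   /* occluded: discard now, save downstream power */
        } else {
            depth_buffer[quad[i].y * SCREEN_W + quad[i].x] = quad[i].z;
            quad[i].kill = false;  /* visible so far: forward to color raster */
        }
    }
}
```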
Gatekeeper stage 120 regulates the flow of pixels from raster stage 110 to the downstream stages of graphics pipeline 100.
In one embodiment, gatekeeper stage 120 utilizes scoreboarding techniques to track and identify coincident pixel issues. Gatekeeper stage 120 can also utilize the scoreboard to track pixels that finish processing through the pipeline (e.g., by being written to memory or being killed). Scoreboard 121 facilitates coordination of pixels in a pipeline to maintain an appropriate processing flow (e.g., the order in which an application drew a triangle). For example, it is possible for an application to direct one triangle to be rendered over the top of another triangle, and it is possible for a pixel associated with the second triangle to be coincident (e.g., have the same screen location) with a pixel from the first triangle.
Scoreboard 121 tracks the screen locations of pixels that are in “flight” and being processed by downstream stages of the graphics pipeline. Scoreboard 121 prevents a hazard where one pixel in a triangle is coincident with (“on top of”) another pixel that is being processed and in flight but not yet retired. For example, when a pixel packet is received at gatekeeper stage 120, the screen location for the pixel packet is stored at scoreboard 121. When a second pixel packet having the same screen location is received, scoreboard 121 indicates that another pixel with that screen location is currently being processed by downstream stages of the graphics pipeline. In one embodiment, scoreboard 121 is implemented as a bit mask. In one exemplary implementation, the bit mask is a grid of bits indicating whether a pixel having a particular (x, y) location is busy (e.g., being processed by the graphics pipeline).
In one embodiment, gatekeeper stage 120 directs raster stage 110 to stall propagation of the new pixel to downstream stages in response to detecting screen coincidence between the pixel and pixels currently processing. Upon completion of processing for a pixel packet, a message is sent from data write stage 150 to gatekeeper stage 120 indicating that the pixel has completed processing. In response to receiving the message, scoreboard 121 is updated to indicate that the screen location associated with the pixel is now free, and that processing can commence on another pixel having the same screen location. In one embodiment, the corresponding bit in a bit mask is cleared.
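The set/check/clear lifecycle described above can be sketched as a simple bit mask in C. The screen dimensions and function names below are assumptions made for illustration, not the hardware implementation.

```c
#include <stdbool.h>
#include <stdint.h>

#define SB_W 320  /* assumed screen width  */
#define SB_H 240  /* assumed screen height */

/* Hypothetical scoreboard: one bit per screen location, set while a pixel at
 * that location is in flight in the downstream pipeline stages. */
static uint32_t scoreboard[(SB_W * SB_H + 31) / 32];

static bool sb_busy(unsigned x, unsigned y)
{
    unsigned bit = y * SB_W + x;
    return (scoreboard[bit / 32] >> (bit % 32)) & 1u;
}

static void sb_set(unsigned x, unsigned y)    /* pixel enters downstream stages */
{
    unsigned bit = y * SB_W + x;
    scoreboard[bit / 32] |= 1u << (bit % 32);
}

static void sb_clear(unsigned x, unsigned y)  /* data write (or kill) reports completion */
{
    unsigned bit = y * SB_W + x;
    scoreboard[bit / 32] &= ~(1u << (bit % 32));
}

/* Gatekeeper policy: stall a new pixel while a coincident pixel is still in flight. */
static bool gatekeeper_admit(unsigned x, unsigned y)
{
    if (sb_busy(x, y))
        return false;  /* stall: wait for the in-flight pixel at this location to retire */
    sb_set(x, y);
    return true;       /* forward to downstream stages */
}
```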
Program sequencer (P Seq) 130 functions by controlling the operation of the other downstream components of the graphics pipeline 100. In one embodiment program sequencer 130 works in conjunction with a graphics driver to implement a method for loading and executing a programmable shader. The program sequencer 130 can interact with the graphics driver (e.g., a graphics driver executing on the CPU) to control the manner in which the functional modules of the graphics pipeline 100 receive information, configure themselves for operation, and process graphics primitives. For example, graphics rendering data (e.g., primitives, triangle strips, etc.), pipeline configuration information (e.g., mode settings, rendering profiles, etc.), and rendering programs (e.g., pixel shader programs, vertex shader programs, etc.) are received by the lower pipeline stage over a common input from upstream pipeline stages (e.g., from an upstream raster module, from a setup module, or from the graphics driver).
In one exemplary implementation, the program sequencer 130 directs execution of an indeterminate length shader program. As used herein, the term “indeterminate length” shader program refers to the fact that the shader programs that can be executed by a GPU are not arbitrarily limited by a predetermined, or format based, length. Thus, for example, the shader programs that can be executed can be short shader programs (e.g., 16 to 32 instructions long, etc.), normal shader programs (e.g., 64 to 128 instructions long, etc.), long shader programs (e.g., 256 instructions long, etc.), very long shader programs (e.g., more than 1024 instructions long, etc.) or the like. In one embodiment, program sequencer 130 directs execution of indeterminate length shader programs by executing them in portions.
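One way to picture execution in portions, purely as an assumption-laden sketch, is a loop that issues the program in fixed-size chunks and carries the pixel's intermediate state from one pass through the lower stages to the next. The portion size, state layout and function names are hypothetical.

```c
/* Hypothetical sketch: run an arbitrarily long shader program by issuing it in
 * fixed-size portions, re-circulating the pixel's intermediate state between
 * portions rather than requiring the whole program to fit in one pass. */
#define PORTION_SIZE 32  /* instructions the downstream stages accept per pass (assumed) */

typedef struct { float regs[8]; } pixel_state_t;  /* intermediate results carried between passes */

typedef void (*instruction_fn)(pixel_state_t *);

static void run_shader(const instruction_fn *program, int program_length, pixel_state_t *state)
{
    for (int start = 0; start < program_length; start += PORTION_SIZE) {
        int end = start + PORTION_SIZE;
        if (end > program_length)
            end = program_length;
        for (int i = start; i < end; i++)
            program[i](state);  /* one pass through the lower pipeline stages */
        /* state is re-circulated to the top of the lower stages for the next portion */
    }
}
```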
P Seq 130 is also responsible for fetching (e.g., reading) a plurality of different data types (e.g., color data, depth data, texture data, etc.) from a memory (e.g., memory 132) in a single stage. In one embodiment, program sequencer 130 fetches a variety of different types of surface attribute information from memory 170, including surface information related to pixels (e.g., pixels generated by a rasterization module). The surface information can also be associated with a plurality of graphics functions to be performed on the pixels, and the surface information is stored in pixel information (e.g., a pixel packet) associated with the pixels. The plurality of graphics functions can include color blending and texture mapping. In one exemplary implementation, program sequencer 130 directs a recirculation data path for recirculating pixel information through shading and texture operations for multiple passes or loops.
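The single-stage fetch of several data types might be pictured as follows; the mask encoding, memory layout and function name are hypothetical and serve only to illustrate selecting which surface data a pixel packet needs.

```c
#include <stdint.h>

/* Hypothetical sketch of fetching several surface data types for a pixel in a
 * single stage, keyed by which graphics functions the pixel's packet requests. */
enum surface_data { FETCH_COLOR = 1 << 0, FETCH_DEPTH = 1 << 1, FETCH_TEXTURE = 1 << 2 };

typedef struct {
    unsigned fetch_mask;  /* which data types this pixel needs (assumed encoding) */
    uint32_t color;
    float    depth;
    uint32_t texel;
} pixel_fetch_t;

static void unified_fetch(pixel_fetch_t *p, const uint32_t *color_mem,
                          const float *depth_mem, const uint32_t *texture_mem,
                          unsigned addr)
{
    if (p->fetch_mask & FETCH_COLOR)   p->color = color_mem[addr];
    if (p->fetch_mask & FETCH_DEPTH)   p->depth = depth_mem[addr];
    if (p->fetch_mask & FETCH_TEXTURE) p->texel = texture_mem[addr];
}
```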
Arithmetic logic unit stage 140 (e.g., an ALU) performs arithmetic operations (e.g., shading and texture combining operations) on the pixel packet payload values in accordance with the instruction sequence directed by program sequencer 130.
Data write stage 150 forwards pixel processing results (e.g., color results, Z-depth results, etc.) out to memory. In one embodiment, data write stage 150 forwards the results to fragment data cache 170. In one exemplary implementation, the data write stage forwards an indication to scoreboard 121 that the pixel is no longer in flight.
With reference now to
As described above, certain processes and steps of the present invention are realized, in one embodiment, as a series of instructions (e.g., software program) that reside within computer readable memory (e.g., memory 221) of a computer system (e.g., system 200) and are executed by the CPU 201 and graphics processor 205 of system 200. When executed, the instructions cause the computer system 200 to implement the functionality of the present invention as described below.
As shown in
Additionally, it should be appreciated that although the components 201-257 are depicted in
In block 311, pixel information for multiple pixels is received. In one embodiment of the present invention, the multiple pixel information is received in a graphics pipeline raster stage (e.g., raster stage 110). In one exemplary implementation, receiving pixel packet information also includes retrieving pixel surface attribute values. The pixel surface attribute values can be inserted in the pixel packet row.
At block 312, a pixel depth raster operation is performed on the multiple pixels. In one embodiment, the pixel depth raster operation is done at a faster rate than a color raster operation. In one exemplary implementation, the pixel depth raster operation is performed on four pixels at a time, and the pixels that are discarded are finished faster than the pixels that are forwarded for color rasterizing. The depth determination includes analyzing whether a pixel associated with the pixel packet information is occluded. For example, a depth comparison of Z values is performed to determine if another pixel already processed and written to a frame buffer is in “front” of a pixel currently entering a data fetch stage. If there is another pixel already processed and in front, the current pixel fails the Z test and is discarded or removed from further processing. If there is not another pixel already processed and in front, the current pixel passes the Z test and the process proceeds to block 313.
The pixels that pass the pixel depth operation are checked for screen coincidence in block 313. The “flight” through the pipeline, or processing, of the multiple pixels that are forwarded to the lower stages of the graphics pipeline is tracked. In one embodiment, a scoreboard is checked for an indication of screen coincidence. In one exemplary implementation, bits in a scoreboard representing the screen positions of pixels entering the downstream pipeline portion are set; to check a subsequent pixel, a determination is made as to whether the scoreboard contains a set bit associated with the screen position of that subsequent pixel. Propagation of a pixel is stalled in response to detecting screen coincidence with another pixel.
In block 314, pixels that pass the screen coincidence checking are forwarded to lower stages of the graphics pipeline for downstream processing. The flight or processing time of a pixel in the lower stages is variable. In one embodiment, the executed shader program is of indeterminate length and a pixel can pass through or recirculate through the lower stages multiple times. In one embodiment, a downstream data write module reports to an upstream scoreboard module that a particular pixel packet has propagated through the graphics pipeline. In this way, pixels that have been written are marked as retired.
Thus, the present invention facilitates efficient and effective pixel processing. The present invention enables power conservation by eliminating occluded pixels early in the pipeline while coordinating tracking of variable length pipeline processing operations. The depth rasterizing can be performed on multiple pixels at a faster rate than the color rasterizing, while timing issues associated with forwarded pixels that make multiple passes through the pipeline stages are handled.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents. In the claims, the order of elements does not imply any particular order of operations, steps, or the like, unless a particular element makes specific reference to another element as coming before or after.
The present application claims the benefit of and priority to copending Provisional Application 60/964,929, entitled “An Early Z Scoreboard Tracking System and Method,” filed on Aug. 15, 2007, which is incorporated herein by this reference. The present application is also a Continuation in Part of and claims the benefit and priority of the following copending commonly assigned U.S. patent applications: “A Coincident Graphics Pixel Scoreboard Tracking System and Method” by Hutchins et al., filed on May 14, 2004, Ser. No. 10/846,208; and “An Early Kill Removal Graphics Processing System and Method” by Hutchins et al., filed on May 14, 2004, Ser. No. 10/845,662; which are hereby incorporated by this reference.
Number | Name | Date | Kind |
---|---|---|---|
4620217 | Songer | Oct 1986 | A |
4648045 | Demetrescu | Mar 1987 | A |
4667308 | Hayes et al. | May 1987 | A |
4700319 | Steiner | Oct 1987 | A |
4862392 | Steiner | Aug 1989 | A |
4901224 | Ewert | Feb 1990 | A |
5185856 | Alcorn et al. | Feb 1993 | A |
5268995 | Diefendorff et al. | Dec 1993 | A |
5270687 | Killebrew, Jr. | Dec 1993 | A |
5285323 | Hetherington et al. | Feb 1994 | A |
5357604 | San et al. | Oct 1994 | A |
5392393 | Deering | Feb 1995 | A |
5487022 | Simpson et al. | Jan 1996 | A |
5488687 | Rich | Jan 1996 | A |
5491496 | Tomiyasu | Feb 1996 | A |
5557298 | Yang et al. | Sep 1996 | A |
5577213 | Avery et al. | Nov 1996 | A |
5579473 | Schlapp et al. | Nov 1996 | A |
5579476 | Cheng et al. | Nov 1996 | A |
5581721 | Wada et al. | Dec 1996 | A |
5600584 | Schlafly | Feb 1997 | A |
5604824 | Chui et al. | Feb 1997 | A |
5613050 | Hochmuth et al. | Mar 1997 | A |
5655132 | Watson | Aug 1997 | A |
5701444 | Baldwin | Dec 1997 | A |
5748202 | Nakatsuka et al. | May 1998 | A |
5764228 | Baldwin | Jun 1998 | A |
5777628 | Buck-Gengler | Jul 1998 | A |
5808617 | Kenworthy et al. | Sep 1998 | A |
5818456 | Cosman et al. | Oct 1998 | A |
5831640 | Wang et al. | Nov 1998 | A |
5844569 | Eisler et al. | Dec 1998 | A |
5850572 | Dierke | Dec 1998 | A |
5864342 | Kajiya et al. | Jan 1999 | A |
5941940 | Prasad et al. | Aug 1999 | A |
5977977 | Kajiya et al. | Nov 1999 | A |
5995121 | Alcorn et al. | Nov 1999 | A |
6002410 | Battle | Dec 1999 | A |
6118452 | Gannett | Sep 2000 | A |
6166743 | Tanaka | Dec 2000 | A |
6173366 | Thayer et al. | Jan 2001 | B1 |
6222550 | Rosman et al. | Apr 2001 | B1 |
6229553 | Duluk, Jr. et al. | May 2001 | B1 |
6259460 | Gossett et al. | Jul 2001 | B1 |
6259461 | Brown | Jul 2001 | B1 |
6288730 | Duluk, Jr. | Sep 2001 | B1 |
6313846 | Fenney et al. | Nov 2001 | B1 |
6333744 | Kirk et al. | Dec 2001 | B1 |
6351806 | Wyland | Feb 2002 | B1 |
6353439 | Lindholm et al. | Mar 2002 | B1 |
6407740 | Chan | Jun 2002 | B1 |
6411130 | Gater | Jun 2002 | B1 |
6411301 | Parikh et al. | Jun 2002 | B1 |
6417851 | Lindholm et al. | Jul 2002 | B1 |
6466222 | Kao et al. | Oct 2002 | B1 |
6496537 | Kranawetter et al. | Dec 2002 | B1 |
6516032 | Heirich et al. | Feb 2003 | B1 |
6525737 | Duluk, Jr. et al. | Feb 2003 | B1 |
6526430 | Hung et al. | Feb 2003 | B1 |
6542971 | Reed | Apr 2003 | B1 |
6557022 | Sih et al. | Apr 2003 | B1 |
6597363 | Duluk, Jr. et al. | Jul 2003 | B1 |
6604188 | Coon et al. | Aug 2003 | B1 |
6624818 | Mantor et al. | Sep 2003 | B1 |
6636214 | Leather et al. | Oct 2003 | B1 |
6636221 | Morein | Oct 2003 | B1 |
6636223 | Morein | Oct 2003 | B1 |
6664958 | Leather et al. | Dec 2003 | B1 |
6670955 | Morein | Dec 2003 | B1 |
6693643 | Trivedi et al. | Feb 2004 | B1 |
6717577 | Cheng et al. | Apr 2004 | B1 |
6731288 | Parsons et al. | May 2004 | B2 |
6734861 | Van Dyke et al. | May 2004 | B1 |
6745390 | Reynolds et al. | Jun 2004 | B1 |
6778181 | Kilgariff et al. | Aug 2004 | B1 |
6806886 | Zatz | Oct 2004 | B1 |
6819331 | Shih et al. | Nov 2004 | B2 |
6839828 | Gschwind et al. | Jan 2005 | B2 |
6879328 | Deering | Apr 2005 | B2 |
6912695 | Ernst et al. | Jun 2005 | B2 |
6924808 | Kurihara et al. | Aug 2005 | B2 |
6947053 | Malka et al. | Sep 2005 | B2 |
6980209 | Donham et al. | Dec 2005 | B1 |
6980222 | Marion et al. | Dec 2005 | B2 |
6999100 | Leather et al. | Feb 2006 | B1 |
7034828 | Drebin et al. | Apr 2006 | B1 |
7042462 | Kim et al. | May 2006 | B2 |
7145566 | Karlov | Dec 2006 | B2 |
7158141 | Chung et al. | Jan 2007 | B2 |
7187383 | Kent | Mar 2007 | B2 |
7257814 | Melvin et al. | Aug 2007 | B1 |
7280112 | Hutchins | Oct 2007 | B1 |
7298375 | Hutchins | Nov 2007 | B1 |
7450120 | Hakura et al. | Nov 2008 | B1 |
7477260 | Nordquist | Jan 2009 | B1 |
7659909 | Hutchins | Feb 2010 | B1 |
7710427 | Hutchins et al. | May 2010 | B1 |
7928990 | Jiao et al. | Apr 2011 | B2 |
7941645 | Riach et al. | May 2011 | B1 |
7969446 | Hutchins et al. | Jun 2011 | B2 |
8537168 | Steiner et al. | Sep 2013 | B1 |
20020105519 | Lindholm et al. | Aug 2002 | A1 |
20020126126 | Baldwin | Sep 2002 | A1 |
20020129223 | Takayama et al. | Sep 2002 | A1 |
20020169942 | Sugimoto | Nov 2002 | A1 |
20030115233 | Hou et al. | Jun 2003 | A1 |
20030189565 | Lindholm et al. | Oct 2003 | A1 |
20040012597 | Zatz et al. | Jan 2004 | A1 |
20040012599 | Laws | Jan 2004 | A1 |
20040012600 | Deering et al. | Jan 2004 | A1 |
20040024260 | Winkler et al. | Feb 2004 | A1 |
20040078504 | Law et al. | Apr 2004 | A1 |
20040100474 | Demers et al. | May 2004 | A1 |
20040114813 | Boliek et al. | Jun 2004 | A1 |
20040119710 | Piazza et al. | Jun 2004 | A1 |
20040126035 | Kyo | Jul 2004 | A1 |
20040130552 | Duluk, Jr. et al. | Jul 2004 | A1 |
20040246260 | Kim et al. | Dec 2004 | A1 |
20050122330 | Boyd et al. | Jun 2005 | A1 |
20050134588 | Aila et al. | Jun 2005 | A1 |
20050135433 | Chang et al. | Jun 2005 | A1 |
20050162436 | Van Hook et al. | Jul 2005 | A1 |
20050223195 | Kawaguchi | Oct 2005 | A1 |
20050231506 | Simpson et al. | Oct 2005 | A1 |
20050237337 | Leather et al. | Oct 2005 | A1 |
20050280655 | Hutchins et al. | Dec 2005 | A1 |
20060007234 | Hutchins et al. | Jan 2006 | A1 |
20060028469 | Engel | Feb 2006 | A1 |
20060152519 | Hutchins et al. | Jul 2006 | A1 |
20060155964 | Totsuka | Jul 2006 | A1 |
20060177122 | Yasue | Aug 2006 | A1 |
20060288195 | Ma et al. | Dec 2006 | A1 |
20070165029 | Lee et al. | Jul 2007 | A1 |
20070279408 | Zheng et al. | Dec 2007 | A1 |
20070285427 | Morein et al. | Dec 2007 | A1 |
Number | Date | Country |
---|---|---|
1954338 | May 2004 | CN |
101091203 | May 2004 | CN |
1665165 | May 2004 | EP |
1745434 | May 2004 | EP |
1771824 | May 2004 | EP |
05150979 | Jun 1993 | JP |
11053187 | Feb 1999 | JP |
2000047872 | Feb 2000 | JP |
2002073330 | Mar 2002 | JP |
2002171401 | Jun 2002 | JP |
2004199222 | Jul 2004 | JP |
2006196004 | Jul 2006 | JP |
2008161169 | Jul 2008 | JP |
2005112592 | May 2004 | WO |
2006007127 | May 2004 | WO |
2005114582 | Dec 2005 | WO |
Entry |
---|
Pixar, Inc.; PhotoRealistic RenderMan 3.9 Shading Language Extensions; Sep. 1999. |
http://www.encyclopedia.com/html/s1/sideband.asp. |
PCT Notification of Transmittal of The International Search Report and The Written Opinion of the International Searching Authority, or the Declaration. PCT/US05/17032; Applicant NVIDIA Corporation; Mail Date Nov. 9, 2005. |
PCT Notification of Transmittal of The International Search Report or the Declaration. PCT/US05/17526; Applicant Hutchins, Edward A; Mail Date Jan. 17, 2006. |
PCT Notification of Transmittal of The International Search Report and The Written Opinion of the International Searching Authority, or the Declaration. PCT/US05/17031; Applicant NVIDIA Corporation; Mail Date Feb. 9, 2007. |
Hutchins et al, Patent Application Entitled “A Unified Data Fetch Graphics Processing System and Method”, U.S. Appl. No. 10/845,986, filed May 14, 2004. |
Hutchins et al, Patent Application Entitled “An Early Kill Removal Graphics Processing System and Method”, U.S. Appl. No. 10/845,662, filed May 14, 2004. |
Battle, J., Patent Application Entitled “Arbitrary Size Texture Palettes For Use in Graphics Systems”, U.S. Appl. No. 10/845,664, filed May 14, 2004. |
Hutchins et al., Patent Application Entitled “A Single Thread Graphics Processing System and Method”, U.S. Appl. No. 10/846,192, filed May 14, 2004. |
“Interleaved Memory.” Dec. 26, 2002. http://www.webopedia.com/TERM/I/interleaved_memory.html. |
Pirazzi, Chris. “Fields, F1/F2, Interleave, Field Dominance And More.” Nov. 4, 2001. http://lurkertech.com/lg/dominance.html. |
Hennessy, et al., Computer Organization and Design: The Hardware/Software Interface, 1997, Section 6.5. |
Moller, et al.; Real-Time Rendering, 2nd ed., 2002, A K Peters Ltd., pp. 92-99, 2002. |
Hollasch; IEEE Standard 754 Floating Point Numbers; http://steve.hollasch.net/cgindex/coding/ieeefloat.html; dated Feb. 24, 2005; retrieved Oct. 21, 2010. |
Microsoft; (Complete) Tutorial to Understand IEEE Floating-Point Errors; http://support.microsoft.com/kb/42980; dated Aug. 16, 2005; retrieved Oct. 21, 2010. |
The Free Online Dictionary, Thesaurus and Encyclopedia, definition for cache; http://www.thefreedictionary.com/cache; retrieved Aug. 17, 2012. |
Wolfe A, et al., “A Superscalar 3D graphics engine”, MICRO-32. Proceedings of the 32nd annual ACM/IEEE International Symposium on Microarchitecture. Haifa, Israel, Nov. 16-18, 1999. |
Zaharieva-Stoyanova E I: “Data-flow analysis in superscalar computer architecture execution,” Tellecommunications in Modern Satellite, Cable and Broadcasting Services, 2003. |
Number | Date | Country | |
---|---|---|---|
20080246764 A1 | Oct 2008 | US |
Number | Date | Country | |
---|---|---|---|
60964929 | Aug 2007 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10846208 | May 2004 | US |
Child | 12002732 | US | |
Parent | 10845662 | May 2004 | US |
Child | 10846208 | US |