This invention relates generally to computing systems and to three-dimensional computer graphics, and more particularly to a structure and method for generating texture in a three-dimensional graphics processor implementing deferred shading and other enhanced features.
Three-Dimensional Computer Graphics
Computer graphics is the art and science of generating pictures with a computer. Generation of pictures, or images, is commonly called rendering. Generally, in three-dimensional (3D) computer graphics, geometry that represents surfaces (or volumes) of objects in a scene is translated into pixels stored in a frame buffer, and then displayed on a display device. Real-time display devices, such as CRTs or LCDs used as computer monitors, refresh the display by continuously displaying the image over and over. This refresh usually occurs row-by-row, where each row is called a raster line or scan line. In this document, raster lines are numbered from bottom to top, but are displayed in order from top to bottom.
In a 3D animation, a sequence of images is displayed, giving the appearance of motion in three-dimensional space. Interactive 3D computer graphics allows a user to change his viewpoint or change the geometry in real-time, thereby requiring the rendering system to create new images on-the-fly in real-time. Therefore, real-time performance in color, with high quality imagery, is very important.
In 3D computer graphics, each renderable object generally has its own local object coordinate system, and therefore needs to be translated (or transformed) from object coordinates to pixel display coordinates. Conceptually, this is a 4-step process: 1) translation from object coordinates to world coordinates, which is the coordinate system for the entire scene; 2) translation from world coordinates to eye coordinates, based on the viewing point of the scene; 3) translation from eye coordinates to perspective translated eye coordinates, where perspective scaling (farther objects appear smaller) has been performed; and 4) translation from perspective translated eye coordinates to pixel coordinates, also called screen coordinates. Screen coordinates are points in three-dimensional space, and can be in either screen-precision (i.e., pixels) or object-precision (high precision numbers, usually floating-point), as described later. These translation steps can be compressed into one or two steps by precomputing appropriate translation matrices before any translation occurs. Once the geometry is in screen coordinates, it is broken into a set of pixel color values (that is “rasterized”) that are stored into the frame buffer. Many techniques are used for generating pixel color values, including Gouraud shading, Phong shading, and texture mapping.
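The four translation steps can be illustrated with a short sketch. The C++ fragment below is not taken from the patent; it assumes simple row-major 4×4 matrices and hypothetical helper types (Vec4, Mat4), and merely shows how the object-to-world, world-to-eye, and perspective matrices can be precomposed into a single matrix, after which a perspective divide and viewport scale produce pixel (screen) coordinates.

```cpp
#include <array>
#include <cstddef>

using Vec4 = std::array<float, 4>;                    // homogeneous point (x, y, z, w)
using Mat4 = std::array<std::array<float, 4>, 4>;     // row-major 4x4 matrix

// Multiply a 4x4 matrix by a homogeneous point.
Vec4 transform(const Mat4& m, const Vec4& p) {
    Vec4 r{0.0f, 0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            r[i] += m[i][j] * p[j];
    return r;
}

// Compose two transforms: applying compose(a, b) is the same as applying b, then a.
Mat4 compose(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            for (std::size_t k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Steps 1-3 (object -> world -> eye -> perspective) collapsed into one
// precomputed matrix; step 4 is the perspective divide plus a viewport
// scale that yields pixel (screen) coordinates while keeping z for depth.
Vec4 objectToScreen(const Vec4& objPoint,
                    const Mat4& objectToWorld,
                    const Mat4& worldToEye,
                    const Mat4& perspective,
                    float viewportWidth, float viewportHeight) {
    Mat4 all = compose(perspective, compose(worldToEye, objectToWorld));
    Vec4 clip = transform(all, objPoint);
    float invW = 1.0f / clip[3];                                   // perspective divide
    float x = (clip[0] * invW * 0.5f + 0.5f) * viewportWidth;      // pixel x
    float y = (clip[1] * invW * 0.5f + 0.5f) * viewportHeight;     // pixel y
    float z = clip[2] * invW;                                      // depth for z comparisons
    return Vec4{x, y, z, 1.0f};
}
```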
A summary of the prior art rendering process can be found in: “Fundamentals of Three-dimensional Computer Graphics”, by Watt, Chapter 5: The Rendering Process, pages 97 to 113, published by Addison-Wesley Publishing Company, Reading, Mass., 1989, reprinted 1991, ISBN 0-201-15442-0 (hereinafter referred to as the Watt Reference).
Because many different portions of geometry can affect the same pixel, the geometry representing the surfaces closest to the scene viewing point must be determined. Thus, in accordance with the present invention, for each pixel the visible surfaces within the volume subtended by the pixel's area determine the pixel color value, while hidden surfaces are prevented from affecting the pixel. Non-opaque surfaces closer to the viewing point than the closest opaque surface (or surfaces, if an edge of geometry crosses the pixel area) affect the pixel color value, while all other non-opaque surfaces are discarded. In this document, the term “occluded” is used to describe geometry which is hidden by other non-opaque geometry.
Many techniques have been developed to perform visible surface determination, and a survey of these techniques are incorporated herein by reference to: “Computer Graphics: Principles and Practice”, by Foley, van Dam, Feiner, and Hughes, Chapter 15: Visible-Surface Determination, pages 649 to 720, 2nd edition published by Addison-Wesley Publishing Company, Reading, Mass., 1990, reprinted with corrections 1991, ISBN0-201-12110-7 (hereinafter referred to as the Foley Reference). In the Foley Reference, on page 650, the terms “image-precision” and “object-precision” are defined: “Image-precision algorithms are typically performed at the resolution of the display device, and determine the visibility at each pixel. Object-precision algorithms are performed at the precision with which each object is defined, and determine the visibility of each object.”
As a rendering process proceeds, most prior art renderers must compute the color value of a given screen pixel multiple times because multiple surfaces intersect the volume subtended by the pixel. The average number of times a pixel needs to be rendered, for a particular scene, is called the depth complexity of the scene. Simple scenes have a depth complexity near unity, while complex scenes can have a depth complexity perhaps within the range of ten to twenty. For a scene with a depth complexity of ten, 90% of the computation is wasted on hidden pixels. This wasted computation is typical of hardware renderers that use the simple Z-buffer technique (discussed later herein), generally chosen because it is easily built in hardware. Methods more complicated than the Z-buffer technique have heretofore generally been too complex to build in a cost-effective manner. An important feature of the method and apparatus presented here is the avoidance of this wasted computation by eliminating hidden portions of geometry before they are rasterized, while still being simple enough to build in cost-effective hardware.
When a point on a surface (frequently a polygon vertex) is translated to screen coordinates, the point has three coordinates: 1) the x-coordinate in pixel units (generally including a fraction); 2) the y-coordinate in pixel units (generally including a fraction); and 3) the z-coordinate of the point in either eye coordinates, distance from the virtual screen, or some other coordinate system which preserves the relative distance of surfaces from the viewing point. In this document, positive z-coordinate values are used for the “look direction” from the viewing point, and smaller positive values indicate a position closer to the viewing point.
When a surface is approximated by a set of planar polygons, the vertices of each polygon are translated to screen coordinates. For points in or on the polygon (other than the vertices), the screen coordinates are interpolated from the coordinates of vertices, typically by the processes of edge walking and span interpolation. Thus, a z-coordinate value is generally included in each pixel value (along with the color value) as geometry is rendered.
Polygons are used in 3D graphics to define the shape of objects. Texture mapping is a technique for simulating surface textures by coloring polygons with detailed images. Typically, a single texture map will cover an entire object that consists of many polygons. A texture map consists of one or more rectangular arrays of Red-Green-Blue-Alpha (RGBA) color, with alpha being the percentage of translucency. Texture coordinates for each vertex of a polygon are determined. These coordinates are interpolated for each fragment of the polygon, the texture values are looked up in the texture map, and the resulting color is assigned to the fragment.
Objects appear smaller when they are farther from the viewer. Therefore, texture maps must be scaled so that the texture pattern appears the same size relative to the object being textured. To avoid scaling and filtering a texture image for each fragment, a series of pre-filtered texture maps, called mipmaps, is used. Each texture has a group of associated mipmaps. Each mipmap, also called a level of detail (LOD), is formed of an n×m array of texture elements (texels), where n and m are powers of 2. Each texel comprises an R, G, B, and A component. Typically, each successive LOD has a power-of-2 lower resolution than the previous LOD, so that a cascading series of smaller, prefiltered images is provided, rather than requiring such computations to be performed in real-time. For example, LOD 0 may be a 512×512 array, and LOD 9 a 1×1 array.
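As a small arithmetic illustration of the mipmap structure (not specific to the DSGP hardware), the sketch below counts the LODs for a power-of-two texture; for a 512×512 base map it yields ten levels, LOD 0 (512×512) through LOD 9 (1×1), matching the example above.

```cpp
#include <algorithm>
#include <cstdio>

// Number of mipmap levels (LODs) for an n x m texture whose sides are powers
// of two: each successive level halves both dimensions until 1x1 is reached.
int mipLevelCount(int width, int height) {
    int levels = 1;
    while (width > 1 || height > 1) {
        width  = std::max(1, width / 2);
        height = std::max(1, height / 2);
        ++levels;
    }
    return levels;
}

int main() {
    // A 512x512 base map has ten levels: LOD 0 = 512x512 ... LOD 9 = 1x1.
    std::printf("levels = %d\n", mipLevelCount(512, 512));
    return 0;
}
```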
Exact texture coordinates and LOD are typically computed for a sample pixel. The texel values surrounding these texture coordinates are then interpolated to generate texture values for the sample. In bilinear interpolation, the prestored LOD array closest to the computed LOD value is selected, and the values of the four texels in the array nearest to the texture coordinates are interpolated to generate texture values for a sample. In trilinear interpolation, the four texels closest to the texture coordinates in the prestored LOD arrays above and below the computed LOD are used to generate the texture values for a sample. For example, if an LOD value of 3.2 is computed then texels from LOD array 3 and LOD array 4 are used for trilinear interpolation. Trilinear interpolation thus requires eight texels per sample, which makes high memory bandwidth a critical component to efficient image rendering.
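The bilinear and trilinear filters described above can be sketched as follows. This is a generic software model rather than the Texture Block's hardware datapath: the MipLevel container, clamp-to-edge addressing, and half-texel centering are illustrative assumptions, while the overall structure (four texels per LOD array, two LOD arrays blended by the fractional LOD, eight texels per sample in total) follows the text.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Texel { float r, g, b, a; };

// One prefiltered LOD array (n x m texels, n and m powers of two).
struct MipLevel {
    int width, height;
    std::vector<Texel> texels;
    Texel at(int i, int j) const {                 // clamp-to-edge addressing
        i = std::min(std::max(i, 0), width - 1);
        j = std::min(std::max(j, 0), height - 1);
        return texels[static_cast<std::size_t>(j) * width + i];
    }
};

static Texel lerp(const Texel& a, const Texel& b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Bilinear filter: the four texels nearest (s, t) in a single LOD array.
Texel bilinear(const MipLevel& lod, float s, float t) {
    float u = s * lod.width  - 0.5f;               // texel-space coordinates
    float v = t * lod.height - 0.5f;
    int i = static_cast<int>(std::floor(u));
    int j = static_cast<int>(std::floor(v));
    float fu = u - i, fv = v - j;
    Texel t00 = lod.at(i, j),     t10 = lod.at(i + 1, j);
    Texel t01 = lod.at(i, j + 1), t11 = lod.at(i + 1, j + 1);
    return lerp(lerp(t00, t10, fu), lerp(t01, t11, fu), fv);
}

// Trilinear filter: bilinear results from the LOD arrays above and below the
// computed LOD (e.g. LOD 3 and LOD 4 for a computed value of 3.2), blended by
// the fractional LOD -- eight texels per sample in total.
Texel trilinear(const std::vector<MipLevel>& mips, float s, float t, float lod) {
    lod = std::min(std::max(lod, 0.0f), static_cast<float>(mips.size() - 1));
    int lo = static_cast<int>(std::floor(lod));
    int hi = std::min(lo + 1, static_cast<int>(mips.size()) - 1);
    return lerp(bilinear(mips[lo], s, t), bilinear(mips[hi], s, t), lod - lo);
}
```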
Additional objects and features of the invention will be more readily apparent from the following detailed description and appended claims when taken in conjunction with the drawings, in which:
a is a block diagram depicting one embodiment of a texel prefetch buffer constructed in accordance with the teachings of this invention;
b is a block diagram depicting texture buffer tag blocks and memory queues associated with the texel prefetch buffer;
FIGS. 6a and 6b illustrate a spatially coherent texel mapping for texture memory in accordance with one embodiment of this invention;
FIG. 6c depicts address mapping used in one embodiment of this invention;
FIGS. 13a and 13b are block diagrams depicting one embodiment of a re-order system in accordance with the present invention.
The invention is directed to a new graphics processor and method and encompasses numerous substructures including specialized subsystems, subprocessors, devices, architectures, and corresponding procedures. Embodiments of the invention may include one or more of deferred shading, a tiled frame buffer, and multiple-stage hidden surface removal processing, as well as other structures and/or procedures. In this document, the graphics processor of this invention is referred to as the DSGP (for Deferred Shading Graphics Processor), and the associated pipeline is referred to as the “DSGP pipeline”, or simply “the pipeline”.
The present invention includes numerous embodiments of the DSGP pipeline. Embodiments of the present invention are designed to provide high-performance 3D graphics with Phong shading, subpixel anti-aliasing, and texture- and bump-mapping in hardware. The DSGP pipeline provides these sophisticated features without sacrificing performance.
The DSGP pipeline can be connected to a computer via a variety of possible interfaces, including, for example, an Advanced Graphics Port (AGP) and/or a PCI bus interface. VGA and video output are generally also included. Embodiments of the invention support both the OpenGL and Direct3D Application Program Interfaces (APIs). The OpenGL specification, entitled “The OpenGL Graphics System: A Specification (Version 1.2)” by Mark Segal and Kurt Akeley, edited by Jon Leech, is incorporated herein by reference.
Several exemplary embodiments or versions of a Deferred Shading Graphics Pipeline are described here, and embodiments having various combinations of features may be implemented. Additionally, features of the invention may be implemented independently of other features, and need not be used exclusively in Graphics Pipelines which perform shading in a deferred manner.
Tiles, Stamps, Samples, and Fragments
Each frame (also called a scene or user frame) of 3D graphics primitives is rendered into a 3D window on the display screen. The pipeline renders primitives, and the invention is described relative to a set of renderable primitives that include: 1) triangles, 2) lines, and 3) points. Polygons with more than three vertices are divided into triangles in the Geometry block, but the DSGP pipeline could be easily modified to render quadrilaterals or polygons with more sides. Therefore, since the pipeline can render any polygon once it is broken up into triangles, the inventive renderer effectively renders any polygon primitive. A window consists of a rectangular grid of pixels, and the window is divided into tiles (hereinafter tiles are assumed to be 16×16 pixels, but could be any size). If tiles are not used, then the window is considered to be one tile. Each tile is further divided into stamps (hereinafter stamps are assumed to be 2×2 pixels, thereby resulting in 64 stamps per tile, but stamps could be any size within a tile). Each pixel includes one or more samples, where each sample has its own color value and z-value (hereinafter, pixels are assumed to include four samples, but any number could be used). A fragment is the collection of samples covered by a primitive within a particular pixel. The term “fragment” is also used to describe the collection of visible samples within a particular primitive and a particular pixel.
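The tile/stamp/pixel/sample hierarchy described above can be captured in a few illustrative type definitions. The sizes below are the defaults assumed in the text (16×16 tiles, 2×2 stamps, four samples per pixel); the struct layout itself is a sketch, not the pipeline's actual storage format.

```cpp
#include <array>
#include <cstdint>

// Default sizes from the text: 16x16-pixel tiles, 2x2-pixel stamps
// (64 stamps per tile), and four samples per pixel.
constexpr int kTileSize        = 16;
constexpr int kStampSize       = 2;
constexpr int kStampsPerTile   = (kTileSize / kStampSize) * (kTileSize / kStampSize); // 64
constexpr int kSamplesPerPixel = 4;

struct Sample {                        // each sample has its own color and z-value
    std::uint32_t rgba;
    float z;
};

struct Pixel {
    std::array<Sample, kSamplesPerPixel> samples;
};

struct Stamp {                         // 2x2 block of pixels processed together
    std::array<Pixel, kStampSize * kStampSize> pixels;
};

struct Tile {                          // 16x16-pixel region of the window
    std::array<Stamp, kStampsPerTile> stamps;
};

// A fragment: the samples of one pixel covered by one primitive, recorded
// here as a per-sample coverage bitmask (four bits used).
struct Fragment {
    std::uint32_t primitiveId;
    std::uint8_t  coverageMask;
};
```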
Deferred Shading
In ordinary Z-buffer rendering, the renderer calculates the color value (RGB or RGBA) and z value for each pixel of each primitive, then compares the z value of the new pixel with the current z value in the Z-buffer. If the z value comparison indicates the new pixel is “in front of” the existing pixel in the frame buffer, the new pixel overwrites the old one; otherwise, the new pixel is thrown away.
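A minimal software model of this ordinary Z-buffer update is shown below. It is a conventional textbook sketch (assuming a less-than depth test and a depth buffer cleared to infinity), included only to contrast with the deferred approach described next.

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Frame buffer plus Z-buffer for a window of width x height pixels.
struct ZBufferTarget {
    int width, height;
    std::vector<std::uint32_t> color;
    std::vector<float> depth;

    ZBufferTarget(int w, int h)
        : width(w), height(h),
          color(static_cast<std::size_t>(w) * h, 0u),
          depth(static_cast<std::size_t>(w) * h,
                std::numeric_limits<float>::infinity()) {}

    // Ordinary (non-deferred) Z-buffer update: the new pixel is kept only if
    // it is in front of (here, at a smaller z than) the stored pixel; the
    // color was computed either way, which is the wasted work discussed below.
    void writePixel(int x, int y, std::uint32_t rgba, float z) {
        std::size_t idx = static_cast<std::size_t>(y) * width + x;
        if (z < depth[idx]) {
            depth[idx] = z;
            color[idx] = rgba;
        }
    }
};
```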
Z-buffer rendering works well and requires no elaborate hardware. However, it typically results in a great deal of wasted processing effort if the scene contains many hidden surfaces. In complex scenes, the renderer may calculate color values for ten or twenty times as many pixels as are visible in the final picture. This means the computational cost of any per-pixel operation—such as Phong shading or texture-mapping—is multiplied by ten or twenty. The number of surfaces per pixel, averaged over an entire frame, is called the depth complexity of the frame. In conventional z-buffered renderers, the depth complexity is a measure of the renderer's inefficiency when rendering a particular frame.
In accordance with the present invention, in a pipeline that performs deferred shading, hidden surface removal (HSR) is completed before any pixel coloring is done. The objective of a deferred shading pipeline is to generate pixel colors for only those primitives that appear in the final image (i.e., exact HSR). Deferred shading generally requires the primitives to be accumulated before HSR can begin. For a frame with only opaque primitives, the HSR process determines the single visible primitive at each sample within all the pixels. Once the visible primitive is determined for a sample, then the primitive's color at that sample location is determined. Additional efficiency can be achieved by determining a single per-pixel color for all the samples within the same pixel, rather than computing per-sample colors.
For a frame with at least some alpha blending (as defined in the above referenced OpenGL specification) of primitives (generally due to transparency), there are some samples that are colored by two or more primitives. This means the HSR process must determine a set of visible primitives per sample.
In some APIs, such as OpenGL, the HSR process can be complicated by other operations (that is, by operations other than the depth test) that can discard primitives. These other operations include: pixel ownership test, scissor test, alpha test, color test, and stencil test (as described elsewhere in this specification). Some of these operations discard a primitive based on its color (such as alpha test), which is not determined in a deferred shading pipeline until after the HSR process (this is because alpha values are often generated by the texturing process, which is part of pixel fragment coloring). For example, a primitive that would normally obscure a more distant primitive (generally at a greater z-value) can be discarded by alpha test, thereby causing it to not obscure the more distant primitive. An HSR process that does not take alpha test into account could mistakenly discard the more distant primitive. Hence, there may be an inconsistency between deferred shading and alpha test (and similarly with color test and stencil test); that is, pixel coloring is postponed until after HSR, but HSR can depend on pixel colors. Simple solutions to this problem include: 1) eliminating non-depth-dependent tests from the API, such as alpha test, color test, and stencil test, but this potential solution might prevent existing programs from executing properly on the deferred shading pipeline; and 2) having the HSR process do some color generation, only when needed, but this potential solution would complicate the data flow considerably. Therefore, neither of these choices is attractive. A third alternative, called conservative hidden surface removal (CHSR), is one of the important innovations provided by the inventive structure and method. CHSR is described in great detail in subsequent sections of the specification.
Another complication in many APIs is their ability to change the depth test. The standard way of thinking about 3D rendering assumes visible objects are closer than obscured objects (i.e., at lesser z-values), and this is accomplished by selecting a “less-than” depth test (i.e., an object is visible if its z-value is “less-than” that of other geometry). However, most APIs support other depth tests, such as greater-than, less-than, greater-than-or-equal-to, equal, less-than-or-equal-to, not-equal, and similar algebraic, magnitude, and logical relationships. This essentially “changes the rules” for what is visible. This complication is compounded by an API allowing the application program to change the depth test within a frame, so that different geometry may be subject to drastically different rules for visibility. Hence, the time order of primitives with different rendering rules must be taken into account. Consider, for example, three primitives, A, B, and C, that cover the same sample but are rendered under different depth tests. If they are rendered in the order A, B, then C, primitive C will be the final visible surface. However, if the primitives are rendered in the order C, B, then A, primitive A will be the final visible surface. This illustrates how a deferred shading pipeline must preserve the time ordering of primitives, and how the correct pipeline state (for example, the depth test) must be associated with each primitive.
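The order dependence can be demonstrated with a small sketch. The depth values and per-primitive depth tests below are hypothetical, chosen only so that rendering order A, B, C leaves C visible while order C, B, A leaves A visible, as stated above.

```cpp
#include <cstdio>
#include <vector>

enum class DepthFunc { Less, Greater, LessEqual, GreaterEqual, Equal, NotEqual };

bool passes(DepthFunc f, float newZ, float storedZ) {
    switch (f) {
        case DepthFunc::Less:         return newZ <  storedZ;
        case DepthFunc::Greater:      return newZ >  storedZ;
        case DepthFunc::LessEqual:    return newZ <= storedZ;
        case DepthFunc::GreaterEqual: return newZ >= storedZ;
        case DepthFunc::Equal:        return newZ == storedZ;
        case DepthFunc::NotEqual:     return newZ != storedZ;
    }
    return false;
}

struct Prim { char name; float z; DepthFunc func; };

// Render one sample with per-primitive depth tests, in the given time order.
char finalVisible(const std::vector<Prim>& order, float clearZ) {
    float storedZ = clearZ;
    char visible = '-';
    for (const Prim& p : order)
        if (passes(p.func, p.z, storedZ)) { storedZ = p.z; visible = p.name; }
    return visible;
}

int main() {
    // Hypothetical primitives covering one sample: A and B use less-than,
    // C uses greater-than; the depth buffer is cleared to 1.0 (far).
    Prim A{'A', 0.2f, DepthFunc::Less};
    Prim B{'B', 0.5f, DepthFunc::Less};
    Prim C{'C', 0.8f, DepthFunc::Greater};
    std::printf("order A,B,C -> %c visible\n", finalVisible({A, B, C}, 1.0f)); // C
    std::printf("order C,B,A -> %c visible\n", finalVisible({C, B, A}, 1.0f)); // A
    return 0;
}
```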
Deferred Shading Graphics Pipeline
Provisional U.S. patent application Ser. No. 60/097,336, filed Aug. 20, 1998, describes various embodiments of novel Deferred Shading Graphics Pipelines. The present invention, and its various embodiments, is suitable for use as the Texture Block in the various embodiments of that deferred shading graphics pipeline, or for use with other graphics pipelines which do not use deferred shading. Details of such graphics pipelines are, for convenience, not described again herein.
Texture
The Texture Block of a graphics pipeline applies texture maps to the pixel fragments. Texture maps are stored in Texture Memory, which is typically loaded from the host computer's memory using the AGP interface. In one embodiment, a single polygon can use up to eight textures, although alternative embodiments allow any desired number of textures per polygon.
The inventive structure and method may advantageously make use of trilinear mapping of multiple layers (resolutions) of texture maps. Texture maps are stored in a Texture Memory which may generally comprise a single-buffered memory loaded from the host computer's memory using the AGP interface. In the exemplary embodiment, a single polygon can use up to eight textures. Textures are MIP-mapped. That is, each texture comprises a series of texture maps at different levels of detail, each map representing the appearance of the texture at a given distance from the eye point. To produce a texture value for a given pixel fragment, the Texture Block performs tri-linear interpolation from the texture maps, to approximate the correct level of detail. The Texture Block can, in conjunction with the Fragment Block, perform other interpolation methods, such as anisotropic interpolation.
The Texture Block supplies interpolated texture values (generally as RGBA color values) to the graphics pipeline shading block on a per-fragment basis. Bump maps represent a special kind of texture map. Instead of a color, each texel of a bump map contains a height field gradient. The multiple layers are MIP layers, and interpolation is performed within and between the MIP layers. The first interpolation is within each layer; an interpolation is then performed between the two adjacent layers, one nominally having greater resolution than required and the other having less resolution than required, so that the interpolation is effectively three-dimensional and generates an optimal resolution.
Detailed Description of Texture Pipeline
Texture Setup 1211 receives data packets, for example, from the Fragment unit of U.S. Provisional Patent Application 60/097,336. Data packets provide texture LOD data for the texture maps, and potentially visible fragment data for an image to be rendered. The fragment data includes (s, t, r) texture coordinates for each fragment.
The Fragment Unit receives S, T, R coordinates in floating point format. Setup converts these S, T, R coordinates into U, V, W coordinates, which are fixed point coordinates used prior to texture look-up. The Texture Block then performs a texture look-up and provides i, j, k coordinates, which are integer coordinates mapped in normalized space. Thus, u=i×texture width, v=j×texture height, and w=k×texture depth.
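One common convention for these conversions is sketched below. It follows the usual OpenGL-style relation, in which normalized s, t, r are scaled by the map dimensions to give u, v, w, and the integer texel indices i, j, k are taken from their integer parts; the 8-bit fixed-point fraction is an illustrative assumption and not necessarily the format used by the hardware.

```cpp
#include <cmath>
#include <cstdint>

struct TexSize { int width, height, depth; };

// Fixed-point u, v, w with 8 fractional bits; the bit width is an
// illustrative assumption, not the hardware's actual format.
struct FixedUVW { std::int32_t u, v, w; };
struct TexelIJK { int i, j, k; };

constexpr int kFracBits = 8;

// Scale normalized (s, t, r) by the texture dimensions (the usual
// OpenGL-style convention) and convert to fixed point.
FixedUVW toUVW(float s, float t, float r, const TexSize& size) {
    auto fix = [](float x) {
        return static_cast<std::int32_t>(std::lround(x * (1 << kFracBits)));
    };
    return { fix(s * size.width), fix(t * size.height), fix(r * size.depth) };
}

// Integer texel coordinates are the integer parts of u, v, w.
TexelIJK toIJK(const FixedUVW& uvw) {
    return { uvw.u >> kFracBits, uvw.v >> kFracBits, uvw.w >> kFracBits };
}
```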
Texture Maps
Texture maps are allocated to Texture Memory 1213 and Texel Prefetch Buffer 1216 using methods to minimize memory conflicts and maximize throughput. Dualoct Bank Mapping unit 1212 maps the i, j, and LOD/k coordinates into Texture Memory 1213 and Texel Prefetch Buffer 1216. Dualoct Bank Mapping unit 1212 also generates tags for texels stored in Texel Prefetch Buffer 1216. The tags are stored in the eight Prefetch Buffer Tag Blocks 1220-0 through 1220-7. The tags indicate whether a texel is stored in Texel Prefetch Buffer 1216, and the location of the texel in the buffer.
Texture Memory Management Unit (MMU) 1210 controls access to Texture Memory 1213. Texture Memory 1213 stores the active texture maps. If a texel is not found in Texel Prefetch Buffer 1216, then Texture MMU 1210 requests the texel from Texture Memory 1213. If the texel is from a texture map not stored in Texture Memory 1213, then the texture map can be retrieved from another source, such as the host computer's memory.
After the texels for a given fragment are retrieved, Texture Interpolator 1218 interpolates the texel color values to generate a color value for the fragment. The color value is then inserted into a packet and sent down the pipeline, for example to a shading block.
A texture array is divided into 2×2 texel blocks. Each texel block in an array is represented in Texture Memory. Texturing a given fragment with trilinear mipmapping requires accessing two to eight of these blocks, depending on where the fragment falls relative to the 2×2 blocks, so up to eight texels must be retrieved from memory for each fragment. Ideally, all eight texels are retrieved in parallel; to support this, Texel Prefetch Buffer 1216 is organized into eight banks, 1216-0 through 1216-7.
Texture Tile Addressing
To maximize memory throughput, the texels in the texture maps are re-mapped into a spatially coherent form using texture tile addresses. The texels required to generate adjacent fragments depend upon the orientation of the object being rendered and the depth location of the object in the scene. For example, adjacent fragments of a surface at a large skew angle with respect to the viewing point will use texels that are farther apart in the selected LOD than adjacent fragments of a surface that is approximately perpendicular to the viewing point. However, there is typically some spatial coherence between groups of fragments in close proximity and the texels used to generate texture for those fragments. Therefore, the texture tile addresses for the texels in the texture maps are defined so as to maximize the spatial coherence of the texture maps.
FIGS. 6a and 6b illustrate a spatially coherent texel mapping for Texture Memory 1213, including texture map 800 and texture “super blocks” 800-0 through 800-3. In one embodiment, a RAMBUS™ memory (RAMBUS Corp., Mountain View, Calif.) is used for Texture Memory 1213. The smallest accessible data structure in RAMBUS memory is a “dualoct”, which is 16 bytes. Each texel contains 32 bits of color data, in the format RGBA-8 or Lum/Alpha-16. Four texels can therefore be stored in each dualoct.
Referring to FIG. 6a, the dualocts within sectors 800-0-0 and 800-0-1 are arranged in a recursive swirl pattern.
This pattern is then repeated in sector 800-0-2 and in sector 800-0-3. Dualoct block 0 (800-0) consists of the four sectors 800-0-0 through 800-0-3. The dualoct block 0 pattern is then repeated in dualoct block 1 (800-1) starting with dualoct number 64, followed by dualoct block 2 (800-2), and dualoct block 3 (800-3). In one embodiment, the recursive swirl pattern stops at the texture super block 0 (800) level.
Alternative spatially coherent patterns are used in alternative embodiments, rather than the recursive swirl pattern illustrated in FIGS. 6a and 6b.
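As one concrete example of such an alternative spatially coherent mapping, the sketch below uses Morton (Z-order) interleaving of the 2×2-block coordinates. This is not the recursive swirl defined by FIGS. 6a and 6b; it is a stand-in with the same qualitative property that texels of nearby 2×2 blocks map to the same or nearby dualoct addresses.

```cpp
#include <cstdint>
#include <cstdio>

// Interleave the bits of x and y (Morton / Z-order): nearby (x, y) blocks
// map to nearby addresses, the property the texture tile addressing is after.
std::uint32_t mortonInterleave(std::uint16_t x, std::uint16_t y) {
    std::uint32_t addr = 0;
    for (int bit = 0; bit < 16; ++bit) {
        addr |= static_cast<std::uint32_t>((x >> bit) & 1u) << (2 * bit);
        addr |= static_cast<std::uint32_t>((y >> bit) & 1u) << (2 * bit + 1);
    }
    return addr;
}

// Map a texel (i, j) to a dualoct index: four texels (a 2x2 block) share one
// 16-byte dualoct, so the block coordinates are i/2 and j/2.
std::uint32_t texelToDualoct(int i, int j) {
    return mortonInterleave(static_cast<std::uint16_t>(i / 2),
                            static_cast<std::uint16_t>(j / 2));
}

int main() {
    // Texels of the same 2x2 block and of neighboring blocks land in the
    // same or nearby dualocts: prints 0 0 1 2.
    std::printf("%u %u %u %u\n",
                texelToDualoct(0, 0), texelToDualoct(1, 1),
                texelToDualoct(2, 0), texelToDualoct(0, 2));
    return 0;
}
```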
The spatially coherent texel mapping patterns illustrated in FIGS. 6a and 6b thus increase the likelihood that the texels needed for nearby fragments reside in the same or nearby dualocts, maximizing memory throughput.
Memory Addressing
To map the texels in the texture map into a spatially coherent format, Dualoct Bank Mapping unit 1212 generates a texture tile address for each dualoct.
The texture tile address is provided to Texture MMU 1210 which generates a corresponding texture memory address. Texture MMU 1210 performs the texture tile address to texture memory address translation using a linear mapping of the texture tile address into a table of texture memory addresses stored in Texture Memory 1213. This table is maintained by software.
The texture memory address data structure 1280 is also programmable. This allows the texture memory address to accommodate different memory configurations, and to alter the placement of bit fields to optimize the access to the texture data. For example, an alternative memory configuration may have more than eight texture memory devices.
Texels are loaded from Texture Memory 1213 into Texel Prefetch Buffer 1216 to provide higher speed access. When texels are moved into Texel Prefetch Buffer 1216, a corresponding tag is created in one of the eight Prefetch Buffer Tag Blocks 1220-0 through 1220-7.
Banks 1216-0 through 1216-3 of Texel Prefetch Buffer 1216 store the texels for one LOD; for trilinear mipmapping, Banks 1216-4 through 1216-7 contain texels for the second LOD. The Texture ID 1181 bit [26] in the texture tile address is used to control whether an LOD gets mapped to Prefetch Buffer Banks 0–3 (1216-0 through 1216-3) or Banks 4–7 (1216-4 through 1216-7). If Texture ID 1181 bit [26]=0, then the even LODs (LOD[22]=0) are mapped into Prefetch Buffer Banks 0–3, and the odd LODs (LOD[22]=1) are mapped into Prefetch Buffer Banks 4–7. Conversely, if Texture ID[26]=1, then the odd LODs are mapped into Prefetch Buffer Banks 0–3, and the even LODs are mapped into Prefetch Buffer Banks 4–7. This mapping ensures that all eight tags can be accessed in each cycle, and that texture information is evenly distributed in the caches. Dualoct Bank Mapping unit 1212 also follows this LOD mapping rule when sending texture tile addresses to the corresponding Tag Blocks 1220-0 through 1220-7.
To generate a texture for a fragment, Dualoct Bank Mapping unit 1212 generates up to eight dualoct requests, and sends them to the appropriate Prefetch Buffer Bank. The Prefetch Buffer Tags 1220-0 through 1220-7 are checked for a match. If there is a hit, the request is sent to the appropriate bank of Memory Queue 1219. When the memory request exits Memory Queue 1219, the line number is sent to Texel Prefetch Buffer 1216 to look-up the data. If there is a miss on a given texture tile address, then a miss request is put into the miss queue for the corresponding tag block. The miss address is eventually read out of the miss queue and forwarded to Texture MMU 1210. The miss request is then serviced, the data is retrieved from Texture Memory 1213 or another external memory source, and is ultimately provided to the appropriate Texel Prefetch Buffer Banks 1216-0 through 1216-7.
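The hit/miss flow just described can be modeled roughly as follows. The eight banks, the per-bank tag check, and the eight-entry miss queues come from the text; the container types, the stall-on-full-miss-queue behavior, and the exact tag format are illustrative assumptions rather than the hardware's implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <unordered_map>

constexpr int         kBanks         = 8;  // Prefetch Buffer Banks 1216-0..1216-7
constexpr std::size_t kMissQueueSize = 8;  // pending misses per bank (Queues 1230-x)

struct BankState {
    // Tag block: texture tile address -> line number in the prefetch buffer bank.
    std::unordered_map<std::uint32_t, int> tags;
    std::deque<int>           memoryQueue;  // hits: prefetch-buffer line look-ups in flight
    std::deque<std::uint32_t> missQueue;    // misses awaiting service by the Texture MMU
};

// One dualoct request: check the bank's tags, then queue either a hit or a miss.
// Returns false if a full miss queue forces the request to stall (an assumption;
// the text does not specify the back-pressure mechanism).
bool requestDualoct(BankState& bank, std::uint32_t tileAddr) {
    auto it = bank.tags.find(tileAddr);
    if (it != bank.tags.end()) {
        bank.memoryQueue.push_back(it->second);     // hit: line number queued
        return true;
    }
    if (bank.missQueue.size() >= kMissQueueSize)
        return false;                               // miss queue full: stall
    bank.missQueue.push_back(tileAddr);             // miss: forwarded to the MMU later
    return true;
}

// Each fragment needs up to eight dualocts, one per bank, issued in parallel.
void textureFragment(BankState (&banks)[kBanks], const std::uint32_t (&addrs)[kBanks]) {
    for (int b = 0; b < kBanks; ++b)
        requestDualoct(banks[b], addrs[b]);
}
```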
Each line in Memory Queue 1219 records one memory access for a particular texture operation on one fragment of data. Memory requests are received at the top of Memory Queue 1219, and when they reach the bottom, Texel Prefetch Buffer 1216 is accessed for the data. Miss data is only filled into Texel Prefetch Buffer 1216 when a particular miss request reaches the bottom of the corresponding memory Queue 1230-0 through 1230-7.
Each of the eight memory Queues 1230-0 through 1230-7 holds up to eight pending miss addresses for a particular Prefetch Buffer Bank 1216-0 through 1216-7. If a memory Queue is not empty, then it can be assumed to contain at least one valid address. Every clock cycle, Prefetch Buffer Controller 1218 scans the memory Queues 1230-0 through 1230-7 searching for a valid entry. When a miss address is found, it is sent to Texture MMU 1210.
Reorder Logic
FIG. 13a is a block diagram depicting one embodiment of Read Miss Control Circuitry 2600. Read Miss Control Circuitry 2600 receives a read miss request from the miss logic described above.
FIG. 13b is a block diagram of one embodiment of this invention which includes reorder logic 2623-0 (with reorder logic 2623-1 being identical) and RAMBus memory controller 2649. The purpose of reorder logic 2623 is to monitor incoming address requests and reorder those requests so as to avoid memory conflicts in RAMBus memory controller 2649. For each memory address received as a request on Bus 2601, conflict detection block 2602 determines whether a memory conflict is likely to occur based upon the addresses contained in first level reorder queue 2603. If not, that address is forwarded directly to control block 2605 and is added to first level reorder queue 2603, to allow for conflict checking of subsequently received addresses. If, on the other hand, a conflict is detected by conflict detection block 2602, the conflicting address request is sent to conflict queue 2604. In one embodiment, in order to prevent conflicting address requests from being deferred too far from other requests received in the same time frame, 32 address requests are received by conflict detection block 2602 and either forwarded to control block 2605 (no conflict) or placed in conflict queue 2604, after which the addresses stored in conflict queue 2604 are output to control block 2605. In this manner, the reordered address requests are applied to reordered address queue 2606 to access RAMBus memory controller 2649 with fewer, and often zero, conflicts, in contrast to the conflicts that would occur if the original order of the read requests were applied directly to RAMBus memory controller 2649 without any reordering.
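A rough software model of the reorder idea follows. The split into a first-level reorder window, a conflict queue, and an issue stream mirrors the description above, but the conflict rule (same low-order bank bits as a recent request), the window handling, and the drain policy are illustrative assumptions; the actual RAMBUS bank-conflict conditions are not specified in this excerpt.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// A request carries its address plus a tag recording its original position,
// so results can be restored to request order after out-of-order service.
struct Request { std::uint32_t addr; std::uint32_t orderTag; };

class ReorderLogic {
public:
    explicit ReorderLogic(std::size_t window) : window_(window) {}

    // Accept one incoming request: conflicting requests are parked in the
    // conflict queue, others are issued and remembered for later checks.
    void submit(const Request& r) {
        if (conflicts(r.addr)) {
            conflictQueue_.push_back(r);
        } else {
            issue(r);
        }
        if (++sinceDrain_ >= window_) drainConflicts();  // don't defer parked requests forever
    }

    const std::vector<Request>& issuedOrder() const { return issued_; }

private:
    // Illustrative conflict rule: two addresses clash when they target the
    // same memory bank (low-order bits) while the earlier one is still recent.
    bool conflicts(std::uint32_t addr) const {
        for (std::uint32_t recent : recent_)
            if ((recent & 0x7u) == (addr & 0x7u)) return true;
        return false;
    }

    void issue(const Request& r) {
        issued_.push_back(r);
        recent_.push_back(r.addr);
        if (recent_.size() > window_) recent_.pop_front();
    }

    void drainConflicts() {
        sinceDrain_ = 0;
        std::deque<Request> pending;
        pending.swap(conflictQueue_);
        for (const Request& r : pending) {
            if (conflicts(r.addr)) conflictQueue_.push_back(r);
            else                   issue(r);
        }
    }

    std::size_t window_;
    std::size_t sinceDrain_ = 0;
    std::deque<std::uint32_t> recent_;    // first-level reorder window
    std::deque<Request> conflictQueue_;   // parked conflicting requests
    std::vector<Request> issued_;         // reordered issue order
};
```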
In-Order tag queue 2609 and out-of-order tag queue 2610 maintain tag information in order to preserve the original address order, so that when the results are looked up and output from reorder logic 2623-0 and 2623-1, the desired (original) order is maintained.
Information read from RAMBus memory controller 2649 is stored in read data queue 2611. Through control block 2612, data from queue 2611 is forwarded to either out-of-order queue 2613 or in-order queue 2614. Control block 2615 reassembles data from queues 2613 and 2614 in the original request order and forwards it, in order, to the appropriate channel port of block 2624. Control block 2624 receives channel-specific data from blocks 2623-0 and 2623-1, which is then re-associated and issued back to the waiting requester.
The inventive pipeline includes a texture memory which includes a prefetch buffer. The host also includes storage for textures, which may typically be very large, but in order to render a texture, it must be loaded into texture memory. Associated with each VSP are S and T coordinates. In order to perform trilinear MIP mapping, eight (8) texels must be blended, so the inventive structure provides a set of eight content addressable (memory) caches running in parallel. In one embodiment, the cache identifier is one of the content addressable tags, and that is the reason the tag part of the cache and the data part of the cache are kept separate. Conventionally, the tag and data are co-located, so that a query on the tag gives the data. In the inventive structure and method, the tags and data are split up, and indices are sent down the pipeline.
The data and tags are stored in different blocks, and the content addressable look-up is a look-up or query of an address; even the “data” stored at that address is itself an index that references the actual data, which is stored in a different block. The indices are determined and sent down the pipeline so that the data referenced by each index can be retrieved. In other words, the tag is in one location, the texture data is in a second location, and the indices provide the link between the two storage structures.
In one embodiment of the invention, the prefetch buffer comprises a multiplicity of associative memories, generally located on the same integrated circuit as the texel interpolator. In the preferred embodiment, the texel reuse detection method is performed in the Texture Block.
In conventional 3-D graphics pipelines, an object in some orientation in space is rendered. The object, which is represented by many triangle primitives, has a texture map associated with it. The procedure, implemented in software, will instruct the hardware to load the particular object texture into a Texture Memory. Then all of the triangles that are common to the particular object, and therefore have the same texture map, are fed into the unit, and texture interpolation is performed to generate all of the colored pixels needed to represent that particular object. When that object has been colored, the texture map in DRAM can be destroyed, for example by a reallocation algorithm, since the object has been rendered. If more than one object has the same texture map, such as a plurality of identical objects (possibly at different orientations or locations), then all objects of that type may desirably be textured before the texture map in DRAM is discarded. Different geometry may be fed in, but the same texture map could be used for all, thereby eliminating any need to repeatedly retrieve the texture map from host memory and place it temporarily in one or more pipeline structures.
In more sophisticated conventional schemes, more than one texture map may be retrieved and stored in the memory; for example, two or several maps may be stored, depending on the available memory, the size of the texture maps, the need to store or retain multiple texture maps, and the sophistication of the management scheme. In each of these conventional texture mapping schemes, spatial object coherence is of primary importance. At least for an entire single object, and typically for groups of objects using the same texture map, all of the triangles making up the object are processed together. The phrase spatial coherency is applied to such a scheme because the triangles form the object, are connected in space, and are therefore spatially coherent.
In the inventive structure and method, a sizable memory is supported on the card. In one implementation 128 megabytes are provided, but more or fewer megabytes may be provided. For example, 32 Mb, 64 Mb, 256 Mb, 512 Mb, or more may be provided, depending upon the needs of the user, the real estate available on the card for memory, and the density of memory available.
Rather than reading the eight texels for every visible fragment, using them, and throwing them away so that the eight texels for the next fragment can be retrieved and stored, the inventive structure and method stores and reuses them when there is a reasonable chance they will be needed again.
It would be impractical to read and throw away the eight texels every time a visible fragment is received. Rather, it is desirable to reuse these texels: as processing marches along in tile space (typically along sequential rows of the rectangular tile pixel grid), the same texture map may not be needed for strictly sequential pixels, but it is often needed for several pixels clustered in an area of the tile, and hence needed again only a few processing steps after its first use. Desirably, the invention reuses texels that have already been read: once one fragment requiring a particular texture has been seen, chances are good that, for some period of time afterward while processing remains in the same tile, another fragment from the same object needing the same texture will be encountered. Texels are therefore saved in the cache, and the cache (texture reuse register) is consulted on the fly for the texels that are needed. If there is a cache miss, for example when a fragment and texture map are encountered for the first time, that texture map is retrieved and stored in the cache.
Texture map retrieval latency is another concern, but it is handled through the use of First-In-First-Out (FIFO) data structures and a look-ahead, or predictive, retrieval procedure. The FIFOs are large and work in association with the CAM. When an item is needed, a determination is made as to whether it is already stored, and a designator is also placed in the FIFO so that, if there is a cache miss, it is still possible to go out to the relatively slow memory to retrieve the information and store it. In either event, that is, whether the data was in the cache or was retrieved from host memory, it is placed in the unit memory (and also into the cache if newly retrieved).
Effectively, the FIFO acts as a delay: once the need for a texture is identified (prior to its actual use), the data can be retrieved and re-associated before it is needed, so that the retrieval does not typically slow down processing. The FIFO queues provide and take up the slack in the pipeline so that the pipeline always predicts and looks ahead. By examining the FIFO, non-cached textures can be identified, retrieved from host memory, and placed in the cache and in a special unit memory, so that they are ready for use when a read is executed.
The FIFO and other structures that provide the look-ahead and predictive retrieval are provided, in some sense, to get around the problem created when the spatial object coherence typically exploited in per-object processing is lost in per-tile processing. Note also that the inventive structure and method makes use of any spatial coherence within an object, so that if all the pixels in one object are processed sequentially, the invention does take advantage of the resulting temporal and spatial coherence.
The Texture Block caches texels to get local reuse. Texture maps are stored in texture memory in 2×2 blocks of RGBA data (16 bytes per block) except for normal vectors, which may be stored in 18 byte blocks.
Virtual Texture Numbers
The user provides a texture number when the texture is passed from user space with OpenGL calls. The user can send some triangles to be textured with one map and then change the texture data associated with the same texture number to texture other triangles in the same frame. Our pipeline requires that all sets of texture data for a frame be available to the Texture Block. The driver assigns a virtual texture number to each texture map.
Texture Memory
Texture Memory stores texture arrays that the Texture Block is currently using. Software manages the texture memory, copying texture arrays from host memory into Texture Memory. It also maintains a table of texture array addresses in Texture Memory.
Texture Addressing
The Texture Block identifies texture arrays by virtual texture number and LOD. The arrays for the highest LODs are lumped into a single record. A texture array pointer table associates a texture array ID (virtual texture number concatenated with the LOD) with an address in Texture Memory. We need to support thousands of texture array pointers, so the texture array pointer table will have to be stored in Texture Memory. We need to map texture array IDs to addresses approximately 500M times per second. Fortunately, adjacent fragments will usually share the same texture array, so we should get good hit rates with a cache for the texture array pointers. (In one embodiment, the size of the texture array cache is 128 entries, but other sizes, larger or smaller, may be implemented.)
The Texture Block implements a direct map algorithm to search the pointer table in memory. Software manages the texture array pointer table, using the hardware look-up scheme to store table elements.
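A direct-mapped pointer cache of the kind described can be sketched as follows. The 128-entry size comes from the text; the packing of the virtual texture number with the LOD, the modulo index function, and the table-lookup callback standing in for the pointer table in Texture Memory are illustrative assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Texture array ID = virtual texture number concatenated with the LOD
// (the field widths here are assumptions chosen for illustration).
struct TexArrayId {
    std::uint32_t virtualTexture;
    std::uint32_t lod;
    std::uint64_t packed() const {
        return (static_cast<std::uint64_t>(virtualTexture) << 8) | lod;
    }
};

constexpr std::size_t kCacheEntries = 128;   // cache size given in the text

// Direct-mapped cache of texture-array pointers.  On a miss the pointer is
// read from the table kept in Texture Memory, modeled here by a callback.
class TexArrayPointerCache {
public:
    using TableLookup = std::function<std::uint64_t(std::uint64_t id)>;

    explicit TexArrayPointerCache(TableLookup lookup) : lookup_(std::move(lookup)) {
        for (Entry& e : entries_) e.valid = false;
    }

    std::uint64_t address(const TexArrayId& id) {
        std::uint64_t key = id.packed();
        Entry& e = entries_[key % kCacheEntries];    // direct mapping
        if (e.valid && e.key == key) return e.addr;  // hit: adjacent fragments
                                                     // usually share the array
        e.key   = key;                               // miss: consult pointer table
        e.addr  = lookup_(key);
        e.valid = true;
        return e.addr;
    }

private:
    struct Entry { bool valid; std::uint64_t key; std::uint64_t addr; };
    Entry entries_[kCacheEntries];
    TableLookup lookup_;
};
```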
Texture Memory Allocation
Software handles allocation of texture memory. The Texture Block sends an interrupt to the host when it needs a texture array that is not already in texture memory. The host copies the texture array from main memory to texture memory, and updates the texture array pointer table, as described above. The host controls which texture arrays are overwritten by new data.
The host will need to rearrange texture memory to do garbage collection, etc. The hardware will support the following memory copies:
host to memory
memory to host
memory to memory
While the present invention has been described with reference to a few specific embodiments, the description is illustrative of the invention and is not to be construed as limiting the invention. Various modifications may occur to those skilled in the art without departing from the true spirit and scope of the invention as defined by the appended claims.
This is a divisional of application Ser. No. 09/378,408, filed Aug. 20, 1999, now U.S. Pat. No. 6,288,730, and claims the benefit of U.S. Provisional Patent Application Ser. No. 60/097,336, entitled Graphics Processor with Deferred Shading, filed Aug. 20, 1998, which is hereby incorporated by reference. This application is also related to the following U.S. patent applications, each of which is incorporated herein by reference: Ser. No. 09/213,990, filed 17 Dec. 1998, entitled HOW TO DO TANGENT SPACE LIGHTING IN A DEFERRED SHADING ARCHITECTURE; Ser. No. 09/378,598, filed Aug. 20, 1999, entitled APPARATUS AND METHOD FOR PERFORMING SETUP OPERATIONS IN A 3-D GRAPHICS PIPELINE USING UNIFIED PRIMITIVE DESCRIPTORS; Ser. No. 09/378,633, filed Aug. 20, 1999, entitled SYSTEM, APPARATUS AND METHOD FOR SPATIALLY SORTING IMAGE DATA IN A THREE-DIMENSIONAL GRAPHICS PIPELINE; Ser. No. 09/378,439, filed Aug. 20, 1999, entitled GRAPHICS PROCESSOR WITH PIPELINE STATE STORAGE AND RETRIEVAL; Ser. No. 09/378,408, filed Aug. 20, 1999, entitled METHOD AND APPARATUS FOR GENERATING TEXTURE; Ser. No. 09/379,144, filed Aug. 20, 1999, entitled APPARATUS AND METHOD FOR GEOMETRY OPERATIONS IN A 3D GRAPHICS PIPELINE; Ser. No. 09/372,137, filed Aug. 20, 1999, entitled APPARATUS AND METHOD FOR FRAGMENT OPERATIONS IN A 3D GRAPHICS PIPELINE; and Ser. No. 09/378,637, filed Aug. 20, 1999, entitled DEFERRED SHADING GRAPHICS PIPELINE PROCESSOR.
Number | Name | Date | Kind |
---|---|---|---|
4115865 | Beauvais et al. | Sep 1978 | A |
4449193 | Tournois | May 1984 | A |
4484346 | Sternberg et al. | Nov 1984 | A |
4532606 | Phelps | Jul 1985 | A |
4559618 | Houseman et al. | Dec 1985 | A |
4564952 | Karabinis et al. | Jan 1986 | A |
4581760 | Schiller et al. | Apr 1986 | A |
4594673 | Holly | Jun 1986 | A |
4622653 | McElroy | Nov 1986 | A |
4669054 | Schlunt et al. | May 1987 | A |
4670858 | Almy | Jun 1987 | A |
4694404 | Meagher | Sep 1987 | A |
4695973 | Yu | Sep 1987 | A |
4758982 | Price | Jul 1988 | A |
4783829 | Miyakawa et al. | Nov 1988 | A |
4794559 | Greenberger | Dec 1988 | A |
4825391 | Merz | Apr 1989 | A |
4841467 | Ho et al. | Jun 1989 | A |
4847789 | Kelly et al. | Jul 1989 | A |
4888583 | Ligocki et al. | Dec 1989 | A |
4888712 | Barkans et al. | Dec 1989 | A |
4890242 | Sinha et al. | Dec 1989 | A |
4945500 | Deering | Jul 1990 | A |
4961581 | Barnes et al. | Oct 1990 | A |
4970636 | Snodgrass et al. | Nov 1990 | A |
4996666 | Duluk, Jr. | Feb 1991 | A |
4998286 | Tsujiuchi et al. | Mar 1991 | A |
5031038 | Guillemot et al. | Jul 1991 | A |
5040223 | Kamiya et al. | Aug 1991 | A |
5050220 | Marsh et al. | Sep 1991 | A |
5054090 | Knight et al. | Oct 1991 | A |
5067162 | Driscoll, Jr. et al. | Nov 1991 | A |
5083287 | Obata et al. | Jan 1992 | A |
5123084 | Prevost et al. | Jun 1992 | A |
5123085 | Wells et al. | Jun 1992 | A |
5128888 | Tamura et al. | Jul 1992 | A |
5129051 | Cain | Jul 1992 | A |
5129060 | Pfeiffer et al. | Jul 1992 | A |
5133052 | Bier et al. | Jul 1992 | A |
5146592 | Pheiffer et al. | Sep 1992 | A |
5189712 | Kajiwara et al. | Feb 1993 | A |
5245700 | Fossum | Sep 1993 | A |
5247586 | Gobert et al. | Sep 1993 | A |
5265222 | Nishiya et al. | Nov 1993 | A |
5278948 | Luken, Jr. | Jan 1994 | A |
5289567 | Roth | Feb 1994 | A |
5293467 | Buchner et al. | Mar 1994 | A |
5295235 | Newman | Mar 1994 | A |
5299139 | Baisuck et al. | Mar 1994 | A |
5315537 | Blacker | May 1994 | A |
5319743 | Dutta et al. | Jun 1994 | A |
5338200 | Olive | Aug 1994 | A |
5347619 | Erb | Sep 1994 | A |
5363475 | Baker et al. | Nov 1994 | A |
5369734 | Suzuki et al. | Nov 1994 | A |
5394516 | Winser | Feb 1995 | A |
5402532 | Epstein et al. | Mar 1995 | A |
5448690 | Shiraishi et al. | Sep 1995 | A |
5455900 | Shiraishi et al. | Oct 1995 | A |
5481669 | Poulton et al. | Jan 1996 | A |
5493644 | Thayer et al. | Feb 1996 | A |
5509110 | Latham | Apr 1996 | A |
5535288 | Chen et al. | Jul 1996 | A |
5544306 | Deering et al. | Aug 1996 | A |
5546194 | Ross | Aug 1996 | A |
5572634 | Duluk, Jr. | Nov 1996 | A |
5574836 | Broemmelsiek | Nov 1996 | A |
5579455 | Greene et al. | Nov 1996 | A |
5596686 | Duluk, Jr. | Jan 1997 | A |
5613050 | Hochmuth et al. | Mar 1997 | A |
5621866 | Murata et al. | Apr 1997 | A |
5623628 | Brayton et al. | Apr 1997 | A |
5664071 | Nagashima | Sep 1997 | A |
5669010 | Duluk, Jr. | Sep 1997 | A |
5684939 | Foran et al. | Nov 1997 | A |
5699497 | Erdahl et al. | Dec 1997 | A |
5710876 | Peercy et al. | Jan 1998 | A |
5734806 | Narayanaswami | Mar 1998 | A |
5751291 | Olsen et al. | May 1998 | A |
5767589 | Lake et al. | Jun 1998 | A |
5767859 | Rossin et al. | Jun 1998 | A |
5778245 | Papworth et al. | Jul 1998 | A |
5798770 | Baldwin | Aug 1998 | A |
5828378 | Shiraishi | Oct 1998 | A |
5841447 | Drews | Nov 1998 | A |
5850225 | Cosman | Dec 1998 | A |
5852451 | Cox et al. | Dec 1998 | A |
5854631 | Akeley et al. | Dec 1998 | A |
5860158 | Pai et al. | Jan 1999 | A |
5864342 | Kajiya et al. | Jan 1999 | A |
5870095 | Albaugh et al. | Feb 1999 | A |
RE36145 | DeAguiar et al. | Mar 1999 | E |
5880736 | Peercy et al. | Mar 1999 | A |
5889997 | Strunk | Mar 1999 | A |
5920326 | Rentschler et al. | Jul 1999 | A |
5936629 | Brown et al. | Aug 1999 | A |
5949424 | Cabral et al. | Sep 1999 | A |
5949428 | Toelle et al. | Sep 1999 | A |
5977977 | Kajiya et al. | Nov 1999 | A |
5977987 | Duluk, Jr. | Nov 1999 | A |
5990904 | Griffin | Nov 1999 | A |
6002410 | Battle | Dec 1999 | A |
6002412 | Schinnerer | Dec 1999 | A |
6046746 | Deering | Apr 2000 | A |
6084591 | Aleksic | Jul 2000 | A |
6111582 | Jenkins | Aug 2000 | A |
6118452 | Gannett | Sep 2000 | A |
6128000 | Jouppi et al. | Oct 2000 | A |
6167143 | Badique | Dec 2000 | A |
6167486 | Lee et al. | Dec 2000 | A |
6201540 | Gallup et al. | Mar 2001 | B1 |
6204859 | Jouppi et al. | Mar 2001 | B1 |
6216004 | Tiedemann et al. | Apr 2001 | B1 |
6229553 | Duluk, Jr. et al. | May 2001 | B1 |
6243488 | Penna | Jun 2001 | B1 |
6243744 | Snaman, Jr. et al. | Jun 2001 | B1 |
6246415 | Grossman et al. | Jun 2001 | B1 |
6259452 | Coorg et al. | Jul 2001 | B1 |
6259460 | Gossett et al. | Jul 2001 | B1 |
6263493 | Ehrman | Jul 2001 | B1 |
6268875 | Duluk, Jr. et al. | Jul 2001 | B1 |
6275235 | Morgan, III | Aug 2001 | B1 |
6285378 | Duluk, Jr. | Sep 2001 | B1 |
6288730 | Duluk, Jr. et al. | Sep 2001 | B1 |
6331856 | Van Hook et al. | Dec 2001 | B1 |
6476807 | Duluk, Jr. et al. | Nov 2002 | B1 |
6525737 | Duluk, Jr. et al. | Feb 2003 | B1 |
RE38078 | Duluk, Jr. | Apr 2003 | E |
6552723 | Duluk, Jr. et al. | Apr 2003 | B1 |
6577305 | Duluk, Jr. et al. | Jun 2003 | B1 |
6577317 | Duluk, Jr. et al. | Jun 2003 | B1 |
6597363 | Duluk, Jr. et al. | Jul 2003 | B1 |
6614444 | Duluk, Jr. et al. | Sep 2003 | B1 |
6650327 | Airey et al. | Nov 2003 | B1 |
6671747 | Benkual et al. | Dec 2003 | B1 |
6693639 | Duluk, Jr. et al. | Feb 2004 | B1 |
6697063 | Zhu | Feb 2004 | B1 |
6717576 | Duluk, Jr. et al. | Apr 2004 | B1 |
6771264 | Duluk et al. | Aug 2004 | B1 |
20040130552 | Duluk, Jr. et al. | Jul 2004 | A1 |
Number | Date | Country |
---|---|---|
0166577 | Jan 1986 | EP |
0870282 | May 2003 | EP |
WO 90004849 | May 1990 | WO |
WO 95027263 | Oct 1995 | WO |
Number | Date | Country
---|---|---
60097336 | Aug 1998 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 09378408 | Aug 1999 | US
Child | 09724663 | | US