The present invention generally relates to computer graphics.
Recent advances in computer performance have enabled graphics systems to provide more realistic graphical images using personal computers, home video game computers, handheld devices, and the like. In such graphics systems, a number of procedures are executed to “render” or draw graphics primitives to the screen of the system. A “graphics primitive” is a basic component of a graphic, such as a point, line, polygon, or the like. Rendered images are formed with combinations of these graphics primitives. Many procedures may be utilized to perform three-dimensional (3-D) graphics rendering.
Specialized graphics processing units (GPUs) have been developed to increase the speed at which graphics rendering procedures are executed. The GPUs typically incorporate one or more rendering pipelines. Each pipeline includes a number of hardware-based functional units that are designed for high-speed execution of graphics instructions/data. Generally, the instructions/data are fed into the front end of a pipeline and the computed results emerge at the back end of a pipeline. The hardware-based functional units, cache memories, firmware, and the like, of the GPUs are designed to operate on the basic graphics primitives and produce real-time rendered 3-D images.
Graphics primitives such as polygons are generally broken down into triangles for rendering. To render a 3-D object on a two-dimensional (2-D) display device, various attribute values (e.g., red, green and blue color values) are specified at each vertex of a given triangle, and the attribute values are interpolated across the triangle. To achieve the correct visual effect, it is necessary to account for the positions of the vertices in 3-D screen space, a process referred to as perspective correction. Generally speaking, attribute values at the vertex closest to the viewer may need to be weighted more heavily than values at the other vertices. Also, the weight given to values at more distant vertices can depend on how far the viewer is from those vertices (here, distance refers to the distance in screen space). Consequently, perspective correction can be computationally expensive and slow, because the interpolation of attribute values across the triangle is typically not linear.
There is increasing interest in rendering 3-D graphical images in handheld devices such as cell phones, personal digital assistants (PDAs), and other devices where cost and power consumption are important design considerations. A method or system for perspective correction that can be efficiently implemented in such devices would therefore be valuable.
Embodiments of the present invention provide methods and systems for perspective correction that can be implemented in devices where cost and power consumption are key considerations.
In one embodiment, vertex data is accessed for a graphics primitive. The vertex data includes homogeneous coordinates for each vertex of the primitive. The homogeneous coordinates can be used to determine perspective-correct barycentric coordinates that are normalized by the area of the primitive. The normalized perspective-correct barycentric coordinates can in turn be used to determine an interpolated value of an attribute for a pixel associated with (e.g., covered by) the primitive.
These operations can be efficiently performed in handheld or other portable, battery-operated devices (as well as in other types of devices) using adders and multipliers implemented in hardware. These and other objects and advantages of the various embodiments of the present invention will be recognized by those of ordinary skill in the art after reading the following detailed description of the embodiments that are illustrated in the various drawing figures.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments of the present invention.
Some portions of the detailed descriptions that follow are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer-executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “accessing” or “determining” or “multiplying” or “adding” or “incrementing” or “holding” or “placing” or “registering” or “summing” or “rendering” or the like, refer to the actions and processes of a computer system (e.g., computer system 100 of FIG. 1) or similar electronic computing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The GPU can be implemented as a discrete component, a discrete graphics card designed to couple to the computer system via a connector (e.g., an Accelerated Graphics Port slot, a Peripheral Component Interconnect-Express slot, etc.), a discrete integrated circuit die (e.g., mounted directly on a motherboard), or an integrated GPU included within the integrated circuit die of a computer system chipset component (not shown) or within the integrated circuit die of a PSOC (programmable system-on-a-chip). Additionally, a local graphics memory 114 can be included for the GPU for high bandwidth graphics data storage.
In the example of FIG. 2, the GPU includes a graphics pipeline 210 comprising a program sequencer and a number of functional modules (e.g., functional modules 220-240).
The program sequencer functions by controlling the operation of the functional modules of the graphics pipeline. The program sequencer can interact with the graphics driver (e.g., a graphics driver executing on the CPU 101 of FIG. 1) to control the graphics pipeline.
In one embodiment, data proceeds between the functional modules 220-240 in a packet-based format. For example, the graphics driver transmits data to the GPU in the form of data packets, or pixel packets, that are specifically configured to interface with and be transmitted along the fragment pipe communications pathways of the pipeline. The pixel packets generally include information regarding a group or tile of pixels (e.g., four pixels, eight pixels, 16 pixels, etc.) and coverage information for one or more primitives that relate to the pixels. The pixel packets can also include configuration information that enables the functional modules of the pipeline to configure themselves for rendering operations. For example, the pixel packets can include configuration bits, instructions, functional module addresses, etc., that can be used by one or more of the functional modules of the pipeline to configure themselves for the current rendering mode, or the like. In addition to pixel rendering information and functional module configuration information, the pixel packets can include shader program instructions that program the functional modules of the pipeline to execute shader processing on the pixels. For example, the instructions comprising a shader program can be transmitted down the graphics pipeline and loaded by one or more designated functional modules. Once loaded, during rendering operations, those functional modules can execute the shader program on the pixel data to achieve the desired rendering effect.
In this manner, the highly optimized and efficient fragment pipe communications pathway implemented by the functional modules of the graphics pipeline can be used not only to transmit pixel data between the functional modules (e.g., modules 220-240), but also to transmit configuration information and shader program instructions between them.
Referring still to FIG. 2, the program sequencer enables the graphics pipeline to execute shader programs of indeterminate length.
To execute shader programs of indeterminate length, the program sequencer controls the graphics pipeline to execute them in portions. The program sequencer accesses a first portion of the shader program from the graphics memory and loads the instructions from the first portion into the plurality of stages of the pipeline (e.g., the ALU, the data write component, etc.) of the GPU to configure the GPU for program execution. As described above, the instructions for the first portion can be transmitted to the functional modules of the graphics pipeline as pixel packets that propagate down the fragment pipeline. A span of pixels (e.g., a group of pixels covered by a primitive, etc.) is then processed in accordance with the instructions from the first portion. A second portion of the shader program is then accessed (e.g., via a direct memory access transfer from the system memory 115 of FIG. 1), and its instructions are loaded into the plurality of stages of the pipeline.
The span of pixels is then processed in accordance with the instructions from the second portion. In this manner, multiple shader program portions can be accessed, loaded, and executed to perform operations on the span of pixels. For example, for a given shader program that comprises a hundred or more portions, the GPU can process the span of pixels by loading the instructions for each portion and executing them, portion by portion, until all of the portions comprising the shader program have been executed. This attribute enables embodiments of the present invention to implement shader programs of indeterminate length: as described above, no arbitrary limit is placed on the length of a shader program that can be executed.
FIG. 3 illustrates a graphics primitive 300 (e.g., a triangle) with vertices 0, 1 and 2 located at screen-space positions (x0, y0), (x1, y1) and (x2, y2), respectively. In general, a number of pixels are covered by the primitive. An example pixel 310 is located at position (x, y) in the plane defined by the primitive.
The primitive has area A/2, where A is equal to (x1−x0)(y2−y0)−(x2−x0)(y1−y0). That is, A is actually the area of a parallelogram that includes the primitive and the mirror image of the primitive; representing the area this way avoids having to divide or multiply by a factor of two in subsequent calculations. Homogeneous barycentric coordinates or weights (that is, barycentric coordinates that represent the actual areas of the regions “a,” “b” and “g,” where the regions are also treated as parallelograms) for the pixel 310 are given by:
a(x,y)=(x1−x)(y2−y)−(x2−x)(y1−y); (1)
b(x,y)=(x2−x)(y0−y)−(x0−x)(y2−y); and (2)
g(x,y)=(x0−x)(y1−y)−(x1−x)(y0−y). (3)
The derivatives of “a,” “b” and “g” are simple differences given by:
da/dx=y1−y2; da/dy=x2−x1; (4)
db/dx=y2−y0; db/dy=x0−x2; and (5)
dg/dx=y0−y1; dg/dy=x1−x0. (6)
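For illustration only, the following minimal C sketch evaluates equations (1) through (6); the Vec2 type and the function names are hypothetical conveniences, not part of the described hardware.

/* Illustrative type for a 2-D screen-space position. */
typedef struct { float x, y; } Vec2;

/* Parallelogram area A = (x1-x0)(y2-y0) - (x2-x0)(y1-y0). */
float parallelogram_area(Vec2 v0, Vec2 v1, Vec2 v2) {
    return (v1.x - v0.x) * (v2.y - v0.y) - (v2.x - v0.x) * (v1.y - v0.y);
}

/* Homogeneous barycentric weights of equations (1)-(3) at pixel (x, y). */
void barycentric(Vec2 v0, Vec2 v1, Vec2 v2, float x, float y,
                 float *a, float *b, float *g) {
    *a = (v1.x - x) * (v2.y - y) - (v2.x - x) * (v1.y - y);
    *b = (v2.x - x) * (v0.y - y) - (v0.x - x) * (v2.y - y);
    *g = (v0.x - x) * (v1.y - y) - (v1.x - x) * (v0.y - y);
}

/* Per-primitive stepper constants of equations (4)-(6). */
void steppers(Vec2 v0, Vec2 v1, Vec2 v2,
              float *dadx, float *dady, float *dbdx, float *dbdy,
              float *dgdx, float *dgdy) {
    *dadx = v1.y - v2.y;  *dady = v2.x - v1.x;
    *dbdx = v2.y - v0.y;  *dbdy = v0.x - v2.x;
    *dgdx = v0.y - v1.y;  *dgdy = v1.x - v0.x;
}

Note that a(x,y)+b(x,y)+g(x,y) equals A at every (x, y), which provides a convenient sanity check for an implementation.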
The value of 1/w is linear in screen space, and its value at (x, y) can be expressed using the normalized screen-space barycentric coordinates a(x,y)/A, b(x,y)/A and g(x,y)/A, which are the barycentric coordinates of equations (1), (2) and (3) normalized by the area A. At vertices 0, 1 and 2, 1/w has values of 1/w0, 1/w1 and 1/w2, respectively, and can be computed as:
1/w(x,y)=a(x,y)/A*1/w0+b(x,y)/A*1/w1+g(x,y)/A*1/w2. (7)
Equation (7) can be rewritten as:
A/w(x,y)=a(x,y)/w0+b(x,y)/w1+g(x,y)/w2. (8)
Once a value of A/w(x,y) has been directly calculated using equations (1), (2), (3) and (8), a value of A/w for a pixel adjacent to the pixel 310 can be calculated using fixed-point stepper values based on equations (4), (5) and (6). For example, to determine “a,” “b” and “g” for a pixel adjacent to pixel 310 (in any direction), the values of a(x,y), b(x,y) and g(x,y) are incremented by da/dx or da/dy, db/dx or db/dy, and dg/dx or dg/dy, respectively, depending on the direction of the adjacent pixel relative to the pixel 310. The new values of “a,” “b” and “g” can then be used in equation (8) to determine a value of A/w for the adjacent pixel.
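As an illustration of this stepping scheme, the C sketch below evaluates equation (8) directly and then steps the weights one pixel in the +x direction with three additions. Floating point stands in for the fixed-point hardware, and names such as inv_w0 (holding 1/w0) are hypothetical.

/* Equation (8): A/w at the current pixel, given the homogeneous
   weights a, b, g and the per-vertex reciprocals 1/w0, 1/w1, 1/w2. */
float A_over_w(float a, float b, float g,
               float inv_w0, float inv_w1, float inv_w2) {
    return a * inv_w0 + b * inv_w1 + g * inv_w2;
}

/* Step the weights one pixel in +x; stepping in -x or +/-y uses the
   corresponding deltas from equations (4)-(6) instead. */
void step_x(float *a, float *b, float *g,
            float dadx, float dbdx, float dgdx) {
    *a += dadx;
    *b += dbdx;
    *g += dgdx;
    /* A/w for the neighbor is then A_over_w(*a, *b, *g, ...). */
}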
The values a(x,y)*w(x,y), b(x,y)*w(x,y) and g(x,y)*w(x,y) can be referred to as perspective-correct barycentric coordinates. Normalized perspective-correct barycentric coordinates, which are normalized by the area A and are linear in world space, are designated a_per, b_per and g_per. The normalized perspective-correct barycentric coordinates (a_per, b_per, g_per) have values of (1,0,0), (0,1,0) and (0,0,1) at vertices 0, 1 and 2, respectively.
Because a_per, for example, is linear in world space, a_per/w is linear in screen space and can be expressed using the normalized screen-space barycentric coordinates a(x,y)/A, b(x,y)/A and g(x,y)/A. At vertices 0, 1 and 2, a_per/w has values of 1/w0, 0/w1 (=0) and 0/w2 (=0), respectively. Accordingly:
a_per(x,y)/w(x,y)=a(x,y)/A*1/w0+b(x,y)/A*0/w1+g(x,y)/A*0/w2;
or
a_per(x,y)/w(x,y)=a(x,y)/A*1/w0=a(x,y)/w0*1/A. (9)
Equation (9) can be rewritten as:
a_per(x,y)=a(x,y)/w0*w(x,y)/A. (10)
In equation (10), the first multiplicand a(x,y)/w0 is the first addend of equation (8), and the second multiplicand w(x,y)/A is the reciprocal of A/w(x,y), which is given by equation (8).
In a similar manner:
b_per(x,y)=b(x,y)/w1*w(x,y)/A; and (11)
g_per(x,y)=g(x,y)/w2*w(x,y)/A. (12)
For the pixel 310 at location (x, y), values of a_per(x,y) and b_per(x,y) can be computed as:
temp_a=a(x,y)/w0; (13)
temp_b=b(x,y)/w1; (14)
temp_g=g(x,y)/w2; (15)
temp_w=rcp(temp_a+temp_b+temp_g); (16)
a_per=temp_a*temp_w; and (17)
b_per=temp_b*temp_w; (18)
where rcp means reciprocal.
Because (a_per+b_per+g_per) is equal to unity (1), g_per can be computed as (1 minus a_per minus b_per). However, g_per can also be computed by multiplying temp_g and temp_w.
Equations (13)-(18) can be readily implemented in hardware using registers, adders and multipliers. As such, these operations can be efficiently performed in handheld devices (as well as in other types of devices) in which cost and power consumption are important design considerations. Furthermore, these operations result in normalized perspective-correct barycentric coordinates a_per, b_per and g_per that are normalized by the area of the primitive 300, but without an explicit step of dividing by area.
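A software model of equations (13) through (18) is sketched below in C; it assumes the reciprocals 1/w0, 1/w1 and 1/w2 arrive with the vertex data, and a plain floating-point reciprocal stands in for the hardware rcp unit.

/* Normalized perspective-correct barycentric coordinates, following
   equations (13)-(18); 1.0f/x models the hardware rcp operation. */
void perspective_barycentric(float a, float b, float g,
                             float inv_w0, float inv_w1, float inv_w2,
                             float *a_per, float *b_per, float *g_per) {
    float temp_a = a * inv_w0;                          /* (13) */
    float temp_b = b * inv_w1;                          /* (14) */
    float temp_g = g * inv_w2;                          /* (15) */
    float temp_w = 1.0f / (temp_a + temp_b + temp_g);   /* (16) */
    *a_per = temp_a * temp_w;                           /* (17) */
    *b_per = temp_b * temp_w;                           /* (18) */
    *g_per = 1.0f - *a_per - *b_per;   /* or temp_g * temp_w */
}

As the text notes, the division by the area A never appears explicitly; it is absorbed into the reciprocal of equation (16), since temp_a+temp_b+temp_g equals A/w(x,y).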
Furthermore, in a manner similar to that described above, subsequent values of a_per and b_per can be calculated using fixed-point stepper values. For example, once a value of a_per(x,y) has been determined for the pixel 310 at location (x, y), a value of a_per at a pixel adjacent to the pixel 310 can be determined by adding (da/dx)/w0 or (da/dy)/w0, depending on the direction of the adjacent pixel relative to pixel 310, where da/dx and da/dy are given by equation (4).
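In the same C model, that incremental update might look as follows; the precomputed per-primitive constants such as dadx_w0 (holding (da/dx)/w0) are hypothetical names.

/* Step the ratios of equations (13)-(15) one pixel in +x using the
   per-primitive constants (da/dx)/w0, (db/dx)/w1 and (dg/dx)/w2. */
void step_ratios_x(float *temp_a, float *temp_b, float *temp_g,
                   float dadx_w0, float dbdx_w1, float dgdx_w2) {
    *temp_a += dadx_w0;
    *temp_b += dbdx_w1;
    *temp_g += dgdx_w2;
    /* temp_w, a_per and b_per are then recomputed per (16)-(18). */
}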
The attribute values at vertices 0, 1 and 2 are designated p0, p1 and p2, respectively. The attribute value p(x,y) for pixel 310 at location (x, y) is given by:
p(x,y)=a_per*p0+b_per*p1+g_per*p2. (19)
Because g_per=1−a_per−b_per, equation (19) can be written as:
p(x,y)=a_per(dp0)+b_per(dp1)+p2, (20)
where dp0=(p0−p2) and dp1=(p1−p2).
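Continuing the C model, equation (20) reduces attribute interpolation to two multiplies and two adds once dp0 and dp1 are precomputed per primitive; this is a sketch, not the hardware datapath.

/* Equation (20): interpolated attribute value at (x, y), with
   dp0 = p0 - p2 and dp1 = p1 - p2 computed once per primitive. */
float interpolate_attribute(float a_per, float b_per,
                            float p0, float p1, float p2) {
    float dp0 = p0 - p2;
    float dp1 = p1 - p2;
    return a_per * dp0 + b_per * dp1 + p2;
}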
In one embodiment, illustrated in FIG. 4, the values a(x,y)/w0, b(x,y)/w1 and g(x,y)/w2 are placed into first, second and third registers (temp_a, temp_b and temp_g), respectively. The contents of the first, second and third registers are summed using an adder 410, and the reciprocal of the sum is placed into a fourth register temp_w. The contents of the first and fourth registers are multiplied using a multiplier 420 to compute the normalized perspective-correct barycentric coordinate a_per(x,y), and the contents of the second and fourth registers are multiplied using a multiplier 430 to compute the normalized perspective-correct barycentric coordinate b_per(x,y). The values of a_per(x,y) and b_per(x,y) can be used in the next stage of the pipeline 210 (FIG. 2) to determine an interpolated attribute value for the pixel.
FIG. 5 is a flowchart of an example method for perspective correction in accordance with one embodiment. In block 510, vertex data for a graphics primitive is accessed. The vertex data includes homogeneous coordinates (x, y, 1/w) for each vertex of the primitive.
In block 520, the homogeneous coordinates are used to determine the ratios a(x,y)/w0, b(x,y)/w1 and g(x,y)/w2 for a pixel associated with (e.g., covered by) the primitive.
In block 530, the ratios a(x,y)/w0, b(x,y)/w1 and g(x,y)/w2 are used to determine normalized perspective-correct barycentric coordinates a_per(x,y) and b_per(x,y), which are normalized by the area of the primitive. The normalized perspective-correct barycentric coordinates a_per(x,y) and b_per(x,y) can be calculated by determining a reciprocal of the sum of the ratios a(x,y)/w0, b(x,y)/w1 and g(x,y)/w2, and then multiplying the reciprocal by a(x,y)/w0 and by b(x,y)/w1, respectively. A third normalized perspective-correct barycentric coordinate g_per(x,y) can optionally be determined.
In one embodiment, a(x,y)/w0, b(x,y)/w1 and g(x,y)/w2 are stored in first, second and third registers, respectively (e.g., temp_a, temp_b and temp_g). The reciprocal of the sum of the data in the first, second and third registers is stored in a fourth register (e.g., temp_w). The data in the first and fourth registers is multiplied to determine a first normalized perspective-correct barycentric coordinate a_per(x,y), and the data in the second and fourth registers is multiplied to determine a second normalized perspective-correct barycentric coordinate b_per(x,y).
In block 540, the normalized perspective-correct barycentric coordinates a_per(x,y) and b_per(x,y) can be used to determine an interpolated value of an attribute for the pixel at location (x, y). In one embodiment, the first normalized perspective-correct barycentric coordinate a_per(x,y) is multiplied by the difference between a first value for the attribute and a second value for the attribute (e.g., p0−p2), and the second normalized perspective-correct barycentric coordinate b_per(x,y) is multiplied by the difference between a third value for the attribute and the second value for the attribute (e.g., p1−p2). The results of these two multiplications are added to the second value (e.g., p2) to determine the value of the attribute for the pixel at location (x, y).
In block 550, the interpolated value can be used to render the pixel.
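Tying blocks 510 through 550 together, the self-contained C example below runs the whole method for one pixel; the triangle, its 1/w values, and the pixel location are hypothetical values chosen for illustration.

#include <stdio.h>

int main(void) {
    /* Block 510: vertex data (x, y, 1/w) for a hypothetical triangle. */
    float x0 = 0, y0 = 0, inv_w0 = 1.0f;     /* w0 = 1 */
    float x1 = 4, y1 = 0, inv_w1 = 0.5f;     /* w1 = 2 */
    float x2 = 0, y2 = 4, inv_w2 = 0.25f;    /* w2 = 4 */
    float x = 1, y = 1;                      /* pixel of interest */

    /* Block 520: homogeneous weights (equations (1)-(3)), scaled by 1/wi. */
    float a = (x1 - x) * (y2 - y) - (x2 - x) * (y1 - y);
    float b = (x2 - x) * (y0 - y) - (x0 - x) * (y2 - y);
    float g = (x0 - x) * (y1 - y) - (x1 - x) * (y0 - y);
    float temp_a = a * inv_w0, temp_b = b * inv_w1, temp_g = g * inv_w2;

    /* Block 530: reciprocal of the sum, then the normalized
       perspective-correct coordinates (equations (16)-(18)). */
    float temp_w = 1.0f / (temp_a + temp_b + temp_g);
    float a_per = temp_a * temp_w;
    float b_per = temp_b * temp_w;

    /* Block 540: interpolate an attribute (e.g., a color channel)
       with values p0, p1, p2 at the vertices, per equation (20). */
    float p0 = 1.0f, p1 = 0.0f, p2 = 0.0f;
    float p = a_per * (p0 - p2) + b_per * (p1 - p2) + p2;

    /* Block 550: p would now be used to render the pixel. */
    printf("a_per=%f b_per=%f p=%f\n", a_per, b_per, p);
    return 0;
}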
In summary, methods and systems for perspective correction by attribute interpolation have been described that can be efficiently implemented in handheld devices and other devices where cost and power consumption are key considerations.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. For example, embodiments of the present invention can be implemented on GPUs that are different in form or function from GPU 110 of FIG. 1. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.