The present invention relates generally to texture mapping, and more particularly to anisotropic texture mapping methods designed to reduce aliasing effects.
Access to pixel buffers in a graphics system has opened up new ways of processing graphical images. For example, access to pixel buffers in a graphics system permits direct modification of the pixels to achieve certain effects, such as texturing a surface, without the cost of performing these effects in the geometry engines of the graphics system. In the forward direction, texturing a surface typically involves mapping a given point in a 2-dimensional texture space to the surface of a 3-dimensional object in object space. This step is called parameterization. The object is then projected, in a step called a projective transformation, onto a 2-dimensional display in screen space whose resolution is limited by the pixel buffers used to create the display image. Texturing can also be performed in the reverse direction, called inverse mapping. In inverse mapping, for every pixel in screen space, a pre-image of the pixel in the texture space must be found. A square pixel has, in general, a curvilinear quadrilateral pre-image.
One problem that can occur when performing texture mapping is aliasing. This occurs as a result of the limited resolution of the pixel buffer for the display and the limited size of the texture array that holds the 2-dimensional texture. The display pixel buffer has an inherent spatial sampling frequency determined by its resolution, and the texture map has an inherent spatial frequency determined by its size. If the spatial frequency of the texture map is greater than the spatial sampling frequency inherent in the display pixel buffer, then aliasing can occur.
There are several conditions under which the spatial frequency of the texture map can be greater than the spatial sampling frequency of the display pixel buffer. One condition is that the object being textured is a distant object or is viewed in perspective, thereby reducing the size of the object to a small number of pixels compared with a full-size orthographic view of the object in screen space. Another condition is that the size of a window through which the object is viewed in the display space is small. In either case, aliasing occurs, distorting the texture and upsetting the realism sought to be achieved. It is therefore desirable to minimize aliasing under these and other similar circumstances.
A well-known technique for minimizing aliasing of mapped textures, called mip-mapping, is to provide a series of texture arrays, the highest level texture array in the series having the greatest resolution, and the lowest array in the series having the lowest resolution, with intermediate arrays having intermediate resolutions. Each lower resolution texture array is a filtered version of the next higher resolution texture array, so that there are smooth transitions between levels in the series of texture arrays. A key issue in this technique is determining which of the series of texture arrays should be chosen to supply the source of the texture. If too high a level is chosen, aliasing will result; if too low a level is chosen, the texture will not show up clearly. The parameter that describes the level chosen is called LOD (Level Of Detail).
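For purposes of illustration only, one level of such a series can be produced from the next higher resolution level by simple 2x2 box filtering, as sketched below; the function and variable names are merely exemplary and not part of the described art.

/* Illustrative sketch: build one lower resolution level of a mip-map series
 * by 2x2 box filtering the next higher resolution level. Grayscale texels
 * and power-of-two dimensions are assumed. */
void downsample_level(const float *src, int src_w, int src_h, float *dst)
{
    int dst_w = src_w / 2;
    int dst_h = src_h / 2;
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            float sum = src[(2 * y)     * src_w + (2 * x)]
                      + src[(2 * y)     * src_w + (2 * x + 1)]
                      + src[(2 * y + 1) * src_w + (2 * x)]
                      + src[(2 * y + 1) * src_w + (2 * x + 1)];
            dst[y * dst_w + x] = 0.25f * sum;   /* average of a 2x2 block */
        }
    }
}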
Filtering of the lower level of detail texture arrays involves the concept of a footprint or area of support. This is the area over which an average or some other filtering function is computed on the next higher level of detail. A common type of footprint is a circle, which implies that the filtering function is the same in each dimension of a 2-dimensional texture array. Another more general footprint is an ellipse, which implies anisotropy because the major axis is different from the minor axis.
The amount of anisotropy can be quantified by a ratio defined in terms of the rates of change of the texture coordinates with respect to display position along a direction s=(cos α,sin α), where (u,v) are coordinates in a 2-dimensional texture space and (x,y) are coordinates in the display space, as described in U.S. Pat. No. 6,219,064 (Kamen et al.).
As a standard mathematical approach, the footprint of a texture map on a pixel can be derived from the Jacobian matrix of the texture coordinates with respect to the display coordinates,
J = | du/dx  du/dy |
    | dv/dx  dv/dy |.
The footprint is an ellipse whose two axes are in the directions of (du/dx, dv/dx) and (du/dy, dv/dy), respectively. The magnitudes of the axes determine the footprint's area and shape.
In isotropic filtering, the difference between the axes is ignored and the longer axis is used to calculate the level of detail (LOD). The LOD is computed as d=log2(magnitude of the long axis). This results in a poor quality image when the ratio of the two axes is large.
In anisotropic filtering, the difference between the axes is considered. More texels are sampled along the long axis than along the short axis, and the LOD is computed as d=log2(magnitude of the short axis). Thus, by applying a proper filter, the resulting texture from multiple texels can faithfully represent the pixel color. If, along the short axis, the footprint covers exactly one texel at a corresponding mip-map level, then only that mip-map level is needed for filtering. However, if the footprint in the short axis direction covers more than one texel, more samples along the short axis are needed for proper filtering. The current art interpolates between the current mip-map level and the next lower resolution mip-map level to obtain the final color of the pixel. However, this requires access to multiple mip-map levels, with an associated performance loss. Access to a second level of the mip-map always costs additional bus transaction cycles, which slow down the texture filtering process. In an efficient cache design, in which one level of filtering can be performed in one cycle, the additional memory cycles for second-level processing reduce the performance of anisotropic filtering by half. The problem becomes more pronounced as the degree of anisotropy grows. Therefore, it is desirable to generate a high quality textured image when the degree of anisotropy is large, but without incurring a performance loss.
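For purposes of illustration only, the quantities described above can be derived from the per-pixel texture-coordinate derivatives roughly as follows; this is a sketch, the names are merely exemplary, and the clamp of the short axis is an implementation choice rather than part of the described art.

#include <math.h>

/* Illustrative sketch: derive the footprint axes, the anisotropy ratio, the
 * number of anisotropic samples, and the LOD from the texture-coordinate
 * derivatives, i.e. the columns of the Jacobian matrix described above. */
typedef struct { float lod; float ratio; int num_samples; } aniso_info;

aniso_info compute_aniso(float dudx, float dvdx, float dudy, float dvdy)
{
    float len_x = sqrtf(dudx * dudx + dvdx * dvdx);  /* axis from the x derivatives */
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy);  /* axis from the y derivatives */
    float long_axis  = len_x > len_y ? len_x : len_y;
    float short_axis = len_x > len_y ? len_y : len_x;
    if (short_axis < 1.0f)                           /* guard: an implementation choice */
        short_axis = 1.0f;

    aniso_info out;
    out.ratio = long_axis / short_axis;              /* degree of anisotropy */
    out.num_samples = (int)ceilf(out.ratio);         /* e.g. ratio 3.5 -> 4 samples */
    /* Isotropic filtering would use log2 of the long axis; anisotropic
     * filtering derives the LOD from the short axis instead. */
    out.lod = log2f(short_axis);
    return out;
}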
The present invention uses only one level of the mip-map to perform the filtering, without regard to the amount of coverage of the footprint of a pixel on the texture. This is done by deriving the next lower resolution mip-map level from the current level for anisotropic filtering purposes. Using one level instead of two avoids loading the next level's texels into the cache, which takes an additional memory cycle. In addition, the cache hit rate is improved, because the current level texels have locality of reference.
A method in accordance with the present invention is a method of performing anisotropic mip-mapping. The method includes mapping a target pixel needing texture to one or more texels in a higher resolution texture array, where a region of support in the higher resolution texture array is generally elliptical, is defined by a long axis and a short axis, and has a level of detail derived from the short axis. The method further includes performing a filtering function along an axis using the texels from the higher resolution texture array to simulate the filtering effect of using texels from both the higher resolution texture array and a second texture array having a lower resolution. The step of performing a filtering function includes, in one embodiment, using the texels from the higher resolution texture array to derive texels of the lower resolution texture array, interpolating the texels from the higher resolution texture array to form a first blended texel, interpolating the texels from the lower resolution texture array to form a second blended texel, and interpolating the first and second blended texels to arrive at a texture for the target pixel.
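For purposes of illustration only, the recited steps can be sketched in one dimension as follows, under the assumption that Df denotes the blend weight given to the higher resolution level and that TA, TB, TC and TD are four adjacent texels of that level; the function and variable names are merely exemplary.

/* Illustrative sketch of the one-dimensional case: four adjacent texels
 * TA..TD of the higher resolution level are used both directly and to
 * derive the two covering texels of the lower resolution level, and the
 * blended results of the two levels are then interpolated. Uf is the
 * fractional texture coordinate measured from texel C, as in the detailed
 * description below. */
float filter_1d_two_level_equivalent(float TA, float TB, float TC, float TD,
                                     float Uf, float Df)
{
    /* derive the lower resolution texels from the higher resolution texels */
    float TAB = 0.5f * (TA + TB);
    float TCD = 0.5f * (TC + TD);

    /* first blended texel: interpolation within the higher resolution level */
    float fine = Uf * TB + (1.0f - Uf) * TC;

    /* second blended texel: interpolation within the derived lower level,
     * with m = (1 + 2*Uf) / 4 as in the derivation below */
    float m = (1.0f + 2.0f * Uf) * 0.25f;
    float coarse = m * TAB + (1.0f - m) * TCD;

    /* interpolate between the two blended texels to get the pixel texture */
    return Df * fine + (1.0f - Df) * coarse;
}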
One advantage of the present invention is that fewer texture cache reads are needed to perform a multilevel filtering function.
Another advantage is that a multilevel filtering function can always be performed in the processing of pixels, because no performance loss occurs.
These and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings.
In accordance with the present invention, the formula for filtering 4 texels, in one dimension, is
final color=Wa*ColorA+Wb*ColorB+Wc*ColorC+Wd*ColorD,
where Wa, Wb, Wc and Wd are the filter weights for each of the colors ColorA, ColorB, ColorC, ColorD of the four texels. The filter weights are:
Wa=⅛(1−Df)(1+2Uf),
Wb=Uf*Df+Wa,
Wc=(1−Uf)*Df+Wd,
and
Wd=⅛(1−Df)(3−2Uf)
where Uf is the fraction of the U texture coordinate and Df is the fraction of the LOD. For the V coordinate, Uf is replaced with Vf.
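For purposes of illustration only, a one-dimensional sketch applying these weights directly is given below; Df is assumed, consistent with the derivation that follows, to be the weight given to the higher resolution level, and the function name is merely exemplary.

/* Illustrative sketch: single-level filtering of four adjacent texels in
 * one dimension using the closed-form weights Wa..Wd given above. The
 * result equals the two-level blend from which the weights are derived. */
float filter_1d_weights(float TA, float TB, float TC, float TD,
                        float Uf, float Df)
{
    float Wa = 0.125f * (1.0f - Df) * (1.0f + 2.0f * Uf);
    float Wd = 0.125f * (1.0f - Df) * (3.0f - 2.0f * Uf);
    float Wb = Uf * Df + Wa;
    float Wc = (1.0f - Uf) * Df + Wd;
    /* Wa + Wb + Wc + Wd == 1 */
    return Wa * TA + Wb * TB + Wc * TC + Wd * TD;
}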
Parameter Uf gives the distance from the center of texel C (TC) to the Pn+1 position and 1−Uf gives the distance from the center of texel B (TB), where Uf is the fraction of the U texture coordinate and where unity is assumed to be the distance between texel centers. Parameter h gives the distance from the edge of TB to the position Pn+1.
Parameter m gives the distance from the center of texel CD (TCD) to the Pn position and n=(1−m)=(1−(k+½)) gives the distance from the center of texel AB (TAB). Parameter k gives the distance from the edge of TCD to the position Pn.
Based on the above, the color at point Pn+1 is a linear interpolation of the colors at TB and TC, based on position parameter Uf.
TPn+1=UfTB+(1−Uf)TC.
The color at Pn is a linear interpolation of the colors at TAB and TCD based on position parameter m,
TPn=mTAB+(1−m)TCD.
The color TP at P is then a linear interpolation of TPn and TPn+1 based on position parameter q between the levels,
TP=(1−q)TPn+1+qTPn
Substituting the TPn and TPn+1 expressions into TP gives
TP=(1−q)UfTB+(1−q)(1−Uf)TC+qmTAB+q(1−m)TCD
Because only colors TA, TB, TC and TD are available from the cache memory, which stores level n+1, the colors from level n are derived from these colors as follows:
TAB=(TA+TB)/2
TCD=(TC+TD)/2
Therefore,
TP=(1−q)UfTB+(1−q)(1−Uf)TC+qm(TA+TB)/2+q(1−m)(TC+TD)/2.
Collecting like terms yields,
TP=TA(qm/2)+TB((1−q)Uf+qm/2)+TC((1−q)(1−Uf)+q(1−m)/2)+TD(q(1−m)/2).
Therefore, the weights are
Wa=⅛·(1−Df)(1+2Uf)
Wb=DfUf+Wa,
Wc=Df(1−Uf)+Wd,
and
Wd=⅛·(1−Df)(3−2Uf),
where the equalities q=1−Df, k=h/2, m=(h+1)/2, h=Uf−½, and n=1−m were used. Note that the weights Wa, Wb, Wc and Wd sum to one.
In two dimensions, the weight Wuv applied to the texel at position (u,v) in a 4×4 block of the higher resolution level is
Wuv=Duv*Uuv*Vuv, u=0,1,2,3, v=0,1,2,3
Duv=0.25*Df, if u=0, u=3, v=0, or v=3,
Duv=1−0.75*Df, otherwise,
Uuv=(1−Uf), if u=0 or u=1,
Uuv=Uf, if u=2 or u=3,
Vuv=1−Vf, if v=0 or v=1, and
Vuv=Vf, if v=2 or v=3.
The complete Wuv array, with rows indexed by v and columns indexed by u, is set forth below:
v=0: 0.25*Df*(1−Uf)(1−Vf), 0.25*Df*(1−Uf)(1−Vf), 0.25*Df*Uf(1−Vf), 0.25*Df*Uf(1−Vf)
v=1: 0.25*Df*(1−Uf)(1−Vf), (1−0.75*Df)(1−Uf)(1−Vf), (1−0.75*Df)Uf(1−Vf), 0.25*Df*Uf(1−Vf)
v=2: 0.25*Df*(1−Uf)Vf, (1−0.75*Df)(1−Uf)Vf, (1−0.75*Df)UfVf, 0.25*Df*UfVf
v=3: 0.25*Df*(1−Uf)Vf, 0.25*Df*(1−Uf)Vf, 0.25*Df*UfVf, 0.25*Df*UfVf
The final color is Σ(Wuv*Color(u,v)), where u and v each take on values from 0 to 3, and where Color(u,v) is the color of the texel corresponding to Wuv on the mip-map level that is read. Note also that ΣWuv=1, where u=0, 1, 2, 3 and v=0, 1, 2, 3.
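For purposes of illustration only, the following sketch computes the Wuv array from the definitions above and applies it to a 4x4 block of texels read from the single mip-map level; the names are merely exemplary.

/* Illustrative sketch: compute the 4x4 weight array Wuv defined above and
 * apply it to a 4x4 block of texels read from the single (higher
 * resolution) mip-map level. texel[v][u] is Color(u,v); Uf, Vf and Df are
 * the fractions described in the text. */
float filter_2d_single_level(const float texel[4][4],
                             float Uf, float Vf, float Df)
{
    float color = 0.0f;
    for (int v = 0; v < 4; v++) {
        for (int u = 0; u < 4; u++) {
            int border = (u == 0 || u == 3 || v == 0 || v == 3);
            float Duv = border ? 0.25f * Df : (1.0f - 0.75f * Df);
            float Uuv = (u <= 1) ? (1.0f - Uf) : Uf;
            float Vuv = (v <= 1) ? (1.0f - Vf) : Vf;
            color += Duv * Uuv * Vuv * texel[v][u];   /* Wuv * Color(u,v) */
        }
    }
    return color;   /* the sixteen weights Wuv sum to one */
}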
In the case of multiple anisotropic samples, the weight of each sample also depends on the anisotropy ratio and on the filter function chosen for the anisotropic case. For example, if the ratio is 3.5, then 4 samples can be chosen. Assuming a box filtering function, the weight for each anisotropic sample is 0.25. If a Gaussian filtering function is assumed, then the weight for each sample is proportional to exp(−(x−x0)*(x−x0)/(ratio*ratio)), where x0 is the cluster center of the N samples and x is the location of the sample. The sample weights must be normalized so that they sum to 1. The sample points are distributed along the long axis, across the anisotropically shaped region.
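For purposes of illustration only, the per-sample weights just described might be computed as follows; the names are merely exemplary. With a box filter and four samples, each weight becomes 0.25, matching the example above.

#include <math.h>

/* Illustrative sketch: compute normalized weights for n anisotropic samples
 * located at positions x[] along the long axis, with x0 the cluster center
 * of the samples and ratio the anisotropy ratio. use_gaussian selects the
 * Gaussian filter function; otherwise a box filter is used. */
void compute_sample_weights(const float *x, int n, float x0, float ratio,
                            int use_gaussian, float *w)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        if (use_gaussian) {
            float d = x[i] - x0;
            w[i] = expf(-(d * d) / (ratio * ratio));  /* Gaussian falloff */
        } else {
            w[i] = 1.0f;                              /* box filter       */
        }
        sum += w[i];
    }
    for (int i = 0; i < n; i++)
        w[i] /= sum;   /* normalize so the sample weights sum to 1 */
}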
In general, the equation for anisotropic filtering can be expressed as follows.
final color=Σ(WSi*ColorSi),
where i takes on values from 1 to the number of samples, ΣWSi=1 for i from 1 to the number of samples, WSi is a weighting function for the ith sample, and ColorSi is the blended color for the ith sample. The greater the number of samples chosen, the better the image quality, but also the greater the performance cost. Since only the high resolution mip-map level is sampled, the tradeoff between quality and performance lies in the number of samples.
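For purposes of illustration only, the overall per-pixel anisotropic loop can be sketched as follows, reusing the filter_2d_single_level sketch shown earlier; sample placement along the long axis and texel fetching are omitted, and the names are merely exemplary.

/* Illustrative sketch of the overall per-pixel loop: ColorSi for each
 * anisotropic sample is blended from a 4x4 block of the single higher
 * resolution mip-map level, and the weighted samples are then summed. */
float anisotropic_filter(int num_samples,
                         const float sample_weight[],        /* WSi, normalized      */
                         const float sample_block[][4][4],   /* 4x4 texels per sample */
                         const float sample_Uf[],
                         const float sample_Vf[],
                         float Df)
{
    float final_color = 0.0f;
    for (int i = 0; i < num_samples; i++) {
        /* ColorSi: single-level sixteen-texel blend for sample i */
        float color_si = filter_2d_single_level(sample_block[i],
                                                sample_Uf[i], sample_Vf[i],
                                                Df);
        final_color += sample_weight[i] * color_si;   /* sum of WSi * ColorSi */
    }
    return final_color;
}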
Generally speaking, in the one-dimensional case, a single texel in the lower resolution mip-map level is derived from two adjacent texels in the higher resolution mip-map level, and two adjacent texels in the lower resolution mip-map level are derived from four adjacent texels in the higher resolution mip-map level. Because a pixel at an LOD between these two mip-map levels is blended from two adjacent texels in the lower resolution mip-map level and two adjacent texels in the higher resolution mip-map level, such a pixel can be blended directly from four adjacent texels in the higher resolution mip-map level in accordance with the present invention.
In the two-dimensional case, one texel in the lower resolution mip-map level is derived from four texels in a 2×2 block in the higher resolution mip-map level, and four texels in a 2×2 block in the lower resolution mip-map level are derived from sixteen adjacent texels in a 4×4 block in the higher resolution mip-map level. Because a pixel at an LOD between these two mip-map levels is blended from four texels in the 2×2 block in the lower resolution mip-map level and four texels in the 2×2 block in the higher resolution mip-map level, such a pixel can be blended directly from sixteen texels in a 4×4 block in the higher resolution mip-map level in accordance with the present invention. This can be understood by referring to the above weight array Wuv, which indicates how sixteen texels of the same mip-map level are used.
Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
This application claims priority to U.S. Provisional Application Ser. No. 60/448,956, filed Feb. 21, 2003, and entitled “SINGLE LEVEL MIP FILTERING ALGORITHM FOR ANISOTROPIC TEXTURING,” which application is incorporated herein by reference.