Claims
- 1. A configurable filter module comprising:
a plurality of linear blend units each to receive data input from one of an overlay engine and a mapping engine cache, and generate a linear blend filter output respectively; and a filter output multiplexer to receive data output from the linear blend units and select a proper byte ordering output, wherein said linear blend units serve as an overlay interpolator filter to perform linear blending of the data input from the overlay engine during a linear blend mode, and serve as a texture bilinear filter to perform bilinear filtering of the data input from the mapping engine cache during a bilinear filtering mode.
- 2. The configurable filter module as claimed in claim 1, wherein the plurality of linear blending units comprise four dual linear blend units provided to support at least two data formats, and a single linear blend unit provided to support only one data format.
- 3. The configurable filter module as claimed in claim 2, wherein the dual linear blend units are configured as either twice-split linear blend units or thrice-split linear blend units and include associated circuitry to support both data formats under control of a filter select signal.
- 4. The configurable filter module as claimed in claim 3, wherein the linear blending is accomplished on pixels using the equation A+α(B−A), where A represents 2-dimensional pixel data from the overlay engine indicating overlay surface A, B represents 2-dimensional data from the overlay engine indicating overlay surface B, and alpha (α) represents a blending coefficient.
- 5. The configurable filter module as claimed in claim 3, wherein the bilinear filtering is accomplished on texels using the equation C = C1(1−.u)(1−.v) + C2(.u)(1−.v) + C3(.u)(.v) + C4(1−.u)(.v), where C1, C2, C3 and C4 represent 3-dimensional texel data from the mapping engine cache indicating four adjacent texels at locations (U, V), (U+1, V), (U, V+1) and (U+1, V+1), and where the values .u and .v indicate fractional locations within the C1, C2, C3, C4 texels.
- 6. The configurable filter module as claimed in claim 3, wherein requests from the overlay engine for overlay interpolation take precedence over requests from the mapping engine cache.
- 7. The configurable filter module as claimed in claim 1, wherein the linear blend units can be configured as one of eight 8-bit linear interpolators, three 8-bit bi-linear interpolators and four 565 bi-linear interpolators to perform either said linear blending or said bilinear filtering of data input from respective overlay engine and mapping engine cache.
- 8. The configurable filter module as claimed in claim 1, wherein the linear blend units are configured as a combination of three thrice-split linear blend units, a twice-split linear blend unit and a single linear blend unit for bilinear filtering data input from the mapping engine cache to approximate perspective correct shading value of a 3-dimensional triangular surface for different resolution formats.
- 9. The configurable filter module as claimed in claim 1, wherein the linear blend units are configured as four thrice-split linear blend units arranged in parallel for linear blending data input from the overlay engine to approximate perspective correct shading value of a 3-dimensional triangular surface for different resolution formats.
- 10. The configurable filter module as claimed in claim 1, wherein the linear blend units are configured as a combination of four dual linear blend units and a single linear blend unit arranged in parallel for bilinear filtering data input from the mapping engine cache to approximate perspective correct shading value of a 3-dimensional triangular surface for different resolution formats.
- 11. The configurable filter module as claimed in claim 1, wherein each of the linear blend units acts as a single interpolator to calculate multiple color resolutions of different data format precision, and comprises:
a high order 5-bit calculation unit arranged to shift data input from left to right by three bit positions; a high order 3-bit calculation unit arranged to shift data input from left to right by five bit positions; first adders arranged to add outputs from the high order 5-bit and 3-bit calculation units to create a high order 8-bit precision calculation; a low order 3-bit calculation unit arranged to shift data input from left to right by five bit positions; a low order 5-bit calculation unit arranged to shift data input from left to right by three bit positions; second adders arranged to add outputs from the low order 5-bit and 3-bit calculation units to create a low order 8-bit precision calculation; and means for calculating multiple color resolutions of different data format precision based on the high order 8-bit precision calculation and the low order 8-bit precision calculation.
- 12. A method for providing shared filter functionality between first and second discrete engines in a graphics system to process video data comprising:
receiving video data from one of the first engine and the second engine; configuring a plurality of linear blend units to perform linear blending of the video data received from the first engine or to perform bilinear filtering of the video data received from the second engine; and determining filter color values to approximate perspective shading of a triangular surface of an image in different resolution formats.
- 13. The method as claimed in claim 12, wherein the first engine corresponds to an overlay engine, the second engine corresponds to a texture mapping engine, and the linear blending is accomplished on pixels using the equation A+alpha(B−A), where A represents 2-dimensional pixel data from the overlay engine indicating overlay surface A, B represents 2-dimensional data from the overlay engine indicating overlay surface B, and alpha represents a blending coefficient.
- 14. The method as claimed in claim 13, wherein the bilinear filtering is accomplished on texels using the equation C = C1(1−.u)(1−.v) + C2(.u)(1−.v) + C3(.u)(.v) + C4(1−.u)(.v), where C1, C2, C3 and C4 represent 3-dimensional texel data from said texture mapping engine indicating four adjacent texels at locations (U, V), (U+1, V), (U, V+1) and (U+1, V+1), and where the values .u and .v indicate fractional locations within the C1, C2, C3, C4 texels.
- 15. The method as claimed in claim 13, wherein requests from the overlay engine for overlay interpolation take precedence over requests from the texture mapping engine.
- 16. A graphics controller comprising:
an engine to provide video data in two-dimension (2D); a cache to store video data in three-dimension (3D); and a configurable filter to provide shared filter resources and to perform linear blending of video data in 2D from the engine, or bilinear filtering of video data in 3D from the cache for a visual display.
- 17. The graphics controller as claimed in claim 16, wherein the engine is an overlay engine configured to perform 2D graphics functions and includes a blitter (BLT) engine and an arithmetic stretch blitter (BLT) engine for performing fixed blitter and stretch blitter (BLT) operations.
- 18. The graphics controller as claimed in claim 17, further comprising a 3D engine to provide video data in 3D for storage in the cache and to perform 3D graphics functions, including creating a rasterized 2D display image from a 3D representation, perspective-correct texture mapping to deliver 3D graphics, bilinear and anisotropic filtering, MIP mapping to reduce blockiness and enhance image quality, Gouraud shading, alpha-blending, fogging and Z-buffering.
- 19. The graphics controller as claimed in claim 18, wherein the 3D engine further comprises:
a color space converter to receive YUV data and convert it into RGB data, where YUV represents color-difference video data containing one luminance component (Y) and two chrominance components (U, V), and RGB represents composite video data containing red (R), green (G) and blue (B) components of an image; an anisotropic filter to combine pixels from different level-of-detail (LOD) levels to produce an average of selected multiple LOD levels; a dithering unit to read dither weights from a table and sum the dither weights with the current pixel data received from the anisotropic filter; a re-ordering FIFO to sort pixels into the proper output format; and a motion compensation unit to average successive pixels, sum an error term with the averaged result, and send the data for final color calculations before rendering for said visual display.
- 20. The graphics controller as claimed in claim 16, wherein said configurable filter comprises:
a plurality of linear blend units to receive data input from one of the engine and the cache; and a filter output multiplexer to receive data output from the linear blend units and select a proper byte ordering output, wherein said linear blend units serve as an overlay interpolator filter to perform said linear blending of the video data received from the engine during a linear blend mode, and serve as a texture bilinear filter to perform said bilinear filtering of the video data received from the cache during a bilinear filtering mode.
- 21. The graphics controller as claimed in claim 20, wherein the plurality of linear blending units comprise four dual linear blend units provided to support at least two data formats, and a single linear blend unit provided to support only one data format.
- 22. The graphics controller as claimed in claim 20, wherein the dual linear blend units are configured as either twice-split linear blend units or thrice-split linear blend units and include associated circuitry to support both data formats under control of a filter select signal.
- 23. The graphics controller as claimed in claim 20, wherein the linear blending is accomplished on pixels using the equation A+alpha(B−A), where A represents 2-dimensional pixel data from the engine indicating overlay surface A, B represents 2-dimensional data from the engine indicating overlay surface B, and alpha represents a blending coefficient.
- 24. The graphics controller as claimed in claim 20, wherein the bilinear filtering is accomplished on texels using the equation C = C1(1−.u)(1−.v) + C2(.u)(1−.v) + C3(.u)(.v) + C4(1−.u)(.v), where C1, C2, C3 and C4 represent 3-dimensional texel data from the cache indicating four adjacent texels at locations (U, V), (U+1, V), (U, V+1) and (U+1, V+1), and where the values .u and .v indicate fractional locations within the C1, C2, C3, C4 texels.
- 25. The graphics controller as claimed in claim 20, wherein requests from the engine for overlay interpolation take precedence over requests from the cache.
- 26. The graphics controller as claimed in claim 20, wherein the linear blend units can be configured as one of eight 8-bit linear interpolators, three 8-bit bi-linear interpolators and four 565 bi-linear interpolators to perform either said linear blending or said bilinear filtering of video data received from the engine and the cache.
- 27. The graphics controller as claimed in claim 20, wherein the linear blend units are configured as a combination of three thrice-split linear blend units, a twice-split linear blend unit and a single linear blend unit for bilinear filtering data received from the cache to approximate perspective correct shading value of a 3-dimensional triangular surface for different resolution formats.
- 28. The graphics controller as claimed in claim 20, wherein the linear blend units are configured as four thrice-split linear blend units arranged in parallel for linear blending video data received from the engine to approximate perspective correct shading value of a 3-dimensional triangular surface for different resolution formats.
- 29. The graphics controller as claimed in claim 20, wherein the linear blend units are configured as a combination of four dual linear blend units and a single linear blend unit arranged in parallel for bilinear filtering video data received from the cache to approximate perspective correct shading value of a 3-dimensional triangular surface for different resolution formats.
- 30. The graphics controller as claimed in claim 20, wherein each of said linear blend units acts as a single interpolator to calculate multiple color resolutions of different data format precision, and comprises:
a high order 5-bit calculation unit arranged to shift video data received from left to right by three bit positions; a high order 3-bit calculation unit arranged to shift data received from left to right by five bit positions; first adders arranged to add outputs from the high order 5-bit and 3-bit calculation units to create a high order 8-bit precision calculation; a low order 3-bit calculation unit arranged to shift data input from left to right by five bit positions; a low order 5-bit calculation unit arranged to shift data input from left to right by three bit positions; second adders arranged to add outputs from the low order 5-bit and 3-bit calculation units to create a low order 8-bit precision calculation; and means for calculating multiple color resolutions of different data format precision based on the high order 8-bit precision calculation and the low order 8-bit precision calculation.
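The two filter modes recited in the claims reduce to two short per-channel computations: the linear blend A + α(B − A) of claims 4, 13 and 23, and the bilinear weighting of four adjacent texels of claims 5, 14 and 24. The sketch below is a minimal software illustration of those equations only; the 8-bit channel assumption, the fixed-point width of alpha, and the rounding are illustrative choices and are not taken from the claims, and the sketch does not model the split linear blend unit hardware.

```c
#include <stdint.h>

/* Linear blend of claims 4, 13 and 23: A + alpha*(B - A), equivalently
 * (1 - alpha)*A + alpha*B.  a and b are one 8-bit color component of the
 * two overlay surfaces; alpha is assumed here to be an 8-bit fraction in
 * 0..255 (~0.0..1.0), an illustrative choice rather than claim language. */
static uint8_t linear_blend8(uint8_t a, uint8_t b, uint8_t alpha)
{
    return (uint8_t)(((255u - alpha) * a + (uint32_t)alpha * b + 127u) / 255u);
}

/* Bilinear filter of claims 5, 14 and 24, with the weights written exactly
 * as recited: C = C1(1-u)(1-v) + C2(u)(1-v) + C3(u)(v) + C4(1-u)(v),
 * where u and v are the fractional positions (.u and .v in the claims). */
static float bilinear_filter(float c1, float c2, float c3, float c4,
                             float u, float v)
{
    return c1 * (1.0f - u) * (1.0f - v)
         + c2 * u * (1.0f - v)
         + c3 * u * v
         + c4 * (1.0f - u) * v;
}
```

Grouping the bilinear terms as (1 − v)·[C1(1 − u) + C2·u] + v·[C4(1 − u) + C3·u] shows that the bilinear result is three applications of the same linear blend, which is consistent with the claims configuring one pool of linear blend units to serve both the overlay interpolation mode and the texture bilinear filtering mode.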
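Claims 7 and 26 recite a "565" interpolator configuration alongside 8-bit configurations, and claims 2 and 21 recite dual linear blend units that support at least two data formats. As background, "565" conventionally denotes a 16-bit pixel with 5 bits of red, 6 bits of green and 5 bits of blue. The sketch below only unpacks such a pixel and expands each field to 8-bit precision by bit replication; this is a common convention offered for illustration, not the specific high-order/low-order shift-and-add datapath recited in claims 11 and 30.

```c
#include <stdint.h>

/* One conventional way to unpack a 16-bit RGB565 pixel and expand each
 * field to 8 bits by replicating its high bits into the low bits.
 * Illustrates the two channel widths (5/6-bit vs. 8-bit) only; it is not
 * taken from the claims. */
static void unpack_rgb565(uint16_t pixel, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (pixel >> 11) & 0x1F;   /* high 5 bits: red    */
    uint8_t g6 = (pixel >> 5)  & 0x3F;   /* middle 6 bits: green */
    uint8_t b5 =  pixel        & 0x1F;   /* low 5 bits: blue    */

    *r = (uint8_t)((r5 << 3) | (r5 >> 2));   /* 5 -> 8 bits */
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));   /* 6 -> 8 bits */
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));   /* 5 -> 8 bits */
}
```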
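Claim 19 recites a color space converter that receives YUV data and converts it into RGB data but does not specify the conversion coefficients. The sketch below uses a widely used BT.601-style full-range conversion as an assumed example; the converter of the claims may use different coefficients, ranges or fixed-point arithmetic.

```c
#include <stdint.h>

/* Illustrative YUV (YCbCr) to RGB conversion for 8-bit, full-range samples
 * using common BT.601-style coefficients.  The coefficients are an
 * assumption for illustration; claim 19 does not specify them. */
static uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    int luma = (int)y;
    int cb   = (int)u - 128;   /* chrominance U, zero-centered */
    int cr   = (int)v - 128;   /* chrominance V, zero-centered */

    *r = clamp_u8(luma + (int)(1.402f * cr));
    *g = clamp_u8(luma - (int)(0.344f * cb) - (int)(0.714f * cr));
    *b = clamp_u8(luma + (int)(1.772f * cb));
}
```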
CLAIM FOR PRIORITY
[0001] This is a continuation of an application entitled “Method And Apparatus For Pixel Filtering Using Commonly Shared Filter Resource Between Overlay And Texture Mapping Engines,” filed in the United States Patent & Trademark Office on Jan. 10, 2000 and assigned Ser. No. 09/480,156, all of the subject matter of which is incorporated by reference herein under 35 U.S.C. §120.
Continuations (1)
|        | Number   | Date     | Country |
| ------ | -------- | -------- | ------- |
| Parent | 09480156 | Jan 2000 | US      |
| Child  | 10233581 | Sep 2002 | US      |