Embodiments of the present invention generally relate to data decompression, in particular to the decompression of data used in connection with computer graphics.
As a result of continuing advances in computer graphics, images that look more and more realistic are being rendered in applications such as video games. A key to achieving a convincing image is the ability to realistically simulate lighting and shadowing effects on a textured (e.g., three-dimensional) surface.
One technique for rendering surface textures involves the use of normal maps. When rendering using normal maps, each point on a surface to be rendered is associated with a unit-length vector that is perpendicular to the surface at that point. The normal vector indicates the direction the surface is facing at that point. Using a normal map, contemporary graphics engines can render very complex-looking surfaces to achieve a more realistic effect.
A normal map can contain a large quantity of data, especially when realistic-looking surfaces at high screen (display) resolutions are being portrayed. Compression schemes are usually employed to reduce the amount of data. However, conventional real-time compression techniques can result in a loss of precision when the data are reconstructed, leading to reduced image quality.
Accordingly, a system and/or method that can reconstruct compressed normals with improved precision would be advantageous. Embodiments in accordance with the present invention provide this and other advantages.
In one embodiment of the present invention, the relative magnitudes of a first value and a second value are compared. The first value and the second value represent respective endpoints of a range of values. The first value and the second value each have N bits of precision. Either the first or second value is selected, based on the result of the comparison. The selected value is scaled to produce a third value having N+1 bits of precision. A specified bit value is appended as the least significant bit of the other (non-selected) value to produce a fourth value having N+1 bits of precision. Intermediate values can then be determined by interpolating between the third and fourth values.
In one embodiment, the third and fourth values are nine (9) bits in length (that is, N+1 is 9). In one such embodiment, the first and second values are signed values normalized to the range of [−1, 1] coded in eight (8) bits each (e.g., one byte), the 8 bits having a value in the range of [−127, 127]. In another such embodiment, the first and second values are unsigned values coded in 8 bits each, the 8 bits having a value in the range of [0, 255].
In one embodiment, the data being decompressed include texel data used in connection with computer graphics systems. In such an embodiment, the third and fourth values of N+1 bits (and the first and second values of N bits) correspond to one component (e.g., the x-component or the y-component, etc.) of a texel at a location in a block of data. However, the decompression method described above can be applied to any number of components in other types of applications.
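As a purely illustrative example, using the unsigned mapping described in the detailed description below: if the stored first and second values are 200 and 100, the first value has the larger magnitude and is scaled to floor(200*511.0/255.0+0.5)=401, while a specified bit (a 1 in this example) is appended as the least significant bit of the second value, giving 2*100+1=201. Interpolation then proceeds between the 9-bit values 401 and 201 rather than between the stored 8-bit values.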
In summary, embodiments of the present invention provide methods and systems for decompressing data with improved precision. As a result, the quality of rendered images can be increased. Importantly, the improvement in precision is achieved without actually storing extra bits of precision in a block of compressed data. In effect, the extra bits of precision are “virtual” bits that are not stored in the compression block, but instead are derived from other information in the compression block. Thus, improved precision is achieved without significantly increasing the burden on computational resources and without increasing the size or changing the structure of the compression block. These and other objects and advantages of the various embodiments of the present invention will be recognized by those of ordinary skill in the art after reading the following detailed description of the embodiments that are illustrated in the various drawing figures.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the invention:
The drawings referred to in the description should not be understood as being drawn to scale except if specifically noted.
Reference will now be made in detail to the various embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be understood that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention.
Some portions of the detailed descriptions that follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those utilizing physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as transactions, bits, values, elements, symbols, characters, fragments, pixels, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “comparing,” “storing,” “using,” “compressing,” “decompressing,” “restoring,” “determining,” “constructing,” “producing,” “accessing,” “calculating,” “selecting,” “associating,” “truncating,” “scaling,” “appending” or the like, refer to the actions and processes (e.g., flowchart 60) of a computer system or similar electronic computing device that manipulates and transforms data represented as physical (electronic) quantities within its registers and memories into other data similarly represented as physical quantities.
Also included in computer system 112 is an optional alphanumeric input device 106. Device 106 can communicate information and command selections to central processor 101. Computer system 112 also includes a cursor control or directing device 107 coupled to bus 100 for communicating user input information and command selections to central processor 101. Computer system 112 also includes signal communication interface (input/output device) 108, which is also coupled to bus 100. Communication interface 108 can also include wireless communication mechanisms.
It is appreciated that computer system 112 described herein illustrates an exemplary configuration of an operational platform. Nevertheless, other computer systems with differing configurations can also be used in place of computer system 112 within the scope of the present invention. These other types of computer systems can include workstations and thin client devices that are coupled to other computer systems in a distributed computer system network. Computer system 112 may be any type of computing device, such as but not limited to a personal computer, a game console, a personal digital assistant, a cell phone, a portable digital device, etc.
The compression block includes the two anchor points and the encoded bit codes. Thus, compressed data 23 includes fewer bits than are included in data 21. Decoder 24 of
With reference to
In general, the points P0 and P1 are each represented using N+1 bits and encoded using N bits. In one embodiment, the points P0 and P1 are each represented using nine (9) bits and encoded (compressed) using eight (8) bits each. In one embodiment, this is accomplished by truncating the least significant bit from each of P0 and P1.
Six other points (P2, P3, P4, P5, P6 and P7) are linearly interpolated using P0 and P1, yielding a total of 8 values for the palette. Each of the points P0 through P7 is associated with a unique bit code (refer to the discussion of
Table 1 provides one example of code that can be used to generate a palette according to one embodiment of the present invention.
According to the various embodiments of the present invention, P0 and P1 are either signed values or unsigned values. Signed values are normalized to the range [−1, 1] and the interpolation scheme described above (Table 1) is used, except the values for P0 and P1 are signed and the results are not clamped to integer values.
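By way of illustration only, an eight-entry palette can be built as in the following sketch. This is not the code of Table 1; it assumes the six interpolated entries are evenly spaced between P0 and P1 in steps of 1/7, and the weights and integer handling used by Table 1 may differ. Floating-point values are used so the same routine covers the signed case, in which the results are not clamped to integer values.

    /* Illustrative sketch, not the listing of Table 1: build an eight-entry
       palette from anchor values P0 and P1, with the six interpolated entries
       assumed to be evenly spaced between them (1/7 steps). */
    void build_palette(float p0, float p1, float palette[8])
    {
        palette[0] = p0;                                   /* anchor point P0 */
        palette[1] = p1;                                   /* anchor point P1 */
        for (int i = 2; i < 8; ++i)                        /* P2 through P7   */
            palette[i] = ((8 - i) * p0 + (i - 1) * p1) / 7.0f;
    }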
The present invention will be described for an embodiment in which the data to be compressed (encoded) are associated with a single component. Although described in the context of a single component, embodiments in accordance with the present invention are not so limited. Embodiments of the present invention can also be applied to other types of multiple-element, unconstrained or arbitrary data, such as spherical harmonic data, by applying the compression/decompression scheme described herein to each component (element) individually when there are multiple components.
In the example of
In the present embodiment, during the compression phase, the x-component values X1 through X16 are each compared to the values P0 through P7, to determine which value in palette 44 each x-component value is closest to. For instance, if the value X1 is compared to the values P0 through P7 and found to be closest to the value P0, then bit code 000 would be associated with X1. Similarly, if X5 is found to be closest to P2, then bit code 010 would be associated with X5. In general, each of the data values in the block of texels 42 is associated with a bit code selected from palette 44. As a result, the memory-resident encoded block of texels 46 includes a bit code (index) for each of the values in the block of texels 42.
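The comparison just described amounts to a nearest-value search over palette 44. The following sketch (with illustrative names, not taken from the present description) shows one straightforward way to select the 3-bit code for a component value; an actual encoder may use a closed-form computation instead of a linear search.

    #include <math.h>

    /* Return the 3-bit code (0..7) of the palette entry nearest to "value".
       Illustrative sketch only. */
    unsigned nearest_code(float value, const float palette[8])
    {
        unsigned best = 0;
        float best_dist = fabsf(value - palette[0]);
        for (unsigned i = 1; i < 8; ++i) {
            float d = fabsf(value - palette[i]);
            if (d < best_dist) {
                best_dist = d;
                best = i;
            }
        }
        return best;    /* e.g., 0 corresponds to bit code 000, 2 to 010, and so on */
    }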
In the example of
As mentioned above, in one embodiment, the endpoints P0 and P1 are each represented using N+1 (e.g., 9) bits before they are encoded as N-bit (e.g., 8-bit) strings. In other words, in general, P0 and P1 are each reduced from N+1 bits to N bits before they are stored in compression block 50. In one embodiment, this is accomplished by truncating the least significant bit from each of P0 and P1.
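A minimal sketch of the endpoint reduction just described (illustrative only, shown for the unsigned case):

    /* Truncate a 9-bit endpoint in [0..511] to the 8 bits stored in the
       compression block by discarding its least significant bit. */
    unsigned char encode_endpoint(unsigned p_9bit)
    {
        return (unsigned char)(p_9bit >> 1);
    }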
Compression block 50 is decompressed by decoder 24 of
Table 2 provides one example of code used for unsigned decompression in accordance with the present invention.
One example of the “unsigned scale operation” mentioned in Table 2 is provided in Table 3.
The exemplary code of Table 3 is a fixed point operation equivalent to the floating point operation given by:
out=(int)floor(in*511.0/255.0+0.5).
The scaling by 511/255 expands the 8-bit value to 9 bits such that values in the range [0 . . . 255] map as evenly as possible to the range [0 . . . 511]. Similarly, for signed values, the signed range [−127 . . . 127] is mapped as evenly as possible to the signed range [−255 . . . 255], as described below.
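As a concrete illustration (a sketch, not the listing of Table 3), one exact integer form of the floating point operation above is to double the 8-bit value and replicate its most significant bit into the new least significant bit:

    /* Scale an 8-bit value in [0..255] to a 9-bit value in [0..511].
       For every input this equals (int)floor(in*511.0/255.0 + 0.5);
       e.g., 0 -> 0, 127 -> 254, 128 -> 257, 255 -> 511. */
    static int unsigned_scale(int in)
    {
        return (in << 1) | (in >> 7);
    }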
Table 4 provides one example of code used for signed decompression in accordance with the present invention.
One example of the “signed scale operation” mentioned in Table 4 is provided in Table 5.
The exemplary code of Table 5 is a fixed point operation equivalent to the floating point operation given by:
out=(in==−128)?−256:(int)floor(in*255.0/127.0+0.5).
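For illustration (a sketch, not the listing of Table 5), the signed scale can be written directly from the floating point form above, with −128 pinned to −256:

    #include <math.h>

    /* Scale a signed 8-bit value to a signed 9-bit value: -128 maps to -256,
       and [-127..127] is rounded onto [-255..255];
       e.g., -127 -> -255, 0 -> 0, 64 -> 129, 127 -> 255. */
    static int signed_scale(int in)
    {
        if (in == -128)
            return -256;
        return (int)floor(in * 255.0 / 127.0 + 0.5);
    }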
The decompressed 9-bit P0 and P1 values are used in the interpolation of Table 1 to construct the palette values. Note that although the decompressed P0 and P1 are 9-bit values, the palette values are not limited to 9 bits of precision. That is, the values interpolated from P0 and P1 can be greater than 9 bits in length.
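For example, if the decompressed 9-bit endpoints are 0 and 500 and the six intermediate entries are evenly spaced (as in the palette sketch given earlier, which is an assumption), the entry nearest P0 is (6*0 + 1*500)/7 ≈ 71.43. That value is not an integer, so an implementation can carry the interpolated palette entries at higher (e.g., fixed-point or floating point) precision than 9 bits.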
In step 62 of flowchart 60, in one embodiment, a first value and a second value are accessed. The first value and the second value represent respective endpoints of a range of values and each have N bits of precision.
In step 64 of flowchart 60, in one embodiment, the relative magnitudes of the first value and the second value are compared, and the value having the larger magnitude is identified.
In step 66, in one embodiment, the value identified in step 64 is scaled to produce a third value that has N+1 bits of precision. That is, the value that has the larger magnitude is scaled, in one embodiment using one of the scaling operations described by Tables 3 and 5 (for the unsigned and signed cases, respectively).
In step 68, in one embodiment, a specified bit value (e.g., either 0 or 1) is appended as the least significant bit (lsb) to the other value (e.g., the value that has the smaller magnitude) to produce a fourth value that has N+1 bits of precision. As shown by Tables 2 and 4, the specified bit value depends on which of the first and second values is identified in step 64 as having the larger magnitude. Looking at Table 2, for example, if P0 has the greater magnitude, then a value of 1 is appended to P1 as the least significant bit of P1, while if P1 has the greater magnitude, then a value of 0 is appended to P0 as the least significant bit of P0.
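Putting steps 64 through 68 together for the unsigned case, the endpoint handling can be sketched as follows. This is an illustrative sketch that mirrors the logic described for Table 2, not the listing of Table 2 itself; the handling of equal endpoints is an assumption.

    /* Illustrative sketch of unsigned endpoint decompression (cf. Table 2). */
    static int unsigned_scale(int in)      /* equals (int)floor(in*511.0/255.0 + 0.5) */
    {
        return (in << 1) | (in >> 7);
    }

    /* p0 and p1 are the stored 8-bit endpoints; out0 and out1 receive the
       corresponding 9-bit endpoints used for interpolation. */
    void decode_endpoints(int p0, int p1, int *out0, int *out1)
    {
        if (p0 > p1) {                     /* step 64: P0 has the greater magnitude   */
            *out0 = unsigned_scale(p0);    /* step 66: scale P0 to 9 bits             */
            *out1 = (p1 << 1) | 1;         /* step 68: append 1 as the lsb of P1      */
        } else {                           /* P1 has the greater (or equal) magnitude */
            *out1 = unsigned_scale(p1);    /* step 66: scale P1 to 9 bits             */
            *out0 = (p0 << 1);             /* step 68: append 0 as the lsb of P0      */
        }
    }

In this way, the value of the appended bit is recovered from the comparison of the stored endpoints rather than from any bit stored in the compression block.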
The third and fourth values determined according to flowchart 60 can then be used to interpolate a range of other values, for example to reconstruct a palette as described previously herein.
In summary, embodiments of the present invention provide methods and systems for compressing and reconstructing data with improved precision. As a result, the quality of rendered images can be increased. The improvement in precision is achieved without actually storing extra precision bits. In effect, the extra precision bits are “virtual” bits that are not stored in the compression block, but whose values are derived from other information that is stored in the compression block. Thus, improved precision is achieved without significantly increasing the burden on computational resources and without increasing the size or changing the structure (format) of the compression block.
Embodiments of the present invention, data decompression with extra precision, are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the below claims.
This application is a continuation application of U.S. patent application Ser. No. 10/990,884 by D. Rogers et al., filed on Nov. 16, 2004, entitled “Data Compression with Extra Precision,” now U.S. Pat. No. 8,078,656, assigned to the assignee of the present invention, and hereby incorporated by reference in its entirety. This application is related to U.S. patent application Ser. No. 10/990,900 by D. Rogers et al., filed on Nov. 16, 2004, entitled “Two Component Texture Map Compression,” now U.S. Pat. No. 7,961,195, assigned to the assignee of the present invention, and hereby incorporated by reference in its entirety.