Claims
- 1. A method for simulating the effects of layered fog in a computer-generated synthetic environment, wherein aliasing is eliminated at a boundary between regions of the layered fog that have different densities, and wherein the boundary of the regions of the layered fog lies between adjacent pixels, said method comprising the steps of:
(1) generating a plurality of sample points for each of the adjacent pixels that lie on the boundary of the layered fog regions; (2) calculating a layered fog density for each of the plurality of sample points; and (3) blending the layered fog density that is calculated for each of the plurality of sample points to thereby form an anti-aliased pixel layered fog density value for each of the adjacent pixels.
- 2. The method as defined in claim 1 wherein the method further comprises the step of utilizing a total of three sample points to represent the plurality of sample points selected from within each of the adjacent pixels that lie on the boundary of the layered fog regions.
- 3. The method as defined in claim 1 wherein the method of selecting the adjacent pixels that lie on the boundary of the layered fog regions further comprises the step of selecting pixels that are separated by a distance of approximately one screen pixel.
- 4. The method as defined in claim 2 wherein the method further comprises the step of selecting the plurality of sample points over an altitude range that corresponds to approximately one pixel height in display screen space.
- 5. The method as defined in claim 4 wherein the method further comprises the step of determining a sample altitude for each of the plurality of sample points.
- 6. The method as defined in claim 5 wherein the method further comprises the step of determining a Z component of a scaled eye footprint vector for each of the plurality of sample points.
- 7. The method as defined in claim 6 wherein the step of determining the sample altitude further comprises the steps of:
(1) determining a length of an eye vector; (2) determining a slope of an eye vector; (3) determining an orientation of a polygon for which the sample altitude value is being calculated; and (4) determining a total number of pixels on a computer display.
- 8. The method as defined in claim 7 wherein the method further comprises the step of determining whether the eye vector has a primarily vertical direction or a primarily horizontal direction.
- 9. The method as defined in claim 8 wherein the method further comprises the steps of:
(1) determining the sample altitude primarily as a function of an orientation of the polygon when the eye vector has a primarily vertical direction; and (2) determining the sample altitude primarily as a function of a range of the polygon when the eye vector has a primarily horizontal direction.
- 10. The method as defined in claim 9 wherein the step of determining the sample altitude further comprises the steps of:
(1) determining a pixel to eye vector (E); (2) determining a polygon plane normal vector (P); (3) determining the eye to pixel range (R); and (4) determining a pixel size in radians.
- 11. The method as defined in claim 10 wherein the step of determining a pixel size in radians further comprises the step of dividing a view frustum angle by a total number of vertical display pixels on the display screen.
- 12. The method as defined in claim 11 wherein the method further comprises the step of calculating a unit vector that lies in a plane of the polygon, and which points in a direction that is most aligned with (1) the eye vector, or (2) an eye footprint vector (F) on the plane of the polygon.
- 13. The method as defined in claim 12 wherein the method further comprises the step of calculating the eye footprint vector (F).
- 14. The method as defined in claim 13 wherein the step of calculating the eye footprint vector (F) further comprises the steps of:
(1) calculating a new vector by taking a cross-product of the pixel to eye vector (E) and the polygon plane normal vector (P), wherein the new vector is perpendicular to both the pixel to eye vector (E) and the polygon plane normal vector (P); and (2) calculating a cross-product of the new vector and the polygon plane normal vector (P) to obtain the eye footprint vector (F), which is an unnormalized vector aligned in a direction of the pixel to eye vector (E) in the plane of the polygon.
- 15. The method as defined in claim 14 wherein the method further comprises the step of renormalizing the eye footprint vector (F), wherein renormalizing generates a renormalized eye footprint vector that has a unit length of one in the plane of the polygon.
- 16. The method as defined in claim 15 wherein the method further comprises the step of scaling the renormalized eye footprint vector (F) by a slant factor to thereby obtain a scaled eye footprint vector that accounts for an orientation of the plane of the polygon relative to the eye footprint vector, wherein the slant factor is generated based on an assumption that an angle between the pixel to eye vector (E) and each of the plurality of sample points is less than one degree, that the eye to pixel range (R) will always be substantially larger than a pixel to sample distance, and that the pixel to eye vector (E) and a sample to eye vector are substantially parallel.
- 17. The method as defined in claim 16 wherein the method further comprises the step of calculating the scaled eye footprint vector using a formula:
- 18. The method as defined in claim 17 wherein the method further comprises the step of simplifying the equation of claim 17 by discarding the sin θ term, because the sine of a small angle is approximately the value of the angle measured in radians, thereby simplifying the formula to:
- 19. The method as defined in claim 18 wherein the step of calculating the Z component of the scaled eye footprint vector for each of the plurality of sample points further comprises the step of applying the slant factor, the eye to pixel range (R), and the pixel size using a formula:
- 20. The method as defined in claim 19 wherein the method further comprises the step of reducing complexity of the formula in claim 19 by using a dot product of the polygon plane normal vector and the pixel to eye vector to thereby index into a precalculated look-up table to determine a modulating value M, thus resulting in the step of calculating the Z component of the scaled eye footprint vector for each of the plurality of sample points as a formula:
- 21. The method as defined in claim 20 wherein the method further comprises the step of modifying the formula of claim 20 when:
(1) an eyepoint is observing the polygon straight-on to the polygon such that (P·E)=1, where the length of the pixel to eye vector goes to zero, while the modulating factor M goes to infinity; and (2) the eyepoint is observing the polygon edge-on such that (P·E)=0, where the sample Z value approaches infinity.
- 22. The method as defined in claim 21 wherein the method further comprises the step of manipulating contents of the look-up table that are generated at a corner condition.
- 23. The method as defined in claim 21 wherein the method further comprises the step of making an adjustment in the sample Z value when the pixel to eye vector (E) approaches the horizon, wherein the sample Z value needs to approach one so that the plurality of antialiasing samples are over a span of one pixel.
- 24. The method as defined in claim 2 wherein the method further comprises the step of selecting as the three sample points (1) an initial pixel altitude, (2) the initial pixel altitude plus the sample altitude, and (3) the initial pixel altitude minus the sample altitude.
- 25. The method as defined in claim 24 wherein the method further comprises the step of bounding the initial pixel altitude plus the sample altitude and the initial pixel altitude minus the sample altitude by the minimum and maximum altitudes of the polygon in world space coordinates.
- 26. The method as defined in claim 25 wherein the method further comprises the step of obtaining an antialiased layered fog effect wherein the fog density values of the three sample altitudes are blended into a single layered fog density, wherein the single layered fog density is obtained by averaging after exponentiating each of the three sample altitudes.
- 27. The method as defined in claim 26 wherein the method further comprises the steps of:
(1) selecting as an aggregate density a smallest density value of the three sample altitudes; and (2) utilizing the aggregate density to thereby calculate a function that is a density value that is within approximately six percent of an averaged exponentiated result, and wherein the function is represented by a formula: D_average = D_s + [min(0.5, D_m − D_s) + min(0.5, D_l − D_s)] / 4
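
Claims 10 through 23 describe how the sample-altitude spacing is derived from the pixel to eye vector (E), the polygon plane normal vector (P), the eye to pixel range (R), and the pixel size in radians. The Python sketch below illustrates that geometry under stated assumptions: the cross-product construction of the eye footprint vector (F) and the pixel-size computation follow the claim language, but the slant/modulating factor is approximated here as 1/|P·E| with clamping, since the exact formulas of claims 17 through 20 and the contents of the precalculated look-up table are not reproduced in the text above.

```python
import numpy as np

def sample_altitude_step(eye_pos, pixel_pos, plane_normal,
                         frustum_angle_rad, vertical_pixels,
                         max_modulation=16.0):
    """Estimate the altitude (Z) spread covered by one screen pixel on a polygon.

    Illustrative approximation of claims 10-23; inputs are 3-component
    numpy arrays. The patent's exact slant factor and look-up table are
    not reproduced here.
    """
    E = eye_pos - pixel_pos                    # pixel to eye vector (claim 10)
    R = np.linalg.norm(E)                      # eye to pixel range (claim 10)
    E = E / R                                  # unit pixel to eye vector
    P = plane_normal / np.linalg.norm(plane_normal)

    # Pixel size in radians: view frustum angle / vertical pixel count (claim 11).
    pixel_size = frustum_angle_rad / vertical_pixels

    # Eye footprint vector F in the plane of the polygon (claims 13-14):
    # the intermediate cross product is perpendicular to both E and P, and
    # crossing it with P again yields an in-plane vector parallel (up to
    # sign) to the projection of E onto the polygon plane.
    side = np.cross(E, P)
    F = np.cross(side, P)
    F_len = np.linalg.norm(F)
    if F_len < 1e-6:
        return 0.0                             # straight-on view: footprint collapses
    F = F / F_len                              # renormalized footprint vector (claim 15)

    # Assumed modulating factor M ~ 1/|P.E|, clamped near the edge-on case;
    # claims 20-23 instead use a precalculated, corner-adjusted look-up table.
    M = min(1.0 / max(abs(np.dot(P, E)), 1e-6), max_modulation)

    # Z component of the scaled eye footprint vector (claims 6 and 19):
    # roughly the altitude change across one pixel of screen space.
    return abs(F[2]) * R * pixel_size * M
```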
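
Claims 1, 2, and 24 through 26 recite the sampling and blending itself: three sample altitudes, bounded by the polygon's altitude extent, whose fog contributions are exponentiated and then averaged. A minimal Python sketch follows; the two-layer density profile and the exp(−density × range) fog model are illustrative assumptions, as the claims do not define them here.

```python
import math

def layered_fog_density(altitude):
    """Hypothetical layered-fog density profile (density as a function of altitude).

    The claims do not define the profile; a simple two-layer example is
    used for illustration only.
    """
    return 0.08 if altitude < 100.0 else 0.01

def antialiased_fog(pixel_altitude, sample_altitude_step,
                    poly_min_alt, poly_max_alt, eye_to_pixel_range):
    """Blend three fog samples into one anti-aliased value (claims 24-26)."""
    # Three sample altitudes: base, base + step, base - step (claim 24),
    # with the offset samples bounded by the polygon's minimum and maximum
    # altitudes in world space (claim 25).
    samples = [
        pixel_altitude,
        min(max(pixel_altitude + sample_altitude_step, poly_min_alt), poly_max_alt),
        min(max(pixel_altitude - sample_altitude_step, poly_min_alt), poly_max_alt),
    ]
    # Exponentiate each sample's fog contribution, then average (claim 26).
    # The exp(-density * range) fog model is an assumption; the claims only
    # state "averaging after exponentiating".
    fog_factors = [math.exp(-layered_fog_density(z) * eye_to_pixel_range)
                   for z in samples]
    return sum(fog_factors) / len(fog_factors)
```

The sample_altitude_step argument corresponds to the per-pixel altitude spread estimated in the footprint sketch above.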
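
The shortcut of claim 27, which uses the smallest of the three sample densities (D_s) plus capped contributions from the other two (D_m and D_l), can be written directly as below; the six-percent accuracy figure is the claim's own characterization and is not re-derived here.

```python
def aggregate_density(d1, d2, d3):
    """Claim 27 shortcut: approximate the blended density from the smallest
    sample density D_s and the two remaining densities D_m and D_l."""
    ds = min(d1, d2, d3)
    dm, dl = sorted((d1, d2, d3))[1:]          # the two larger densities
    return ds + (min(0.5, dm - ds) + min(0.5, dl - ds)) / 4.0
```

For example, aggregate_density(0.2, 0.3, 0.9) evaluates to 0.2 + (0.1 + 0.5) / 4 = 0.35.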
Parent Case Info
[0001] This application is a divisional of U.S. patent application Ser. No. 09/157,710 filed on Sep. 21, 1998.
Divisions (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09157710 | Sep 1998 | US |
| Child | 09812366 | Mar 2001 | US |