Computer graphics has advanced significantly in recent years, driven in part by the increased popularity of video gaming. Various techniques have been employed to create fast and realistic graphics for such applications and to overcome common problems with rendering scenes and objects.
Reflection and refraction are difficult to model in computer graphics applications because they often produce non-linear rays or beams. For reflection, the ultimate goal is to render curved reflections of nearby objects with fully dynamic motion in real time. In the past, however, meeting such a goal has required assuming distant objects, planar reflectors, static scenes, or limited motion for frame-to-frame coherence, or relying on extensive pre-computation over all possible motion parameters. For example, some methods allow curved mirror reflection of distant objects but not of nearby objects. Additionally, some previous rendering techniques have required fine tessellation of scene geometry to render curvilinear effects. This imposes a high geometry computation workload regardless of the original scene complexity, which is often quite low for interactive applications such as games. Refraction poses an even bigger challenge than reflection, as even planar refraction is non-linear. Common rendering techniques typically handle only nearly planar refractors or only faraway scene objects, but not refraction of nearby objects.
Two common techniques for rendering computer graphics are ray tracing and beam tracing. Ray tracing is a rendering algorithm in which visualizations of programmed scenes are produced by following rays from the eye point outward, rather than originating at a light source. Ray tracing efficiently provides accurate simulations of the reflection and refraction of light off objects in a scene. However, most existing ray tracing techniques store scene geometry in a database and pre-compute certain data structures for efficient rendering. As such, ray tracing is less suitable for rendering dynamic scenes.
Beam tracing is a derivative of ray tracing that replaces rays, which have no thickness, with beams. Beams are shaped like unbounded pyramids with polygonal cross sections, and with a beam base that intersects the film or sensor plane of the camera capturing the scene. Beam tracing combines the flexibility of ray tracing with the speed of polygon rasterization. One primary advantage of beam tracing is efficiency, as individual beams can be rendered via polygon rasterization, which is readily performed by conventional graphics processing units (GPUs). Beam tracing can also resolve the sampling, aliasing, and level-of-detail issues that plague conventional ray tracing. However, beam tracing to date has been able to process only linear transformations. Thus, beam tracing has been applicable only to linear effects such as planar mirror reflections, and not to non-linear effects such as curved mirror reflection, refraction, caustics, and shadows. Note that such non-linear effects are not limited to curved geometry, as even planar refraction is non-linear. Another common non-linear effect arises from bump-mapped surfaces, as modeling them requires incoherent ray bundles that cannot be handled using linear beams. Furthermore, non-linear beam tracing is highly challenging because common graphics hardware, such as commodity graphics hardware, supports only linear vertex transformation and triangle rasterization.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The non-linear beam tracing technique described herein supports full non-linear beam tracing effects, including multiple reflections and refractions. The technique introduces non-linear beam tracing to render non-linear ray tracing effects such as curved mirror reflection, refraction, caustics, and shadows. Beams are non-linear when the rays within the same beam are neither parallel nor intersect at a single point. Such is the case when a primary beam bounces off of a surface and spawns one or more secondary rays or beams. Secondary beams can be rendered in a manner similar to primary rays or beams via polygon streaming. The technique can also be applied to incoherent ray bundles, such as those that occur when rendering bump-mapped surfaces.
In one embodiment of the non-linear beam tracing technique, image data of a scene or object to be displayed on a computer display and a texture mesh are input. As a first step, a beam tree is constructed on a CPU by bouncing the primary beam off of scene reflectors/refractors to spawn secondary or higher-order beams. This beam tree construction step produces beams with triangular cross sections, and may include both smooth and perturbed beams. These smooth and perturbed beams are typically non-linear, with the exception of each primary beam. After the beam tree is constructed on the CPU, the content of the beams is then rendered on a GPU. More specifically, in beam parameterization, for each point P on the base triangle of a beam an associated ray with a specific origin and direction is computed. A bounding triangle of each scene triangle of the image data projected onto a corresponding beam base triangle is computed for all smooth beams (beams that are not perturbed by a bump map). A bounding triangle for each perturbed beam (a beam perturbed by a bump map) is then computed by expanding the bounding triangle as computed for the smooth beams. Pixel shading is then performed by determining how each pixel P within each bounding triangle is rendered. Using the previous beam parameterization, in which a ray origin and direction were computed for each pixel within the bounding triangle, the computed ray for each pixel is followed into space. If the ray does not intersect a scene triangle, the pixel is removed from further calculations. If the ray does intersect a scene triangle, the pixel attributes (depth, color, or texture coordinates) are determined from the ray's intersection point with the scene triangle, and these attributes are used to render the pixel.
In the following description of embodiments of the disclosure, reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the technique may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the disclosure.
The specific features, aspects, and advantages of the disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings where:
In the following description of the non-linear beam tracing technique, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, examples by which the non-linear beam tracing technique may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the claimed subject matter.
1.0 Non-Linear Beam Tracing
Realizing non-linear beam tracing is challenging because typical graphics processing units (GPUs) support only linear vertex transforms and triangle rasterization. One embodiment of the non-linear beam tracing technique described herein overcomes this by implementing a non-linear graphics pipeline on top of a standard GPU (e.g., a commodity GPU). Specifically, the technique can render a given scene triangle into a projection region with non-linear edges. This is achieved by customizing the vertex computations to estimate a bounding triangle for the projection region of a scene triangle. The scene triangle is then rendered by a shader/fragment module using the bounding triangle. For each pixel within the bounding triangle, the fragment module performs a simple ray-triangle intersection test to determine whether the pixel is within the true projection of the scene triangle. If so, the pixel is shaded; otherwise it is discarded via fragment kill. The technique provides a bounding triangle computation that is both tight and easy to compute, and that remains so for both smooth and perturbed beams. Other technical contributions include a beam parameterization that allows efficient bounding triangle estimation and pixel shading while ensuring projection continuity across adjacent beams. Some embodiments of the technique also employ a frustum culling strategy in which the frustum sides are no longer planes, and a fast non-linear beam tree construction that supports multiple beam bounces. The non-linear beam tracing technique supports fully dynamic scenes and large polygon-count scenes, since it does not require pre-storage of a scene database. As a result, scenes can be rendered in real time.
1.1 Operating Environment
1.2 Exemplary Architecture
The beam tree, mesh, and any commands from the CPU are input into a GPU front end module 210, which resides on a Graphics Processing Unit module 212. This GPU front end module 210 carves the scene or image data into triangular sections defined by vertices. The GPU front end module 210 sends the vertices of the triangular sections of the image data to a programmable vertex processor 214. The programmable vertex processor 214 transforms the vertices of the sections of the scene or image data into a global coordinate system and further computes a linear bounding triangle containing the projection of a scene triangle for each beam (e.g., beam base) of the beam tree. The programmable vertex processor 214 has separate modules for bounding triangle estimation for smooth beams 216 and for bounding triangle estimation for perturbed beams 218. The transformed vertices and bounding triangles for all beams are input into a primitive assembly module 220 that assembles primitives (geometric objects the system can draw) from the vertices in the global coordinates. The primitives and the bounding triangles are input into a rasterization and interpolation module 222 that determines the pixel locations of fragments of the primitives (scene triangles) on the visible portions of the computer display. The computed fragments (scene triangles) are then input into a programmable fragment processor/shader module 224 that computes a color value for each pixel, rejecting pixels that fall inside the bounding triangle computed by the vertex processor 214 but outside the true projection of the scene triangle. To do this, the programmable fragment processor module 224 simply computes the ray (origin and direction according to the beam parameterization) for each pixel inside the bounding triangle and intersects the ray with the scene triangle to determine whether it is a hit or a miss; if it is a hit, the pixel is shaded according to the intersection point; otherwise the pixel is killed. The pixel color and pixel location streams of the fragments are input into a raster operations module 226 that combines these inputs to update the color and location of every pixel, thereby creating the frame buffer or output image 228.
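By way of illustration, the following minimal Python sketch mimics, on the CPU, the per-pixel test that the programmable fragment processor module 224 performs. It assumes a C0-style beam parameterization (ray origin at the pixel's point on the beam base, direction a barycentric blend of the three vertex rays) and uses the standard Moller-Trumbore ray/triangle test; all names are illustrative, and this is a sketch of the idea rather than the GPU implementation itself.

import numpy as np

def ray_triangle_hit(orig, dirn, q1, q2, q3, eps=1e-9):
    # Moller-Trumbore: returns (t, u, v) on hit, None on miss.
    e1, e2 = q2 - q1, q3 - q1
    p = np.cross(dirn, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle
    inv = 1.0 / det
    s = orig - q1
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(dirn, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return (t, u, v) if t > eps else None

def shade_fragment(P, bary, base_rays, scene_tri, colors):
    # Emulates the per-pixel decision for one pixel P inside the
    # bounding triangle: compute the ray, test, then shade or kill.
    w1, w2, w3 = bary                    # barycentrics of P on the beam base
    d = w1*base_rays[0] + w2*base_rays[1] + w3*base_rays[2]   # C0 direction
    d = d / np.linalg.norm(d)
    hit = ray_triangle_hit(P, d, *scene_tri)
    if hit is None:
        return None                      # miss: the pixel is killed
    t, u, v = hit
    # shade from the intersection's interpolated attributes (here: color)
    color = (1.0 - u - v)*colors[0] + u*colors[1] + v*colors[2]
    return t, color                      # depth and color for raster ops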
The following paragraphs provide an example of the rendering of a scene triangle within a beam using the exemplary architecture of the non-linear beam tracing technique described above.
One embodiment of the non-linear beam tracing technique renders scene triangles one by one via feed-forward polygon rasterization, as illustrated in the accompanying figures. More specifically, the rendering proceeds as follows.
Rendering results for each beam are stored as a texture on its base triangle. The collection of beam base triangles is then texture mapped onto the original reflector/refractor surfaces (e.g., textures 208) for final rendering. For clarity, the collection of beam base triangles corresponding to a single reflector/refractor object is called a BT (beam tracing) mesh. The resolution of the BT mesh trades off rendering quality against rendering speed. A BT mesh is associated with a reflector/refractor interface through either mesh simplification or procedural methods, depending on whether the interface is static (e.g., a mirror teapot) or dynamic (e.g., water waves).
1.3 Exemplary Processes
Having discussed an exemplary architecture and processes employing the non-linear beam tracing technique, the following paragraphs provide details.
1.4 Technique Details and Alternate Embodiments
An overview of one embodiment of the non-linear beam tracing technique is summarized in Table 1. The steps of the technique described in Table 1 are cross-referenced to the flow chart shown in the accompanying figures.
As shown in the flow chart, image data of a scene or object to be displayed and a texture mesh are first input. A beam tree is then constructed on the CPU by bouncing the primary beam off of scene reflectors/refractors. Bounding triangles are estimated on the GPU for smooth and perturbed beams, and pixel shading is performed by intersecting each pixel's ray with the scene triangles.
The following paragraphs provide details regarding the previously discussed process actions.
1.4.1 Beam Parameterization
As previously described, the beam parameterization associates with each point P on the beam base triangle of a beam a ray with a specific origin and direction.
More specifically, given a beam B with normalized boundary rays {d1, d2, d3} defined on the three vertices of its base triangle ΔP1P2P3, the smooth parameterization of the ray origin O and direction d for any point P on the beam base plane is defined via the following formula (Equation 1), where (ω1, ω2, ω3) are the barycentric coordinates of P with respect to ΔP1P2P3:

\vec{d}(P) = \vec{L}(\vec{d_1}, \vec{d_2}, \vec{d_3}) = \sum_{i+j+k=n} \vec{a}_{ijk} \, \frac{n!}{i!\,j!\,k!} \, \omega_1^i \omega_2^j \omega_3^k, \qquad \vec{O}(P) = \vec{L}(\vec{P_1}, \vec{P_2}, \vec{P_3}) = \sum_{i+j+k=n} \vec{b}_{ijk} \, \frac{n!}{i!\,j!\,k!} \, \omega_1^i \omega_2^j \omega_3^k

where L is a function satisfying the following linear property (Equation 2) for any real number s, {P1, P2, P3}, and {d1, d2, d3}:

\vec{L}(\vec{P_1} + s\vec{d_1}, \vec{P_2} + s\vec{d_2}, \vec{P_3} + s\vec{d_3}) = \vec{L}(\vec{P_1}, \vec{P_2}, \vec{P_3}) + s\,\vec{L}(\vec{d_1}, \vec{d_2}, \vec{d_3})

This formulation is termed the Sn parameterization, the beam parameterization for a smooth beam (where L contains \binom{n+2}{2} terms). Note that this is an extension of the computation of triangular Bezier patches with two additional constraints: (1) the identical formulations of a_ijk and b_ijk in Equation 1, and (2) the linear property for L as in Equation 2. These constraints are essential for an efficient GPU implementation. Even with these two restrictions, the Sn parameterization is still general enough to provide the desired level of continuity. Note that in general O(P) ≠ P except under the C0 parameterization.
Even though in theory general Ck continuity might be achieved via a large enough n in the Sn parameterization, it has been found through experiments that an excessively large n not only slows down computation but also introduces excessive undulations. The tradeoff between the C0 and S3 (Sn with n equal to 3) parameterizations is that C0 is faster to compute (in both vertex and pixel programs), but S3 offers higher quality, as it ensures C1 continuity at BT mesh vertices. This quality difference is most evident for scene objects containing straight-line geometry or texture.
For perturbed beams, d and O are computed via the C0 parameterization followed by standard bump mapping. Due to the nature of the perturbation, a parameterization with higher-level continuity has been found unnecessary. For efficient implementation and ease of computation, it is assumed that all rays {d1, d2, d3} point to the same side of the beam base plane. Exceptions happen only very rarely, for beams at grazing angles, and the assumption is enforced in one implementation by proper clamping of {d1, d2, d3}.
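As a concrete illustration, the following Python sketch evaluates a C0 ray (origin equal to P, direction the barycentric blend of d1, d2, d3) and a bump-perturbed ray for an assumed mirror reflector; the helper names and the mirror-reflection formulation are assumptions of this sketch, not the only possibility contemplated herein.

import numpy as np

def c0_ray(P, base_pts, base_dirs):
    # C0 parameterization: the ray origin is P itself; the direction is the
    # barycentric blend of the vertex rays d1, d2, d3 over triangle P1P2P3.
    P1, P2, P3 = base_pts
    n = np.cross(P2 - P1, P3 - P1)
    nn = np.dot(n, n)
    w1 = np.dot(np.cross(P2 - P, P3 - P), n) / nn
    w2 = np.dot(np.cross(P3 - P, P1 - P), n) / nn
    w3 = 1.0 - w1 - w2
    d = w1*base_dirs[0] + w2*base_dirs[1] + w3*base_dirs[2]
    return P, d / np.linalg.norm(d)

def perturbed_ray(P, incoming, bump_normal):
    # Perturbed beam (assumed mirror reflector): the smooth ray is replaced
    # by the incoming eye ray mirrored about the bump-mapped normal at P.
    n = bump_normal / np.linalg.norm(bump_normal)
    d = incoming - 2.0*np.dot(incoming, n)*n
    return P, d / np.linalg.norm(d)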
1.4.2 Shading
This section describes how one embodiment of the fragment module (the programmable fragment processor/shader module 224 described above) determines the shading of each pixel within a bounding triangle.
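A minimal sketch of the attribute step, assuming the ray/triangle test returns the barycentric coordinates (u, v) of the intersection point: the pixel's attributes (depth, color, or texture coordinates) are simply the barycentric blend of the scene triangle's per-vertex attributes.

import numpy as np

def interpolate_attributes(u, v, a1, a2, a3):
    # Blend per-vertex attributes (color, texcoords, depth, ...) at the
    # intersection point with barycentric weights (1-u-v, u, v).
    a1, a2, a3 = (np.asarray(a, dtype=float) for a in (a1, a2, a3))
    return (1.0 - u - v)*a1 + u*a2 + v*a3

# e.g., the color at an intersection with barycentrics u = 0.2, v = 0.3:
color = interpolate_attributes(0.2, 0.3, (1, 0, 0), (0, 1, 0), (0, 0, 1))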
1.4.3 Bounding Triangle Estimation
This section provides further details of the bounding triangle estimation introduced above, for both smooth and perturbed beams.
In the following paragraphs, the bounding triangle computation for the Sn (δ = 0) and bump-map (δ > 0) parameterizations is presented. The procedures rely on several mathematical quantities: (1) external directions {d̂1, d̂2, d̂3}, which are expansions of the original beam directions {d1, d2, d3} so that they "contain" all interior ray directions d; (2) an affine plane Πa(t), which is a distance t away from the beam base plane Πb and contains the affine triangle Δa(t) with vertices Hi = Pi + t di, i = 1, 2, 3; (3) the threshold plane ΠT, which is a special affine plane Πa(tres) with a particular value tres such that any space point Q above ΠT is guaranteed to have at most one projection within the beam base triangle ΔP1P2P3; and (4) maximum offsets μb and μd, which are the maximum offsets between the Sn and C0 parameterizations of the ray origin and direction for all P on the base triangle Δb. Appendix A provides more detailed mathematical definitions of these quantities.
1.4.3.1 Bounding Triangle for Smooth Beam with Sn Parameterization
In one embodiment of the non-linear beam tracing technique, for an Sn beam parameterization as computed in Equation 1, the bounding triangle could be estimated accurately by computing tangent lines to each projected edge of the scene triangle. (For example, for the C0 parameterization all projected edges are quadratic curves.) However, this strategy has several drawbacks. First, since the projection region may have an irregular shape (for example, it may not be a triangle or even a single region), handling all these cases robustly can be difficult. Second, it was found through experimentation to be too computationally expensive for vertex programs. Third, it is not general, as different n values for Sn typically require different algorithms. Instead, one embodiment of the non-linear beam tracing technique produces less tight bounding triangles than the aforementioned tangent-line method but is more general (applicable to any Sn), more robust (works for any projected shape), and faster to compute. For clarity, a procedure employed by one embodiment of the technique for computing a bounding triangle for a smooth beam, as discussed above, is described next.
More specifically, for each scene triangle Δ with vertices {Qi}i=1,2,3, nine transfer points {Pij}i,j=1,2,3 on the beam base plane Πb are computed, where Pij is the transfer of Qi in the direction of dj. A bounding triangle Δ̃ is then computed from {Pij}i,j=1,2,3. This procedure works as-is only for the C0 parameterization, but it can be extended to the general Sn (n > 1) parameterization by replacing dj with the external direction d̂j, followed by a proper ε-expansion of the bounding triangle Δ̃ as described under ExpandedBoundingTriangle( ) in Table 2. (The amount of expansion ε is zero for the C0 parameterization, and is non-zero for the Sn parameterization to ensure that Δ̃ is conservative even when O(P) ≠ P.) This procedure is termed the general case, as in Table 2. It can be shown that Δ̃ computed above is guaranteed to be a bounding triangle for the projection of the scene triangle Δ as long as its three vertices lie completely within the beam. However, the bounding triangle computed by this procedure can be quite crude; as the scene triangle gets smaller or farther away from Πb, the overdraw ratio can become astronomically large, making rendering of large scene models impractical.
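The transfer-point computation lends itself to a short sketch. The following Python fragment transfers a scene vertex Q along a direction d onto the base plane and wraps the resulting transfer points, expressed in 2-D base-plane coordinates, in a simple conservative bounding triangle; the box-then-triangle wrapping and the pad value are conveniences of this sketch and merely stand in for the tighter ExpandedBoundingTriangle( ) routine of Table 2.

import numpy as np

def transfer_point(Q, d, plane_pt, plane_n):
    # Intersection of the line Q + t*d with the plane through plane_pt
    # with normal plane_n (assumes d is not parallel to the plane).
    t = np.dot(plane_pt - Q, plane_n) / np.dot(d, plane_n)
    return Q + t*d

def bounding_triangle_2d(points2d, pad=1e-4):
    # One easy conservative choice: take the 2-D bounding box of the nine
    # transfer points and wrap it in a right triangle, padded by 'pad'
    # (the pad plays the role of the epsilon-expansion for Sn beams).
    p = np.asarray(points2d, dtype=float)
    lo, hi = p.min(axis=0) - pad, p.max(axis=0) + pad
    w, h = hi[0] - lo[0], hi[1] - lo[1]
    return np.array([lo,
                     [lo[0] + 2.0*w, lo[1]],
                     [lo[0], lo[1] + 2.0*h]])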
Another embodiment of the non-linear beam tracing technique employs a different procedure to compute the bounding triangle, as follows. The large overdraw ratio in the general-case procedure only happens when the rays are diverging. The technique addresses this issue by first computing nine transfer points {Hij}i,j=1,2,3 and the associated bounding triangle ΔE1E2E3 similarly to the general-case procedure, but instead of the base plane Πb these computations are performed on an affine plane Πa that passes through the top of the scene triangle Δ. (If multiple affine planes Πa(t) pass through the top, the one with the largest t value is chosen.) The vertices of ΔE1E2E3 on Πa are then transported to the vertices of ΔF1F2F3 on Πb so that they share identical barycentric coordinates. It can be proven that ΔF1F2F3 is guaranteed to be a proper bounding triangle as long as the scene triangle Δ is above the threshold plane ΠT. Furthermore, since Πa is much closer to the scene triangle Δ than the beam base plane Πb is, the bounding region is usually much tighter, and the overdraw ratio is thus reduced. Since this procedure is faster than the general-case procedure but works only when each space point Q ∈ Δ has at most one projection in the base triangle (due to the fact that Δ is above ΠT), it is termed the unique projection case procedure (i.e., finitesceneboundingbox).
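The transport step of the unique projection case amounts to a barycentric re-expression, as the following sketch shows (helper names are illustrative):

import numpy as np

def barycentric(P, A, B, C):
    # Barycentric coordinates of P with respect to triangle ABC,
    # assuming P lies in ABC's plane.
    n = np.cross(B - A, C - A)
    nn = np.dot(n, n)
    w1 = np.dot(np.cross(B - P, C - P), n) / nn
    w2 = np.dot(np.cross(C - P, A - P), n) / nn
    return w1, w2, 1.0 - w1 - w2

def transport_to_base(E, H1, H2, H3, P1, P2, P3):
    # A vertex E of triangle E1E2E3 on the affine plane maps to the vertex
    # F on the base plane that has identical barycentric coordinates.
    w1, w2, w3 = barycentric(E, H1, H2, H3)
    return w1*P1 + w2*P2 + w3*P3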
Since the unique projection case procedure is not guaranteed to work when the scene triangle Δ is not above the threshold plane ΠT, in that case the general-case procedure is applied. Unlike the diverging case, the region below the threshold plane is mostly converging and its volume is bounded, so the overdraw ratio is usually small. The procedure also fails to be conservative when the three vertices of the scene triangle lie on different sides of the beam boundary. When this happens, the entire beam base is used as the bounding triangle. Since each frustum side of a beam is a bilinear surface for the C0 parameterization, this frustum intersection test can be performed efficiently. (For the S3 parameterization each side is a bi-cubic surface, and analogous strategies can be applied.) Furthermore, triangles that intersect the beam boundary are relatively rare, so this brute-force solution does not significantly affect run-time performance.
1.4.3.2 Bounding Triangle for Perturbed Beam with Bump-Map Parameterization
As mentioned previously, the bounding triangle for a perturbed beam is computed by expanding the bounding triangle computed for the corresponding smooth beam. The amount of expansion is determined by the maximum perturbation angle δ of the bump map, by φ, the maximum angle formed between any two rays within the same beam, and by n, the normal of the beam base (facing the inside of the beam).
1.4.4 Beam Tracing (BT) Mesh Construction
For a static reflector/refractor, the technique builds a beam tracing mesh from the beam base triangles corresponding to a single reflector/refractor object. Various methods of mesh simplification can be used to reduce the polygon count of the original texture mesh while still preserving the overall shape of the mesh. One embodiment of the technique employs a procedural method to ensure real-time performance. For example, water waves are often simulated in games via a planar base mesh with procedural, sinusoidal-like vertex motions plus further mesh subdivision for rendering. In one embodiment of the technique, such a base mesh is used as the beam tracing mesh, and the texture map of the beam tracing results is mapped back onto the subdivided mesh. Similar strategies can be applied to other kinds of simulation techniques in games and interactive applications.
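For instance, a dynamic BT mesh of the kind just described might be generated procedurally along the following lines; the wave parameters here are purely illustrative:

import numpy as np

def water_bt_mesh(nx, ny, t, amp=0.1, k=2.0, omega=1.5):
    # A planar base grid with sinusoidal vertex motion, usable as a coarse
    # BT mesh for a dynamic water surface (illustrative parameters).
    xs, ys = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    zs = amp * np.sin(k * (xs + ys) * 2.0*np.pi - omega * t)
    return np.stack([xs, ys, zs], axis=-1)    # (ny, nx, 3) vertex grid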
1.4.5 Beam Tree Construction
As discussed previously, the beam tree is constructed on the CPU by bouncing the primary beam off of scene reflectors/refractors to spawn secondary and higher-order beams. Because these projections are non-linear, the base triangle of a secondary beam may project onto a primary beam in several disjoint components, or may fall only partially within the primary beam's base triangle.
To handle these situations, in one embodiment, the technique employs a beam construction procedure as summarized in Table 3. In general, the beam tracing method culls beam components outside the beam base triangle and clips those crossing the base triangle borders. More specifically, in one embodiment, given a primary beam B and a secondary beam base triangle ΔQ1Q2Q3 (the beam base triangle of a beam caused by reflection or refraction), the technique first computes the projections {Vij} of each Qi onto B. The projections {Vij} correspond to vertices of the components of the projection regions of ΔQ1Q2Q3. The technique groups {Vij} into disjoint components {Rk}, culls those outside the base triangle of B, and clips those crossing its boundary. The technique then handles each remaining Rk separately. In the benign case where Rk (1) has exactly three vertices and (2) is not clipped, it can be shown that (1) each Qi has a unique projection Vi corresponding to one of the vertices of Rk, and (2) Rk corresponds to the entire ΔQ1Q2Q3. Consequently, the projection direction di of Qi can be computed from Vi relative to the primary beam B. In complex cases where Rk either (1) has a number of vertices other than three or (2) is clipped, it can happen that (1) at least one Qi projects onto multiple vertices of Rk and consequently has multiple directions di,j defined, or (2) Rk corresponds to a curved subset of ΔQ1Q2Q3. In these cases, the technique represents ΔQ1Q2Q3 with multiple beams {B′} so that each of them has uniquely defined directions over the corresponding beam base vertices. The technique achieves this by triangle-stripping Rk into multiple components {Rkm}, so that each Q ∈ ΔQ1Q2Q3 has at most one projection within each Rkm, and handles each Rkm via the benign-case procedure.
Since the eye beam is single-perspective, all secondary beams (first-level reflections and refractions) belong to the benign case; the more complicated cases can arise only for third- or higher-level beams. Additionally, because the technique clips the disjoint components Rk against the boundary of the base triangle Δb, it guarantees consistency across multiple primary beams via the vertices created through clipping, e.g., when ΔQ1Q2Q3 projects onto the bases of two adjacent primary beams. Finally, since each Rkm corresponds to a subset of ΔQ1Q2Q3, this beam splitting process does not require re-building of beam tracing meshes or the associated textures, as each new beam can be rendered into a subset of the original texture atlases. Since beam tracing meshes are coarser than the real reflector/refractor geometry, the technique can trace beam shapes entirely on a CPU.
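In the benign case, the secondary beam directions can be computed along the following lines, assuming a mirror reflector; primary_ray_dir stands in for the primary beam's ray parameterization and is an assumption of this sketch:

import numpy as np

def reflect(d, n):
    # Mirror direction d about unit normal n.
    return d - 2.0*np.dot(d, n)*n

def benign_secondary_dirs(V, normals, primary_ray_dir):
    # V[i]: the unique projection Vi of secondary base vertex Qi on the
    # primary beam base; normals[i]: reflector normal at Qi;
    # primary_ray_dir(P): primary ray direction at P (assumed callable).
    dirs = []
    for Vi, ni in zip(V, normals):
        d_in = primary_ray_dir(Vi)
        d_in = d_in / np.linalg.norm(d_in)
        n = ni / np.linalg.norm(ni)
        dirs.append(reflect(d_in, n))
    return dirs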
1.4.6 Texture Storage for Multiple Bounces
For single-bounce beam tracing, the technique simply records the rendering results in a texture map parameterized by a beam tracing mesh. However, for multi-bounce situations (e.g., multiple mirrors reflecting each other), one texture map might not suffice, as each beam base triangle might be visible from multiple beams. In the worst-case scenario, where M beams can all see each other, the number of different textures can be O(M^n) for n-bounce beam tracing. Fortunately, it can easily be proven that the maximum amount of texture storage is O(Mn) if the beam tree is rendered in depth-first order. In one embodiment, the technique simply keeps n copies of the texture for each beam for n-bounce beam tracing.
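The depth-first storage argument can be made concrete with a small sketch: siblings at the same tree depth reuse the same texture slot, so n-bounce tracing needs at most n slots per beam (actual rendering is elided; the names are illustrative):

from dataclasses import dataclass, field

@dataclass
class Beam:
    id: int
    children: list = field(default_factory=list)

def render_depth_first(beam, depth, slots, n):
    # Children are finished before the parent consumes their results, so
    # only one slot per (beam, depth) pair is ever live: O(M*n) storage.
    if depth >= n:
        return
    for child in beam.children:
        render_depth_first(child, depth + 1, slots, n)
    slots[(beam.id, depth)] = "texture"   # placeholder for the rendered map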
1.4.7 Acceleration
Given M scene triangles and N beams, the technique would require O(MN) time for geometry processing. Fortunately, this theoretical bound can be reduced in practice by view frustum culling and level-of-detail (LOD) control, as discussed below.
1.4.7.1 Frustum Culling
The viewing frustum is the field of view of the notional camera. The exact shape of this region varies depending on what kind of camera lens is being simulated, but typically it is a frustum of a rectangular pyramid. The planes that cut the frustum perpendicular to the viewing direction are called the near plane and the far plane; objects closer to the camera than the near plane or beyond the far plane are not drawn. View frustum culling is the process of removing objects that lie completely outside the viewing frustum from the rendering process. For a linear view frustum, culling can be performed efficiently since each frustum side is a plane. However, in one embodiment of the technique, each beam has a non-linear view frustum in which each frustum side is a non-planar surface. Furthermore, for beam bases lying on a concave portion of the reflector, the three boundary surfaces might intersect each other, making the sidedness test of traditional culling incorrect. To resolve these issues, one embodiment of the non-linear beam tracing technique adopts a simple heuristic via a cone that completely contains the original beam viewing frustum. In one implementation of the technique, computation of the bounding cones and culling operations are performed per object on a CPU as well as per triangle on a GPU. Portions of objects outside the bounding cone are culled. Details of how this culling is performed and the associated mathematical computations are provided in Appendix A.5.
1.4.7.2 Geometry Level of Detail (LOD) Control
In computer graphics, accounting for level of detail involves decreasing the complexity of a 3D object representation as it moves away from the viewer, or according to other metrics such as object importance, eye-space speed, or position. Geometry level of detail (LOD) is naturally achieved by the non-linear beam tracing technique. In one embodiment, the technique estimates the proper LOD for each object when rendering a particular beam. This not only reduces aliasing artifacts but also accelerates rendering. Geometry LOD is more difficult to achieve for ray tracing, as it requires storing additional LOD geometry in the scene database. In one current implementation, the technique uses a simple heuristic: it projects the bounding box of an object onto a beam and utilizes the rasterized projection area to estimate the proper LOD.
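One plausible reading of this heuristic in code is shown below; the mapping of one LOD level per 4x drop in projected area is an assumption of this sketch, not a limitation of the technique:

import math

def estimate_lod(projected_area_px, base_area_px, num_levels):
    # Pick a geometry LOD from the rasterized projection area of an
    # object's bounding box on the beam base: one level coarser for each
    # 4x reduction in area (halved linear resolution).
    if projected_area_px <= 0:
        return num_levels - 1                 # off-screen or tiny: coarsest
    lod = 0.5 * math.log2(base_area_px / projected_area_px)
    return max(0, min(num_levels - 1, int(lod)))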
1.4.8 Anti-Aliasing
Anti-aliasing is the technique of minimizing the distortion artifacts, known as aliasing, that arise when representing a high-resolution signal at a lower resolution. Anti-aliasing can be performed efficiently by the GPU in every major step of the non-linear beam tracing technique. In addition to the geometry LOD control mentioned above, the technique also performs full-screen anti-aliasing for each rendered beam. The computed beam results are stored in textures, which in turn are anti-aliased via standard texture filtering.
2.0 The Computing Environment
The non-linear beam tracing technique is designed to operate in a computing environment. The following description is intended to provide a brief, general description of a suitable computing environment in which the non-linear beam tracing technique can be implemented. The technique is operational with numerous general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable include, but are not limited to, personal computers, server computers, hand-held or laptop devices (for example, media players, notebook computers, cellular phones, personal data assistants, voice recorders), multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Device 600 has a display 618, and may also contain communications connection(s) 612 that allow the device to communicate with other devices. Communications connection(s) 612 is an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal, thereby changing the configuration or state of the receiving device of the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. The term computer readable media as used herein includes both storage media and communication media.
Device 600 may have various input device(s) 614 such as a keyboard, mouse, pen, camera, touch input device, and so on. Output device(s) 616 such as speakers, a printer, and so on may also be included. All of these devices are well known in the art and need not be discussed at length here.
The non-linear beam tracing technique may be described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, and so on, that perform particular tasks or implement particular abstract data types. The non-linear beam tracing technique may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should also be noted that any or all of the aforementioned alternate embodiments described herein may be used in any combination desired to form additional hybrid embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. For example, one embodiment of the non-linear beam tracing technique can create higher accuracy models of reflections and refractions by using finer beams at the expense of more computations. Additionally, some of the processing that occurs on the CPU can be performed on the GPU and vice versa. The specific features and acts described above are disclosed as example forms of implementing the claims.
A.0 S3 Parameterization
The formula for S3 is a cubic (n = 3) instance of the Sn parameterization of Equation 1, with L chosen so that the ray origins and directions interpolate {P1, P2, P3} and {d1, d2, d3} with C1 continuity at the base triangle vertices.
It can be easily shown that this formulation satisfies Equations 1 and 2.
A.1 External Directions {d̂1, d̂2, d̂3}
Given a beam B with base triangle ΔP1P2P3, its external directions are a set of ray directions {d̂1, d̂2, d̂3} emanating from {P1, P2, P3} such that for each P within ΔP1P2P3 there exists a set of real numbers ω̂1, ω̂2, ω̂3, with 0 ≤ ω̂1, ω̂2, ω̂3 and ω̂1 + ω̂2 + ω̂3 = 1, that can express d(P) as follows:

\vec{d}(P) = \hat{\omega}_1 \hat{d}_1 + \hat{\omega}_2 \hat{d}_2 + \hat{\omega}_3 \hat{d}_3
External directions exist as long as all the beam directions point to the same side of the beam base plane. Note that for the C0 parameterization one can take d̂i = di, but this is not true for the general Sn parameterization or for bump-mapped surfaces. External directions can be computed analytically for the Sn parameterization, and from the maximum perturbation angle δ for the bump-map parameterization.
A.2 Affine Plane Πa
Given a beam B with base triangle ΔP1P2P3 and associated rays {d1, d2, d3}, the affine triangle Δa(t) is defined as the triangle ΔH1H2H3, where Hi = Pi + t di for a real number t. Note that an affine triangle is uniquely defined by the value t. The affine plane Πa(t) is defined as the plane that contains the affine triangle Δa(t). Note that the base plane of B is a special case of an affine plane, with t = 0. Affine planes have the nice property that if H ∈ Πa has the same set of barycentric coordinates as P ∈ Πb, then P is a projection of H on Πb.
A.3 Threshold Plane ΠT
The threshold plane ΠT of a beam B is defined as an affine plane Πa(tres) with a particular value tres such that any space point Q above ΠT is guaranteed to have at most one projection within the beam base triangle ΔP1P2P3. Intuitively, the beam frustum above the threshold plane ΠT has diverging ray directions. The threshold plane for the Sn parameterization (Equation 1) is defined as follows. Let P_{i,t} = Pi + t di, i = 1, 2, 3, and define the following three quadratic functions:

f_1(t) = \left( (\vec{P}_{2,t} - \vec{P}_{1,t}) \times (\vec{P}_{3,t} - \vec{P}_{1,t}) \right) \cdot \vec{d_1}
f_2(t) = \left( (\vec{P}_{3,t} - \vec{P}_{2,t}) \times (\vec{P}_{1,t} - \vec{P}_{2,t}) \right) \cdot \vec{d_2}
f_3(t) = \left( (\vec{P}_{1,t} - \vec{P}_{3,t}) \times (\vec{P}_{2,t} - \vec{P}_{3,t}) \right) \cdot \vec{d_3}

Solving the three equations f1(t) = 0, f2(t) = 0, f3(t) = 0 separately yields a set ℑ of (at most 6) real roots. Denote tmax = max(0, ℑ). The threshold plane ΠT is then defined as the affine plane Πa(tres) such that min(||Pres − Pmax||) > μb + tmax μd over all Pres ∈ Δa(tres) and Pmax ∈ Δa(tmax), where μb and μd are the maximum offsets defined below. Note that Πa(tres) always exists as long as tres is large enough.
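Because each fi(t) is quadratic in t, its coefficients can be recovered exactly from three samples and its roots found in closed form. The following Python sketch does so numerically (illustrative names):

import numpy as np

def f(i, t, P, d):
    # f_i(t): triple product of the affine-triangle edges at parameter t
    # with ray direction d_i (indices cyclic, matching the text above).
    Pt = [P[k] + t*d[k] for k in range(3)]
    j, k = (i + 1) % 3, (i + 2) % 3
    return float(np.dot(np.cross(Pt[j] - Pt[i], Pt[k] - Pt[i]), d[i]))

def t_max(P, d, eps=1e-9):
    # Fit each quadratic f_i from samples at t = 0, 1, 2 (exact, since
    # f_i has degree 2), collect real roots, and return max(0, roots).
    roots = [0.0]
    for i in range(3):
        ys = [f(i, t, P, d) for t in (0.0, 1.0, 2.0)]
        coeffs = np.polyfit([0.0, 1.0, 2.0], ys, 2)
        for r in np.roots(coeffs):
            if abs(r.imag) < eps:
                roots.append(float(r.real))
    return max(roots)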
A.4 Maximum Offsets μb and μd
One defines μb and μd as the maximum offsets between the Sn and C0 parameterizations of the ray origin and direction for all P on the base triangle Δb:

\mu_b = \max_{\vec{P} \in \Delta_b} \left\| \vec{O}_P - \vec{P} \right\|, \qquad \mu_d = \max_{\vec{P} \in \Delta_b} \left\| \vec{d}_P - \sum_{i=1}^{3} \omega_i \vec{d_i} \right\|

where OP and dP are the ray origin and direction computed according to the Sn parameterization, P and Σi ωi di are the same quantities computed via the C0 parameterization, and {ωi}i=1,2,3 are the barycentric coordinates of P with respect to Δb. Since the Sn parameterizations are simply polynomials, μb and μd can be calculated by standard optimization techniques.
A.5 Bounding Cone for Frustum Culling
The cone center PC, direction dC, and angle θC are computed from the set of external directions {d̂1, d̂2, d̂3} and the barycenter PG of the beam base ΔP1P2P3. It can easily be proven that the resulting cone properly contains every point Q inside the original view frustum, and that it works for both the Sn and bump-map parameterizations. Once the bounding cone is computed, it can be used for frustum culling as follows. For each scene object, a bounding sphere is computed with center O and radius R, and the following steps are used to judge whether the sphere and cone intersect. First, one checks whether PC is inside the sphere; if so, they intersect. If not, let

\theta_O = \arcsin\left( \frac{R}{\| \vec{O} - \vec{P}_C \|} \right)

which is the critical angle formed between O − PC and the cone boundary when the sphere and the cone boundary barely touch each other. It then follows that the sphere and the cone intersect each other if and only if the angle between dC and O − PC is no greater than θC + θO.
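The sphere-versus-cone test just described is straightforward to implement; a minimal Python sketch, assuming dC has unit length:

import numpy as np

def sphere_cone_intersect(O, R, Pc, dc, theta_c):
    # Conservative sphere-vs-bounding-cone test used for frustum culling:
    # intersect if the cone apex lies inside the sphere, or if the angle
    # between O - Pc and the cone axis is within theta_c plus the critical
    # angle arcsin(R / |O - Pc|).
    v = O - Pc
    dist = np.linalg.norm(v)
    if dist <= R:
        return True                           # apex inside the sphere
    theta_o = np.arcsin(R / dist)             # critical angle
    cos_angle = np.dot(v, dc) / dist          # dc assumed unit length
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return angle <= theta_c + theta_o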