This disclosure pertains to computer graphics. More particularly, embodiments relate to bowtie rasterization techniques for image rasterization. Even more particularly, embodiments relate to bowtie rasterization techniques for rendering three-dimensional (“3D”) light-field display (LfD) radiance images.
The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the environment and, as a result, reduces the cognitive load that accompanies analysis of, and collaboration on, complex 3D tasks. A light field display (LfD) provides a 3D image with the color and depth cues expected by the human visual system to create a synthetically generated 3D visual experience that is more natural to the observer than that provided by other types of 3D displays.
In a synthetically generated light field radiance image, a pixel can represent a light ray originating on either side of the LfD image plane, which effectively doubles the projection depth of an LfD since a 3D aerial image can be rendered on either side of the image plane. While many hogel views (micro-images) are rendered to create one radiance image per update of the light-field display, each viewer perceives, for example, a perspective correct 3D aerial image with depth cues expected by the human visual system.
Light-field radiance image rendering is an example of extreme multi-view rendering where a scene is rendered from many (e.g., thousands to millions of) viewpoints per update/refresh of the display. While a graphics processing unit (GPU) can be used to compute a light-field radiance image, conventional GPUs and their accompanying APIs (e.g., OpenGL, DirectX, Vulkan) are generally designed to render a scene from one point of view to a single large viewport/framebuffer. That is, the typical GPU raster pipeline expects to render a scene from a single viewpoint per dispatch of the scene geometry. Therefore, the burden of radiance image rendering falls to the host application, which must understand the exact nature of the display's projection system and render all the appropriate views sequentially. For every view rendered, the host application sets the camera view matrix, the viewport to render to, and redispatches the scene's render commands. The update rate of the display and thus the power required to render animated content is a function of scene complexity (e.g., number of triangles, textures, lights, state changes, etc.) and the number of scene dispatches (renders/views). Complex scenes require exceedingly long computation times during which the light-field display is unresponsive to users.
A number of techniques have been developed to rasterize light field radiance images, including double frustum rendering as described in Michael W. Halle, Adam B. Kropp. 1997, “Fast Computer Graphics Rendering for Full Parallax Spatial Displays,” Proc. Soc. Photo-Opt. Instrum. Eng. (SPIE), 3011:105-112 (Feb. 10-11, 1997) and oblique slice and dice full-parallax, light-field radiance image rendering as described in Thomas Burnett, “Light-Field Displays and Extreme Multiview Rendering,” Information Display, Vol. 6, 2017 (pages 6-13, 32).
The major difference between these rasterization approaches is the order in which they decompose a 4D light-field (two dimensions of position, two dimensions of direction) into 2D rendering passes. The double frustum technique renders hogel micro-images using two independent frustums for each hogel, a back perspective frustum and a front perspective frustum. The oblique slice and dice technique renders directions using sheared orthographic projections; after which, every oblique pixel must be transformed/swizzled and/or sampled into hogel micro-images.
The double frustum technique has at least one notable drawback, at least when implemented in OpenGL with the traditional perspective camera matrix. In OpenGL, the perspective camera matrix cannot be defined with the near plane on or behind the virtual camera origin. Rather, the near plane (that is, the clipping plane nearest to the virtual camera) is defined at a positive offset, with the expectation that the viewport is mapped to the near plane. If the front and back frustum definitions share the same origin, then there exists a small region between the near planes of the two frusta that is not seen by either camera: the Epsilon region. Portions of triangles that pass through the Epsilon region are not rendered, resulting in un-rasterized portions of the image, which leads to visible artifacts when the image is projected by the display. One solution is to back offset the virtual cameras so that the near planes are coplanar and keep the near plane offset small. While this reduces hogel corruption, it does not eliminate it.
The oblique slice and dice technique uses an orthographic camera projection with a shear applied to render all the pixels for a particular view direction during each pass of the geometry. The shear matrix is adjusted for each projection ‘direction’ the light field display can produce. The oblique slice and dice technique algorithm does not generate the radiance image directly and requires a pixel transform where every pixel is moved from oblique render space into the radiance image.
Moreover, both the double frustum technique and the oblique slice and dice technique are view-major rendering techniques. That is, according to these techniques all of the triangles for a view of a scene (i.e., all of the triangles for a particular hogel) are rendered before proceeding to the next view/hogel.
For an LfD display, view-major rendering techniques set the viewport for a particular view and then render all the triangles from all the scene objects onto that viewport. If there are many objects in a scene (as there often are) with unique vertex lists, textures, materials, and so forth, then there is the potential that the same object data is being constantly swapped in and out of the processor cache, possibly on a per-view or per-hogel basis.
Embodiments of the present disclosure can include systems, methods and computer program products for processing three-dimensional (3D) graphics data. More particularly, embodiments include using bowtie (or pinhole) frustums. One embodiment comprises receiving 3D geometry data for a 3D scene to be rendered to a display, which may comprise an array of hogels. The 3D geometry data can define a set of shapes (objects, constituent polygons of objects or other shapes). A shape can be defined in a model space. Further, embodiments can include reducing downstream processing of the 3D geometry data to render the shape to the display's radiance image by identifying a subset of hogels in a hogel plane that have hogel bowtie frustums that intersect the shape. In one embodiment, identifying the subset of hogels comprises reverse casting a bowtie frustum from the shape onto the hogel plane. In another embodiment, identifying the subset of hogels comprises performing a binary search or other search for hogels.
One aspect of the present disclosure includes using an invertible pinhole projection as a hogel camera frustum or a reverse cast frustum.
According to yet another aspect of the present disclosure, a hogel plane definition (HPD) is provided that is a 2D array of bowtie frustums. As the HPD in such embodiments is a 2D array of bowtie frustums, knowledge gained by triangle vertex clipping/culling operations can reduce the number of intersection tests globally performed on the radiance image definition and accelerate rendering. Valid triangle-bowtie frustum intersection tests can be determined via a binary search or reverse casting the bowtie frustum definition from a model's bounding volume minimum and maximum extents or a triangle's vertices onto the radiance image hogel plane defined in or transformed into model space. The resulting intersections define a subset of hogels whose frustums may intersect that triangle.
Another aspect of the present disclosure includes object culling. In one embodiment, the shape may be an object defined in the model space, the object having a bounding volume with extents. Bowtie frustums can be reverse cast from the bounding extents onto the hogel plane to identify the subset of hogels that have hogel frustums that intersect the object. The object culling may be performed, in some embodiments, in a hogel plane definition space.
Another aspect of the present disclosure includes triangle culling. In one embodiment, the shape comprises a triangle defined in the model space. Triangle culling can include reverse casting bowtie frustums from the vertices of the triangle onto the hogel plane to identify a subset of hogels that have hogel frustums that intersect the triangle. The triangle culling is performed, in some embodiments, in a hogel plane definition space.
Yet another aspect of the present disclosure includes performing triangle clipping in model space. A hogel plane is transformed to model space. A clipping operation can include testing the hogel camera frustums of a subset of hogels for intersection with the triangle.
Another aspect of the present disclosure includes performing a triangle clipping operation that returns a direction in which the triangle lies from a hogel's frustum. The direction can be used to search (for example, by binary search) a 2D array of hogel frustums for an intersection with the triangle. According to one embodiment, during the 2D array search of hogel frustums, if the plane equations are normalized, then the plane equation dot product returns the distance to the point, which can be used to index the image plane frustum definition and shorten the search.
According to one embodiment, when triangle vertices are clipped against a hogel frustum, the triangle clipper labels the culled vertices with the distance and direction of those vertices from the hogel in model space. The resulting vector then can be used to index the hogel plane definition (HPD) to identify an intersecting hogel frustum (if any).
According to another aspect of the present disclosure, when all the bowtie frustums lie on a plane (that is, all the bowtie frustums have centers on the same plane and have the same normals), only one set of bowtie frustum planes needs to be defined and transformed into model space. Specific bowtie frustum planes can then be derived by a series of scaled additions, saving memory and multiplications. Lastly, during the triangle clipping/culling operation, the clip/cull direction can be used to reduce the number of hogel frustum/triangle intersection tests within the global 2D radiance image definition.
In some embodiments, hogels lie on a plane and have the same orientation; for example, all the bowtie frustums have centers on the same plane and have the same normals. In such embodiments, one set of hogel bowtie frustum planes can be defined and transformed to model space, and the frustum planes for the other hogels can be derived by a series of scaled additions, saving memory and multiplications. For example, one embodiment includes transforming a hogel bowtie frustum for one hogel to the model space and then determining the hogel bowtie frustums for the other hogels in the model space through scaled additions.
Yet another aspect of the present disclosure includes performing rendering operations in a triangle-major manner.
Moreover, during the triangle clipping/culling operation, the clip/cull direction can be used to reduce the number of hogel frustum/triangle intersections tested (e.g., within the global 2D radiance image definition).
The subject matter of the present application may be better understood, and the numerous objects, features, and advantages made apparent to those skilled in the art, by referencing the accompanying drawings.
The invention and the various features and advantageous details thereof are explained more fully with reference to the nonlimiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known starting materials, processing techniques, components and equipment are omitted so as not to unnecessarily obscure the invention in detail. It should be understood, however, that the detailed description and the specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
As will be appreciated, the term “hogel” refers to a holographic element. In this description, “hogel” is generally used to refer to a micro-lens and may refer to an actual display hogel (that is, the physical hogel of the display) or a virtual hogel (that is, the virtual representation of a hogel used when processing 3D-data). In some cases, the term hogel may also refer to the accompanying micro-image, as will be clear by context.
Embodiments described herein relate to light-field rasterization, which is the process by which a 3D scene is rendered into a light-field radiance image. As will be appreciated, a light field can be described as a set of rays that pass through a volume of space and is typically represented in computer vision as a plenoptic function. A light-field radiance image is a raster description of a light field. In some embodiments, each pixel in the light-field radiance image represents a unique ray within the volume of space. The light-field radiance image can be projected through an array of hogel micro-lenses to reconstruct a 3D image, such as a perspective-correct 3D aerial image visible to all viewers within the display's projection frustum.
A light-field display (“LfD”) typically comprises an array of hogel micro-lenses, photonics, such as an array of spatial light modulators (SLMs) or other photonics, and a computation system to compute light field radiance images. The photonics and optics form an array of micro-projectors. To project a light-field radiance image, each micro-projector projects a viewpoint-specific image (a “micro-image” or “hogel image”) through a corresponding hogel micro-lens that angularly distributes the light. According to one embodiment, each micro-image represents all of the perspective rays that pass through a corresponding point on the light-field image plane of the LfD. For example, each micro-image can represent the position, direction, intensity, and color of light rays that pass through the point as described, for example, by a plenoptic function.
As mentioned, light-field radiance image rasterization is an example of multi-view rendering. A 3D scene may be rendered, for example, from many points of view per update of the light-field display. U.S. Pat. No. 10,573,056, entitled “Multi-View Processing Unit Systems and Methods,” issued Feb. 25, 2020, to Burnett et al., which is hereby fully incorporated by reference herein, describes rendering techniques that apply a double-sided frustum (referred to as a “bowtie” frustum). Unlike the double frustum techniques that require two virtual cameras per hogel to project independent front and back frustums, bowtie frustum techniques can project a double-sided frustum with a front portion (e.g., a portion to the front of the virtual camera) and a rear portion (e.g., a portion to the rear of the virtual camera) as part of the same frustum. Some embodiments of bowtie frustum rendering techniques can thus render multiple sides of a hogel image in a single pass of the geometry using a single bowtie frustum and without having to employ conjugate cameras for a single position. U.S. Pat. No. 10,573,056 further describes embodiments of various multi-view display systems and multi-view processing units (MvPUs) that can, at a high level, implement parallel render pipelines to render the multiple views of a scene in parallel.
The present disclosure provides improved rendering techniques using a bowtie frustum, pipelines that implement bowtie frustum rendering techniques, and radiance image renderers using bowtie frustum techniques. Such techniques may be implemented in a variety of graphics pipelines and multi-view devices, including but not limited to, multi-view display systems and MvPUs that implement parallel rendering pipelines to render multiple views of a scene in parallel.
Radiance image renderers that apply bowtie frustum techniques (referred to herein as “bowtie renderers”) can take advantage of performance optimizations inherent in extreme multi-view, full-parallax, synthetic radiance image rendering. Some embodiments of bowtie renderers according to the present disclosure use a single definition bowtie/pinhole projection matrix and invert the triangle/view rendering priority to provide triangle-major rendering.
In addition, or in the alternative, embodiments of bowtie renderers of the present disclosure can implement more efficient culling and clipping. In general, 3D graphics pipelines perform object culling in world space and triangle clipping in a unity clipping space. According to one aspect of the present disclosure, a bowtie renderer can perform one or more of the following: object culling in a hogel plane definition (HPD) space, triangle culling in the HPD space, or triangle clipping in model space.
At a high-level, the geometry data of 3D model 102 is processed to transform the geometry data of 3D model 102 into a light-field radiance image. For each hogel, a hogel frustum is projected in front of and to the back of the image plane at that hogel. The geometry that falls within that frustum is rasterized to generate a micro-image for the hogel that considers the hogel's viewpoint. With bowtie rendering, a virtual camera can project a single frustum that extends to the front of and behind the image plane to process geometry on either side of the image plane in a single pass of the geometry.
A micro-image can be generated for each hogel. Turning briefly to
Returning to
Bowtie frustum 106 includes a front portion 108 to a front side of the virtual camera position and a back portion 110 behind the virtual camera position. Since the virtual camera position in
Bowtie frustum 106 can be used to determine, for example, which geometry (e.g., which triangles) to rasterize for virtual hogel 105 when generating a micro-image to be projected by the corresponding actual hogel of the display system.
In the illustrated embodiment, the bowtie frustum is defined with a positive far plane in front, and a negative near plane behind. However, the bowtie frustum can also be defined without either the near plane 112 or the far plane 114. In other words, the hogel bowtie frustum can be defined as four side planes, where the side plane edges all pass through the virtual camera point. For example, frustum 106 passes through the center point of virtual hogel 105 and extends infinitely fore and aft. The side planes define an infinite fore and aft bowtie frustum. In such an embodiment, the bowtie frustum (e.g., bowtie frustum 106 without front plane 112 and rear plane 114) can be thought of as essentially an invertible pinhole projection, bisected by a hogel plane 100.
As discussed below, bowtie frustums can be used in object culling, triangle culling, triangle clipping, and other operations. Since the bowtie frustum side planes (e.g., planes 116, 118, 120, 122) have different normals above and below the hogel plane/image plane, the plane equations used for triangle culling/clipping operations can be different for the front and back halves of the bowtie frustum 106. This can be accounted for in code using two sets of plane equations: a set for the front portion 108 and a set for the back portion 110 of the bowtie frustum.
The hogel plane 100 is itself a plane having a plane equation that can be used to determine whether clipping should occur by use of the front, back or both sets of clipping planes. In some embodiments, this test is performed once per triangle (per object) per render cycle and the result cached for subsequent bowtie triangle/frustum clipping operations.
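By way of a non-limiting sketch, the two plane sets and the per-triangle hogel plane test might be organized as follows in C++. The Vec3, Plane, BowtieFrustum, and classifyTriangle names are illustrative only and are not drawn from any particular implementation; a plane is represented here as a normal n and offset d, with points p on the plane satisfying dot(n, p) + d = 0.

    #include <array>

    // Minimal vector/plane types assumed for illustration.
    struct Vec3 { float x, y, z; };
    struct Plane { Vec3 n; float d; };  // points p with dot(n, p) + d = 0

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // A bowtie frustum carries two sets of side-plane equations because the
    // side-plane normals flip across the hogel center.
    struct BowtieFrustum {
        std::array<Plane, 4> frontPlanes;  // for the portion in front of the hogel plane
        std::array<Plane, 4> backPlanes;   // for the portion behind the hogel plane
    };

    enum class HogelPlaneSide { Front, Back, Both };

    // Classify a triangle against the hogel plane (once per triangle per render
    // cycle); the cached result selects the front, back, or both plane sets for
    // subsequent clip/cull operations.
    HogelPlaneSide classifyTriangle(const Plane& hogelPlane,
                                    const Vec3& v0, const Vec3& v1, const Vec3& v2) {
        float d0 = dot(hogelPlane.n, v0) + hogelPlane.d;
        float d1 = dot(hogelPlane.n, v1) + hogelPlane.d;
        float d2 = dot(hogelPlane.n, v2) + hogelPlane.d;
        if (d0 >= 0.0f && d1 >= 0.0f && d2 >= 0.0f) return HogelPlaneSide::Front;
        if (d0 <= 0.0f && d1 <= 0.0f && d2 <= 0.0f) return HogelPlaneSide::Back;
        return HogelPlaneSide::Both;  // triangle straddles the hogel plane
    }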
Before proceeding, some additional context may be helpful. Graphics engines commonly utilize multiple coordinate vector spaces. The original x,y,z coordinates defining a 3D model are commonly defined relative to a coordinate vector space for that model, referred to as model space (also known as object space). The position of each triangle vertex of a 3D model, for example, may be expressed in model space relative to a standard right-handed 3D coordinate system for that model.
A 3D scene, however, can comprise a collection of 3D models, with each 3D model having its own model space. To represent the 3D models relative to each other in a scene, the vertices of each model are transformed (moved, rotated and/or scaled) by applying a model transform (e.g., expressed as a model matrix) into a common space, referred to as world space. When models have been transformed into world space, their vertices are expressed relative to the world space coordinate system.
The graphics pipeline can further apply a view transform (e.g., expressed as a view matrix) to transform vertices to a view space, also referred to as “camera space” or “eye space.” The view space simulates rendering onto a virtual camera that is arbitrarily oriented in world space. A different view transform can be applied for each hogel based on the viewpoint of the hogel. Further, a projection transform, such as an orthographic or perspective projection transform, can be applied to transform vertices into homogeneous coordinates.
According to some embodiments, a 2D array of hogels/hogel cameras is modeled using a hogel plane definition (HPD). The transform of the virtual hogels/hogel cameras represented by the HPD to world space can be expressed as the matrix product of a view volume transform (VVT) and the HPD (VVT*HPD). A viewpoint-specific transform that expresses the relationship between a particular viewpoint and the view volume can be applied to transform coordinates from world space to a viewpoint-specific eye space.
Before discussing embodiments of the HPD and the VVT further, it can be noted that, for purposes of explanation, this disclosure describes various embodiments using a standard, right-handed, ‘Y-UP’ coordinate system that is common in many OpenGL applications. For example,
According to one embodiment, each virtual hogel can be defined in a 3D space by a camera matrix. Turning to
A hogel plane can be defined on the x-z plane, normalized, and centered. According to one aspect of the present disclosure, a hogel plane can be defined by a hogel plane definition (HPD) (also referred to as a radiance image definition).
According to one aspect of the present disclosure, HPD 800 comprises a 2D array of hogel camera matrices with accompanying centers in the hogel plane definition. Further, HPD 800 defines the hogel frustum planes, or includes information from which the hogel frustum planes can be derived, for each hogel in the hogel plane. As such, HPD 800 can also be considered a 2D array of hogel bowtie frustums.
In the example of
Further, HPD 800 implicitly or explicitly defines the frustum planes (e.g., bowtie frustum planes) for each hogel 0,0 through 1,1. In one embodiment, HPD 800 includes an explicit definition of the bowtie frustum for each hogel represented in the HPD. However, since all the hogels represented in HPD 800 lie in a plane and have the same orientation, HPD 800 can instead include the definition of a single set of frustum planes associated with a point on the hogel plane (for example, frustum planes having an origin at the center coordinate for hogel 1,1 (or another hogel), or frustum planes having an origin at the origin 802 of the normalized hogel plane). The frustum planes for the (other) hogels can be calculated from the one set of frustum planes as needed. One embodiment of determining frustums from a known frustum is described in conjunction with
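By way of a non-limiting illustration only, an HPD might be laid out as follows; the field names are hypothetical, Mat4 is assumed to be a column-major 4×4 matrix per the OpenGL convention, and Vec3/Plane are the minimal types from the earlier sketch:

    #include <vector>

    struct Mat4 { float m[16]; };  // column-major 4x4, OpenGL convention

    struct HogelPlaneDefinition {
        int   hogelsX = 0;                  // hogel count along x
        int   hogelsZ = 0;                  // hogel count along z
        float hogelDiameter = 0.0f;         // hogel size (may be inferred if uniform)
        float hogelPitch = 0.0f;            // center-to-center spacing
        std::vector<Mat4> cameraMatrices;   // one per hogel, indexed z * hogelsX + x
        std::vector<Vec3> centers;          // hogel centers on the normalized plane
        Plane sharedFrustumPlanes[4];       // one definition; per-hogel planes derived by shifting
    };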
According to one embodiment, the frustum planes for a hogel can be determined using the camera matrix for the hogel. Referring back briefly to
In some embodiments, the size of each hogel (e.g., diameter) and pitch (e.g., spacing between the hogels, such as gap 812 between hogel 804 and hogel 806) are encoded in the HPD. In embodiments in which the hogels all have the same diameter and the spacing between them is consistent, the size of the hogels may be inferred.
While only twelve hogels are represented by HPD 800 in
A view volume transform (VVT) expresses the relationship between a view volume about the optical center of the display and world space. According to one embodiment, a VVT is a 4×4 transform matrix that defines a 3D cuboid volume in world space to be rendered. In other words, the VVT defines the 3D cuboid volume within a scene that a volumetric, light-field, or holographic display projects. Multiplying the hogel camera matrices defined within the HPD by the VVT transforms the hogel cameras into world space.
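As a sketch only (reusing the hypothetical Mat4 and HogelPlaneDefinition types above), transforming the hogel cameras into world space left-multiplies each hogel camera matrix by the VVT:

    // Standard column-major 4x4 matrix product: r = a * b.
    Mat4 multiply(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row)
                for (int k = 0; k < 4; ++k)
                    r.m[col * 4 + row] += a.m[k * 4 + row] * b.m[col * 4 + k];
        return r;
    }

    void transformHogelsToWorld(HogelPlaneDefinition& hpd, const Mat4& vvt) {
        for (Mat4& hogelCamera : hpd.cameraMatrices)
            hogelCamera = multiply(vvt, hogelCamera);  // world-space camera = VVT * hogel camera
    }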
Embodiments can use the HPD in various operations, such as, but not limited to, object and triangle culling. The culling operations are performed with the hogel plane and the object extents or triangle in the same space. For example, the object extents or triangle vertices can be mapped to HPD space. As another non-limiting example, the HPD can be mapped to a model space to perform culling.
Some embodiments of bowtie renderers use bowtie frustums for object culling. In prior graphics pipelines, object culling is typically done by comparing an object's bounding volume for intersection with a camera frustum. However, the HPD may define a large number of virtual cameras and frustums (e.g., tens of thousands of hogel cameras/frustums). Object culling by testing the object's bounding volume against a large number of camera frustums can be a computationally expensive task. Object culling may proceed in a manner similar to triangle culling discussed below, but the frustums are reverse cast from the bounding volume extents.
According to one embodiment, object culling is performed in an HPD space using reverse casting of bowtie frustums. It is more efficient to transform an object's maximum and minimum bounding volume extents into the HPD space and then reverse cast the frustum edges from the transformed extents onto the normalized hogel image plane. The resulting intersections encompass the subset of hogel frustums that intersect the object's bounding volume. Limiting the processing of objects within that narrower subset of hogels can speed up rendering significantly.
According to another embodiment, the HPD can be transformed into an object's model space, preserving the relative position and orientation of the model. The transformed radiance image bowtie frustum can then be shifted to the object's bounding volume minimum and maximum extents. The resulting plane equations define a series of edges. The outermost edges can be checked for intersection with the hogels of the transformed radiance image hogel plane. The resulting intersections encompass the subset of hogel frustums that intersect the object's bounding volume. Limiting the processing of objects to within that narrower subset of hogels can speed up rendering.
In any case, the minimum and maximum indices calculated by the reverse frustum cast can be used to define the indexable extent within the hogel plane definition (a 2D subset of hogels) for further processing the object's geometry.
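The following is a hedged sketch of such a reverse cast, assuming the hogel plane lies at y = 0 in HPD space, a uniform hogel pitch, and a bowtie frustum characterized by the tangent of its half-angle (here called 'slope'); all names are illustrative, and Vec3 is the minimal type from the earlier sketches:

    #include <algorithm>
    #include <cmath>

    struct IndexRect { int xMin, zMin, xMax, zMax; };  // inclusive hogel index bounds

    // Reverse cast a bowtie frustum from point p onto the hogel plane and return
    // the rectangle of hogel indices whose frustums could see that point.
    IndexRect reverseCastPoint(const Vec3& p, float slope, float pitch,
                               int hogelsX, int hogelsZ) {
        float reach = std::fabs(p.y) * slope;  // footprint radius on the hogel plane
        IndexRect r;
        r.xMin = std::max(0, (int)std::floor((p.x - reach) / pitch));
        r.xMax = std::min(hogelsX - 1, (int)std::ceil((p.x + reach) / pitch));
        r.zMin = std::max(0, (int)std::floor((p.z - reach) / pitch));
        r.zMax = std::min(hogelsZ - 1, (int)std::ceil((p.z + reach) / pitch));
        return r;
    }

    // Object culling: reverse cast from the bounding volume's minimum and maximum
    // extents and take the union of the resulting index rectangles.
    IndexRect reverseCastExtents(const Vec3& bbMin, const Vec3& bbMax,
                                 float slope, float pitch, int hogelsX, int hogelsZ) {
        IndexRect a = reverseCastPoint(bbMin, slope, pitch, hogelsX, hogelsZ);
        IndexRect b = reverseCastPoint(bbMax, slope, pitch, hogelsX, hogelsZ);
        return { std::min(a.xMin, b.xMin), std::min(a.zMin, b.zMin),
                 std::max(a.xMax, b.xMax), std::max(a.zMax, b.zMax) };
    }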
According to one aspect of the present disclosure, some bowtie renderers can perform culling in the HPD space. Turning briefly to
In a culling operation, a bowtie frustum is projected from each vertex at least toward the hogel plane 1000. For example, bowtie frustum 1010 is projected from vertex 1004 (the side planes of bowtie frustum 1010 each pass through vertex 1004), bowtie frustum 1012 is projected from vertex 1006 (the side planes of bowtie frustum 1012 each pass through vertex 1006), and bowtie frustum 1014 is projected from vertex 1008 (the side planes of bowtie frustum 1014 each pass through vertex 1008). Bowtie frustum 1010, bowtie frustum 1012, and bowtie frustum 1014 can each have the same frustum definition as the bowtie frustums defined for the virtual hogels of hogel plane 1000, but shifted to the vertices of triangle 1002 in the HPD space. In one embodiment, for example, the frustum for a vertex is determined according to
Each hogel can be considered a circle/cell on the hogel plane. The intersections of the frustum planes with the hogels can be calculated using any suitable intersection testing algorithm known or developed in the art. The intersections of the frustum with the hogel plane are converted to the indices of the hogels intersected by the frustum. The resulting intersections encompass the subset of hogel frustums that intersect triangle 1002. In this example, only the frustums of the virtual hogels in area 1020 will intersect triangle 1002. The portion of a hogel plane identified by reverse frustum projection can be referred to as a “reverse frustum projection hogel plane.”
As may be recalled, each hogel in the hogel plane can have an assigned index (x, z) (e.g., (0,0) through (n, n) for a square hogel plane). The minimum and maximum hogel indices determined by the reverse frustum cast can be used to define the indexable extent within the hogel plane definition to use during clipping. Using the index values of the hogels intersected by the reverse cast frustums, the reverse frustum projection plane can be defined as the rectangle of hogels having corners at (xmin, zmin), (xmax, zmin), (xmin, zmax), (xmax, zmax), where xmin, zmin, xmax, zmax are the minimum and maximum x and z indices of the hogels intersected by the reverse cast frustums. However, as illustrated in
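Triangle culling can reuse the same reverse cast sketched earlier, taking the union of the per-vertex index rectangles to obtain the rectangle of hogels of interest (again, purely illustrative):

    IndexRect reverseCastTriangle(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                                  float slope, float pitch, int hogelsX, int hogelsZ) {
        IndexRect r = reverseCastPoint(v0, slope, pitch, hogelsX, hogelsZ);
        for (const Vec3& v : { v1, v2 }) {  // fold in the remaining vertices
            IndexRect q = reverseCastPoint(v, slope, pitch, hogelsX, hogelsZ);
            r.xMin = std::min(r.xMin, q.xMin); r.zMin = std::min(r.zMin, q.zMin);
            r.xMax = std::max(r.xMax, q.xMax); r.zMax = std::max(r.zMax, q.zMax);
        }
        return r;
    }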
While
Moreover, it can be noted that object culling may be used to determine a first reverse frustum projection hogel plane applicable to processing an object. Triangle culling may determine a second reverse frustum projection hogel plane applicable to processing a particular triangle of the object, where the second reverse frustum projection hogel plane is a subset of the first reverse frustum projection hogel plane. Using
3D graphics pipelines typically expect a triangle's vertices to be multiplied by the model-view-projection (MVP) matrix before being submitted to the rasterizer for clipping/culling in unity clip space. However, this implies at least three [4×4] matrix by [4×1] vertex multiplications (~48 multiplies) per triangle per hogel tested, which can be a significant number of multiplications (and additions) just to determine whether a triangle is visible to an individual hogel frustum. Moreover, hogel frustum definitions can be narrow, resulting in many culled or clipped triangles. Therefore, a bowtie renderer, according to one embodiment, clips in model space to avoid many unnecessary triangle vertex transforms.
As described in “Fast Extraction of Viewing Frustum Planes from the World-View Projection Matrix” (Gil Gribb, Klaus Hartmann, 2001), which is hereby fully incorporated by reference herein, there are techniques for deriving clipping planes from an MVP matrix that allow for clipping in model space. This implies, though, that on a per-object basis, the intersecting HPD subregion of hogels that have frustums that intersect the object would need to be transformed into the object's model space to determine the necessary clipping planes.
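For reference, the Gribb/Hartmann extraction can be sketched as follows; Mat4 is assumed column-major per OpenGL, and when the matrix passed in is the full model-view-projection matrix, the six planes come out in model space (a plane (a, b, c, d) contains points with ax + by + cz + d >= 0 on the inside):

    #include <cmath>

    struct Plane4 { float a, b, c, d; };

    void extractFrustumPlanes(const Mat4& m, Plane4 p[6]) {
        auto e = [&](int row, int col) { return m.m[col * 4 + row]; };
        for (int i = 0; i < 3; ++i) {
            // Planes come in pairs from row 3 of the matrix plus/minus rows 0..2:
            // left/right (i = 0), bottom/top (i = 1), near/far (i = 2).
            p[2 * i]     = { e(3,0) + e(i,0), e(3,1) + e(i,1), e(3,2) + e(i,2), e(3,3) + e(i,3) };
            p[2 * i + 1] = { e(3,0) - e(i,0), e(3,1) - e(i,1), e(3,2) - e(i,2), e(3,3) - e(i,3) };
        }
        // Normalizing each plane lets the dot product return a true distance,
        // which (as noted elsewhere herein) can also shorten the hogel search.
        for (int k = 0; k < 6; ++k) {
            float len = std::sqrt(p[k].a * p[k].a + p[k].b * p[k].b + p[k].c * p[k].c);
            if (len > 0.0f) { p[k].a /= len; p[k].b /= len; p[k].c /= len; p[k].d /= len; }
        }
    }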
In some embodiments, all the hogels are on a plane and have the same orientation. As such, the frustum planes for only one hogel need to be transformed into the object's model space; the remaining hogel frustum planes can be calculated merely by shifting that one set of transformed hogel frustum planes with a few scaled additions. Further, if object culling or triangle culling was performed, then the hogel frustum planes only need to be determined (as needed) for the hogels in the subregion (e.g., the reverse frustum projection hogel plane determined from object culling and/or triangle culling).
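A sketch of this scaled-addition derivation follows, reusing the earlier Vec3/Plane types. It assumes stepX and stepZ are the model-space vectors between adjacent hogel centers (one hogel pitch along each axis of the hogel plane); shifting a plane moves only its offset, so each per-hogel plane costs two multiplications and two additions:

    struct ShiftablePlane {
        Plane p;       // plane for the hogel at index (0, 0), already in model space
        float dStepX;  // precomputed dot(p.n, stepX)
        float dStepZ;  // precomputed dot(p.n, stepZ)
    };

    ShiftablePlane makeShiftable(const Plane& p0, const Vec3& stepX, const Vec3& stepZ) {
        return { p0, dot(p0.n, stepX), dot(p0.n, stepZ) };
    }

    // Derive the corresponding plane for hogel (i, j): the normal is unchanged,
    // only the offset shifts by a scaled addition per axis.
    Plane planeForHogel(const ShiftablePlane& sp, int i, int j) {
        Plane p = sp.p;
        p.d -= (float)i * sp.dStepX + (float)j * sp.dStepZ;
        return p;
    }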
Therefore, as part of the HPD, one set of hogel frustum planes can be defined at the HPD origin and transformed into model space (for example, when a new object enters the pipeline). Subsequent hogel-specific frustum planes are then derived through inexpensive addition operations. The hogel-specific frustum planes can be cached when an intersection test is required between a hogel-specific frustum and the first triangle of an object. In this manner, hogel frustum planes can be efficiently calculated once per object render and only when necessary. If the triangle intersects a frustum plane, the triangle can be clipped to the plane in model space. Any suitable clipping algorithm known or developed in the art can be applied to clip polygons (e.g., triangles) to the hogel-specific frustum planes, including, but not limited to, the Sutherland-Hodgman algorithm and variations thereof (the basis for the Sutherland-Hodgman algorithm was initially described in Ivan Sutherland, Gary W. Hodgman: “Reentrant Polygon Clipping,” Communications of the ACM, vol. 17, pp. 32-42, 1974, which is hereby fully incorporated by reference herein).
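For illustration only, a single Sutherland-Hodgman-style clip step against one plane in model space might be implemented as below (reusing the earlier minimal types); clipping against a hogel frustum repeats this step for each active frustum plane:

    #include <vector>

    std::vector<Vec3> clipAgainstPlane(const std::vector<Vec3>& poly, const Plane& plane) {
        std::vector<Vec3> out;
        size_t n = poly.size();
        for (size_t i = 0; i < n; ++i) {
            const Vec3& a = poly[i];
            const Vec3& b = poly[(i + 1) % n];
            float da = dot(plane.n, a) + plane.d;  // signed distance of each endpoint
            float db = dot(plane.n, b) + plane.d;
            if (da >= 0.0f) out.push_back(a);      // keep endpoints on the inside
            if ((da >= 0.0f) != (db >= 0.0f)) {    // edge crosses the plane: emit intersection
                float t = da / (da - db);
                out.push_back({ a.x + t * (b.x - a.x),
                                a.y + t * (b.y - a.y),
                                a.z + t * (b.z - a.z) });
            }
        }
        return out;
    }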
Reverse frustum casting can be used to identify a subset of hogels for further processing of an object or polygon, such as identifying a subset of hogels for clipping/intersection testing. In addition, or in the alternative, some embodiments utilize smart clipping algorithms in which the result of an intersection test returns direction information that can be used to select the hogel to test. Clipping algorithms such as the Sutherland-Hodgman algorithm use a point-to-plane distance (dot product) calculation to determine whether a point is behind, on, or in front of a plane. Therefore, during a Sutherland-Hodgman edge clip operation, the cardinal direction of where the points of a triangle lie relative to the frustum can be recorded and used to prevent or reduce future hogel frustum intersection tests.
Turning to
The clip direction knowledge can be used to quickly terminate indexing, for example:
for(idx.x = vMin.x; (idx.x <= vMax.x) && (!AllPointsLeft); idx.x++)
Indexing can similarly be terminated for the z index.
In this example, after testing a hogel and determining that the hogel frustum does not intersect the triangle, the range of x index values is reduced so that only hogels to the left of the tested hogel (hogel 1107) will be tested against triangle 1120.
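Expanding the snippet above into a full indexing loop, one possible sketch is the following, where clipTriangleToHogel is a hypothetical clipper that sets a direction flag when it fully culls the triangle (the exact flag semantics are illustrative):

    struct Index2 { int x, z; };
    struct Triangle { Vec3 v[3]; };  // minimal triangle type for illustration

    // Hypothetical clipper: clips 'tri' against the frustum of hogel 'idx' and,
    // when the triangle is fully culled, sets the flag for the direction in which
    // all of its vertices lie.
    void clipTriangleToHogel(Index2 idx, const Triangle& tri,
                             bool* AllPointsLeft, bool* AllPointsNorth);

    void indexHogels(Index2 vMin, Index2 vMax, const Triangle& tri) {
        bool AllPointsNorth = false;
        Index2 idx;
        for (idx.z = vMin.z; (idx.z <= vMax.z) && (!AllPointsNorth); idx.z++) {
            bool AllPointsLeft = false;
            for (idx.x = vMin.x; (idx.x <= vMax.x) && (!AllPointsLeft); idx.x++) {
                clipTriangleToHogel(idx, tri, &AllPointsLeft, &AllPointsNorth);
            }
        }
    }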
Therefore, in another embodiment, instead of indexing through the HPD using indices calculated by the reverse frustum cast, the clip direction can be used to binary search through the HPD to find a valid triangle intersection.
During a binary search, the centermost hogel of a hogel plane may be selected first. If the frustum of the hogel does not intersect the triangle, the clipping/culling operation can return the direction to search. A bowtie frustum comprises four side planes and each plane has a normal which is perpendicular to the plane. When a plane is tested for intersection, and all vertices are outside the plane, then the normal indicates the general direction of the vertices relative to that plane. This information can be used to select a direction when indexing through the hogels in the hogel plane.
For example, the frustum of the center hogel 1206 is tested first to determine if it intersects triangle 1202. Since the frustum of hogel 1206 does not intersect triangle 1202 and all the vertices of triangle 1202 are to the north and left of hogel 1206, the clipping algorithm can select a direction that moves toward the triangle, for example, to move/search left. The frustum of a second hogel 1208 is tested, where the second hogel 1208 is half-way between the left edge of the hogel plane 1200 and the first hogel 1206. Since the frustum of hogel 1208 does not intersect triangle 1202 and all the vertices of triangle 1202 are to the north and left of hogel 1208, the clipping algorithm can continue to move to the left.
The frustum of a third hogel 1210 that is to the left of second hogel 1208 is tested. Since the frustum of hogel 1210 does not intersect triangle 1202 and all the vertices of triangle 1202 are to the north of hogel 1210, the clipping algorithm moves north and tests hogel 1212. The frustum of hogel 1212 does intersect triangle 1202. Once an intersection is found (any triangle edge crossing any hogel frustum plane) the search can stop, after which, neighboring hogels (e.g., hogel 1214, hogel 1216, hogel 1218, hogel 1220, the hogel obscured by triangle 1202) are scheduled for intersection testing. If the frustum planes of any of those hogels intersect triangle 1202, the neighboring hogels of that hogel are scheduled for intersection testing, and so on.
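The directed search might be sketched as follows, where clipTriangleAtHogel is a hypothetical clipper that, on a miss, reports a unit index step toward the triangle; once an intersection is found, the neighboring hogels are scheduled as described above:

    #include <algorithm>

    struct ClipResult { bool intersects; int dx, dz; };  // dx/dz step toward the triangle

    ClipResult clipTriangleAtHogel(Index2 idx, const Triangle& tri);  // assumed clipper

    // Start at the center of the index range and halve the step in the reported
    // direction until an intersecting hogel is found or the step collapses.
    bool findIntersectingHogel(Index2 lo, Index2 hi, const Triangle& tri, Index2* hit) {
        Index2 idx = { (lo.x + hi.x) / 2, (lo.z + hi.z) / 2 };
        int step = std::max(hi.x - lo.x, hi.z - lo.z) / 2 + 1;
        while (step > 0) {
            ClipResult r = clipTriangleAtHogel(idx, tri);
            if (r.intersects) { *hit = idx; return true; }  // then flood to neighbors
            idx.x = std::clamp(idx.x + r.dx * step, lo.x, hi.x);
            idx.z = std::clamp(idx.z + r.dz * step, lo.z, hi.z);
            step /= 2;
        }
        return false;
    }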
If the plane equations are normalized, then the clip/cull directions can be accompanied by distances, which can be used to further restrict the search space. As such, if a triangle is fully culled by a frustum clip operation, the clipper returns a vector (distance/direction) from the frustum center to the center of the triangle in model space. The next frustum for clipping can then be calculated directly from the hogel plane definition.
According to one embodiment, pipeline 1300 indexes through the geometry of the scene and the render targets for which it is responsible, giving triangle-major priority. In other words, pipeline 1300 will process a triangle and index through the render targets to render the triangle to all the render targets for which the pipeline is responsible, as appropriate. Then the next triangle can be rendered to the render targets as appropriate. For example, if pipeline 1300 is responsible for rendering to sixteen targets 1302 (e.g., sixteen viewports or sixteen hogels), pipeline 1300 will render a triangle from a scene to the sixteen targets 1302 as appropriate before rendering the next triangle in the scene. This is different from a traditional GPU pipeline, which will render all the geometry for one view/hogel before moving to the next view/hogel.
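In outline, and with purely illustrative names, triangle-major ordering inverts the loops of a view-major renderer: the outer loop walks triangles and the inner loop walks the render targets for which the pipeline is responsible:

    #include <vector>

    struct RenderTarget { /* framebuffer, hogel frustum, etc. (assumed) */ };
    bool frustumIntersects(const RenderTarget&, const Triangle&);  // culling/clipping as above
    void rasterize(RenderTarget&, const Triangle&);                // assumed rasterizer entry

    void renderTriangleMajor(const std::vector<Triangle>& sceneTriangles,
                             std::vector<RenderTarget>& pipelineTargets) {
        for (const Triangle& tri : sceneTriangles) {           // triangle-major: triangles outermost
            for (RenderTarget& target : pipelineTargets) {     // e.g., sixteen hogel viewports
                if (frustumIntersects(target, tri))
                    rasterize(target, tri);                    // render before the next triangle
            }
        }
    }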
In operation, pipeline 1300 receives scene data 1316 from a host application or other source. Scene data 1316 may include, for example, data for one or more 3D models, such as, for example, geometry (e.g., vertex lists), texture, lighting, shading, bounding volumes (e.g., sphere or cuboid) and other information as a description of the virtual scene. Pipeline 1300 further receives or accesses a view volume transform (VVT) 1318 that expresses the relationship between a view volume that represents a volume about the optical center of the display and world space. According to one embodiment, VVT 1318 is a 4×4 transform matrix that defines a 3D cuboid volume in world space to be rendered. In other words, the VVT, according to one embodiment, defines the 3D cuboid volume within a scene that a volumetric, light-field, or holographic display projects.
Pipeline 1300 further receives or accesses a hogel plane definition (HPD) 1320, which according to one embodiment, is a 2D array of hogel camera matrices and accompanying viewport centers in a radiance image rendering view definition. HPD 1320 may further define or include information usable by pipeline 1300 to determine the hogel frustums. In some embodiments, pipeline 1300 is configured with HPD 1320 by, for example, the manufacturer of a display.
Setup stage 1304 performs a variety of operations to prepare for a render cycle, for example, clearing buffers and performing other setup operations. The setup operations can include various operations dependent on the multi-view computing system implementation.
Dispatcher 1306 is responsible for iterating the scene objects and vertex lists and dispatching triangles to a vertex processor 1308. According to one embodiment, dispatcher 1306 performs object culling if object culling is enabled. Various types of object culling may be implemented including, but not limited to, object culling using reverse casting of frustums from the object's maximum and minimum bounding volume extents onto the hogel plane, which can be performed in HPD space or model space in some embodiments. Dispatcher 1306 dispatches the object vertices 1322 to vertex processor 1308.
If object culling is enabled, dispatcher 1306 determines the hogels of interest for processing of the object and dispatches this information with the triangles of the object to the culling and clipping stage. For example, dispatcher 1306 can provide the indexable extent within the hogel plane definition to use during downstream operations for processing of the geometry of the object. According to one aspect of the present disclosure, dispatcher 1306 provides an indication of a reverse frustum projection hogel plane determined for the object.
Vertex processor 1308 performs triangle culling and triangle clipping. Triangle culling and clipping is a processing stage where triangles are transformed, clipped and/or culled in an algorithm specific manner and order. According to one embodiment, the culling and clipping stage determines for each triangle, the subset of hogels that have hogel frustums that intersect the triangle and then performs clipping based on those hogels. The triangle culling and clipping stage can pass clipped vertices 1324 to a fragment processor to perform rasterization and shading according to rasterization and shading techniques known or developed in the art.
In various embodiments, triangle culling for a triangle of an object is performed in the HPD space. In addition, or in the alternative, triangle culling is performed in model space. Vertex processor 1308 may implement various triangle culling techniques known or developed in the art, including, but not limited to, triangle culling techniques discussed in conjunction with
Vertex processor 1308 provides culled and clipped vertices 1324 to fragment processor 1310. However, prior to forwarding the culled and clipped vertices for a triangle to fragment processor 1310, vertex processor 1308 may perform other transformations, such as applying M,V,P transforms to the remaining triangles/polygon vertices. Vertex processor 1308 clips a triangle for each hogel having a hogel camera frustum that intersects the triangle (at least for the hogels for which pipeline 1300 is responsible for rendering). If vertex processor 1308 performs processing for multiple hogels/viewpoints, then vertex processor 1308 generates a unique set of triangle intersection vertices for each intersected hogel which are passed to the fragment processor. Clipping in model space can reduce the computational burden on the multi-view vertex processor.
Rasterization is the process of converting a polygonal model into a raster or pixel image; for example, rasterization renders hogel image data (or other multi-view image data) from a triangle list. Rasterization occurs after the vertex transform stage of the render pipeline and generates a list of fragments 1326. Rasterizer 1312 may perform rasterization according to any suitable rasterization technique known or developed in the art including, but not limited to, those described in U.S. Pat. No. 10,573,056.
The fragments 1326 are shaded by shader 1314 based on the textures/materials/etc. associated with the scene. For example, the texture mapped on the surface of each polygon can be based on a linear interpolation of the texture coordinates. Shading can be performed according to various techniques, including but not limited to, the Phong Reflection Model, described in Phong, Bui Tuong, “Illumination for Computer Generated Pictures,” Communications of the ACM, Vol. 18, No. 6, Association for Computing Machinery, Inc. (June 1975). The Phong algorithm is a simple shading model that describes the manner in which light reflects off a surface. The Phong model uses an ambient, diffuse and specular description of both the surface material and source light to accumulate a pixel color value for a fragment. The specular component requires knowledge of the light position (in camera space) and the surface normal for each fragment. It can be noted that if a normalized eye space (n-eye) is used, the lighting angle can also be adjusted to account for changes in angles due to the transform to the normalized eye space.
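For reference, the Phong model described above can be sketched per fragment as follows (vectors assumed normalized and in a common space; a single scalar intensity is shown for brevity, whereas in practice the ambient, diffuse, and specular terms are accumulated per color channel from the material and light descriptions):

    #include <algorithm>
    #include <cmath>

    // Mirror of the light direction about the surface normal (both normalized).
    Vec3 reflectAboutNormal(const Vec3& toLight, const Vec3& n) {
        float s = 2.0f * dot(n, toLight);
        return { s * n.x - toLight.x, s * n.y - toLight.y, s * n.z - toLight.z };
    }

    float phongIntensity(const Vec3& normal, const Vec3& toLight, const Vec3& toEye,
                         float ka, float kd, float ks, float shininess) {
        float ambient  = ka;
        float diffuse  = kd * std::max(dot(normal, toLight), 0.0f);
        Vec3  r        = reflectAboutNormal(toLight, normal);
        float specular = ks * std::pow(std::max(dot(r, toEye), 0.0f), shininess);
        return ambient + diffuse + specular;
    }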
Pipeline 1300 may perform other operations such as correction of the rendered images to account for any micro-lens distortions or color aberrations. Vertex processor 1308 performs operations on triangles, such as applying transformations, triangle clipping and culling.
MvPU 1400, according to one embodiment, can comprise a highly parallel array processor configured for parallel computation of tasks and data where many threads can be executing logic against a series of queued tasks. In one embodiment, triangles are dispatched to separate render pipelines 1402a-1402n that all execute the same series of instructions to render their particular views. Therefore, hogel views can be assigned into work groups that are distributed among, for example, accelerator cores and executed concurrently.
Each pipeline 1402a-1402n can have a queue (workgroup) of viewpoints/viewports to render and a frame may be considered rendered when all the queues are empty. MvPU 1400 synchronizes between pipelines, for example, by managing triangle vertex dispatch and texture accesses within the MvPU. Dispatch of triangles within an MvPU 1400 can be synchronized so all the render pipelines are working on the same triangle or texture in parallel, but from their unique viewpoints. According to one embodiment, MvPU 1400 may implement triangle-major processing in which a triangle is rendered in parallel.
Furthermore, various techniques described herein may be applied in a variety of system architectures. Embodiments described herein may be implemented as software instructions embodied on a non-transitory computer readable medium. Embodiments may be implemented in a GPU, in an MvPU, in a processor or according to another architecture.
Although the invention has been described with respect to specific embodiments thereof, these embodiments are merely illustrative, and not restrictive of the invention. Rather, the description is intended to describe illustrative embodiments, features and functions in order to provide a person of ordinary skill in the art context to understand the invention without limiting the invention to any particularly described embodiment, feature or function, including any such embodiment feature or function described. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the invention in light of the foregoing description of illustrated embodiments of the invention and are to be included within the spirit and scope of the invention. Thus, while the invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features without departing from the scope and spirit of the invention as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the invention.
Reference throughout this specification to “one embodiment”, “an embodiment”, or “a specific embodiment” or similar terminology means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment and may not necessarily be present in all embodiments. Thus, respective appearances of the phrases “in one embodiment”, “in an embodiment”, or “in a specific embodiment” or similar terminology in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any particular embodiment may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments described and illustrated herein are possible in light of the teachings herein and are to be considered as part of the spirit and scope of the invention.
In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that an embodiment may be able to be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, components, systems, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the invention. While the invention may be illustrated by using a particular embodiment, this is not and does not limit the invention to any particular embodiment and a person of ordinary skill in the art will recognize that additional embodiments are readily understandable and are a part of this invention.
Embodiments described herein can be implemented in the form of control logic in software or hardware or a combination of both. The control logic may be stored in an information storage medium, such as a computer-readable medium, as a plurality of instructions adapted to direct an information processing device to perform a set of steps disclosed in the various embodiments. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the invention.
At least portions of the functionalities or processes described herein can be implemented in suitable computer-executable instructions. The computer-executable instructions may reside on a computer readable medium, hardware circuitry or the like, or any combination thereof. The computer-executable instructions may be stored as software code components or modules on one or more computer readable media.
Within this disclosure, the term “computer readable medium” is not limited to ROM, RAM, and HD and can include any type of data storage medium that can be read by a processor (such as non-volatile memories, volatile memories, DASD arrays, magnetic tapes, floppy diskettes, hard drives, optical storage devices, etc. or any other appropriate computer-readable medium).
In one embodiment, the computer-executable instructions may include lines of compiled code according to a selected programming language. Any suitable programming language can be used to implement the routines, methods or programs of embodiments of the invention described herein. Different programming techniques can be employed such as procedural or object oriented.
Particular routines can execute on a single processor or multiple processors. For example, various functions of the disclosed embodiments may be distributed. Communications between systems implementing embodiments can be accomplished using any electronic, optical, radio frequency signals, or other suitable methods and tools of communication in compliance with various protocols.
Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, to the extent multiple steps are shown as sequential in this specification, some combination of such steps in alternative embodiments may be performed at the same time. The sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc. Functions, routines, methods, steps and operations described herein can be performed in hardware, software, firmware or any combination thereof.
It will also be appreciated that one or more of the elements depicted in the drawings/figures can be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. Additionally, any signal arrows in the drawings/figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, product, article, or apparatus that comprises a list of elements is not necessarily limited only to those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of, any term or terms with which they are utilized. Instead, these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized will encompass other embodiments which may or may not be given therewith or elsewhere in the specification and all such embodiments are intended to be included within the scope of that term or terms. Language designating such nonlimiting examples and illustrations includes, but is not limited to: “for example,” “for instance,” “e.g.,” “in one embodiment.”
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component.
Although the present invention has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the invention.
This application claims the benefit of priority under 35 U.S.C. 119(e), to U.S. Provisional Application No. 63/218,757, entitled “BowTie Clipping for Extreme Multi-view Radiance Image Rendering,” filed Jul. 6, 2021, which is hereby fully incorporated by reference herein for all purposes.