This disclosure relates to an image generation system and method.
The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
Raytracing is a rendering technique that has received widespread interest in recent years for its ability to generate a high degree of visual realism. Raytracing is often utilised in simulations of a number of optical effects within an image, such as reflections, shadows, and chromatic aberrations.
This can be useful for any computer-based image generation process—for example, for special effects in movies and in generating images for computer games. While such techniques have been discussed and used for a relatively long time, it is only more recently that processing hardware has become sufficiently powerful to implement raytracing techniques with acceptably low latency for real-time applications, or at least for more extensive use within a piece of content.
Such techniques effectively aim to determine the visual properties of objects within a scene by tracing, from the camera, a ray for each pixel of the image. Of course, this is a rather computationally intensive process—a large number of pixels are expected to be used for displaying a scene, and this may lead to a large number of calculations even for simpler scenes (such as those with few reflections and the like). In view of this, scanline rendering and other rendering methods have generally been preferred where latency is considered important, despite the lower image quality.
One technique that seeks to improve the rendering times associated with raytracing-based methods is the use of bounding volumes to represent groups of objects. These bounding volumes are stored in a bounding volume hierarchy (BVH), which has a structure that is considered suitable for navigation as a part of a raytracing process. The use of bounding volumes is advantageous in that a group of objects may be tested for ray intersections together, rather than on a per-object basis. This can reduce the number of intersection tests, and also simplify each test through the use of a simple shape (such as a box or sphere) that is representative of the objects. While advantageous in principle, the BVH is a separate data structure comprising information that is useful for generating images—which leads to a substantial increase in the amount of data storage and navigation that is required.
It may therefore be considered advantageous to reduce the amount of additional data that is required for the implementation of raytracing methods.
This disclosure is defined by claim 1.
Further respective aspects and features of the disclosure are defined in the appended claims.
It is to be understood that both the foregoing general description of the invention and the following detailed description are exemplary, but are not restrictive, of the invention.
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views, a system and method for implementing an improved image rendering process is disclosed.
In the embodiments described below, a modification to the rasterization process is considered. Existing rasterization processes used to render images require the use of a vertex buffer with an associated set of indices (stored in an index buffer) for defining objects in a scene. The vertex buffer stores a list of vertex coordinates, while the index buffer stores triplets of indices, each triplet identifying the three vertices of a triangle to be rendered; this is done to reduce the redundancy associated with listing each coordinate in full when defining each triangle (as a number of triangles will share vertices at the same location, for example).
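By way of a purely illustrative example, the following sketch (in C++) shows how an index buffer reduces this redundancy: two triangles forming a quad share two vertices, so only four unique positions are stored in the vertex buffer while six indices describe the two triangles. The structure and buffer names are illustrative only and do not form part of the disclosure.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Two triangles forming a quad share two vertices, so the vertex buffer holds
// four unique positions while the index buffer describes each triangle as a
// triplet of indices into that buffer.
struct Vec3 { float x, y, z; };

int main() {
    const std::vector<Vec3> vertexBuffer = {
        {0.f, 0.f, 0.f},  // index 0
        {1.f, 0.f, 0.f},  // index 1
        {1.f, 1.f, 0.f},  // index 2
        {0.f, 1.f, 0.f},  // index 3
    };
    const std::vector<std::uint32_t> indexBuffer = {0, 1, 2,  0, 2, 3};

    for (std::size_t t = 0; t < indexBuffer.size(); t += 3) {
        std::cout << "triangle " << t / 3 << ":";
        for (std::size_t i = 0; i < 3; ++i) {
            const Vec3& v = vertexBuffer[indexBuffer[t + i]];
            std::cout << " (" << v.x << ", " << v.y << ", " << v.z << ")";
        }
        std::cout << "\n";
    }
}
```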
In an existing arrangement, geometry for rendering is provided using a vertex buffer 10 and an associated index buffer 20.
A fetch shader 30 is operable to collect information from each of the vertex buffer 10 and the index buffer 20 and provide the information to the vertex shader 40, which is operable to perform a vertex shading process. The collected information corresponds only to the information that is required by the vertex shader 40 for a particular operation, rather than necessarily comprising the entire contents of each of the buffers 10 and 20. This is illustrated schematically in the accompanying drawings.
For example, when rendering a particular object within a scene, the fetch shader 30 may be operable to obtain a set of indices from the index buffer 20 that identify the vertices associated with that object. Corresponding vertex location data, as required to relate the indices to a shape that can be rendered correctly, are also obtained by the fetch shader 30 from the vertex buffer 10. The fetch shader 30 is then operable to pass this data to the vertex shader 40 and a vertex shading operation is performed. Examples of the operation of a vertex shader may include one or more of converting a three-dimensional vertex location into a two-dimensional screen position, and/or manipulating the position or colour associated with one or more vertices. This output is then provided to a further stage in the rendering pipeline, such as a geometry shader or a rasterizer.
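As a purely illustrative sketch of one such vertex shading operation, the following converts a camera-space vertex position into a two-dimensional screen position using a simple pinhole projection; the focal length and screen dimensions are assumed values rather than parameters taken from any particular implementation.

```cpp
#include <iostream>

// Projects a camera-space vertex onto a two-dimensional screen using a simple
// pinhole model; focal length and screen size are illustrative assumptions.
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

Vec2 projectToScreen(const Vec3& v, float focalLength,
                     float screenWidth, float screenHeight) {
    // Perspective divide, then a mapping from normalised coordinates to pixels.
    const float ndcX = focalLength * v.x / v.z;
    const float ndcY = focalLength * v.y / v.z;
    return {(ndcX * 0.5f + 0.5f) * screenWidth,
            (ndcY * 0.5f + 0.5f) * screenHeight};
}

int main() {
    const Vec2 p = projectToScreen({0.5f, 0.25f, 2.f}, 1.f, 1920.f, 1080.f);
    std::cout << "screen position: " << p.x << ", " << p.y << "\n";
}
```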
Another part of the image rendering process may be that of raytracing, as described above. Raytracing often makes use of BVHs as an efficient data storage structure, the format of which is often bound to the hardware used to store and/or implement it—and therefore it may be difficult to provide substantial improvements to the BVH structure itself. The structure of an exemplary BVH is discussed below.
The level of detail for each level can be determined in any suitable fashion, and the BVH may have a maximum level of detail that is defined. For example, the BVH may terminate with bounding volumes representing groups of objects—this would lead to a coarse representation, but one that is reduced in size and may be traversed very quickly. Alternatively, the BVH may terminate with bounding volumes representing portions of objects—while this offers a finer approximation of the objects, it results in a BVH that is larger and may take longer to traverse. The BVH may be defined so as to comprise elements of both—such that some objects have a finer/coarser representation than others.
BVHs can be generated in a number of ways, each with its own benefits and drawbacks. For example, a top-down approach can be taken in which the bounding volumes are defined beginning with the largest sets possible. That is, the input (such as the set of objects within an environment, or a representation of those objects) is divided into two or more subsets that are each then subdivided—that is, bounding volumes are generated beginning with box 110, and proceeding to boxes 120 and so on. While this represents a fast implementation, it often results in a BVH that is rather inefficient, which can lead to a larger overall size or reduced ease of navigation.
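A minimal sketch of such a top-down construction is given below, assuming a simple median split along one axis and small leaf nodes; practical builders typically use more sophisticated split heuristics, so this serves only to illustrate the recursion from the largest set downwards.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

// Top-down BVH construction: the full primitive set is split recursively
// (here by a median split along the x axis) until each node holds at most
// two primitives.
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };
struct Primitive { AABB bounds; float centroidX; };

struct BVHNode {
    AABB bounds{};
    std::vector<Primitive> primitives;     // payload of a leaf node
    std::unique_ptr<BVHNode> left, right;  // children of an interior node
};

AABB merge(const AABB& a, const AABB& b) {
    return {std::min(a.minX, b.minX), std::min(a.minY, b.minY), std::min(a.minZ, b.minZ),
            std::max(a.maxX, b.maxX), std::max(a.maxY, b.maxY), std::max(a.maxZ, b.maxZ)};
}

std::unique_ptr<BVHNode> buildTopDown(std::vector<Primitive> prims) {
    auto node = std::make_unique<BVHNode>();
    node->bounds = prims.front().bounds;
    for (const auto& p : prims) node->bounds = merge(node->bounds, p.bounds);

    if (prims.size() <= 2) {               // terminate with a small leaf
        node->primitives = std::move(prims);
        return node;
    }

    // Median split: sort by centroid along x and divide the set in two.
    std::sort(prims.begin(), prims.end(),
              [](const Primitive& a, const Primitive& b) { return a.centroidX < b.centroidX; });
    const auto mid = prims.begin() + prims.size() / 2;
    node->left = buildTopDown({prims.begin(), mid});
    node->right = buildTopDown({mid, prims.end()});
    return node;
}

int main() {
    std::vector<Primitive> prims = {
        {{0, 0, 0, 1, 1, 1}, 0.5f}, {{2, 0, 0, 3, 1, 1}, 2.5f},
        {{4, 0, 0, 5, 1, 1}, 4.5f}, {{6, 0, 0, 7, 1, 1}, 6.5f},
    };
    const auto root = buildTopDown(std::move(prims));
    return (root->left && root->right) ? 0 : 1;
}
```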
An alternative method is that of the bottom-up approach. In this approach, the bounding volumes are defined beginning with the smallest volumes in the BVH. In the example discussed above, this would mean generating the smallest bounding volumes first and grouping them successively to form the larger volumes (such as the boxes 120 and the box 110) towards the root of the hierarchy.
Each of these methods requires information about all of the objects to be available before the BVH can be generated; this is of course acceptable in many applications, but in others it may be preferred that the BVH can be generated on-the-fly.
A third approach that may be considered is that of insertion methods. These may be performed on-the-fly, and they are performed by inserting objects into the bounding volumes of a BVH on a per-object basis. This means that only information about that object is necessary at the time of insertion. Insertion approaches cover a wide range of related methods in which an optimal or otherwise suitable placement is identified for each object. For example, a function may be defined that evaluates the impact (in terms of size or navigability or the like) of an insertion upon the BVH, with the insertion being performed in a manner that minimises or otherwise reduces the impact upon the BVH.
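The following sketch illustrates one possible form of such a function, in which the candidate node whose bounding box grows the least (measured by surface area) is selected for the insertion; this particular heuristic is an assumption made for illustration rather than a required implementation.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// For a new object, evaluate the growth in surface area of each candidate
// node's bounding box and insert where the growth is smallest.
struct AABB { float min[3], max[3]; };

float surfaceArea(const AABB& b) {
    const float dx = b.max[0] - b.min[0];
    const float dy = b.max[1] - b.min[1];
    const float dz = b.max[2] - b.min[2];
    return 2.f * (dx * dy + dy * dz + dz * dx);
}

AABB merge(const AABB& a, const AABB& b) {
    AABB out;
    for (int i = 0; i < 3; ++i) {
        out.min[i] = a.min[i] < b.min[i] ? a.min[i] : b.min[i];
        out.max[i] = a.max[i] > b.max[i] ? a.max[i] : b.max[i];
    }
    return out;
}

// Returns the index of the candidate node whose bounds grow the least.
std::size_t bestInsertionTarget(const std::vector<AABB>& candidates, const AABB& object) {
    std::size_t best = 0;
    float bestGrowth = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        const float growth = surfaceArea(merge(candidates[i], object)) - surfaceArea(candidates[i]);
        if (growth < bestGrowth) { bestGrowth = growth; best = i; }
    }
    return best;
}

int main() {
    const std::vector<AABB> nodes = {{{0, 0, 0}, {1, 1, 1}}, {{5, 5, 5}, {6, 6, 6}}};
    const AABB obj = {{5.2f, 5.2f, 5.2f}, {5.8f, 5.8f, 5.8f}};
    return bestInsertionTarget(nodes, obj) == 1 ? 0 : 1;
}
```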
Of course, any other suitable approaches may be considered compatible with the teachings of the present disclosure, rather than being limited to those discussed above.
Any suitable input data may be represented using a BVH and associated bounding volumes. For example, video games may provide a suitable source of input data for generating such a structure—in this case, the input information may be data about the virtual objects that defines their respective dimensions and locations. Similarly, information describing a real environment could be used as an information source—for example, information may be generated from images of a real environment and the objects within the environment, and this information can be used to generate a BVH that may be used to render images of that environment.
For example, if one hundred rays were to be tested for intersections, only one hundred tests would be required at this stage as there is a single object (the bounding volume 220) to test for each ray—rather than one hundred multiplied by the number of polygons making up the object 210.
If, for example, it were found that only ten rays intersected the bounding volume 220, this stage would require thirty tests (that is, a test for each ray with each bounding volume). This is again a very small number relative to the testing of one hundred multiplied by the number of polygons making up the object 210 as noted above. It is therefore apparent that the falling number of rays to be considered for intersection is sufficient to offset the increasing number of bounding volumes to be considered, such that overall the total number of intersections to be tested is lower than the number required if no bounding volumes are defined and no BVH is utilised.
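A minimal sketch of this two-level culling is given below, using a standard slab test for ray/box intersection; the specific volumes and the fan of one hundred rays are illustrative values chosen only to mirror the counting argument above.

```cpp
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

// Rays are first tested against a single enclosing volume; only the rays that
// hit it are then tested against the child volumes.
struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, invDir; };      // invDir holds 1 / direction per axis
struct AABB { Vec3 min, max; };

bool intersects(const Ray& r, const AABB& b) {
    float tmin = 0.f, tmax = 1e30f;
    const float o[3]  = {r.origin.x, r.origin.y, r.origin.z};
    const float id[3] = {r.invDir.x, r.invDir.y, r.invDir.z};
    const float lo[3] = {b.min.x, b.min.y, b.min.z};
    const float hi[3] = {b.max.x, b.max.y, b.max.z};
    for (int i = 0; i < 3; ++i) {
        float t0 = (lo[i] - o[i]) * id[i];
        float t1 = (hi[i] - o[i]) * id[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

int main() {
    const AABB parent = {{-1, -1, 4}, {1, 1, 6}};
    const std::vector<AABB> children = {{{-1, -1, 4}, {0, 1, 6}},
                                        {{0, -1, 4}, {1, 1, 6}}};

    std::vector<Ray> rays;                 // a fan of one hundred rays
    for (int i = 0; i < 100; ++i) {
        const float x = -2.f + 4.f * static_cast<float>(i) / 99.f;
        const Vec3 dir = {x, 0.1f, 1.f};
        rays.push_back({{0.f, 0.f, 0.f}, {1.f / dir.x, 1.f / dir.y, 1.f / dir.z}});
    }

    int reachedChildren = 0, childTests = 0;
    for (const Ray& r : rays) {
        if (!intersects(r, parent)) continue;   // culled at the top level
        ++reachedChildren;
        for (const AABB& c : children) { ++childTests; (void)intersects(r, c); }
    }
    std::cout << reachedChildren << " of " << rays.size()
              << " rays reached the child level, requiring "
              << childTests << " further tests\n";
}
```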
In a practical implementation these volumes may be divided further until the surfaces of the object 210 are represented with a suitable level of precision for the application—such as when the bounding volumes and the polygons (primitives) representing the object occupy a similar display area, at which point the polygons may be used instead.
In these examples, the bounding volume 220 may be considered to be a higher level in the BVH than the bounding volumes 230, 231—for instance, the bounding volume 220 may correspond to a volume such as the box 120 discussed above.
It is apparent from these Figures that the number of calculations that are to be performed in a raytracing method may be reduced significantly with the use of bounding volumes and BVHs; this is because the number of intersections that are to be evaluated may be reduced significantly.
From the above, it is clear that a BVH is well-suited to storing information for use with raytracing methods. However, due to differences in how each process stores and uses data it is not efficient to utilise the BVH with vertex shaders that are also being used during the rendering process. For instance, BVHs are constructed so as to require that there is spatial coherency of the triangles used in the BVH, and triangles may be duplicated between nodes in some cases. Index buffers do not have the same requirement, and as such the encoded information may be different even when representing the same data due to redundancies and the like.
Further to the above considerations it is generally considered impractical to modify a vertex shader itself to instead make use of a BVH structure. One reason for this is simply the range of vertex shaders that exist; modifying each one of these to make use of BVH structures instead of traditional methods would represent a significant overhead. In addition to this, the inputs/outputs of those shaders are generally used in a standardised manner (such as receiving inputs from other processes) and therefore it would be necessary to redesign much or even the entirety of the graphics pipeline to accommodate such a modification.
The arrangement 500 comprises a vertex buffer 510, an index buffer 520, a fetch shader 530, a vertex shader 540, and a BVH pos buffer 550. These may have similar functions to the corresponding units in the arrangement described above.
The index buffer 520 is configured to perform the same function as in the arrangement described above.
The BVH pos buffer 550 is configured to store information about the position of triangles within the BVH structure, for example by identifying a particular node and position, which enables the triangle data stored in the BVH to be accessed. While this data is analogous to the pos data stored by the vertex buffer in a traditional arrangement, it identifies locations within the BVH structure rather than vertex coordinates directly.
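A minimal sketch of the kind of lookup that such position data enables is given below; the entry layout (a node index and a slot within that node) and all structure names are illustrative assumptions rather than a hardware-defined format.

```cpp
#include <cstdint>
#include <vector>

// Each pos buffer entry identifies a leaf node of the BVH and a slot within
// that node, from which the triangle data can be fetched.
struct Triangle { float v0[3], v1[3], v2[3]; };
struct BVHLeaf  { std::vector<Triangle> triangles; };

struct BVHPosEntry {
    std::uint32_t nodeIndex;  // which leaf node of the BVH holds the triangle
    std::uint32_t slot;       // position of the triangle within that node
};

Triangle fetchTriangle(const std::vector<BVHLeaf>& leaves, const BVHPosEntry& pos) {
    return leaves[pos.nodeIndex].triangles[pos.slot];
}

int main() {
    std::vector<BVHLeaf> leaves(2);
    leaves[1].triangles.push_back({{0, 0, 0}, {1, 0, 0}, {0, 1, 0}});

    const std::vector<BVHPosEntry> bvhPosBuffer = {{1, 0}};
    const Triangle t = fetchTriangle(leaves, bvhPosBuffer[0]);
    return t.v1[0] == 1.f ? 0 : 1;
}
```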
It is considered that the BVH pos buffer 550 has a smaller size than a traditional vertex buffer for processing the same information, and as such this arrangement may be considered advantageous in that the amount of data required to be stored may be reduced significantly. For example, in some embodiments the BVH pos buffer 550 may be one third of the size of a corresponding vertex buffer (such as the vertex buffer 10 described above).
The fetch shader 530 is configured to obtain data from the index buffer 520 relating to the indices of triangles that are to be used by the vertex shader 540, and data from the BVH pos buffer 550 which enables triangle information to be obtained from the BVH structure. This data is then provided to the vertex shader 540 as required.
In some embodiments, the fetch shader 530 is configured to perform a transformation operation so as to transform the BVH position information into object space. As noted above, this is the form in which vertex position data is generally stored in the vertex buffer of a traditional arrangement.
The vertex shader 540 is operable to perform a vertex shading process using the data output by the fetch shader, as in a conventional vertex shading process. That is to say that the vertex shader need not be modified in view of the modification to use BVH data, such that the same vertex shader as that discussed in the context of the earlier arrangement may be used.
The vertex buffer 510 may still be of use in such an arrangement, despite position information being obtained from the BVH instead, as other processing may obtain data from this buffer. For example, the vertex buffer may still be used to store attributes relating to UV mapping of textures (relating to the process of projecting a two-dimensional texture onto a three-dimensional surface), and/or surface normals and tangents. Such data may be obtained by the fetch shader 530, in some embodiments, for use as a part of the vertex shading process performed by the vertex shader 540.
At a step 600, the fetch shader 530 obtains index information from the index buffer 520. In particular, this comprises one or more indices that correspond to vertices that are to be operated upon by the vertex shader 540.
At a step 610 the fetch shader 530 optionally obtains attribute information from the vertex buffer 510, the attribute information corresponding to the vertices described by the index information obtained in step 600.
At a step 620, the fetch shader 530 obtains position (pos) information from the BVH pos buffer 550 corresponding to those indices identified in the information obtained in the step 600.
At a step 630, the fetch shader 530 obtains triangle data from the BVH using the position information obtained in step 620. In some embodiments, this data is transformed into object space data although this is not considered to be essential.
At a step 640, the fetch shader 530 provides the data obtained in step 630 to the vertex shader 540.
At a step 650 the vertex shader 540 performs a vertex shading operation, such as manipulation of the colour and/or position of one or more vertices. The results of the vertex shading step 650 may be output for use in a later rasterization process, for example.
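A compact sketch of the flow of steps 600 to 650 is given below, using simplified stand-in structures and a trivial vertex shading step; all structure names are illustrative, and attribute fetching (step 610) is omitted for brevity.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Index entries are read (step 600), pos entries locate triangles in the BVH
// (step 620), the triangle vertices are fetched (step 630), and a trivial
// "vertex shading" operation is applied to each vertex (steps 640-650).
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };
struct PosEntry { std::uint32_t node, slot; };

int main() {
    std::vector<std::vector<Triangle>> bvhLeaves(1);   // node -> triangles
    bvhLeaves[0].push_back({{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}});

    const std::vector<std::uint32_t> indexBuffer = {0};   // step 600
    const std::vector<PosEntry> bvhPosBuffer = {{0, 0}};  // step 620

    for (std::uint32_t triIndex : indexBuffer) {
        const PosEntry pos = bvhPosBuffer[triIndex];       // step 620
        Triangle tri = bvhLeaves[pos.node][pos.slot];      // step 630
        for (Vec3& v : tri.v) {                            // steps 640-650
            v.x += 0.1f;                                   // trivial position tweak
            std::cout << v.x << " " << v.y << " " << v.z << "\n";
        }
    }
}
```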
In using the above method and arrangement, it is considered that a vertex shading process may be performed using information obtained at least substantially from a BVH. This is advantageous in that the data obtained from the BVH need not be stored in a separate data structure held in the vertex buffer, and as a result the total amount of data used to represent the geometry of a virtual scene is reduced.
In this example, the BVH structure is modified so as to store attributes. As noted above, these are traditionally stored in the vertex buffer. By instead storing attribute information in the BVH, there is no need to store index data in the index buffer—this is because the index data is required only to access information from the vertex buffer.
In the arrangement 700, the fetch shader 730 is operable to obtain the desired data solely from the BVH pos buffer 750. Using the pos information, all data relating to the triangles encoded in the BVH (including attribute information) may be obtained from the BVH without the use of the index buffer or vertex buffer as described in earlier embodiments. The fetch shader 730 is then operable to provide the obtained data (with or without a transformation as appropriate) to the vertex shader 740.
Such an arrangement, and an associated processing method, may therefore provide additional benefits relating to the reduction of the number of data storage structures that are required to perform an image rendering process.
In some embodiments, further advantages may be obtained in view of the use of the BVH data. For example, the pos data may be compressed by exploiting the fact that the BVH structure utilises triangles. This means that a single index may be stored, with the other vertices of the triangle able to be identified in dependence upon this index. For instance, the other two vertices may each be identified by a respective offset value indicating a location relative to the index.
Alternatively, in some cases a single index and a single bit value may be used to indicate the vertices of the triangle. This is possible because BVH nodes are generally aligned such that it is possible to infer the location of a second vertex based upon knowledge of this alignment, and the location of the final vertex of the triangle can only be in one of two positions (either side of the line defined by the first two vertex positions).
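The following sketch illustrates how the two compressed formats described above might be decoded; the particular layout rules used to recover the second and third vertex locations (a fixed successor for the second vertex and a stride-based choice for the third) are assumptions made purely for illustration, rather than the alignment used by any particular BVH implementation.

```cpp
#include <array>
#include <cstdint>

// First format: one vertex index plus two small offsets.
struct OffsetEncoded { std::uint32_t firstVertex; std::uint8_t offset1, offset2; };
// Second format: one vertex index plus a single bit.
struct BitEncoded    { std::uint32_t firstVertex; bool flip; };

std::array<std::uint32_t, 3> decode(const OffsetEncoded& t) {
    return {t.firstVertex, t.firstVertex + t.offset1, t.firstVertex + t.offset2};
}

std::array<std::uint32_t, 3> decode(const BitEncoded& t, std::uint32_t rowStride) {
    // The second vertex is assumed to follow the first directly; the bit then
    // selects which of the two candidate positions holds the third vertex.
    const std::uint32_t second = t.firstVertex + 1;
    const std::uint32_t third  = t.flip ? t.firstVertex + rowStride : second + rowStride;
    return {t.firstVertex, second, third};
}

int main() {
    const auto a = decode(OffsetEncoded{10, 1, 2});   // vertices 10, 11, 12
    const auto b = decode(BitEncoded{10, false}, 4);  // vertices 10, 11, 15
    return (a[2] == 12 && b[2] == 15) ? 0 : 1;
}
```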
The BVH storage unit 810 is operable to store a BVH comprising a hierarchical structure of a plurality of triangles describing a virtual scene.
The BVH position buffer 820 is operable to store data for identifying the location of one or more triangles within the BVH. This data may have any suitable format; in some embodiments the locations of triangles within the BVH are indicated by an index identifying a first vertex of a triangle and a respective offset for each other vertex in the triangle. Alternatively, the locations of triangles within the BVH may be indicated by an index identifying a first vertex in a triangle and a bit value.
The fetch shader 830 is operable to identify vertex indices for use in rendering images, to obtain one or more triangles within the BVH corresponding to those vertex indices, and to provide vertex data corresponding to those triangles to a vertex shader operable to perform a vertex shading process. The vertex shading process may comprise one or more of the following: modifying a position of a vertex, modifying a colour of a vertex, modifying an orientation of a vertex, and converting a three-dimensional vertex location into a two-dimensional screen position.
In some cases, the fetch shader 830 is operable to perform a transform on the obtained location information for the one or more triangles. For example, this may comprise converting information about one or more triangles to vertex data in a format as would normally be used in the vertex buffer in traditional arrangements. In particular, this may comprise converting the obtained location information into coordinates in object space. Of course, in some embodiments the BVH itself may be adapted so as to comprise this information such that it can be obtained by the fetch shader 830 in a suitable format such that no transformation is required for the data to be compatible with the vertex shader.
The fetch shader 830 may further be operable to obtain attribute data, with the obtained attribute data being used to perform the vertex shading process. This attribute data may comprise any suitable geometry data, for example.
In some embodiments, the system 800 may also comprise a vertex shader operable to perform the vertex shading process using data obtained from the fetch shader. The vertex shading process may comprise any of the processes described above, or indeed any other suitable vertex shading process. The output of the vertex shader may be provided to a rasterizer, or any other suitable image processing unit, for use in generating an image for display.
In some embodiments, the system 800 comprises an index buffer operable to store data identifying vertex indices for use in rendering images. This data may be used to identify locations within the BVH by using a suitable form of address conversion—for example, converting coordinates of vertex indices into the location of a particular triangle within the BVH structure, such as identifying a particular bounding volume and location within the bounding volume.
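A minimal sketch of such an address conversion is given below, using a lookup table that maps a triangle identifier to a bounding volume and a position within that volume; a practical implementation might instead derive the location arithmetically from the BVH layout, and all names here are illustrative.

```cpp
#include <cstdint>
#include <unordered_map>

// Maps identifiers used by the index buffer to locations within the BVH.
struct BVHLocation { std::uint32_t nodeIndex; std::uint32_t slot; };

class AddressConverter {
public:
    void map(std::uint32_t triangleId, BVHLocation loc) { table_[triangleId] = loc; }
    BVHLocation convert(std::uint32_t triangleId) const { return table_.at(triangleId); }
private:
    std::unordered_map<std::uint32_t, BVHLocation> table_;
};

int main() {
    AddressConverter conv;
    conv.map(42, {3, 1});   // triangle 42 lives in node 3, slot 1
    const BVHLocation loc = conv.convert(42);
    return (loc.nodeIndex == 3 && loc.slot == 1) ? 0 : 1;
}
```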
In some embodiments, the system 800 comprises a vertex buffer operable to store attribute data for one or more vertices. This may be an alternative, or an additional, source of attribute data; in some embodiments the BVH stored by the BVH storage unit comprises attribute data for the triangles.
The arrangement described above may be used to implement an image generation method comprising the steps discussed below.
A step 900 comprises storing a bounding volume hierarchy, BVH, comprising a hierarchical structure of a plurality of triangles describing a virtual scene.
A step 910 comprises storing data for identifying the location of one or more triangles within the BVH.
A step 920 comprises identifying vertex indices for use in rendering images.
A step 930 comprises obtaining one or more triangles within the BVH corresponding to those vertex indices.
A step 940 comprises providing vertex data corresponding to those triangles to a vertex shader operable to perform a vertex shading process.
The techniques described above may be implemented in hardware, software or combinations of the two. In the case that a software-controlled data processing apparatus is employed to implement one or more features of the embodiments, it will be appreciated that such software, and a storage or transmission medium such as a non-transitory machine-readable storage medium by which such software is provided, are also considered as embodiments of the disclosure.
Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
Embodiments of the present disclosure may be implemented according to one or more of the following numbered clauses:
1. An image generation system comprising:
a BVH storage unit operable to store a bounding volume hierarchy, BVH, comprising a hierarchical structure of a plurality of triangles describing a virtual scene;
a BVH position buffer operable to store data for identifying the location of one or more triangles within the BVH; and
a fetch shader operable to identify vertex indices for use in rendering images, to obtain one or more triangles within the BVH corresponding to those vertex indices, and to provide vertex data corresponding to those triangles to a vertex shader operable to perform a vertex shading process.
2. An image generation system according to clause 1, wherein the locations of triangles within the BVH are indicated by an index identifying a first vertex of a triangle and a respective offset for each other vertex in the triangle.
3. An image generation system according to clause 1, wherein the locations of triangles within the BVH are indicated by an index, identifying a first vertex in a triangle, and a bit value.
4. An image generation system according to any preceding clause, comprising a vertex shader operable to perform the vertex shading process using data obtained from the fetch shader.
5. An image generation system according to any preceding clause, wherein the vertex shading process comprises one or more of the following:
modifying a position of a vertex;
modifying a colour of a vertex;
modifying an orientation of a vertex; and
converting a three-dimensional vertex location into a two-dimensional screen position.
6. An image generation system according to any preceding clause, wherein the fetch shader is operable to perform a transform on the obtained location information for the one or more triangles.
7. An image generation system according to clause 6, wherein the fetch shader is operable to perform a transform to convert the obtained location information into coordinates in object space.
8. An image generation system according to any preceding clause, comprising an index buffer operable to store data identifying vertex indices for use in rendering images.
9. An image generation system according to clause 8, comprising a vertex buffer operable to store attribute data for one or more vertices.
10. An image generation system according to any of clause 1-7, wherein the BVH comprises attribute data for the triangles.
11. An image generation system according to any preceding clause, wherein the fetch shader is operable to obtain attribute data, and wherein the obtained attribute data is used to perform the vertex shading process.
12. An image generation method comprising:
storing a bounding volume hierarchy, BVH, comprising a hierarchical structure of a plurality of triangles describing a virtual scene;
storing data for identifying the location of one or more triangles within the BVH;
identifying vertex indices for use in rendering images;
obtaining one or more triangles within the BVH corresponding to those vertex indices; and
providing vertex data corresponding to those triangles to a vertex shader operable to perform a vertex shading process.
13. Computer software which, when executed by a computer, causes the computer to carry out the method of clause 12.
14. A non-transitory machine-readable storage medium which stores computer software according to clause 13.
Number | Date | Country | Kind |
---|---|---|---|
2003031.8 | Mar 2020 | GB | national |