Rendering dense point clouds is computationally intensive because every point is rendered individually. However, the denser the point cloud, the greater the visual fidelity for the viewer. Current techniques for rendering dense point clouds identify the points visible within the camera's frustum and load and render only the points in that area. While these techniques reduce computation relative to rendering all possible viewing angles, they can still require intensive resource usage. A user's eyes focus on only a small portion of the screen at once, and the points loaded and rendered outside this small area contribute little to the perceived view. What is needed is a more efficient method for reducing the computational load of point cloud rendering.
The present invention relates to a method of dynamic foveated point cloud rendering.
According to an illustrative embodiment of the present disclosure, a rendering system includes at least one processor configured to receive a plurality of data points, receive at least one priority assignment, assign at least one priority value to the plurality of data points based on the at least one priority assignment, and render the plurality of data points based on the at least one priority value.
According to a further illustrative embodiment of the present disclosure, a rendering method includes receiving a plurality of data points, receiving at least one priority assignment, assigning at least one priority value to the plurality of data points based on the at least one priority assignment, and rendering the plurality of data based on the at least one priority value.
Additional features and advantages of the present invention will become apparent to those skilled in the art upon consideration of the following detailed description of the illustrative embodiment exemplifying the best mode of carrying out the invention as presently perceived.
Throughout the several views, like elements are referenced using like references. The elements in the figures are not drawn to scale and some dimensions are exaggerated for clarity.
The detailed description of the invention particularly refers to the accompanying figures in which:
The embodiments of the invention described herein are not intended to be exhaustive or to limit the invention to precise forms disclosed. Rather, the embodiments selected for description have been chosen to enable one skilled in the art to practice the invention.
For example, if rendering processing capacity is insufficient to render all features at the specified level of detail, rendering algorithm 109 can notify priority algorithm 107 of the lack of resources. Priority algorithm 107 can then readjust the priorities and send new priorities to rendering algorithm 109. This readjustment sequence can occur more than once. Exemplary systems can use an additive approach, wherein universally minimal rendering is used as a starting point and rendering detail is then added with each cycle of communication between priority algorithm 107 and rendering algorithm 109. Once no more processing power is available for rendering, the communication loop ends. Exemplary systems can pre-filter the data points 103 such that the priority algorithm 107 assigns each data point a priority value based on the priority assignments 105. The pre-filtered data points are then sent to the rendering algorithm, where they are rendered according to their pre-filtered priorities and further cycles are not required. Exemplary systems can combine the additive and pre-filter approaches by pre-filtering for certain priorities between each communication cycle and adding to the rendering load until a predetermined threshold is met (e.g., insufficient processing capacity remains to render additional detail).
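For illustration only, the additive approach can be sketched as follows. The uniform one-unit cost per detail increment, the `budget` parameter, and all function names are hypothetical simplifications and not part of the claimed subject matter.

```python
# Illustrative sketch of the additive loop between priority
# algorithm 107 and rendering algorithm 109. Each cycle adds one
# unit of detail to the highest-priority point until no rendering
# capacity (the hypothetical `budget`) remains.

def assign_priorities(points, assignments):
    """Priority algorithm 107 (simplified): score each point by the
    sum of the priority assignments 105 that apply to it."""
    return {p: sum(a(p) for a in assignments) for p in points}

def additive_render(points, assignments, budget):
    """Start from universally minimal rendering, then add detail to
    the highest-priority (least-detailed) point each cycle."""
    priorities = assign_priorities(points, assignments)
    detail = {p: 1 for p in points}   # minimal starting detail
    spent = sum(detail.values())
    while spent < budget:             # loop ends when capacity is exhausted
        p = max(points, key=lambda q: (priorities[q], -detail[q]))
        detail[p] += 1
        spent += 1
    return detail

# Usage: three points, one assignment favoring point "b", budget of 5.
detail = additive_render(
    ["a", "b", "c"],
    [lambda p: 2 if p == "b" else 1],
    budget=5,
)
```

With this toy input, the two spare units of capacity both go to the favored point "b", leaving the others at minimal detail.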
In exemplary methods, the data points 103 can be a point cloud stored in a spatially ordered tree structure (e.g., an octree) with a defined index to efficiently traverse the tree. The leaf nodes in the tree are individual points with position and color information. The non-leaf nodes, or inner nodes, of the tree contain references to leaf children as well as the bounding volume that encapsulates the leaf children in 3D space.
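For illustration only, such a tree layout might be represented as follows; the class and function names are hypothetical and not part of the claimed subject matter.

```python
# Illustrative node layout for the spatially ordered tree: leaf
# nodes hold position and color, inner nodes hold child references
# plus the axis-aligned bounding volume that encapsulates them.
from dataclasses import dataclass

@dataclass
class Leaf:
    position: tuple   # (x, y, z)
    color: tuple      # (r, g, b)

@dataclass
class Inner:
    children: list    # Leaf or Inner nodes
    bounds: tuple     # ((min_x, min_y, min_z), (max_x, max_y, max_z))

def bounds_of(leaves):
    """Compute the bounding volume encapsulating a set of leaves."""
    xs, ys, zs = zip(*(leaf.position for leaf in leaves))
    return ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))

# Usage: two points grouped under one inner node.
leaves = [Leaf((0, 0, 0), (255, 0, 0)), Leaf((1, 2, 3), (0, 255, 0))]
node = Inner(leaves, bounds_of(leaves))
```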
Exemplary systems use the eye-tracking data to efficiently access the data stored in the tree and reduce loading and rendering times. At a high level, bounding volumes defined within the tree structure that are intersected by the eye-view-vectors contain points that are more important to visualize, while bounding volumes outside those vectors are less important. Similarly, points contained in bounding volumes farther from the focal point are less important. A priority assignment algorithm will assign priorities to various levels of inner nodes of the tree based on the intersection. The priority of a given inner node can vary depending on what criteria the particular priority assignment algorithm uses. The simplest method would reduce the priority based on the linear distance from the eye-view-vectors. A more effective solution would use a non-linear assignment based on the fidelity sensing capabilities of the human eye.
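For illustration only, the simplest (linear-falloff) assignment can be sketched as follows; the `max_dist` cutoff and all names are hypothetical, and a practical system would substitute a non-linear curve modeling foveal acuity.

```python
# Illustrative linear-falloff priority: score a bounding volume by
# the perpendicular distance from its center to an eye-view-vector
# (a ray from the eye origin along a unit gaze direction).
import math

def distance_to_ray(center, origin, direction):
    """Perpendicular distance from a volume's center to the ray."""
    v = [c - o for c, o in zip(center, origin)]          # origin -> center
    t = sum(a * b for a, b in zip(v, direction))         # projection length
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(center, closest)

def priority(center, origin, direction, max_dist=10.0):
    """Linear falloff: 1.0 on the gaze ray, 0.0 at max_dist or beyond."""
    d = distance_to_ray(center, origin, direction)
    return max(0.0, 1.0 - d / max_dist)

# Usage: a volume centered on the ray versus one 5 units off-axis.
on_axis = priority((0, 0, 5), (0, 0, 0), (0, 0, 1))    # 1.0
off_axis = priority((5, 0, 5), (0, 0, 0), (0, 0, 1))   # 0.5
```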
Once priorities of the bounding volumes have been assigned, the system can use this data to load points within these volumes only up to the level of detail associated with the priority. Methods for mapping priority to detail level can vary. Optimizations can be made that take into account depth of non-leaf nodes, general point density, available system resources, etc. Other systems can be combined that add/subtract priority to specific volumes depending on the application. For example, specific points of interest can be forced to always render in higher detail. Based on these priorities, the detailed point data is loaded from slow, long-term storage (e.g., a hard drive) into smaller, faster memory (e.g., RAM/GPU RAM). A rendering algorithm then takes the priorities of the volumes and renders the points accordingly. The rendering algorithm may use data other than just the priority to determine how the specific leaf-node points are rendered, such as distance from camera, focal point, etc. This results in a system that dynamically loads and renders high-resolution point-cloud data only where the user is looking. Data is loaded/rendered at a decreased resolution outside the central vision. This dramatically decreases the demand on the system from rendering points the user cannot perceive, making it possible to increase the resolution of what the user is seeing without negatively affecting the experience.
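For illustration only, one possible mapping from volume priority to traversal depth (level of detail), including forcing points of interest to full detail, is sketched below; the thresholds and names are hypothetical and not part of the claimed subject matter.

```python
# Illustrative priority-to-detail mapping: higher priority means the
# tree is traversed deeper for that volume, so more of its points
# are loaded from long-term storage into fast memory.

def detail_level(priority, max_depth=8):
    """Map a priority in [0, 1] to a traversal depth of at least 1."""
    return max(1, round(priority * max_depth))

def load_plan(volumes, max_depth=8):
    """Plan loading per volume; forced points of interest always get
    full detail regardless of their computed priority."""
    plan = {}
    for name, priority, forced in volumes:
        plan[name] = max_depth if forced else detail_level(priority, max_depth)
    return plan

# Usage: a focal volume, a peripheral volume, and a forced landmark.
plan = load_plan([
    ("focal", 1.0, False),
    ("peripheral", 0.2, False),
    ("landmark", 0.1, True),
])
```

Here the focal volume loads at full depth, the peripheral volume at reduced depth, and the landmark at full depth despite its low priority.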
Although the invention has been described in detail with reference to certain preferred embodiments, variations and modifications exist within the spirit and scope of the invention as described and defined in the following claims.
The invention described herein was made in the performance of official duties by employees of the Department of the Navy and may be manufactured, used and licensed by or for the United States Government for any governmental purpose without payment of any royalties thereon. This invention (Navy Case 112200) is assigned to the United States Government and is available for licensing for commercial purposes. Licensing and technical inquiries may be directed to the Office of Research and Technical Applications, Naval Information Warfare Center Pacific, Code 72120, San Diego, CA, 92152; voice (619) 553-5118; NIWC_Pacific_T2@us.navy.mil.