Dynamic Foveated Point Cloud Rendering System

Information

  • Patent Application
  • Publication Number
    20240331270
  • Date Filed
    March 30, 2023
  • Date Published
    October 03, 2024
Abstract
A rendering system that includes at least one processor configured to receive a plurality of data points, receive at least one priority assignment, assign at least one priority value to the plurality of data points based on the at least one priority assignment, and render the plurality of data points based on the at least one priority value.
Description
BACKGROUND OF THE INVENTION

Rendering dense point clouds is computationally intensive because every point is rendered individually; however, the higher the density of the point cloud, the better the fidelity for a viewer. Current techniques for rendering dense point clouds involve identifying the points that are visible in the total frustum of the camera and only loading and rendering points in that area. While these techniques simplify computation relative to rendering all possible viewing angles, they can still require intensive resource usage. A user's eyes focus on only a small portion of the screen at once, and the points loaded and rendered outside of this small area contribute little to what the user actually perceives. What is needed is a more efficient method for reducing the computational load of point cloud rendering.


SUMMARY OF THE INVENTION

The present invention relates to a method of dynamic foveated point cloud rendering.


According to an illustrative embodiment of the present disclosure, a rendering system includes at least one processor configured to receive a plurality of data points, receive at least one priority assignment, assign at least one priority value to the plurality of data points based on the at least one priority assignment, and render the plurality of data points based on the at least one priority value.


According to a further illustrative embodiment of the present disclosure, a rendering method includes receiving a plurality of data points, receiving at least one priority assignment, assigning at least one priority value to the plurality of data points based on the at least one priority assignment, and rendering the plurality of data points based on the at least one priority value.


Additional features and advantages of the present invention will become apparent to those skilled in the art upon consideration of the following detailed description of the illustrative embodiment exemplifying the best mode of carrying out the invention as presently perceived.





BRIEF DESCRIPTION OF THE DRAWINGS

Throughout the several views, like elements are referenced using like references. The elements in the figures are not drawn to scale and some dimensions are exaggerated for clarity.


The detailed description of the invention particularly refers to the accompanying figures in which:



FIG. 1 shows a flow diagram of an exemplary rendering system.



FIG. 2 shows an exemplary system rendering a 2D space.



FIG. 3 shows an exemplary system rendering a 3D space.



FIG. 4 shows an exemplary method of rendering point clouds.





DETAILED DESCRIPTION OF THE INVENTION

The embodiments of the invention described herein are not intended to be exhaustive or to limit the invention to precise forms disclosed. Rather, the embodiments selected for description have been chosen to enable one skilled in the art to practice the invention.



FIG. 1 shows a flow diagram of an exemplary rendering system 101. The system can use two primary inputs: data points 103 and priority assignments 105. Data points 103 can include a plurality of individual points, each with corresponding positional and color data. Data points 103 can be supplied from previously recorded data, or supplied in real-time by a camera system to allow contemporaneous remote viewing/navigation of a space. Priority assignments 105 can include any feature (e.g., distance from user, angle from eye-line/fovea, colors, color contrast, brightness, objects, etc.) that a user or system wants rendered at a specific level of detail. For example, priority assignments 105 can include any point within a predetermined distance (e.g., the effective rendering distance of the hardware) of the user and within a predetermined angle of the user's eye-line/fovea (e.g., within 20 degrees); points matching these criteria will be prioritized for higher fidelity rendering. As another example, an external priority assignment source (e.g., a safety system) can be configured to automatically pull a list of hazards from a database and identify features correlating to the hazards. Each feature can be assigned any number of differing priority levels depending on various factors (e.g., likelihood of a negative event occurring, severity of impact, difficulty of detection, etc.). When the system 101 runs, the source can supply the features as priority assignments 105 for priority algorithm 107. In another example combining the previous two, an exemplary system can prioritize rendering points within the user's eye-line, and within that eye-line prioritize rendering the hazard features. Priority algorithm 107 can assign priorities to various locations within the virtual environment and then pass the priorities to rendering algorithm 109. Rendering algorithm 109 uses the priorities to render particular points with more or less detail. In at least some examples, rendering algorithm 109 can also provide feedback to priority algorithm 107.
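

A minimal sketch of this two-input flow, written in Python with hypothetical names (the publication does not specify any particular implementation; DataPoint, PriorityAssignment, and the detail model below are illustrative assumptions):

    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class DataPoint:
        position: Tuple[float, float, float]  # (x, y, z) in world space
        color: Tuple[int, int, int]           # (r, g, b)

    @dataclass
    class PriorityAssignment:
        matches: Callable[["DataPoint"], bool]  # does a point match this feature?
        priority: float                         # priority conferred on matches

    def priority_algorithm(points, assignments):
        """Give each point the highest priority of any assignment it matches."""
        return [max((a.priority for a in assignments if a.matches(p)), default=0.0)
                for p in points]

    def rendering_algorithm(points, priorities):
        """Produce a draw list pairing each point with a detail level
        proportional to its priority (a stand-in for the real renderer)."""
        return [(p, prio) for p, prio in zip(points, priorities)]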


For example, if rendering processing capacity is insufficient to render all features at the specified level of detail, rendering algorithm 109 can notify priority algorithm 107 of the lack of resources. Priority algorithm 107 can then readjust the priorities and send new priorities to rendering algorithm 109. This readjustment sequence can occur more than once. Exemplary systems can use an additive approach, wherein universally minimal rendering is used as a starting point and rendering detail is added with each cycle of communication between priority algorithm 107 and rendering algorithm 109. Once no more processing power is available for rendering, the communication loop ends. Exemplary systems can instead pre-filter the data points 103 such that the priority algorithm 107 assigns each data point a priority value based on the priority assignments 105; the pre-filtered data points are then sent to the rendering algorithm, where they are rendered according to their pre-filtered priorities, and further cycles are not required. Exemplary systems can also combine the additive and pre-filter approaches by pre-filtering for certain priorities between each communication cycle and adding to the rendering load until a predetermined threshold is met (e.g., insufficient processing capacity remains to render additional detail).
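

One way the additive loop could be realized, sketched under the assumption that each bounding volume carries a priority, a per-step rendering cost, and a maximum detail level (all hypothetical quantities; the publication does not prescribe a budget model):

    def additive_render_loop(volumes, capacity):
        """Start every volume at minimal detail, then repeatedly spend the
        remaining rendering capacity on the highest-priority volume that can
        still be upgraded. The loop ends when no affordable upgrade remains."""
        detail = {id(v): 1 for v in volumes}   # universally minimal start
        remaining = capacity
        while True:
            candidates = [v for v in volumes
                          if v["step_cost"] <= remaining
                          and detail[id(v)] < v["max_detail"]]
            if not candidates:
                break                          # no more processing power
            best = max(candidates, key=lambda v: v["priority"])
            detail[id(best)] += 1
            remaining -= best["step_cost"]
        return detail

    # e.g., additive_render_loop([{"priority": 0.9, "step_cost": 10,
    #                              "max_detail": 8}], capacity=50)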


In exemplary methods, the data points 103 can be a point cloud stored in a spatially ordered tree structure (e.g., an octree) with a defined index to efficiently traverse the tree. The leaf nodes in the tree are individual points with position and color information. The non-leaf nodes, or inner nodes, of the tree contain references to leaf children as well as the bounding volume that encapsulates the leaf children in 3D space.
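

A minimal sketch of such a tree's node types (Python, with hypothetical names; an actual octree would add child ordering and the traversal index mentioned above):

    from dataclasses import dataclass, field
    from typing import List, Tuple, Union

    @dataclass
    class LeafNode:
        position: Tuple[float, float, float]    # individual point position
        color: Tuple[int, int, int]             # individual point color

    @dataclass
    class BoundingVolume:
        min_corner: Tuple[float, float, float]  # axis-aligned box that
        max_corner: Tuple[float, float, float]  # encapsulates all leaf children

    @dataclass
    class InnerNode:
        bounds: BoundingVolume
        children: List[Union["InnerNode", LeafNode]] = field(default_factory=list)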



FIG. 2 shows an exemplary system rendering a 2D space. While 2D rendering is less resource intensive, exemplary systems can still be used for prioritizing particular rendering targets.



FIG. 3 shows an exemplary system rendering a 3D space. Two 3D vectors can be computed from the data the eye tracking system provides. These vectors are in the coordinate space of the virtual world and represent the direction and position of the user's eyes within the virtual space. For simplification, and as a further optimization step, some systems could derive a single vector to represent these two eye-view-vectors at the expense of fidelity, for example by averaging the positions and directions of the actual eye-view-vectors. The eye tracking data also allows a focal depth to be calculated: the distance at which the two eye-view-vectors converge (i.e., the focal point), which indicates how far ahead the user is looking.
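

A sketch of that vector math (Python/NumPy; the closest-approach construction for the focal point is an assumption, since the publication only states that the vectors converge):

    import numpy as np

    def combined_gaze(p_left, d_left, p_right, d_right):
        """Single gaze ray approximating both unit-length eye-view-vectors
        by averaging positions and directions (trades fidelity for speed)."""
        p = (p_left + p_right) / 2.0
        d = d_left + d_right
        return p, d / np.linalg.norm(d)

    def focal_point(p_left, d_left, p_right, d_right):
        """Midpoint of the closest approach between the two eye rays; real
        eye rays rarely intersect exactly, so this approximates convergence."""
        r = p_left - p_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        d, e = d_left @ r, d_right @ r
        denom = a * c - b * b
        if abs(denom) < 1e-9:
            return None                    # (nearly) parallel: no convergence
        t_left = (b * e - c * d) / denom
        t_right = (a * e - b * d) / denom
        return (p_left + t_left * d_left + p_right + t_right * d_right) / 2.0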


Exemplary systems use the eye-tracking data to efficiently access the data stored in the tree and reduce loading and rendering times. At a high level, bounding volumes defined within the tree structure that are intersected by the eye-view-vectors contain points that are more important to visualize, while bounding volumes outside those vectors are less important. Similarly, points contained in bounding volumes beyond the focal point are less important. A priority assignment algorithm assigns priorities to various levels of inner nodes of the tree based on the intersection. The priority of a given inner node can vary depending on what criteria the particular priority assignment algorithm uses. The simplest method would reduce the priority linearly with distance from the eye-view-vectors; a more effective solution would use a non-linear assignment based on the fidelity-sensing capabilities of the human eye.
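

One hypothetical non-linear assignment, sketched with an acuity-inspired falloff (the 2-degree foveal plateau and the falloff constants are illustrative assumptions, not values from the publication):

    import numpy as np

    def node_priority(node_center, eye_pos, gaze_dir, focal_dist):
        """Priority in (0, 1]: full inside the foveal cone, falling off
        non-linearly with angular distance from the gaze direction, and
        further reduced for volumes beyond the focal point."""
        to_node = node_center - eye_pos
        dist = np.linalg.norm(to_node)
        if dist == 0.0:
            return 1.0
        cos_angle = np.clip((to_node / dist) @ gaze_dir, -1.0, 1.0)
        eccentricity = np.degrees(np.arccos(cos_angle))
        # Full priority within ~2 degrees of gaze, then roughly inverse
        # with eccentricity (one common foveation model).
        priority = 1.0 / (1.0 + max(0.0, eccentricity - 2.0) / 2.0)
        if dist > focal_dist:              # beyond the focal point
            priority *= focal_dist / dist
        return priority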


Once priorities of the bounding volumes have been assigned, the system can use this data to load points within these volumes only up to the level of detail associated with the priority. Methods for mapping priority to detail level can vary. Optimizations can be made that take into account the depth of non-leaf nodes, general point density, available system resources, etc. Other systems can be combined that add or subtract priority for specific volumes depending on the application; for example, specific points of interest can be forced to always render in higher detail. Based on these priorities, the detailed point data is loaded from slow, long-term storage (e.g., a hard drive) into smaller, faster memory (e.g., RAM/GPU RAM). A rendering algorithm then takes the priorities of the volumes and renders the points accordingly. The rendering algorithm may use data other than just the priority to determine how the specific leaf-node points are rendered, such as distance from camera, focal point, etc. The result is a system that dynamically loads and renders high-resolution point-cloud data only where the user is looking, while data outside the central vision is loaded and rendered at decreased resolution. This dramatically decreases the cost of rendering points the user cannot perceive, freeing capacity to increase the resolution of what the user is actually seeing without negatively affecting the experience.
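

A sketch of that loading step, assuming each inner node carries the priority computed above, a hypothetical load_points(level) method that fetches its points from long-term storage up to a given octree depth, and a point budget standing in for available fast memory:

    def detail_level(priority, max_level, point_of_interest=False):
        """Map a priority in (0, 1] to an octree depth; points of interest
        can be forced to always load/render at full detail."""
        if point_of_interest:
            return max_level
        return max(1, round(priority * max_level))

    def load_for_rendering(inner_nodes, max_level, point_budget):
        """Load points into fast memory, highest-priority volumes first,
        each volume only up to the detail level its priority earns."""
        loaded = []
        for node in sorted(inner_nodes, key=lambda n: n.priority, reverse=True):
            level = detail_level(node.priority, max_level,
                                 getattr(node, "point_of_interest", False))
            pts = node.load_points(level)  # hypothetical disk -> RAM fetch
            if len(loaded) + len(pts) > point_budget:
                break                      # fast-memory budget exhausted
            loaded.extend(pts)
        return loaded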



FIG. 4 shows an exemplary method for rendering a point cloud. At step 401, a rendering system comprising at least one processor is provided. At step 403, a plurality of data points is received. At step 405, at least one priority assignment is received. At step 407, at least one priority value is assigned to the plurality of data points based on the at least one priority assignment. At step 409, the plurality of data points is rendered based on the at least one priority value.
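

Using the hypothetical sketches above, these steps could run end to end as follows (the points and the near-field assignment are made-up example data):

    points = [DataPoint((0.0, 0.0, 1.0), (255, 0, 0)),        # step 403
              DataPoint((5.0, 0.0, 9.0), (0, 255, 0))]
    near = PriorityAssignment(matches=lambda p: p.position[2] < 5.0,
                              priority=1.0)                    # step 405
    priorities = priority_algorithm(points, [near])            # step 407
    draw_list = rendering_algorithm(points, priorities)        # step 409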


Although the invention has been described in detail with reference to certain preferred embodiments, variations and modifications exist within the spirit and scope of the invention as described and defined in the following claims.

Claims
  • 1. A rendering system comprising: at least one processor configured to: receive a plurality of data points; receive at least one priority assignment; assign at least one priority value to the plurality of data points based on the at least one priority assignment; and render the plurality of data points based on the at least one priority value.
  • 2. The system of claim 1, wherein the at least one processor is further configured to: determine remaining rendering processor capacity; and assign at least one priority value to the plurality of data points based on the remaining rendering processor capacity and the at least one priority assignment.
  • 3. The system of claim 1, wherein each data point of the plurality of data points comprises positional coordinates and a color value.
  • 4. The system of claim 1, wherein the at least one priority assignment comprises distance from a user and angle from the user's line of sight.
  • 5. The system of claim 4, wherein the at least one priority assignment further comprises at least one object of interest.
  • 6. The system of claim 4, wherein the at least one priority assignment further comprises at least one location of interest.
  • 7. The system of claim 1, further comprising a priority assignment source configured to transfer the at least one priority assignment to the at least one processor.
  • 8. A method of rendering comprising: (a) receiving a plurality of data points; (b) receiving at least one priority assignment; (c) assigning at least one priority value to the plurality of data points based on the at least one priority assignment; and (d) rendering the plurality of data points based on the at least one priority value.
  • 9. The method of claim 8, further comprising: (e) determining remaining rendering processor capacity; and (f) assigning at least one priority value to the plurality of data points based on the remaining rendering processor capacity and the at least one priority assignment.
  • 10. The method of claim 9, further comprising: (g) repeating steps (e) and (f) until the remaining rendering processor capacity is below a predetermined threshold.
  • 11. The method of claim 10, wherein the predetermined threshold comprises a minimum processing capacity.
  • 12. The method of claim 8, wherein the at least one priority assignment comprises distance from a user and angle from the user's line of sight.
  • 13. The method of claim 12, wherein the at least one priority assignment further comprises at least one object of interest.
  • 14. The method of claim 12, wherein the at least one priority assignment further comprises at least one location of interest.
  • 15. A method of rendering comprising: (a) providing at least one processor configured to: receive a plurality of data points; receive at least one priority assignment; assign at least one priority value to the plurality of data points based on the at least one priority assignment; and render the plurality of data points based on the at least one priority value; (b) receiving a plurality of data points; (c) receiving at least one priority assignment; (d) assigning at least one priority value to the plurality of data points based on the at least one priority assignment; and (e) rendering the plurality of data points based on the at least one priority value.
  • 16. The method of claim 15, further comprising: (f) determining remaining rendering processor capacity; and (g) assigning at least one priority value to the plurality of data points based on the remaining rendering processor capacity and the at least one priority assignment.
  • 17. The method of claim 16, further comprising: (h) repeating steps (f) and (g) until the remaining rendering processor capacity is below a predetermined threshold.
  • 18. The method of claim 17, wherein the predetermined threshold comprises a minimum processing capacity.
  • 19. The method of claim 15, wherein the at least one priority assignment comprises distance from a user and angle from the user's line of sight.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

The invention described herein was made in the performance of official duties by employees of the Department of the Navy and may be manufactured, used and licensed by or for the United States Government for any governmental purpose without payment of any royalties thereon. This invention (Navy Case 112200) is assigned to the United States Government and is available for licensing for commercial purposes. Licensing and technical inquiries may be directed to the Office of Research and Technical Applications, Naval Information Warfare Center Pacific, Code 72120, San Diego, CA, 92152; voice (619) 553-5118; NIWC_Pacific_T2@us.navy.mil.