This disclosure relates to virtual reality (VR), augmented reality (AR), and hybrid reality technologies.
Mixed reality (MR), sometimes referred to as hybrid reality, is the term commonly applied to the merging of the real or physical world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact. Mixed reality visualizations and environments can exist in the physical world or the virtual world, and can include a mix of reality, VR, and AR via immersive technology.
An aspect of the disclosure provides a method for rendering a virtual object in a virtual environment on a user device. The method can include determining a pose of a user. The method can include determining a viewing area of the user in the virtual environment based on the pose. The method can include defining a viewing region within the viewing area, the viewing region having a volume described by an angular displacement from a vector extending outward from the user in the virtual environment. The method can include identifying a virtual object in the viewing area of the user. The method can include causing the user device to display a version, of a plurality of versions, of the virtual object based on one or more of a distance to the virtual object, a viewing region in relation to the virtual object, an interaction with the virtual object, and a reference to the virtual object.
Another aspect of the disclosure provides a non-transitory computer-readable medium comprising instructions for rendering a virtual object in a virtual environment on a user device. When executed by one or more processors, the instructions cause the one or more processors to determine a pose of a user. The instructions cause the one or more processors to determine a viewing area of the user in the virtual environment based on the pose. The instructions cause the one or more processors to define a viewing region within the viewing area, the viewing region having a volume described by an angular displacement from a vector extending outward from the user in the virtual environment. The instructions cause the one or more processors to identify a virtual object in the viewing area of the user. The instructions cause the one or more processors to cause the user device to display a version, of a plurality of versions, of the virtual object based on one or more of a distance to the virtual object, a viewing region in relation to the virtual object, an interaction with the virtual object, and a reference to the virtual object.
Other features and benefits will be apparent to one of ordinary skill with a review of the following description.
The details of embodiments of the present disclosure, both as to their structure and operation, can be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure describes different systems and methods that allow each user in a mixed reality environment to render virtual objects to be viewed and/or manipulated in that environment from the viewpoint of each user. As each user moves around the virtual environment, that user's perspective of each virtual object changes. A renderer must determine how to update the appearance of the virtual environment on the display of a user device each time the user moves, and it must make these decisions and update the viewing perspective in a very short duration. If the renderer can spend less time calculating the new viewing perspective for each virtual object, it can more quickly provide the updated frames for display, which improves the user experience, especially on user devices with limited processing capability. Different approaches for determining how to render virtual objects are described below. Conditions are tested, and different versions of virtual objects are selected for rendering based on the results of the tested conditions. By way of example, when a user is not looking directly at a virtual object, is not in the vicinity of the virtual object, is not interacting with the virtual object, and/or does not have permission to see all details of the virtual object, a client application should not waste processing time and power on rendering a high quality version of that virtual object. Therefore, the renderer can use a reduced quality version to represent the virtual object for as long as any of those conditions remains true.
As shown in
It is noted that the user of a VR/AR/MR/XR system is not technically “inside” the virtual environment. However, the phrase “perspective of the user” or “position of the user” is intended to convey the view or position that the user would have (e.g., via the user device) were the user inside the virtual environment. This can also be the “position of” or “perspective of” the avatar of the user within the virtual environment. It can also be the view a user would see when viewing the virtual environment via the user device.
Each of the user devices 120 includes different architectural features, and may include the features shown in
Some of the sensors 124 (e.g., inertial, optical, and location sensors) are used to track the pose (e.g., position and orientation) of a user in virtual environments and physical environments. Tracking of user position and orientation (e.g., of a user's head or eyes) is commonly used to determine viewing areas, and the viewing area is used to determine which virtual objects to render using the processor 126 for presentation to the user on a display of a user device. Tracking the positions and orientations of the user or any user input device (e.g., a handheld device) may also be used to determine interactions with virtual objects. In some embodiments, an interaction with a virtual object includes a modification (e.g., a change of color or another feature) to the virtual object that is permitted after a tracked position of the user or user input device intersects with a point of the virtual object in a geospatial map of a virtual environment, and after a user-initiated command is provided to make the desired modification.
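As an illustration of the intersection-based interaction check described above, the following Python sketch tests whether a tracked position falls inside an axis-aligned bounding box recorded in a geospatial map and permits a modification only when a user-initiated command has also been issued. The names `Aabb` and `is_interaction_permitted`, and the use of a bounding box rather than an exact point-on-object test, are illustrative assumptions rather than elements of the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Aabb:
    """Axis-aligned bounding box for a virtual object in the geospatial map."""
    min_corner: Vec3
    max_corner: Vec3

    def contains(self, point: Vec3) -> bool:
        return all(lo <= p <= hi
                   for p, lo, hi in zip(point, self.min_corner, self.max_corner))

def is_interaction_permitted(tracked_position: Vec3,
                             object_bounds: Aabb,
                             user_command_issued: bool) -> bool:
    """A modification (e.g., a color change) is permitted only when the tracked
    position of the user or input device intersects the object's mapped bounds
    AND the user has issued a command requesting the modification."""
    return object_bounds.contains(tracked_position) and user_command_issued

# Example: a handheld controller reaching into the object's bounds.
bounds = Aabb(min_corner=(0.0, 0.0, 0.0), max_corner=(1.0, 1.0, 1.0))
print(is_interaction_permitted((0.5, 0.2, 0.9), bounds, user_command_issued=True))  # True
```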
Some of the sensors 124 (e.g., cameras and other optical sensors of AR devices) may also be used to capture information about a physical environment, which is used to generate virtual representations of that information, or to generate geospatial maps of the physical environment that can be used to determine where and how to present virtual objects among physical objects of the physical environment. Such virtual representations and geospatial maps may be created using any known approach. In one approach, many two-dimensional images are captured by a camera of an AR device; those two-dimensional images are used to identify three-dimensional points in the physical environment; and the three-dimensional points are used to determine relative positions, relative spacing, and structural characteristics (e.g., surfaces and depths) of physical objects in the physical environment. Other optical sensors may be used in addition to a camera (e.g., a depth sensor). Textures, colors, and other features of physical objects or physical environments can be determined by analysis of individual images.
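Where a depth sensor is available, one way the three-dimensional points mentioned above could be obtained is by back-projecting each depth reading through a pinhole camera model. The sketch below is only a rough illustration of that idea; the function name and the intrinsic parameters (fx, fy, cx, cy) are assumed values, and a practical system would typically rely on established reconstruction libraries.

```python
from typing import List, Tuple

def backproject_depth(depth: List[List[float]],
                      fx: float, fy: float,
                      cx: float, cy: float) -> List[Tuple[float, float, float]]:
    """Convert a depth image (meters per pixel) into 3-D points in the camera frame
    using the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z."""
    points = []
    for v, row in enumerate(depth):          # v is the pixel row index
        for u, z in enumerate(row):          # u is the pixel column index, z the depth
            if z <= 0.0:                     # skip invalid or missing readings
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Example: a tiny 2x2 depth image with assumed intrinsics.
print(backproject_depth([[1.0, 1.2], [0.0, 2.0]], fx=500.0, fy=500.0, cx=0.5, cy=0.5))
```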
Examples of the user devices 120 include VR, AR, and general computing devices with displays, including head-mounted displays, sensor-packed wearable devices with a display (e.g., glasses), mobile phones, tablets, desktop computers, laptop computers, or other computing devices that are suitable for carrying out the functionality described in this disclosure.
The methods or processes outlined and described herein, and particularly those that follow below, can be performed by one or more processors of the platform 110, either alone or in connection with the user device(s) 120. The processes can also be performed using distributed or cloud-based computing.
A viewing area for the user that extends from a position 221 of the user is shown. The viewing area defines the parts of the virtual environment that are displayed to that user by a user device operated by the user. Example user devices include any of the mixed reality user devices 120. Other parts of the virtual environment that are not in the viewing area for a user are not displayed to the user until the user's pose changes to create a new viewing area that includes those other parts. A viewing area can be determined using different techniques known in the art. One technique involves: (i) determining the position and the orientation of a user in a virtual environment (e.g., the orientation of the user's head or eyes); (ii) determining the outer limits of peripheral vision for the user (e.g., x degrees of vision in different directions from a vector extending outward along the user's orientation, where x is a number such as 45, or another number that depends on the display of the user device or other factors); and (iii) defining the volume enclosed by the peripheral vision as the viewing area. A volumetric viewing area is illustrated in
After a viewing area is defined, a viewing region for a user can be defined for use in some embodiments that are described later, including use in determining how to render virtual objects that are inside and outside the viewing region. A viewing region is smaller than the viewing area of the user. Different shapes and sizes of viewing regions are possible. A preferred shape is a volume (e.g., conical, rectangular or other prism) that extends from the position 221 of the user along the direction of the orientation of the user. The cross-sectional area of the volume that is perpendicular to the direction of the orientation may expand or contract as the volume extends outward from the user's position 221. A viewing region can be determined using different techniques known in the art. One technique involves: (i) determining the position and the current orientation of a user in a virtual environment (e.g., the orientation of the user's head or eyes); (ii) determining outer limits of the viewing region (e.g., x degrees of vision in different directions from a vector extending outward along the user's current orientation); and (iii) defining the volume enclosed by the outer limits as the viewing region. The value of x can vary. For example, since users may prefer to reorient their head from the current orientation to see an object that is located more than 10-15 degrees from the current orientation, the value of x may be set to 10 or 15 degrees. The value of x can be predetermined or provisioned with a given system. The value of x can also be user-defined.
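The angular-displacement tests described above for the viewing area and the narrower viewing region can both be expressed as a single cone-containment check, as in the minimal Python sketch below. The function name `within_angle` and the example half-angles (45 degrees for the viewing area, 15 degrees for the viewing region) are assumptions used only for illustration.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def _normalize(v: Vec3) -> Vec3:
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return (v[0] / n, v[1] / n, v[2] / n)

def within_angle(user_position: Vec3, user_forward: Vec3,
                 object_position: Vec3, half_angle_deg: float) -> bool:
    """Return True when the object lies within half_angle_deg of the vector
    extending outward from the user along the user's orientation."""
    to_object = _normalize(tuple(o - p for o, p in zip(object_position, user_position)))
    forward = _normalize(user_forward)
    cosine = sum(a * b for a, b in zip(forward, to_object))
    return cosine >= math.cos(math.radians(half_angle_deg))

# An object about 17 degrees off the orientation vector falls inside a
# 45-degree viewing area but outside a 15-degree viewing region.
pos, fwd, obj = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.3, 0.0, 1.0)
print(within_angle(pos, fwd, obj, 45.0))  # True
print(within_angle(pos, fwd, obj, 15.0))  # False
```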
By way of example, a volumetric viewing region is illustrated in
As shown in
Different versions of the virtual object 231 are described herein as having different levels of quality. For example, respective low and high levels of quality can be achieved by using fewer or more triangles or polygons, using coarse or precise meshes, using fewer or more colors or textures, using a static image or an animated image, removing or including details of the virtual object, pixelating or not pixelating details of the virtual object, or otherwise varying features of the virtual object. In some embodiments, two versions of a virtual object are maintained by the platform 110 or the user device 120. One version is a higher quality version that is a complex representation of the virtual object, and the other is a lower quality version that is a simplified representation of the virtual object. The simplified version could be lower quality in that the virtual object is a unified version of all of its components, such that the lower quality version cannot be disassembled. Alternatively, the simplified version could be any of the lower levels of quality listed above, or some other version different than the complex version.
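The following minimal sketch shows one way the platform 110 or user device 120 could maintain two versions of a virtual object and hand back one of them on request. The class name, field names, and asset file names are illustrative assumptions, not terms used in this disclosure.

```python
from dataclasses import dataclass

@dataclass
class VirtualObjectVersions:
    """Holds a complex (higher quality) and a simplified (lower quality)
    representation of the same virtual object."""
    object_id: str
    high_quality: str   # e.g., a detailed, textured mesh whose components can be disassembled
    low_quality: str    # e.g., a unified, coarse mesh that cannot be disassembled

    def select(self, use_high_quality: bool) -> str:
        return self.high_quality if use_high_quality else self.low_quality

# Example usage with hypothetical asset names.
engine = VirtualObjectVersions("object-231", "engine_detailed.glb", "engine_unified.glb")
print(engine.select(use_high_quality=False))  # engine_unified.glb
```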
As shown, a pose (e.g., position, orientation) of a user interacting with a virtual environment is determined (510) (by, e.g., the platform 110), and a viewing area of the user in the virtual environment is determined (520)—e.g., based on the user's pose, as known in the art. A virtual object in the viewing area of the user is identified (530). Based on evaluation of one or more conditions (e.g., distance, angle, etc.), a version of the virtual object from among two or more versions of the virtual object to display in the viewing area is selected or generated (540), and the selected or generated version of the virtual object is rendered for display in the viewing area of the user (550). In some embodiments, the rendering of block 550 can be performed by the user device 120. In some other embodiments, the rendering (550) can be performed cooperatively between the platform 110 and the user device 120. Different evaluations of conditions during step 540 are shown in
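A rough, self-contained Python sketch of blocks 530 through 550 follows. It simplifies block 540 to a single distance condition and substitutes a print statement for actual rendering; all names and the threshold value are assumptions made for illustration.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SceneObject:
    name: str
    position: Vec3
    high_quality: str
    low_quality: str

def render_frame(user_position: Vec3, objects: List[SceneObject],
                 threshold: float = 5.0) -> None:
    """Blocks 530-550 in miniature: for each object in the viewing area, evaluate a
    (simplified, distance-only) condition and 'render' the selected version by printing it."""
    for obj in objects:                                             # 530: objects in the viewing area
        near = math.dist(user_position, obj.position) <= threshold  # 540: condition evaluation
        version = obj.high_quality if near else obj.low_quality
        print(f"rendering {obj.name} -> {version}")                 # 550: stand-in for display

# Example: a nearby lamp gets its higher quality version, a distant car does not.
scene = [SceneObject("lamp", (1.0, 0.0, 2.0), "lamp_hi.glb", "lamp_lo.glb"),
         SceneObject("car", (30.0, 0.0, 40.0), "car_hi.glb", "car_lo.glb")]
render_frame((0.0, 0.0, 0.0), scene)
```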
A first evaluation involves determining if a distance between the position of the user and the virtual object is within a threshold distance (540a). If the distance is within the threshold distance, the version is a higher quality version compared to a lower quality version. If the distance is not within the threshold distance, the version is the lower quality version.
A second evaluation involves determining if the virtual object is positioned in a viewing region of the user (540b). If the virtual object is positioned in the viewing region, the version is a higher quality version compared to a lower quality version. If the virtual object is not positioned in the viewing region, the version is the lower quality version. Alternatively, instead of determining if the virtual object is positioned in a viewing region of the user, step 540b could simply be a determination of whether the user is looking at the virtual object. If the user is looking at the virtual object, the version is the higher quality version. If the user is not looking at the virtual object, the version is the lower quality version.
A third evaluation involves determining if the user or another user is interacting with the virtual object (540c). If the user or another user is interacting with the virtual object, the version is a higher quality version compared to a lower quality version. If the user or another user is not interacting with the virtual object, the version is the lower quality version. By way of example, interactions may include looking at the virtual object, pointing to the virtual object, modifying the virtual object, appending content (e.g., notations) to the virtual object, moving the virtual object, or other interactions.
A fourth evaluation involves determining if the user or another user is communicatively referring to the virtual object (540d). If the user or another user is communicatively referring to the virtual object (e.g., talking about or referencing the object), the version is a higher quality version compared to a lower quality version. If the user or another user is not communicatively referring to the virtual object, the version is the lower quality version. Examples of when the user or another user is communicatively referring to the virtual object include recognizing speech or text that references the virtual object or a feature of the virtual object.
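One possible way to combine the four evaluations 540a through 540d in code is sketched below, consistent with selecting the higher quality version when one or more of the conditions is met. Each boolean argument stands in for the corresponding determination described above, and the function name is an assumption.

```python
def select_version(distance_to_object: float,
                   threshold_distance: float,
                   in_viewing_region: bool,
                   being_interacted_with: bool,
                   being_referred_to: bool) -> str:
    """Return 'high' when any of evaluations 540a-540d favors the higher quality
    version of the virtual object; otherwise return 'low'."""
    within_threshold = distance_to_object <= threshold_distance   # 540a: distance check
    if (within_threshold or in_viewing_region                     # 540b: viewing region
            or being_interacted_with                              # 540c: interaction
            or being_referred_to):                                # 540d: communicative reference
        return "high"
    return "low"

# Example: a distant object that another user is talking about (540d) still
# gets the higher quality version.
print(select_version(12.0, 5.0, in_viewing_region=False,
                     being_interacted_with=False, being_referred_to=True))  # high
```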
Another evaluation not shown in
In some embodiments of
In some embodiments, an invisible volume is generated around each virtual object, or an invisible boundary is generated in between the position 221 of the user and the space occupied by the virtual object 231. The size of the volume can be set to the size of the virtual object 231 or larger. The size of the boundary may vary depending on the desired implementation. The volume or the boundary may be used to determine which version of the virtual object to render. For example, if the user is looking at, pointing to, or positioned at a location within the volume, then the virtual object is rendered using the higher quality version. Otherwise, the object is rendered using the lower quality version.
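As a sketch of how the invisible volume could be used, the code below represents the volume as a bounding sphere slightly larger than the virtual object and performs a simple ray/sphere test for the user's gaze or pointing direction. The sphere representation, function name, and padding value are assumptions made for illustration.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def gaze_hits_volume(user_position: Vec3, gaze_direction: Vec3,
                     volume_center: Vec3, volume_radius: float) -> bool:
    """Ray/sphere test: True when the user's gaze (or pointing) ray passes
    through the invisible volume generated around the virtual object."""
    to_center = tuple(c - p for c, p in zip(volume_center, user_position))
    norm = math.sqrt(sum(d * d for d in gaze_direction)) or 1.0
    direction = tuple(d / norm for d in gaze_direction)
    t = sum(a * b for a, b in zip(to_center, direction))  # distance along the ray to closest approach
    if t < 0.0:                                           # the volume is behind the user
        return False
    closest = tuple(p + t * d for p, d in zip(user_position, direction))
    return math.dist(closest, volume_center) <= volume_radius

# Volume sized to the object plus padding; the gaze below passes within it.
print(gaze_hits_volume((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                       volume_center=(0.2, 0.0, 4.0), volume_radius=0.6))  # True
```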
Methods of this disclosure may be implemented by hardware, firmware or software. One or more non-transitory machine-readable media embodying program instructions that, when executed by one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any of the methods or operations described herein are contemplated. As used herein, machine-readable media includes all forms of machine-readable media (e.g. non-volatile or volatile storage media, removable or non-removable media, integrated circuit media, magnetic storage media, optical storage media, or any other storage media) that may be patented under the laws of the jurisdiction in which this application is filed, but does not include machine-readable media that cannot be patented under the laws of the jurisdiction in which this application is filed.
By way of example, machines may include one or more computing device(s), processor(s), controller(s), integrated circuit(s), chip(s), system(s) on a chip, server(s), programmable logic device(s), other circuitry, and/or other suitable means described herein (e.g., the platform 110, the user device 120) or otherwise known in the art. Systems that include one or more machines or the one or more non-transitory machine-readable media embodying program instructions that, when executed by the one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any methods described herein are also contemplated.
Method steps described herein may be order independent, and can therefore be performed in an order different from that described. It is also noted that different method steps described herein can be combined to form any number of methods, as would be understood by one of skill in the art. It is further noted that any two or more steps described herein may be performed at the same time. Any method step or feature disclosed herein may be expressly restricted from a claim for various reasons like achieving reduced manufacturing costs, lower power consumption, and increased processing efficiency. Method steps can be performed at any of the system components shown in the figures.
Systems comprising one or more modules that perform, are operable to perform, or adapted to perform different method steps/stages disclosed herein are also contemplated, where the modules are implemented using one or more machines listed herein or other suitable hardware. When two things (e.g., modules or other features) are “coupled to” each other, those two things may be directly connected together, or separated by one or more intervening things. Where no lines and intervening things connect two particular things, coupling of those things is contemplated in at least one embodiment unless otherwise stated. Where an output of one thing and an input of another thing are coupled to each other, information sent from the output is received by the input even if the data passes through one or more intermediate things. Different communication pathways and protocols may be used to transmit information disclosed herein. Information like data, instructions, commands, signals, bits, symbols, and chips and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, or optical fields or particles.
The words comprise, comprising, include, including and the like are to be construed in an inclusive sense (i.e., not limited to) as opposed to an exclusive sense (i.e., consisting only of). Words using the singular or plural number also include the plural or singular number, respectively. The word or and the word and, as used in the Detailed Description, cover any of the items and all of the items in a list. The words some, any and at least one refer to one or more. The term may is used herein to indicate an example, not a requirement—e.g., a thing that may perform an operation or may have a characteristic need not perform that operation or have that characteristic in each embodiment, but that thing performs that operation or has that characteristic in at least one embodiment.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/580,128, filed Nov. 1, 2017, entitled “SYSTEMS AND METHODS FOR DETERMINING HOW TO RENDER A VIRTUAL OBJECT BASED ON ONE OR MORE CONDITIONS,” the contents of which are hereby incorporated by reference in their entirety.