This disclosure relates to virtual reality (VR), augmented reality (AR), and hybrid reality technologies.
Mixed reality (MR), sometimes referred to as hybrid reality, is the term commonly applied to the merging of the real or physical world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact. Mixed reality visualizations and environments can exist in the physical world or the virtual world, and can include a mix of reality, VR, and AR via immersive technology.
An aspect of the disclosure provides a method for managing files associated with a virtual object in a virtual environment. The method can include receiving, at a server, a file including data related to the virtual object for transfer to a user device communicatively coupled to the server. The method can include determining, by the server, a maximum file size that the user device can receive. The method can include dividing the file into n different transmission files if a size of the file is greater than the maximum file size. The method can include transmitting the n different transmission files to the user device in a priority order based on a viewpoint of the user device relative to the virtual object.
Another aspect of the disclosure provides a non-transitory computer-readable medium comprising instructions for managing files associated with a virtual object in a virtual environment. When executed by one or more processors, the instructions cause the one or more processors to receive a file including data related to the virtual object for transfer to a user device communicatively coupled to a server. The instructions further cause the one or more processors to determine a maximum file size that the user device can receive. The instructions further cause the one or more processors to divide the file into n different transmission files if a size of the file is greater than the maximum file size. The instructions further cause the one or more processors to transmit the n different transmission files to the user device in a priority order based on a viewpoint of the user device relative to the virtual object.
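By way of illustration only, the dividing step described above can be sketched in Python as follows. This is a minimal sketch, not part of the disclosure; the names `divide_file` and `max_size` are hypothetical, and the file is assumed to be available as raw bytes.

```python
import math

def divide_file(data: bytes, max_size: int) -> list[bytes]:
    """Divide a virtual-object file into n transmission files,
    each no larger than max_size bytes."""
    if len(data) <= max_size:
        return [data]  # small enough to send in a single transmission
    n = math.ceil(len(data) / max_size)
    return [data[i * max_size:(i + 1) * max_size] for i in range(n)]
```

In this sketch, a 10-byte file with a 4-byte maximum yields n = 3 transmission files.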
Other features and benefits will be apparent to one of ordinary skill with a review of the following description.
The details of embodiments of the present disclosure, both as to their structure and operation, can be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
As shown in
Each of the user devices 120 includes different architectural features, and may include the features shown in
Some of the sensors 124 (e.g., inertial, optical, and location sensors) are used to track the pose (e.g., position and orientation) of a user in virtual environments and physical environments. Tracking of user position and orientation (e.g., of a user head or eyes) is commonly used to determine view areas, and the view area is used to determine what virtual objects to render using the processor 126 for presentation to the user on a display of a user device. Tracking the positions and orientations of the user or any user input device (e.g., a handheld device) may also be used to determine interactions with virtual objects. In some embodiments, an interaction with a virtual object includes a modification (e.g., change color or other) to the virtual object that is permitted after a tracked position of the user or user input device intersects with a point of the virtual object in a geospatial map of a virtual environment, and after a user-initiated command is provided to make the desired modification.
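The intersection-then-command sequence described above can be sketched as follows. This is a minimal sketch under stated assumptions, not the disclosed implementation: tracked positions and object points are assumed to be 3-D coordinates in the geospatial map, and the `tolerance` value is hypothetical.

```python
import math

def intersects_object(tracked_pos, object_points, tolerance=0.05):
    """Return True when a tracked user/input-device position comes within
    `tolerance` (in scene units) of any point of the virtual object."""
    return any(math.dist(tracked_pos, p) <= tolerance for p in object_points)

def modification_permitted(tracked_pos, object_points, user_command_given):
    # A modification (e.g., a color change) is permitted only after an
    # intersection is detected and the user has issued a command.
    return intersects_object(tracked_pos, object_points) and user_command_given
```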
Some of the sensors 124 (e.g., cameras and other optical sensors of AR devices) may also be used to capture information about a physical environment, which is used to generate virtual representations of that information, or to generate geospatial maps of the physical environment that can be used to determine where and how to present virtual objects among physical objects of the physical environment. Such virtual representations and geospatial maps may be created using any known approach. In one approach, many two-dimensional images are captured by a camera of an AR device, those two-dimensional images are used to identify three-dimensional points in the physical environment, and the three-dimensional points are used to determine relative positions, relative spacing and structural characteristics (e.g., surfaces and depths) of physical objects in the physical environment. Other optical sensors may be used in addition to a camera (e.g., a depth sensor). Textures, colors and other features of physical objects or physical environments can be determined by analysis of individual images.
Examples of the user devices 120 include VR, AR, and general computing devices with displays, including head-mounted displays, sensor-packed wearable devices with a display (e.g., glasses), mobile phones, tablets, desktop computers, laptop computers, or other computing devices that are suitable for carrying out the functionality described in this disclosure.
This disclosure includes systems and methods for importing virtual objects of a virtual environment from the platform 110 to a user device 120 for display by that user device 120 to a user. In one embodiment, when a user device 120 makes a request to import a virtual object, the platform 110 receives the request, and separates the virtual object into smaller parts, sections, layers, versions or other things that can be sent to the user device 120 in available transport packets at a required or desired speed of transmission during a time period. After the transmission packets are received by the user device 120, the user device 120 (e.g., a client application running on a processor) reassembles the virtual object in different ways (e.g., in the background before rendering the virtual object in the user's viewing area, over time by rendering the content of each packet after that packet is received, or another way). By way of example, when a user selects a file to import to a user device 120, an application of the user device 120 sends a request to the platform 110 (e.g., the collaboration manager 115). The platform 110 locates the file and determines how to import the file based on the file type. If the file contains a virtual object (CAD, three-dimensional or other virtual object format), the platform 110 uses import tools to convert the virtual object into a common format for display if needed. The platform 110 then prepares the virtual object file for distribution to the requesting user device 120 and other user devices 120 that need to display the virtual object.
The platform 110 may have predefined rules for separating the virtual object depending on different conditions, and different conditions may apply to different user devices 120 such that the way a virtual object is separated for transmission to a first user device is different than the way the same virtual object is separated for transmission to a second user device. Examples of conditions include a maximum file size the user device 120 can receive in one transmission, the type of the user device 120, the connection speed between the platform 110 and the user device 120, permissions of a user operating the user device 120, or other conditions. For each user device 120, the platform 110 determines condition(s) that apply to that user device 120, and then looks up the rule controlling how the virtual object is separated for transmission to that user device 120. By way of example, the platform 110 may check the file size of the virtual object, determine a maximum file size a user device 120 can receive in a single transmission packet, determine if the file size of the virtual object is greater than the maximum file size, and either (i) transmit an unseparated version of the virtual object to the user device 120 when the file size of the virtual object is not greater than the maximum file size, or (ii) determine how to separate the virtual object for transmission to the user device 120.
The platform 110 may also check the connection quality and speed associated with the user device 120 to verify whether the virtual object can be transported in whole in a threshold amount of time. If the platform 110 determines the file can be sent in whole in the threshold amount of time, the platform 110 sends the entire file of the virtual object to that user device 120. Otherwise, the platform 110 determines how to separate the virtual object for transmission to that user device 120.
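The two checks above (maximum file size and transmission time over the measured connection) can be combined into a single decision, sketched here under the assumption that connection speed is expressed in bits per second; the function name and parameters are illustrative, not part of the disclosure.

```python
def can_send_whole(file_size: int, max_file_size: int,
                   connection_bps: float, threshold_seconds: float) -> bool:
    """Decide whether the virtual-object file can be sent unseparated."""
    fits_in_one_transmission = file_size <= max_file_size
    estimated_time = (file_size * 8) / connection_bps  # bytes -> bits
    fast_enough = estimated_time <= threshold_seconds
    return fits_in_one_transmission and fast_enough
```

If this returns False, the platform would instead fall back to one of the separation approaches described below.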
Different approaches for separating the virtual object are described herein. Each approach is configurable and can be adjusted based on desired user experience or other reasons.
One approach for separating the virtual object creates separate transmission files that each include one or more components of the virtual object (e.g., a different component or group of components such as wheels of a car). Each transmission file is created to be no greater than a maximum file size that a user device 120 can receive in a single transmission packet or during a threshold amount of time. When transmission files are sequentially transmitted, particular components of the virtual object may be prioritized over other components, and those prioritized components may be transmitted to and rendered for display on the user device 120 before the other components are transmitted to and rendered for display on the user device 120. Alternatively, all transmission files or a set of transmission files must be received by the user device 120 before the user device 120 assembles the contents of those files for rendering.
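The component-grouping approach above can be sketched as a greedy packing pass. This is a simplified illustration: component sizes and the priority order are assumed to be supplied by the platform, and all names are hypothetical.

```python
def pack_components(component_sizes: dict, priority: list, max_size: int) -> list:
    """Greedily group components into transmission files no larger than
    max_size bytes, visiting components in priority order."""
    files, current, current_size = [], [], 0
    for name in priority:
        size = component_sizes[name]
        if current and current_size + size > max_size:
            files.append(current)       # close the current transmission file
            current, current_size = [], 0
        current.append(name)
        current_size += size
        # Note: a single component larger than max_size still occupies its
        # own file here; in practice it would need further handling, such as
        # substituting a lower-quality version.
    if current:
        files.append(current)
    return files
```

For a car with a 70-unit body, 40-unit wheels, and 50-unit engine under a 100-unit maximum, the body ships alone and the wheels and engine share the second file.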
Another approach for separating the virtual object involves the platform 110 generating and sending multiple versions of the transmission files. For example, the platform 110 can send a lower quality version (e.g., coarser, less precise, less granular, less refined, less detailed or other simpler version) of the virtual object or component(s) thereof in one or more initial transmission(s) for rendering and display at the user device 120, and then later transmitting a higher quality version (e.g., less coarse, more precise, more granular, more refined, more detailed or other complex version) of the virtual object or component(s) thereof for rendering and display at the user device 120. If an amount of a file size occupied by a particular component of a virtual object exceeds a threshold amount (e.g., the maximum file size a user device 120 can receive in a single transmission, or a smaller value), then a lower quality version of that particular component is transmitted even though higher quality versions of other components are transmitted, and the user device 120 can render a version of the virtual object that includes both the lower quality version of that particular component and the higher quality versions of other components. After the user device 120 receives a higher quality version of the particular component that was previously received in lower quality, the user device 120 replaces the lower-quality version with the higher-quality version. As a result, the user can see the virtual object appearing to become more refined and detailed over time until the highest quality version of the virtual object that is available for the user is rendered.
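The client-side replacement behavior described above can be sketched as follows; this is a minimal illustration, assuming quality levels are comparable integers, and the class and method names are hypothetical.

```python
class ProgressiveObject:
    """Client-side reassembly that keeps, per component, the best-quality
    version received so far and upgrades it as better versions arrive."""

    def __init__(self):
        self.parts = {}  # component name -> (quality_level, payload)

    def receive(self, name: str, quality: int, payload):
        current = self.parts.get(name)
        if current is None or quality > current[0]:
            self.parts[name] = (quality, payload)  # first copy or upgrade
        # otherwise keep the higher-quality version already rendered
```

Rendering from `parts` after each `receive` call yields the effect described above: the object appears to become more refined over time.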
Yet another approach for separating the virtual object involves separating the virtual object into slices (e.g., vertically, horizontally or combination thereof) or into layers from the outside of the object to the inside of the object, such that the slices or layers can be displayed in the order they are received by the user device 120.
Each approach for separating the virtual object described herein can be used to (i) receive all or a group of transmission files before rendering the combined content of those files, or (ii) render content of single transmission files as those files are received. Transmission files that include parts of a virtual object that meet a condition (e.g., the parts are not in the user's viewing area, are not of interest to the user, or another condition) may be transmitted after transmission files that include parts of a virtual object that do not meet the condition. In some cases, the transmission files that include parts of a virtual object that meet the condition are not transmitted or rendered until those parts no longer meet the condition. A user's interest in part of an object may be confirmed when the user's position in the virtual environment approaches the part of that object, or interacts with the part of that object by selecting, moving, attempting to “slice/dissect” the object, or other interaction.
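The deferral behavior above can be sketched as a simple ordering pass; the predicate and function names are illustrative, and the deferral condition (e.g., "not in the viewing area") is assumed to be evaluable per transmission file.

```python
def order_by_condition(transmission_files: list, meets_condition) -> list:
    """Send files whose parts do NOT meet the deferral condition (e.g., parts
    in the user's viewing area) first; deferred files follow."""
    send_now = [f for f in transmission_files if not meets_condition(f)]
    send_later = [f for f in transmission_files if meets_condition(f)]
    return send_now + send_later
```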
Transmitting Files Associated with a Virtual Object to a User Device Based on Different Conditions
The platform 110 can further determine whether a condition meets a threshold (220A). For example, the condition can relate to the files associated with the virtual object or the user device. The files can be a collection or collections of data related to the components of a virtual object. The components can include parts or pieces of the virtual object. The figures depict a car as a primary example; thus, the components can be wheels, windows, engine parts, etc. A component as used herein can be any subpart or divisible part of the virtual object. Based on whether the condition meets the threshold, one or more transmission file(s) specifying parts of a virtual object are generated or selected (230A). The transmission file(s) are transmitted to the user device using one or more transmissions (240A), and the parts of the virtual object are rendered on the user device based on the transmitted file(s) (250A). Examples of conditions include a transmission capability of the user device (e.g., a maximum file size or a transmission time period), a permission level of the user, or another condition. Transmissions may be separated in time, by channel, or by another communication technique.
Another process for transmitting files associated with a virtual object to a user device based on one or more conditions is shown in
If the file size of the file comprising all components of the virtual object is not greater than the maximum file size, a transmission file comprising all components of the virtual object is generated or selected (230B-i), the generated or selected transmission file is transmitted to the user device (240B-i), and the user device renders the virtual object by rendering the content of the transmitted transmission file (250B-i).
If the file size of the file comprising all components of the virtual object is greater than the maximum file size, n different transmission files are generated or selected, wherein each transmission file (i) is less than the maximum file size, and (ii) includes a different component or different groups of components of the virtual object (230B-ii). The n different transmission files are transmitted to the user device (240B-ii), and the user device renders the virtual object by rendering the content of the first through nth transmission files in combination, or in the order the transmission files are received (250B-ii). In some embodiments, n is an integer. During step 240B-ii, a first transmission file comprising a first component or group of components of the virtual object is transmitted in a first transmission (e.g., at a first transmission time), a second transmission file comprising a second component or group of components of the virtual object is transmitted in a second transmission (e.g., at a second transmission time), and so on until an nth transmission file comprising an nth component or group of components of the virtual object is transmitted in an nth transmission (e.g., at an nth transmission time), where n is greater than 1.
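On the user-device side, rendering the first through nth transmission files "in combination" implies reassembling them even when transmissions arrive out of order. A minimal sketch, assuming each transmission carries a sequence number alongside its payload (both names hypothetical):

```python
def reassemble(received: list) -> bytes:
    """Reassemble n transmission files on the user device. `received` holds
    (sequence_number, payload) pairs, possibly out of order."""
    return b"".join(payload for _, payload in sorted(received))
```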
In an optional embodiment of
The flow of
Another process for transmitting files associated with a virtual object to a user device based on one or more conditions is shown in
If the file size of the file comprising all components of the virtual object is not greater than the maximum file size, a transmission file comprising all components of the virtual object is generated or selected (230C-i), the transmission file is transmitted to the user device (240C-i), and the user device renders the virtual object by rendering the content of the transmission file (250C-i).
If the file size of the file comprising all components of the virtual object is greater than the maximum file size, one or more transmission files are generated or selected (230C-ii) and transmitted to the user device (240C-ii). The user device renders the virtual object by rendering the content of the transmission files in combination or in the order the transmission files are received (250C-ii). During step 230C-ii, an initial transmission file comprising a simplified (i.e., lower quality) version of the virtual object is generated or selected before n other transmission file(s) comprising a complex (i.e., higher quality) version of the virtual object are generated or selected. During step 240C-ii, the initial transmission file is transmitted to the user device before the n other transmission file(s) are transmitted to the user device. Finally, during step 250C-ii, content of the initial transmission file is rendered for display on the user device before content of the n other transmission file(s) is rendered for display on the user device.
By way of example, a simplified version of a virtual object may omit components of the virtual object, may include only portions of components, may include less resolution (e.g., less triangles or polygons) or texture or color than higher quality versions of the object, or other differences in features of the virtual object compared to higher quality, more complex versions.
Another process for transmitting files associated with a virtual object to a user device based on one or more conditions is shown in
If the permission level of the user does not meet or exceed the permission threshold, no transmission files are generated or selected (230D-i), or transmitted to the user device (240D-i). If a locally stored version of the virtual object (e.g., a lower quality version) exists at the user device, the locally stored version is rendered for display on the user device.
If the permission level of the user meets or exceeds the permission threshold, transmission file(s) comprising components of the virtual object are generated or selected (230D-ii), and the transmission files are transmitted to the user device (240D-ii). If a locally stored version of the virtual object (e.g., a lower quality version) exists at the user device, the locally stored version is rendered for display before the user device renders the content of the transmission files to display a different version of the virtual object (e.g., a higher quality version) the user is allowed to view (250D-ii).
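The permission-gated flow of the two paragraphs above can be sketched as an ordered plan of rendering steps; this is an illustrative simplification, and the function name, step labels, and parameters are hypothetical.

```python
def plan_rendering(permission_level: int, permission_threshold: int,
                   has_local_copy: bool) -> list:
    """Return the ordered rendering steps for the user device."""
    steps = []
    if has_local_copy:
        steps.append("render_local_version")   # e.g., a lower-quality copy
    if permission_level >= permission_threshold:
        steps.append("request_transmission_files")
        steps.append("render_transmitted_version")
    return steps
```

A user below the threshold with a local copy sees only the locally stored version; a user at or above the threshold additionally receives and renders the version the user is allowed to view.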
In some embodiments, the order (e.g., priority) in which files are sent can be based on the perspective or viewpoint/vantage point of the user. The process illustrated in
It is noted that the user of a VR/AR/XR system is not technically “inside” the virtual environment. However, the phrase “perspective of the user” is intended to convey the view that the user would have (e.g., via the user device) were the user inside the virtual environment. This can also be the “perspective of the avatar of the user” within the virtual environment. It can also be the view a user would see viewing the virtual environment via the user device.
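One way to sketch a viewpoint-based priority order is to rank components by distance from the position corresponding to the user's perspective; this is a simplified illustration under assumed inputs (per-component centroids and a viewpoint position), and all names are hypothetical.

```python
import math

def priority_order(component_centroids: dict, viewpoint) -> list:
    """Order components nearest-first relative to the user's viewpoint,
    i.e., the view the user would have 'inside' the virtual environment."""
    return sorted(component_centroids,
                  key=lambda name: math.dist(component_centroids[name], viewpoint))
```

Transmission files for the nearest (most likely visible) components would then be sent first.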
Methods of this disclosure may be implemented by hardware, firmware or software. One or more non-transitory machine-readable media embodying program instructions that, when executed by one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any of the methods or operations described herein are contemplated. As used herein, machine-readable media includes all forms of machine-readable media (e.g. non-volatile or volatile storage media, removable or non-removable media, integrated circuit media, magnetic storage media, optical storage media, or any other storage media) that may be patented under the laws of the jurisdiction in which this application is filed, but does not include machine-readable media that cannot be patented under the laws of the jurisdiction in which this application is filed.
By way of example, machines may include one or more computing device(s), processor(s), controller(s), integrated circuit(s), chip(s), system(s) on a chip, server(s), programmable logic device(s), other circuitry, and/or other suitable means described herein (e.g., the platform 110, the user device 120) or otherwise known in the art. Systems that include one or more machines or the one or more non-transitory machine-readable media embodying program instructions that, when executed by the one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any methods described herein are also contemplated.
Method steps described herein may be order independent, and can therefore be performed in an order different from that described. It is also noted that different method steps described herein can be combined to form any number of methods, as would be understood by one of skill in the art. It is further noted that any two or more steps described herein may be performed at the same time. Any method step or feature disclosed herein may be expressly restricted from a claim for various reasons like achieving reduced manufacturing costs, lower power consumption, and increased processing efficiency. Method steps can be performed at any of the system components shown in the figures.
Systems comprising one or more modules that perform, are operable to perform, or are adapted to perform different method steps/stages disclosed herein are also contemplated, where the modules are implemented using one or more machines listed herein or other suitable hardware. When two things (e.g., modules or other features) are “coupled to” each other, those two things may be directly connected together, or separated by one or more intervening things. Where no lines or intervening things connect two particular things, coupling of those things is contemplated in at least one embodiment unless otherwise stated. Where an output of one thing and an input of another thing are coupled to each other, information sent from the output is received by the input even if the data passes through one or more intermediate things. Different communication pathways and protocols may be used to transmit information disclosed herein. Information like data, instructions, commands, signals, bits, symbols, chips and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, or optical fields or particles.
The words comprise, comprising, include, including and the like are to be construed in an inclusive sense (i.e., not limited to) as opposed to an exclusive sense (i.e., consisting only of). Words using the singular or plural number also include the plural or singular number, respectively. The word or and the word and, as used in the Detailed Description, cover any of the items and all of the items in a list. The words some, any and at least one refer to one or more. The term may is used herein to indicate an example, not a requirement—e.g., a thing that may perform an operation or may have a characteristic need not perform that operation or have that characteristic in each embodiment, but that thing performs that operation or has that characteristic in at least one embodiment.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/580,124, filed Nov. 1, 2017, entitled “SYSTEMS AND METHODS FOR TRANSMITTING FILES ASSOCIATED WITH A VIRTUAL OBJECT TO A USER DEVICE BASED ON DIFFERENT CONDITIONS,” the contents of which are hereby incorporated by reference in their entirety.