This disclosure relates to virtual reality (VR), augmented reality (AR), and hybrid reality technologies.
Mixed reality (MR), sometimes referred to as hybrid reality, is the term commonly applied to the merging of the real or physical world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact. Mixed reality visualizations and environments can exist in the physical world, the virtual world, and can include a mix of reality, VR, and AR via immersive technology including interactive environments and interactive three-dimensional (3D) virtual objects.
Interactive 3D virtual objects can be complex and contain large amounts of information that describe different features of the virtual objects, including the geometry, appearance, scenery, and animation of the virtual objects. Particular features of a virtual object may include shape, surface geometry, color, texture, material type, light sources, cameras, peripheral objects, animation, physical properties, and kinematics. As virtual objects become more complex by integrating more features, encoding and transferring all features of a virtual object between applications becomes increasingly difficult when multiple files are used to provide details about different features of the virtual object.
Some devices may be limited in their ability to store, render, and display virtual content, or to interact with a virtual environment. In some examples, these limitations may be based on device capabilities, constraints, and/or permissions.
An aspect of the disclosure provides a method for operating a digitally created collaborative environment including a plurality of user devices communicatively coupled to a network. The method can include determining, at one or more processors coupled to the network, first condition values experienced at a first user device of the plurality of user devices, each first condition value being associated with a condition of the first user device. The method can include determining first permission values for each user permission of a plurality of user permissions based on the first condition values, each user permission of the plurality of user permissions indicating a mode of operation of the user device in conjunction with the collaborative environment. The method can include selecting a first permission value of a first user permission of the plurality of user permissions. The method can include applying the first permission value to the first user device.
Another aspect of the disclosure provides a non-transitory computer-readable medium comprising instructions for operating a digitally created collaborative environment including a plurality of user devices communicatively coupled to a network. When executed by one or more processors, the instructions cause the one or more processors to determine first condition values experienced at a first user device of the plurality of user devices, each first condition value being associated with a condition of the first user device. The instructions further cause the one or more processors to determine first permission values for each user permission of a plurality of user permissions based on the first condition values, each user permission of the plurality of user permissions indicating a mode of operation of the user device in conjunction with the collaborative environment. The instructions further cause the one or more processors to select a first permission value of a first user permission of the plurality of user permissions. The instructions further cause the one or more processors to apply the first permission value to the first user device.
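By way of illustration only, the four claimed steps (determine conditions, determine permission values, select a value, apply it) can be sketched in Python. Every name below is invented for this sketch and is not the disclosed implementation:

```python
# Minimal, hypothetical sketch of the claimed flow; all names are invented.
from dataclasses import dataclass

@dataclass
class ConditionValue:
    condition: str   # e.g., "connectivity_level", "battery"
    value: object    # e.g., 2, "below_threshold"

def determine_conditions(device_id):
    """Step 1: gather condition values experienced at the device."""
    return [ConditionValue("connectivity_level", 2),
            ConditionValue("battery", "below_threshold")]

def determine_permission_values(conditions):
    """Step 2: map each condition value to a candidate permission value."""
    table = {2: "all_inputs_except_video",
             "below_threshold": "no_video_recording"}
    return [table[c.value] for c in conditions if c.value in table]

def select_permission_value(candidates):
    """Step 3: pick one value to apply (placeholder ordering; a real
    policy might pick the most limiting value)."""
    return min(candidates)

def apply_permission(device_id, value):
    """Step 4: apply the selected value to the device."""
    print(f"{device_id}: applying {value}")

device = "device-1"
apply_permission(device, select_permission_value(
    determine_permission_values(determine_conditions(device))))
```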
Other features and benefits will be apparent to one of ordinary skill in the art upon review of the following description.
The details of embodiments of the present disclosure, both as to their structure and operation, can be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates to different approaches for determining values of conditions experienced by a user, and using the values of the conditions to determine a value of a user permission to apply to the user.
A system for creating computer-generated collaborative environments and providing such collaborative environments as an immersive experience for VR, AR, and MR users is shown in
As shown in
Each of the user devices 120 includes different architectural features, and may include the features shown in
Some of the sensors 124 (e.g., inertial, optical, and location sensors) are used to track the pose (e.g., position and orientation) of a user in virtual environments and physical environments. Tracking of user position and orientation (e.g., of a user head or eyes) is commonly used to determine view areas, and the view area is used to determine what virtual objects to render using the processor 126 for presentation to the user on a display of a user device. Tracking the positions and orientations of the user or any user input device (e.g., a handheld device) may also be used to determine interactions with virtual objects. In some embodiments, an interaction with a virtual object includes a modification (e.g., change color or other) to the virtual object that is permitted after a tracked position of the user or user input device intersects with a point of the virtual object in a geospatial map of a virtual environment, and after a user-initiated command is provided to make the desired modification. Positions in a physical environment may be tracked in different ways, including positioning using Global Navigation Satellite Systems (GNSS), Bluetooth, WiFi, an altimeter, or any other known way to estimate the position of a thing (e.g., a user) in a physical environment.
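As a rough illustration of the intersection-then-command interaction described above, the following Python sketch assumes a simple point-based geospatial map and a hypothetical "change color" command; none of these names come from the disclosure:

```python
import math

def intersects(tracked_pos, object_point, radius=0.05):
    """Return True when the tracked position (e.g., of a handheld device)
    comes within `radius` meters of a point of a virtual object in the
    geospatial map."""
    return math.dist(tracked_pos, object_point) <= radius

def try_modify(tracked_pos, obj, user_command):
    """Permit a modification only after an intersection AND an explicit
    user-initiated command, as described above."""
    if intersects(tracked_pos, obj["anchor"]) and user_command == "change_color":
        obj["color"] = "red"
    return obj

obj = {"anchor": (1.0, 0.5, 2.0), "color": "blue"}
print(try_modify((1.0, 0.52, 2.0), obj, "change_color"))  # -> color becomes red
```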
Some of the sensors 124 (e.g., cameras and other optical sensors of AR devices) may also be used to capture information about a physical environment, which is used to generate virtual representations of that information, or to generate geospatial maps of the physical environment that can be used to determine where and how to present virtual objects among physical objects of the physical environment. Such virtual representations and geospatial maps may be created using any known approach. In one approach, many two-dimensional images are captured by a camera of an AR device, those two-dimensional images are used to identify three-dimensional points in the physical environment, and the three-dimensional points are used to determine relative positions, relative spacing and structural characteristics (e.g., surfaces and depths) of physical objects in the physical environment. Other optical sensors may be used in addition to a camera (e.g., a depth sensor). Textures, colors and other features of physical objects or physical environments can be determined by analysis of individual images.
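The step of turning two-dimensional images into three-dimensional points can be illustrated with standard linear triangulation, a generic computer-vision technique that is only one of the "any known" approaches mentioned above, not necessarily the one used by any particular device. The sketch below assumes known 3x4 camera projection matrices for two images:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D observations x1 and x2 in two images,
    given each camera's 3x4 projection matrix, via linear triangulation."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two toy cameras one unit apart along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, 0.1, 5.0, 1.0])      # ground-truth 3D point
x1 = (P1 @ X)[:2] / (P1 @ X)[2]         # its projection in image 1
x2 = (P2 @ X)[:2] / (P2 @ X)[2]         # its projection in image 2
print(triangulate(P1, P2, x1, x2))      # approximately [0.2, 0.1, 5.0]
```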
Examples of the user devices 120 include VR, AR, and general computing devices with displays, including head-mounted displays, sensor-packed wearable devices with a display (e.g., glasses), mobile phones, tablets, desktop computers, laptop computers, or other computing devices that are suitable for carrying out the functionality described in this disclosure.
The methods or processes outlined and described herein, and particularly those that follow below, can be performed by one or more processors of the platform 110, either alone or in connection or cooperation with the user device(s) 120. The processes can also be performed using distributed or cloud-based computing.
In some systems, the user experience can be limited by, or at least based upon, the capabilities of the user device 120. In some embodiments, the platform 110, for example, can determine, based on user device capabilities and connection, whether to transmit virtual content to the user device 120 for rendering at the user device 120, or whether to transmit a video stream of the virtual session. Accordingly, the disclosed systems and methods can be agnostic to the type of user device 120 being used with the platform 110. In addition, users can be made aware of the limitations and/or capabilities of other users.
In some embodiments, one or more values of conditions experienced by an N-th user are determined (210). The N-th user can be one of a plurality of users with a plurality of user devices 120 in a network. An illustrative process for determining values of one or more conditions during step 210 is provided in
For a K-th user permission of k user permissions, the one or more values of conditions are used to determine respective one or more values of the K-th user permission that can be applied to the N-th user (220). An illustrative process for determining one or more values of a user permission during step 220 is provided in connection with
In some embodiments, one of the determined values of the K-th user permission is selected for application to the N-th user (230). By way of example, selection of a value among other values of a user permission to apply to the N-th user during step 230 may be accomplished by determining which of the values is most limiting, and then selecting a more-limiting value, or the most-limiting value.
In some other embodiments, the system can provide the optimal or best possible user experience based on conditions at the user device 120 or user device capabilities. In some cases there may be a trade-off between what the device or network connection can handle and what is best to provide to the user. In one example, a first device may only be able to handle a low-quality version of a virtual object. Rather than provide a low-quality 3D experience, the platform 110 can instead provide a 2D video stream of the session, which presents the virtual environment and all objects in higher detail than in low-quality 3D, albeit in two dimensions. This also may be true if the server is aware that the user is participating using an AR device (e.g., with limited capabilities or capability conditions—see below) and other users are using VR. Instead of rendering lower quality content to the AR device, the system can provide a video stream of all the VR content.
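A hypothetical decision rule for this trade-off might look like the following Python sketch; the mode names and inputs are invented for illustration:

```python
def choose_delivery(can_render_3d: bool, max_3d_quality: str,
                    stream_bandwidth_ok: bool) -> str:
    """Pick the delivery mode that gives the best experience, trading
    3D interactivity against visual fidelity (hypothetical policy)."""
    if can_render_3d and max_3d_quality == "high":
        return "render_3d_high"
    if stream_bandwidth_ok:
        # A 2D stream can show the whole session in higher detail than a
        # low-quality 3D rendering, as described above.
        return "stream_2d_video"
    if can_render_3d:
        return "render_3d_low"
    return "text_only_fallback"

print(choose_delivery(True, "low", True))   # -> stream_2d_video
```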
The selected value of the K-th user permission is applied to the N-th user (240).
A determination is made as to whether there are more user permissions for which a determined value has not been applied to the N-th user (e.g., is K<k?) (250). If there are more user permissions for which a determined value has not been applied to the N-th user (e.g., is K<k), steps 220 through 250 are repeated for the other user permissions. If there are no more user permissions for which a determined value has not been applied to the N-th user (e.g., is K≥k), a determination is made as to whether there are any more users to which user permission values are to be applied (260). If there are more users, steps 210 through 260 are repeated for the other users. If there are no more users, the process ends.
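For illustration, the overall loop of steps 210 through 260 can be sketched in Python. This is a minimal, hypothetical rendering; the callable parameters stand in for whatever platform 110 logic implements each step:

```python
def run_permission_pass(users, permissions, determine_conditions,
                        determine_values, select_value, apply_value):
    """Sketch of steps 210-260: for each of n users, determine condition
    values once (210); then for each of k user permissions determine
    candidate values (220), select one (230), and apply it (240).
    The loop tests correspond to decisions 250 (K < k?) and 260."""
    for user in users:                                   # 260 loop over users
        conditions = determine_conditions(user)          # step 210
        for permission in permissions:                   # 250 loop (K < k)
            values = determine_values(permission, conditions)  # step 220
            chosen = select_value(values)                # step 230
            apply_value(user, permission, chosen)        # step 240
```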
Conditions may also or alternatively include device capability conditions, including an absence of a capability (incapability). The capability conditions can have associated values, including: user input capabilities (e.g., values: no microphone is available on the device, the microphone is muted, no keyboard is available on the device, no camera is available on the device, and/or no peripheral tool is available in connection with the device); device output capabilities (e.g., values: no 3D display is available on the device, no display is available on the device, no speaker is available on the device, and/or the volume of the device is off or below a threshold level of volume required to hear audio output); setting capabilities (e.g., values: the battery level of the device is below a battery level threshold); memory capacity or capabilities (e.g., ability to store content prior to and during rendering); and processing capabilities (e.g., values: processing available for rendering is below a processing level threshold). The values of the capability conditions can also be referred to as condition values.
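One hypothetical way to encode such capability condition values is sketched below; the enum members and value strings are illustrative, not taken from the disclosure:

```python
from enum import Enum

class Capability(Enum):
    MICROPHONE = "microphone"
    KEYBOARD = "keyboard"
    CAMERA = "camera"
    DISPLAY_3D = "3d_display"
    SPEAKER = "speaker"

# Capability condition values for one device: present, absent, or
# degraded (e.g., muted, or volume below the audible threshold).
device_capabilities = {
    Capability.MICROPHONE: "muted",
    Capability.KEYBOARD: "absent",
    Capability.CAMERA: "present",
    Capability.DISPLAY_3D: "absent",
    Capability.SPEAKER: "volume_below_threshold",
}
```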
In some embodiments, the conditions may be automatically applied (e.g., indicated to or known by the platform 110). In some other embodiments, the conditions or the values of the conditions, such as the device capabilities, may be user-defined. User definitions through user preferences, for example, can indicate desired operating characteristics based on the external environment of the user device 120, or other factors not readily identifiable via the user device 120 itself. For example, while a user device 120 may have a microphone, the associated user may be in a public environment and may selectively indicate (or select) the inability to talk given a noisy environment or to avoid being overheard. As another example, the user device 120 may be equipped with a mouse or other pointing device, but the user may want any input to remain private and so, selectively indicate “no peripheral tool,” as in
In some embodiments, the platform 110 may indicate various (or all) conditions affecting a single user device to all other user devices 120. For example, if a first user device 120 lacks a keyboard (or has another capability condition), that condition or those conditions are indicated to all other user devices 120 in the network. In some other embodiments, the user devices 120 may provide certain messaging (e.g., a broadcast) that indicates any conditions (e.g., network conditions, component or equipment degradations, absence of certain components, etc.) affecting interoperability (either positively or negatively) to other user devices 120 in the network. Thus, all user devices 120 in the network are aware of all other user devices' abilities to, for example, transmit or respond (e.g., by text or audio), draw, manipulate, or create content within the virtual environment, etc.
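A minimal sketch of such a broadcast, assuming a hypothetical `Peer` object with a `receive` method (neither is part of the disclosure), might look like this:

```python
import json

class Peer:
    """Hypothetical stand-in for another user device on the network."""
    def __init__(self, name):
        self.name = name

    def receive(self, message):
        data = json.loads(message)
        print(f"{self.name} learned: {data['device']} -> {data['conditions']}")

def broadcast_conditions(device_id, conditions, peers):
    """Serialize this device's condition values and deliver the message
    to every other device, so all devices know each other's abilities."""
    message = json.dumps({"device": device_id, "conditions": conditions})
    for peer in peers:
        peer.receive(message)

broadcast_conditions("device-1",
                     {"keyboard": "absent", "connectivity": "level_3"},
                     [Peer("device-2"), Peer("device-3")])
```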
Different user permission values for each condition value are shown in the same row as that condition value in
In some embodiments, where two condition values result in different values for the same user permission, the most-restrictive value may be selected. For example, a battery level below a battery level threshold permits all inputs except video recording by the user, and a connectivity level value 3 permits only text inputs by the user. In this case, the most limiting user permission value is associated with the connectivity level value 3, which permits only text inputs by the user. Thus, the user permission value that applies to the user/user device would be that only text inputs by the user are allowed.
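The most-restrictive selection rule can be sketched as follows, assuming a hypothetical restrictiveness ranking over permission values:

```python
# Hypothetical ranking for the "allowed inputs" permission:
# a higher rank means a more restrictive value.
RESTRICTIVENESS = {
    "all_inputs": 0,
    "all_inputs_except_video": 1,
    "text_only": 2,
}

def most_restrictive(candidates):
    """Select the most limiting candidate value, per the rule above."""
    return max(candidates, key=RESTRICTIVENESS.__getitem__)

# Battery below threshold -> "all_inputs_except_video";
# connectivity level 3    -> "text_only". The most restrictive wins:
print(most_restrictive(["all_inputs_except_video", "text_only"]))  # text_only
```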
Applying any user permission value can be accomplished in different ways—e.g., an application on the user device 120 can apply the user permission values, a server can apply the user permission values by sending communications to the user device that are allowed by the user permission values, or other approaches. In some other embodiments, the user can be provided various options for interaction via the user device 120, such as lower (3D) quality content versus a (2D) video stream. Such content and quality settings can be adjusted as needed at any time over the duration of the session. For example, the user may move to an area having better connectivity and therefore the “decision point” switches the user device 120 to high(er) quality output. In such embodiments, the platform 110 can periodically receive updates as to values of one or more conditions (e.g., condition values) of a given user device 120. This can allow the platform 110 to dynamically update user permissions or values of user permissions on a per-user device basis.
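A minimal sketch of the periodic-update idea, assuming hypothetical polling and recompute callables, might look like this:

```python
import time

def watch_conditions(device_id, read_conditions, recompute_permissions,
                     poll_seconds=1.0, iterations=3):
    """Poll the device's condition values and re-derive its user
    permissions whenever a condition value changes (the "decision
    point" described above). Bounded loop for illustration only."""
    last = None
    for _ in range(iterations):
        current = read_conditions(device_id)
        if current != last:                 # a condition value changed
            recompute_permissions(device_id, current)
            last = current
        time.sleep(poll_seconds)
```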
In some embodiments, all of the user devices 120 can be informed of other user devices' permissions. This can be accomplished via messaging from individual user devices 120 (e.g., broadcast) or from the platform 110. In some embodiments, the platform 110 can inform the network of the various user permissions and conditions ascribed to all other user devices 120.
By way of example, since the first user is experiencing the first connectivity level value of 2, the following user permission values apply to the first user: all inputs by the first user except video are allowed; all outputs by the device to the first user except video are allowed (e.g., such that any video output by another user is converted to descriptive audio or text at the platform 110 before the descriptive audio or text is sent to the device of the first user); rendering of virtual objects is prioritized (e.g., virtual objects in view or being interacted with by the first user or another user are rendered before other objects not in view or not being interacted with by the first user or another user); the qualities of virtual objects displayed on the device of the first user are less than some versions of the virtual objects, but better than lower versions of the virtual objects; and/or interactions by the first user with virtual objects are restricted to a subset of the default types of interactions.
By way of example, since the second user is experiencing the second connectivity level value of 1, all default user permission values apply to the second user, including: all inputs by the second user are allowed; all outputs by the device to the second user are allowed; rendering of virtual objects need not be prioritized; the qualities of virtual objects displayed on the device of the second user are complex versions of the virtual objects; and/or interactions by the second user with virtual objects are not restricted.
By way of example, since the third user is experiencing the third connectivity level value of 3, the following user permission values apply to the third user: only text input by the third user is allowed; only text and descriptive text about audio or video are allowed (e.g., such that any audio or video output by another user is converted to descriptive text at the platform 110 before the descriptive text is sent to the device of the third user); rendering of virtual objects is prioritized; the qualities of virtual objects displayed on the device of the third user are the lower versions of the virtual objects; and/or interactions by the third user with virtual objects are restricted more than those of the first user (e.g., the third user is only allowed to view the virtual objects).
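Taken together, the three connectivity examples above suggest a table-driven mapping from connectivity level values to permission values. The following Python dictionary is a hypothetical encoding of that mapping; the keys and value strings are invented for illustration:

```python
# Hypothetical table mapping connectivity level values (1 = best) to the
# permission values described for the second, first, and third users.
CONNECTIVITY_PERMISSIONS = {
    1: {"inputs": "all", "outputs": "all",
        "prioritize_rendering": False, "object_quality": "complex",
        "interactions": "unrestricted"},
    2: {"inputs": "all_except_video", "outputs": "all_except_video",
        "prioritize_rendering": True, "object_quality": "medium",
        "interactions": "subset_of_defaults"},
    3: {"inputs": "text_only", "outputs": "text_and_descriptive_text",
        "prioritize_rendering": True, "object_quality": "low",
        "interactions": "view_only"},
}

print(CONNECTIVITY_PERMISSIONS[3]["interactions"])  # -> view_only
```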
For purposes of illustration, the device of the first user is assumed to have full capabilities (e.g., all user inputs are available, all device outputs are available, battery level is higher than battery threshold, and available processing for rendering is above a processing threshold), so the associated permission values would be default values.
By way of example, possible condition values for device capabilities of the second user's device, along with associated permission values (enclosed in parentheses), include: no camera (no video input by user); no 3D display (2D versions of virtual objects are rendered); battery level N/A (default values); and processing available for rendering above a processing threshold (default values). Selected permission values that are most restricting would include default values except for: no video input (from no camera) as communication inputs to other users; default types of communication received from other users; virtual objects are displayed in 2D (from no 3D display); the quality of a virtual object is complex; rendering of different virtual objects need not be prioritized; and all types of interactions are permitted.
Examples of possible condition values for device capabilities of the third user's device, along with associated permission values (enclosed in parentheses), include: mute (no audio input by user); no 3D display (2D versions of virtual objects); battery level below battery threshold (no video input by user, no video from others is provided to the user, prioritize rendering, low quality of virtual objects, the only allowed interaction is viewing); and processing available for rendering below a processing threshold (prioritize rendering, maximize quality of virtual objects, interactions that minimize rendering are allowed). By way of example, selected permission values that are most restricting would include default values except for: no audio input (from mute) and no video input (from battery level below battery threshold) as communication inputs to other users; no video (from battery level below battery threshold) as communication from other users provided to the user; virtual objects are displayed in 2D (from no 3D display); the quality of virtual objects is low (from battery level below battery threshold); rendering of different virtual objects is prioritized (from battery level below battery threshold, and from processing available for rendering below a processing threshold); and the third user can only view virtual objects (from battery level below battery threshold).

If a condition value changes, such as when the battery is charged above the battery level threshold, then the selected permission values that are most restricting change—e.g., no audio input (from mute) as communication input to other users is still applied; video input would be available as a communication input to others; video would be available as communication received from other users; virtual objects are still displayed in 2D (from no 3D display); the quality of virtual objects is now maximized (from processing available for rendering below a processing threshold); rendering of different virtual objects is still prioritized (from processing available for rendering below a processing threshold); and more interactions are allowed beyond view only (from processing available for rendering below a processing threshold). By way of example, the additional interactions may include moving, modifying, annotating, or drawing on a virtual object, but not exploding it to see its inner contents, which would have to be newly rendered.
As shown, a condition change for device capability (e.g., for User 1A) results in new user permission values being applied to that user. By way of example, if a device capability condition value changes from a battery level below a battery threshold to a battery level above the battery threshold, such as when a device is plugged in after previously discharging below the battery threshold, the user permission values associated with the battery level change from (i) first values (e.g., all user inputs except microphone are available, all device outputs except 3D display are available, the values associated with battery level being below a battery threshold, and the values associated with processing available for rendering being below a processing threshold) to (ii) second values (e.g., all user inputs are available, all device outputs are available, the values associated with battery level being higher than the battery threshold, and the values associated with processing available for rendering being above the processing threshold). Changes also occur as user inputs or user outputs change (e.g., a user device is unmuted, making audio input available, or the volume of a user device is increased over a threshold level such that a user can hear audio outputs). The final user permission values that apply to the user may depend on values of other conditions (e.g., connectivity conditions).
By way of another example, if an interaction condition value of a user (e.g., User 2B) changes from one value (e.g., not interacting with a virtual object in a virtual environment) to another value (e.g., interacting with the virtual object in the virtual environment), the user permission values associated with the interaction change from a first value (e.g., one connectivity level applied to the user) to a second value (e.g., a different connectivity level applied to the user). The final user permission values applied to the user may depend on values of other conditions (e.g., connectivity conditions, device capability conditions).
By way of another example, if a connectivity condition value of a user (e.g., User 1C) changes from one level (e.g., level 3) to another level (e.g., level 1), the user permission values associated with the connectivity change from first values (e.g., only text input by the user is allowed, only text and descriptive text about audio or video are provided to the user, rendering of virtual objects is prioritized, the qualities of virtual objects displayed on the device of the user are the lower versions of the virtual objects, and/or interactions by the user with virtual objects are restricted) to second values (e.g., all default user permission values apply to the user). The final user permission values that apply to the user may depend on values of other conditions (e.g., device capability conditions). Such a change in network connectivity may occur on the same network (e.g., stronger signaling after moving within a wireless network), or by switching networks.
User permissions can alternatively be considered as user modes of operation.
Methods of this disclosure may be implemented by hardware, firmware, or software. For example, one or more non-transitory machine-readable media embodying program instructions that, when executed by one or more machines (e.g., processors of the platform 110), cause the one or more machines to perform or implement operations comprising the steps of any of the methods or operations described herein can be used. As used herein, machine-readable media includes all forms of machine-readable media (e.g., non-volatile or volatile storage media, removable or non-removable media, integrated circuit media, magnetic storage media, optical storage media, or any other storage media) that may be patented under the laws of the jurisdiction in which this application is filed, but does not include machine-readable media that cannot be patented under the laws of the jurisdiction in which this application is filed.
By way of example, machines may include one or more computing device(s), processor(s), controller(s), integrated circuit(s), chip(s), system(s) on a chip, server(s), programmable logic device(s), other circuitry, and/or other suitable means described herein (e.g., the platform 110, the user device 120) or otherwise known in the art. Systems that include one or more machines or the one or more non-transitory machine-readable media embodying program instructions that, when executed by the one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any methods described herein are also contemplated.
Method steps described herein may be order independent, and can therefore be performed in an order different from that described. It is also noted that different method steps described herein can be combined to form any number of methods, as would be understood by one of skill in the art. It is further noted that any two or more steps described herein may be performed at the same time. Any method step or feature disclosed herein may be expressly restricted from a claim for various reasons like achieving reduced manufacturing costs, lower power consumption, and increased processing efficiency. Method steps can be performed at any of the system components shown in the figures.
Systems comprising one or more modules that perform, are operable to perform, or adapted to perform different method steps/stages disclosed herein are also contemplated, where the modules are implemented using one or more machines listed herein or other suitable hardware. When two things (e.g., modules or other features) are “coupled to” each other, those two things may be directly connected together, or separated by one or more intervening things. Where no lines and intervening things connect two particular things, coupling of those things is contemplated in at least one embodiment unless otherwise stated. Where an output of one thing and an input of another thing are coupled to each other, information sent from the output is received by the input even if the data passes through one or more intermediate things. Different communication pathways and protocols may be used to transmit information disclosed herein. Information like data, instructions, commands, signals, bits, symbols, and chips and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, or optical fields or particles.
The words comprise, comprising, include, including and the like are to be construed in an inclusive sense (i.e., not limited to) as opposed to an exclusive sense (i.e., consisting only of). Words using the singular or plural number also include the plural or singular number, respectively. The word or and the word and, as used in the Detailed Description, cover any of the items and all of the items in a list. The words some, any and at least one refer to one or more. The term may is used herein to indicate an example, not a requirement—e.g., a thing that may perform an operation or may have a characteristic need not perform that operation or have that characteristic in each embodiment, but that thing performs that operation or has that characteristic in at least one embodiment.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/593,058, filed Nov. 30, 2017, entitled “SYSTEMS AND METHODS FOR DETERMINING VALUES OF CONDITIONS EXPERIENCED BY A USER, AND USING THE VALUES OF THE CONDITIONS TO DETERMINE A VALUE OF A USER PERMISSION TO APPLY TO THE USER,” the contents of which are hereby incorporated by reference in their entirety.