This disclosure relates to virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies.
Mixed reality (MR), sometimes referred to as hybrid reality, is the term commonly applied to the merging of the real or physical world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact. Mixed reality visualizations and environments can exist in the physical world, the virtual world, and can include a mix of reality, VR, and AR via immersive technology including interactive environments and interactive three-dimensional (3D) virtual objects. Users of MR visualizations and environments can move around the MR visualizations and interact with virtual objects within the virtual environment.
Interactive 3D virtual objects can be complex and contain large amounts of information that describe different features of the virtual objects, including the geometry, appearance, scenery, and animation of the virtual objects. Particular features of a virtual object may include shape, surface geometry, color, texture, material type, light sources, cameras, peripheral objects, animation, physical properties, and kinematics.
MR, VR, and AR (or similar) devices can provide complex features and high-fidelity representations of a physical world that can be useful in instruction or various types of training curricula or programs.
One aspect of the disclosure provides a method for operating a virtual environment. The method can include receiving, at one or more processors of a mixed reality platform, information associated with an object in view of an augmented reality (AR) device. The object can be a physical object located in a physical world or a virtual thing. The method can include identifying the object based on the information. The method can include identifying virtual content associated with the object. The virtual content can be stored in a database coupled to the one or more processors. The method can include determining that the AR device is permitted to receive at least a portion of the virtual content. The method can include transmitting the virtual content to the AR device based on the determining. The method can include causing the AR device to display the virtual content. The AR device can receive at least a portion of the virtual content based on a permission level. The information can be collected by the AR device or received from the one or more processors based on the location of the AR device.
The information can be at least one of an image of the object captured at the AR device, a location of the object, and an image of a barcode of the object captured at the AR device.
The receiving can include receiving a wireless transmission from the object or thing.
The method can include performing a database query based on the information at the one or more processors.
The determining can be based on, for example, a white list including an identifier of the AR device, user login credentials associated with a user of the AR device, and/or a biometric scan of a user associated with the AR device.
The virtual content can include an access level. The access level can include one of a public rating, a private rating, and a protected rating. The public rating permits any user to view the virtual content. The private rating permits certain users to view the virtual content. The protected rating permits only users having specific access rights to view the virtual content.
Another aspect of the disclosure provides a non-transitory computer-readable medium including instructions for operating a virtual environment. When executed by one or more processors, the instructions cause the one or more processors to receive information associated with an object in view of an augmented reality (AR) device, the object being a physical object located in a physical world. The instructions further cause the one or more processors to identify the object based on the information. The instructions further cause the one or more processors to identify virtual content associated with the object, the virtual content being stored in a database coupled to the one or more processors. The instructions further cause the one or more processors to determine that the AR device is permitted to receive at least a portion of the virtual content. The instructions further cause the one or more processors to transmit the virtual content to the AR device based on the determining. The instructions further cause the one or more processors to cause the AR device to display the virtual content.
The details of embodiments of the present disclosure, both as to their structure and operation, can be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates to different approaches for authorized exportation of virtual content to an augmented reality device.
As shown in
Some of the sensors 124 (e.g., inertial, optical, and location sensors) are used to track the pose (e.g., position and orientation) of a user in virtual environments and physical environments. Tracking of user position and orientation (e.g., of a user head or eyes) is commonly used to determine fields of view, and each field of view is used to determine what virtual content is to be rendered using the processor 126 for presentation to the user on a display of a user device. Tracking the positions and orientations of the user or any user input device (e.g., a handheld device) may also be used to determine interactions with virtual content. In some embodiments, an interaction with virtual content (e.g., a virtual object) includes a modification (e.g., change color or other) to the virtual content that is permitted after a tracked position of the user or user input device intersects with a point of the virtual content in a geospatial map of a virtual environment, and after a user-initiated command is provided to make the desired modification. Positions in a physical environment may be tracked in different ways, including positioning using Global Navigation Satellite Systems (GNSS), Bluetooth, WiFi, an altimeter, or any other known way to estimate the position of a thing (e.g., a user or object) in a physical environment.
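By way of illustration only, the interaction model described above can be sketched as follows; the function names, the bounding-sphere intersection test, and the dictionary-based object representation are hypothetical simplifications for this sketch, not the platform's actual implementation:

```python
import math

def intersects(tracked_pos, object_center, radius):
    """Return True if the tracked position (of a user or user input
    device) falls within a bounding sphere around the virtual object."""
    dx, dy, dz = (tracked_pos[i] - object_center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

def try_modify(tracked_pos, obj, command):
    """Permit a modification (e.g., a color change) only after the
    tracked position intersects the object AND a user-initiated
    command requesting the modification is provided."""
    if command is not None and intersects(tracked_pos, obj["center"], obj["radius"]):
        obj.update(command)
        return True
    return False
```

In this sketch both conditions from the description must hold: a position intersection alone does not modify the object until the user issues a command.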
Some of the sensors 124 (e.g., cameras and other optical sensors of AR devices) may also be used to capture information about a physical environment, which is used to generate virtual representations of that information, or to generate geospatial maps of the physical environment that can be used to determine where and how to present virtual content among physical objects of the physical environment. Such virtual representations and geospatial maps may be created using any known approach. In one approach, many two-dimensional images are captured by a camera of an AR device, those two-dimensional images are used to identify three-dimensional points in the physical environment, and the three-dimensional points are used to determine relative positions, relative spacing and structural characteristics (e.g., surfaces and depths) of physical objects in the physical environment. Other optical sensors may be used in addition to a camera (e.g., a depth sensor). Textures, colors and other features of physical objects or physical environments can be determined by analysis of individual images.
Some of the sensors 124 (e.g., cameras and other optical and biometric sensors of the AR devices) may be used to capture biometric information (e.g., eye color, hair color, facial features, heart rate, etc.) about the wearer or user of the AR device (e.g., the user device 120). The captured information may assist the system in verifying the identity of the user. The captured biometric data can be used to, for example, authenticate the user. In other examples, the captured biometric data can be used in conjunction with additional authenticating elements such as a username and password (e.g., login credentials). Thus, the platform 110 can use captured biometric data to validate the user's identity.
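The combination of login credentials and captured biometric data can be sketched as follows; the stored user record, its field names, and the exact-match biometric rule are illustrative assumptions only (a real biometric comparison would be probabilistic rather than an exact attribute match):

```python
import hashlib

# Hypothetical user store as the platform might keep it; the field
# names and values here are assumptions for this sketch.
USERS = {
    "jsmith": {
        "password_hash": hashlib.sha256(b"s3cret").hexdigest(),
        "eye_color": "brown",
    },
}

def authenticate(username, password, biometric_scan):
    """Validate login credentials and, as an additional authenticating
    element, compare captured biometric data to the stored profile."""
    record = USERS.get(username)
    if record is None:
        return False
    if hashlib.sha256(password.encode()).hexdigest() != record["password_hash"]:
        return False
    # Biometric check, simplified to an exact attribute comparison.
    return biometric_scan.get("eye_color") == record["eye_color"]
```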
Examples of the user devices 120 include VR, AR, MR and general computing devices with displays, including head-mounted displays, sensor-packed wearable devices with a display (e.g., glasses), mobile phones, tablets, desktop computers, laptop computers, or other computing devices that are suitable for carrying out the functionality described in this disclosure.
The methods or processes outlined and described herein and particularly those that follow below in connection with
As shown in
For example, a user wearing an AR headset as part of his job function encounters various pieces of equipment while walking across the job site. The system (e.g., the platform 110) can identify the user as an employee responsible for performing specific job functions on specific pieces of equipment. The system can provide the user with only the content related to equipment the user needs to interact with in order to perform a specified job function. In another example, a user wearing an AR headset is touring an oil refinery. The user is a potential customer of the oil refinery. Based on the fact that the user is not an employee of the oil refinery, the system may limit the content the user can view about the equipment the user encounters during his tour of the oil refinery and only provide content that is marked for public consumption.
The augmented reality device can use the camera, WiFi, BT, GPS, and various other sensors (e.g., the sensors 124) to capture information about the thing. In some examples, the sensors can capture, detect, or reveal a number of characteristics of the thing. The location of the thing relative to the GPS location of the AR user device can be used to determine a position of the thing, or information about the location in which the thing is found. A camera can capture information about the physical characteristics of the thing (e.g., size, shape, color, etc.). Image capture can further scan a bar code or QR code, for example, to determine other characteristics of the thing that are not readily identifiable by viewing the thing. Other sensors 124 can scan the thing or receive data transmitted from the thing or from sensors in the same area as the thing. Recognition of a thing in the vicinity of the user can be accomplished by different approaches. In one embodiment of step 210, an image of the thing is captured by a camera of the augmented reality device, and known image recognition software is used to identify the thing.
The system can use multiple methods to identify the thing. For example, the system can use a scan of a bar code or QR code to identify the thing. In another example, the system can use sensor data collected by the AR headset; if the sensor data contains a unique identifier for the thing, the system can use that identifier to identify the thing. In another example, the system can use image recognition techniques to identify the thing. In this example, the AR headset captures one or more images of the thing and uses those images to uniquely identify the thing. In another example, the system can use a combination of techniques to identify the thing. In this example, the system narrows down the list of possible things by eliminating all the things that would not be found at the user's location. Next, the system can use sensor data collected by the AR headset to create a list of potential things based on analyzing the data received from the sensors (e.g., things that transmit temperature readings). In another step, the system can compare one or more images captured by the AR headset to the list of potential things.
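The combined narrowing technique described in the last example can be sketched as follows; the candidate catalog, its sensor fields, and the image-label matching rule are hypothetical stand-ins for whatever database and recognition pipeline a given deployment uses:

```python
def identify_thing(catalog, user_location, sensor_data, image_label):
    """Narrow a catalog of known things to one identification:
    1) eliminate things not found at the user's location,
    2) keep things consistent with received sensor data,
    3) confirm with an image-recognition label."""
    candidates = [t for t in catalog if t["location"] == user_location]
    if sensor_data.get("transmits_temperature"):
        candidates = [t for t in candidates if t.get("transmits_temperature")]
    matches = [t for t in candidates if t["label"] == image_label]
    # Only a unique match counts as an identification.
    return matches[0] if len(matches) == 1 else None
```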
In embodiments of step 210, the user identifies the thing (e.g., by selection, or by comparison of the thing to known objects in that location). In embodiments of step 210, an identifier of the thing is detected by the augmented reality device (e.g., a sensor of the augmented reality device scans a code identifier, a sensor of the augmented reality device detects an identifier emitted by the thing), and the identifier is used to identify the thing. In embodiments, the user selects an image of the thing on a display of the augmented reality device. In some examples, a scan can involve a database lookup (e.g., at the platform 110) which can occur via a server query or by a local lookup at the user device 120. In other examples, sensors of a thing or object, or the thing or object itself, can transmit an identifier that the user device 120 can reference in a server or memory query to determine what the object or thing actually is. Other approaches are also possible for recognizing a thing.
Virtual content associated with the thing is identified (220). Virtual content may take on different forms, including: a virtual representation of the thing; a virtual representation of the type of thing with modifiable features (e.g., a virtual representation of a purchasable thing like a car with an option to change the color or other features of that virtually represented thing); information about the thing (e.g., cost, reviews, maintenance records, operational procedures, background information); or other content. The virtual content may include public, private, and protected information about the thing. The public information can be provided to any user. Private information may be accessible by a select or otherwise limited list of users or user devices 120 (e.g., a white list or black list). Protected information may be viewed by one or more users or user devices having specific access rights. The system (e.g., the platform 110) can implement multiple methods of verifying access rights (e.g., authentication) for protected information prior to allowing the user or user device 120 to access, view, or modify the protected information, as described below. The thing itself may not be purchasable, and can be anything (e.g., a physical object that is maintained, repaired or operated, or another type of thing).
A determination is made as to whether the user of the augmented reality device is permitted to view the virtual content (230). In some embodiments of step 230, user information is checked against an authorized list of users (e.g., the user logs into an authorized user account or is compared to, for example, a white list or a black list), and the user is permitted to view (e.g., at least a portion of) the virtual content if the user is authorized. In another embodiment of step 230, a determination is made as to whether the location of the augmented reality device is within a predefined distance or area relative to the thing. The location can be determined by GPS, WiFi, BT, or other well-known methods to determine location information. Location information can also be gleaned by determining whether the augmented reality device is receiving data from a sensor associated with the thing. For example, sensor data can be transmitted over a short distance, and therefore if the AR device is receiving the sensor data the system can assume the user is in close proximity to the thing. The system determines that the user is permitted to view virtual content associated with the thing, for example, if the location of the augmented reality device is within the predefined distance or area relative to the thing.
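The location-based check of this embodiment can be sketched as follows; the distance threshold, the planar coordinates in meters, and the helper names are illustrative assumptions for this sketch:

```python
import math

def within_range(device_pos, thing_pos, max_distance_m):
    """Check whether the AR device is within a predefined distance of
    the thing (positions given as planar x/y coordinates in meters)."""
    return math.dist(device_pos, thing_pos) <= max_distance_m

def location_permits_viewing(device_pos, thing_pos, receiving_sensor_data,
                             max_distance_m=10.0):
    """Permit viewing when the device is near the thing, or when it is
    receiving short-range sensor data transmitted by the thing, which
    itself implies proximity."""
    return receiving_sensor_data or within_range(device_pos, thing_pos,
                                                 max_distance_m)
```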
In other embodiments of step 230, the system (e.g., the platform 110) can confirm user access or authorization rights. The access rights or authorization rights determine the type or amount of virtual content available to the user (e.g., the user device 120). Each element or piece of virtual content can be classified by an access rating, for example public, private, or protected as noted above. For example, the public rating allows any user to view the content, the private rating allows only specific users (e.g., on a white list or not on a black list) to view the content, and the protected rating requires the user to have specific access rights to view the content. In some instances, a white list may identify specific users exactly (e.g., a list of names or other identifiers), whereas specific access rights identify a class of users, users with a given security clearance level, or users with a specific title. Specific access rights can include, for example, credentials (e.g., username and password), a security clearance or rating, a specific title or job description, or other distinguishing characteristics. The access rights can further include, for example, classification level, status, user title, user job description or function, proximity to the thing, an identity of the user, an identifier of the user device, or a combination of the foregoing.
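The three access ratings can be sketched as follows; the content and user record fields (white list, required right) are hypothetical names chosen for this sketch:

```python
PUBLIC, PRIVATE, PROTECTED = "public", "private", "protected"

def may_view(content, user):
    """Apply the three access ratings: public content is open to any
    user, private content is limited to white-listed users, and
    protected content requires a specific access right (e.g., a
    clearance level, title, or job function)."""
    rating = content["rating"]
    if rating == PUBLIC:
        return True
    if rating == PRIVATE:
        return user["id"] in content.get("white_list", [])
    if rating == PROTECTED:
        return content.get("required_right") in user.get("access_rights", [])
    return False
```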
In some examples, different versions of virtual content may be presented, based on the user access rights or authorization levels. For example, a public version of the virtual content may be widely accessible. On the other hand, the private or protected versions of the content may contain data that could be used to create a competitive advantage, cause harm, expose, or in some way negatively impact the owner of the virtual content, and therefore should be protected.
If the user is permitted to view the virtual content, the virtual content is transmitted to the augmented reality device for display to the user (240). In some embodiments, at least a portion of the virtual content is transmitted to the AR device, based on the permissions or authorization level(s).
The augmented reality device is caused to present (e.g., display, play) the virtual content (250). Examples of displaying the virtual content include: overlaying the virtual content over the thing; displaying the virtual content at a preset position relative to a predefined point on the thing; displaying the virtual content in view of the user and letting the user move the virtual content to a location in a physical space; or another way of displaying. In one embodiment of step 250, the augmented reality device is caused to display the virtual content when executable instructions are received by the augmented reality device from the platform, wherein the instructions direct the augmented reality device to display the virtual content.
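Steps 210 through 250 can be tied together in a minimal end-to-end sketch; the in-memory content database, the lookup keys, and the permission callback are illustrative assumptions standing in for the platform's database query and authorization logic:

```python
# Hypothetical content store keyed by thing identifier.
CONTENT_DB = {
    "chair-001": {"rating": "public", "body": "Maintenance guide for chair-001"},
}

def serve_virtual_content(info, device, is_permitted):
    """End-to-end flow: identify the thing (210), look up its virtual
    content (220), check permission (230), transmit the content to the
    device (240), and cause the device to display it (250)."""
    thing_id = info.get("barcode") or info.get("image_label")   # 210
    content = CONTENT_DB.get(thing_id)                          # 220
    if content is None or not is_permitted(device, content):    # 230
        return None
    device["received"] = content                                # 240
    device["display"] = content["body"]                         # 250
    return content
```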
In step 220, the platform 110 can identify the thing as the chair based on a query of an associated database, to identify the chair. The database can have one or more memories or memory storage devices communicatively coupled to the platform 110.
In step 230, the platform can conduct a search (e.g., in response to a user query for information about the thing) within known or authorized users in a memory, for example. The user can be an authorized user if the user has, for example, a current account or other applicable credentials. If the user identity is properly referenced in the memory, then the user can be identified as authorized. In another example, if the user is identified on a white list, the user may be authorized. Alternatively, if the user is identified on a black list, the user may not be authorized.
In step 240, the platform 110 can provide requested or applicable virtual content associated with the thing (e.g., the chair of step 210).
In step 250, the AR user device 120 can display the virtual content associated with the thing (to the user).
Methods of this disclosure offer different technical solutions to important technical problems.
One technical problem is providing secure access to sensitive data by a particular user device 120. Solutions described herein provide secure access to data and permit only certain user devices to receive desired virtual content (e.g., from the platform 110) while excluding other user devices from accessing the virtual content.
Another technical problem is delivering different content to different users, where the content delivered to each user is more relevant to that user. Solutions described herein provide improved delivery of relevant virtual content, which improves the relationship between users and sources of virtual content, and provides new revenue opportunities for sources of virtual content.
Each method of this disclosure can be used with virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) technologies. Virtual environments and virtual content may be presented using VR technologies, AR technologies, and/or MR technologies.
Methods of this disclosure may be implemented by hardware, firmware or software. One or more non-transitory machine-readable media embodying program instructions that, when executed by one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any of the methods or operations described herein are contemplated. As used herein, machine-readable media includes all forms of machine-readable media (e.g. non-volatile or volatile storage media, removable or non-removable media, integrated circuit media, magnetic storage media, optical storage media, or any other storage media) that may be patented under the laws of the jurisdiction in which this application is filed, but does not include machine-readable media that cannot be patented under the laws of the jurisdiction in which this application is filed.
By way of example, machines may include one or more computing device(s), processor(s), controller(s), integrated circuit(s), chip(s), system(s) on a chip, server(s), programmable logic device(s), other circuitry, and/or other suitable means described herein (e.g., the platform 110, the user device 120) or otherwise known in the art. Systems that include one or more machines or the one or more non-transitory machine-readable media embodying program instructions that, when executed by the one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any methods described herein are also contemplated.
Method steps described herein may be order independent, and can therefore be performed in an order different from that described. It is also noted that different method steps described herein can be combined to form any number of methods, as would be understood by one of skill in the art. It is further noted that any two or more steps described herein may be performed at the same time. Any method step or feature disclosed herein may be expressly restricted from a claim for various reasons like achieving reduced manufacturing costs, lower power consumption, and increased processing efficiency. Method steps can be performed at any of the system components shown in the figures.
Systems comprising one or more modules that perform, are operable to perform, or adapted to perform different method steps/stages disclosed herein are also contemplated, where the modules are implemented using one or more machines listed herein or other suitable hardware. When two things (e.g., modules or other features) are “coupled to” each other, those two things may be directly connected together, or separated by one or more intervening things. Where no lines and intervening things connect two particular things, coupling of those things is contemplated in at least one embodiment unless otherwise stated. Where an output of one thing and an input of another thing are coupled to each other, information sent from the output is received by the input even if the data passes through one or more intermediate things. Different communication pathways and protocols may be used to transmit information disclosed herein. Information like data, instructions, commands, signals, bits, symbols, and chips and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, or optical fields or particles.
The words comprise, comprising, include, including and the like are to be construed in an inclusive sense (i.e., not limited to) as opposed to an exclusive sense (i.e., consisting only of). Words using the singular or plural number also include the plural or singular number, respectively. The word or and the word and, as used in the Detailed Description, cover any of the items and all of the items in a list. The words some, any and at least one refer to one or more. The term may is used herein to indicate an example, not a requirement—e.g., a thing that may perform an operation or may have a characteristic need not perform that operation or have that characteristic in each embodiment, but that thing performs that operation or has that characteristic in at least one embodiment.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of each of the described embodiments may be readily combined in any suitable manner in one or more embodiments.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/628,872, filed Feb. 9, 2018, entitled “SYSTEMS AND METHODS FOR AUTHORIZED EXPORTATION OF VIRTUAL CONTENT TO AN AUGMENTED REALITY DEVICE,” the contents of which are hereby incorporated by reference in their entirety.
Number | Date | Country
---|---|---
62628872 | Feb 2018 | US