This disclosure relates to virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies.
Mixed reality (MR), sometimes referred to as hybrid reality, is the term commonly applied to the merging of the real or physical world and virtual worlds to produce new environments and visualizations in which physical and digital objects co-exist and interact. Mixed reality visualizations and environments can exist in the physical world and the virtual world, and can include a mix of reality, VR, and AR via immersive technology, including interactive environments and interactive three-dimensional (3D) virtual objects. Users of MR visualizations and environments can move around the MR visualizations and interact with virtual objects within the virtual environment.
Interactive 3D virtual objects can be complex and contain large amounts of information that describe different features of the virtual objects, including the geometry, appearance, scenery, and animation of the virtual objects. Particular features of a virtual object may include shape, surface geometry, color, texture, material type, light sources, cameras, peripheral objects, animation, physical properties, and kinematics.
MR, VR, and AR (or similar) devices can provide complex features and high-fidelity representations of a physical world that can be useful in instruction or in various types of training curricula or programs.
An aspect of the disclosure provides a method for operating a virtual reality (VR) system. The VR system can have a VR device associated with a virtual environment to provide a virtual representation of an augmented reality (AR) experience of a physical world. The method can include receiving a command to emulate an augmented reality (AR) device by the VR device. The method can include displaying, on a VR display of the VR device, a portion of the virtual environment viewable based on a position and an orientation of a first VR user in the virtual environment. The method can include displaying an image of an AR display of the AR device that would be in view of the user if the user was wearing the AR device in the physical world. The method can include displaying a virtual object representing a physical object inside the image of the AR display. The method can include receiving a user input associated with the virtual object at the VR device. The method can include providing feedback via the VR display based on the user input if the user input matches a predefined interaction of a set of predefined interactions. The method can include displaying an indication that the predefined interactions are completed successfully. The method can include redisplaying the virtual object as manipulated if the user input does not match a predefined interaction of the set of predefined interactions. The predefined interactions can include an ordered sequence of actions based on user input. The user input can include at least one of a voice command, a movement of the user, an interaction with the virtual object, and a controller input to manipulate an AR menu. The predefined interactions can include at least one of work instructions, a maintenance program, or an operations program. The method can include displaying an instruction via the image of the AR display to interact with the virtual object.
Another aspect of the disclosure provides a non-transitory computer-readable medium comprising instructions for operating a virtual reality (VR) system including a VR device associated with a virtual environment to provide a virtual representation of an augmented reality (AR) experience of a physical world. When executed by one or more processors, the instructions can cause the one or more processors to receive a command to emulate an augmented reality (AR) device by the VR device. The instructions can cause the one or more processors to display a portion of the virtual environment viewable based on a position and an orientation of a first VR user in the virtual environment. The instructions can cause the one or more processors to display an image of an AR display of the AR device that would be in view of the user if the user was wearing the AR device in the physical world. The instructions can cause the one or more processors to display a virtual object representing a physical object inside the image of the AR display. The instructions can cause the one or more processors to receive a user input associated with the virtual object at the VR device. The instructions can cause the one or more processors to provide feedback via the VR display based on the user input if the user input matches a predefined interaction of a set of predefined interactions. The instructions can cause the one or more processors to display an indication that the predefined interactions are completed successfully. The instructions can cause the one or more processors to redisplay the virtual object as manipulated if the user input does not match a predefined interaction of the set of predefined interactions. The predefined interactions can include an ordered sequence of actions based on user input. The user input can include at least one of a voice command, a movement of the user, an interaction with the virtual object, and a controller input to manipulate an AR menu. The predefined interactions can include at least one of work instructions, a maintenance program, or an operations program. The displaying can further include displaying an instruction via the image of the AR display to interact with the virtual object.
Other features and advantages will become apparent to one of ordinary skill in the art upon review of the following description.
The details of embodiments of the present disclosure, both as to their structure and operation, can be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates to different approaches for using a virtual reality (VR) device to emulate user experience of an augmented reality (AR) device.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of each of the described embodiments may be combined in any suitable manner in one or more embodiments.
As shown in
Some of the sensors 124 (e.g., inertial, optical, and location sensors) are used to track the pose (e.g., position and orientation) of a user in virtual environments and physical environments. Tracking of user position and orientation (e.g., of a user's head or eyes) is commonly used to determine fields of view, and each field of view is used to determine what virtual content is to be rendered using the processor 126 for presentation to the user on a display of a user device. Tracking the positions and orientations of the user or any user input device (e.g., a handheld device) may also be used to determine interactions with virtual content. In some embodiments, an interaction with virtual content (e.g., a virtual object) includes a modification (e.g., change color or other) to the virtual content that is permitted after a tracked position of the user or user input device intersects with a point of the virtual content in a geospatial map of a virtual environment, and after a user-initiated command is provided to make the desired modification. Positions in a physical environment may be tracked in different ways, including positioning using Global Navigation Satellite Systems (GNSS), Bluetooth, WiFi, an altimeter, or any other known way to estimate the position of a thing (e.g., a user) in a physical environment.
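By way of a non-limiting illustration, the intersection-based interaction described above could be sketched as follows. The Python fragment below is only an illustrative sketch; the class names, field names, and the "change_color" command are hypothetical and are not part of any particular embodiment.

    from dataclasses import dataclass

    @dataclass
    class Pose:
        # Tracked position and orientation of a user or handheld input device.
        x: float
        y: float
        z: float
        yaw: float = 0.0
        pitch: float = 0.0
        roll: float = 0.0

    @dataclass
    class VirtualObject:
        center: tuple   # (x, y, z) of the object in the geospatial map
        radius: float   # simple bounding-sphere radius
        color: str = "blue"

    def intersects(pose, obj):
        # True when the tracked position falls inside the object's bounding sphere.
        dx = pose.x - obj.center[0]
        dy = pose.y - obj.center[1]
        dz = pose.z - obj.center[2]
        return (dx * dx + dy * dy + dz * dz) ** 0.5 <= obj.radius

    def try_modify(pose, obj, command):
        # A modification (e.g., a color change) is permitted only after the tracked
        # position intersects the object and a user-initiated command is received.
        if intersects(pose, obj) and command == "change_color":
            obj.color = "red"
            return True
        return False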
Some of the sensors 124 (e.g., cameras and other optical sensors of AR devices) may also be used to capture information about a physical environment, which is used to generate virtual representations of that information, or to generate geospatial maps of the physical environment that can be used to determine where and how to present virtual content among physical objects of the physical environment. Such virtual representations and geospatial maps may be created using any known approach. In one approach, many two-dimensional images are captured by a camera of an AR device, those two-dimensional images are used to identify three-dimensional points in the physical environment, and the three-dimensional points are used to determine relative positions, relative spacing and structural characteristics (e.g., surfaces and depths) of physical objects in the physical environment. Other optical sensors may be used in addition to a camera (e.g., a depth sensor). Textures, colors and other features of physical objects or physical environments can be determined by analysis of individual images.
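One way the optical capture described above could be approximated, shown here only as an illustrative sketch and not as the method used by any particular AR device, is to back-project depth-sensor readings into three-dimensional points using a pinhole camera model. The function name and parameters below are hypothetical.

    def backproject(depth_image, fx, fy, cx, cy):
        """Convert a depth image (metres per pixel) into 3D points in the camera frame.

        fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
        Returns a list of (x, y, z) tuples, skipping pixels with no depth reading.
        """
        points = []
        for v, row in enumerate(depth_image):        # v: pixel row
            for u, z in enumerate(row):              # u: pixel column, z: depth
                if z <= 0:
                    continue                         # no depth reading at this pixel
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
        return points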
Examples of the user devices 120 include VR, AR, MR and general computing devices with displays, including head-mounted displays, sensor-packed wearable devices with a display (e.g., glasses), mobile phones, tablets, desktop computers, laptop computers, or other computing devices that are suitable for carrying out the functionality described in this disclosure.
The methods or processes outlined and described herein, and particularly those that follow below, can be performed by one or more processors of the platform 110, either alone or in connection or cooperation with the user device(s) 120. The processes can also be performed using distributed or cloud-based computing.
As shown in
A determination is made if an AR emulator mode is to be activated on the VR device (220). Examples of determining if an AR emulator mode is to be activated include: a menu selection via a user interface of the VR device, clicking on a link received from another user that directs the VR device to the AR emulator mode, opening a file storing an emulation program, or other ways of determining. When a particular emulation program is executed, the AR emulator mode depicts what a user would experience if that user wore a particular AR device while encountering physical things in a physical environment. One particular use of the AR emulator mode is to emulate AR training programs that a user would interact with when later using an AR device in a particular physical environment (e.g., training programs for maintenance, repair and operations (MRO) or other situations). Another use of the AR emulator mode is to emulate a user experience of an AR device during design of that user experience so a designer can evaluate and modify the user experience according to what the emulator presents to the designer on a VR device. Yet another use of the AR emulator is to emulate a user experience with an AR device before a user encounters the experience during future use of an AR device. As used herein, emulating the AR experience can include converting an AR training program (e.g., an AR program, algorithm, predefined set of instructions, etc.) intended for use with an AR user device 120 into a training program or set of instructions/algorithm that creates the look and feel of an AR interface for use on a VR user device.
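As a non-limiting sketch of the determination in step 220, the following Python fragment checks the activation paths listed above. The function name, the "emulator://" link scheme, and the ".emu" file extension are hypothetical placeholders rather than features of any particular embodiment.

    def should_activate_ar_emulator(menu_selection=None, received_link=None, opened_file=None):
        """Return True if any of the activation paths described above is present.

        menu_selection: a string chosen via the VR device's user interface
        received_link:  a link received from another user
        opened_file:    path of a file storing an emulation program
        """
        if menu_selection == "ar_emulator":
            return True
        if received_link is not None and received_link.startswith("emulator://"):
            return True
        if opened_file is not None and opened_file.endswith(".emu"):
            return True
        return False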
Different emulation programs can exist for different AR devices, different users of the same AR device, or different physical environments in which the same AR device is used. Each emulation program can do one or more of the following: (i) display a virtual object representing an identified physical thing, (ii) replicate the physical appearance of a screen of an identified AR device as would be seen by a user operating the AR device, (iii) replicate a user interface of that AR device that would be seen by the user operating the AR device, (iv) replicate virtual content displayed on the screen of the AR device during operation of the AR device in association with the physical thing, (v) capture the user movement and/or action in the virtual world and interpret/map the user behavior/action into an action that is known by the emulation program, and (vi) provide an appropriate reaction based on the instructions in the emulation program.
When an AR emulation program is executed by a processor of the VR device, different outputs are displayed on the screen of the VR device. The outputs can include: (i) virtual objects representing physical things that would be encountered when a user operates an AR device in a physical environment that includes the physical things; (ii) an image of any part of the AR device that is (or would be) in view of the user when the user operates the AR device, including an AR screen of the AR device; (iii) representations of different parts of a user interface that appear under different scenarios within the image of the AR screen at locations where the different parts of the user interface would appear when a user operates the AR device; and (iv) representations of virtual content that appear within the image of the AR screen at locations relative to the virtual objects that match locations where the virtual content would appear when the user operates the AR device in view of the physical things represented by the virtual objects. In short, the AR emulation program depicts the appearance of a user experience provided by an AR device when the AR device is operated by a user.
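By way of illustration only, the four classes of output listed above could be composed into a single draw list per frame as sketched below. The function and parameter names are hypothetical, and the ordering shown is one possible layering, not a required one.

    def compose_emulated_frame(visible_objects, ar_screen_image, ui_elements, anchored_content):
        """Return the draw list for one frame of the AR emulation (illustrative only).

        visible_objects:  virtual objects representing physical things in the field of view
        ar_screen_image:  image of the AR device's screen as seen by its wearer
        ui_elements:      user-interface elements drawn inside the AR screen image
        anchored_content: virtual content positioned relative to the virtual objects
        """
        draw_list = []
        draw_list.extend(visible_objects)   # (i) the virtual environment itself
        draw_list.append(ar_screen_image)   # (ii) the image of the AR screen
        draw_list.extend(ui_elements)       # (iii) the emulated AR user interface
        draw_list.extend(anchored_content)  # (iv) content placed where it would
                                            #      appear on the real AR screen
        return draw_list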
If during step 220, the AR mode is not activated, no AR emulation program is executed using the VR device (230).
If during step 220, the AR mode is activated, a physical thing (and optionally other physical things) to display in the virtual environment is determined (240). Examples of determining a physical thing to display in the virtual environment include (i) selection by the user of the physical thing or a physical location where one or more physical things of interest may be present (e.g., a menu selection via a user interface of the VR device, clicking on a link received from another user that identifies the physical thing, or other ways of selecting), or (ii) identifying a predefined physical thing or a physical location where one or more physical things of interest may be present (e.g., a thing associated with an emulation program that is to be presented to the user of the VR device, where the emulation program may be selected among one or more emulation programs).
The method of
Optionally, a group of one or more AR devices that can be emulated in association with the physical thing is determined (250). The group of AR devices can be determined in different ways. In one implementation, the group of AR devices includes devices that are approved for use with the physical thing or in a physical environment that contains the physical thing (e.g., the group consists of ruggedized AR devices for heavy equipment maintenance, the group consists of AR devices that have the ability to offer particular security features, or other groups). In another implementation, the group of AR devices includes devices that can display particular virtual content relating to the physical thing (e.g., the group consists of AR devices that can track physical things and display information relative to the positions of the tracked physical things).
An AR device (optionally from the group of one or more AR devices) to emulate in the virtual environment is determined (260). Examples of determining an AR device from the group of one or more AR devices to emulate in the virtual environment include (i) selection by the user of the AR device (e.g., a menu selection via a user interface of the VR device, clicking on a link received from another user that identifies the AR device, or other ways of selecting), or (ii) identifying a predefined AR device (e.g., an AR device associated with the emulation program that is to be presented to the user of the VR device).
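A minimal sketch of steps 250 and 260 is given below, assuming each AR device is described by a small profile record. The dictionary keys ("approved_for", "tracks_objects", "name") are hypothetical and stand in for whatever device metadata a given implementation maintains.

    def select_ar_device(devices, physical_thing, user_choice=None):
        """Pick an AR device profile to emulate (illustrative sketch only)."""
        # (250) narrow to devices approved for the physical thing and able to
        # display content tracked relative to it
        group = [d for d in devices
                 if physical_thing in d.get("approved_for", []) and d.get("tracks_objects")]
        if not group:
            return None
        # (260) honor an explicit user selection, otherwise fall back to a
        # predefined device associated with the emulation program
        if user_choice is not None:
            for d in group:
                if d["name"] == user_choice:
                    return d
        return group[0]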
An emulation of an experience the user would have when wearing the AR device while viewing the physical thing is presented using the VR device of the user (270). One example of step 270 includes displaying, on the screen of the VR device: (i) an image of a screen of the identified AR device; (ii) a representation of a user interface of the identified AR device that would be displayed by the screen of the identified AR device during its use in a physical environment; (iii) representations of virtual content that would be displayed by the screen of the identified AR device during its use; and (iv) virtual object(s) that represent physical thing(s) in the physical environment. In some embodiments, a mapping of the physical environment and its physical things is determined, and the mapping is used to generate a virtual environment and virtual objects that virtually represent the physical environment and the physical things, where the relative positions and orientations of the virtual objects match the relative positions and orientations of the physical things. The process of step 270 is described in more detail in connection with
Using a VR Device of a User to Present an Emulation of an Experience the User Would Have When Wearing an AR Device While Viewing a Physical Thing (270)
As shown in
An image of an AR screen of the AR device that would be in view of the user if the user was wearing the AR device is displayed on the screen of the VR device (372). Such an image, and related 3D information/data, may be previously generated and stored before step 270, and later retrieved during step 270. In one implementation, the image of the AR screen is static and can be generated and displayed separately from representations of a user interface or dynamic virtual content (e.g., information about the identified physical thing) that would display on the AR screen if the user wore the selected AR device. Separating images of the AR screen and representations of a user interface and virtual content offers different technical advantages depending on implementation, including: (i) reduction of bandwidth use when transmitting images since the static image of the AR screen need only be transmitted once to the VR device; (ii) reduction of processing needed to render new images at the VR device since the static image of the AR screen may not need to be re-rendered as often as other images; and (iii) reduction of data storage needed to store images since single instances of represented virtual content can be stored for use with different images of different AR screens (as compared to having the same representations of virtual content replicated and saved with different AR screens of different AR devices).
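The separation of the static AR screen image from dynamic content, and the resulting one-time transfer noted in advantage (i), could be sketched as follows. This is an illustrative cache only; the class name and the "fetch" callable are hypothetical.

    class ARScreenCache:
        """Cache the static image of the AR screen so it is retrieved only once
        (illustrative sketch; 'fetch' stands in for any transfer from the platform)."""

        def __init__(self, fetch):
            self._fetch = fetch          # callable that retrieves an image by device name
            self._static_images = {}     # device name -> static AR screen image

        def screen_image(self, device_name):
            # The static image is transmitted and rendered once and reused every frame;
            # only the dynamic user interface and virtual content are re-rendered.
            if device_name not in self._static_images:
                self._static_images[device_name] = self._fetch(device_name)
            return self._static_images[device_name]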
The VR device waits for or instructs the user to move towards an interactive virtual object that represents the physical thing (373).
A determination is made as to whether the interactive virtual object is (i) near (e.g., within a predefined distance of) the current position of the user, and (ii) inside the image of the AR screen (374).
If the virtual object is not near the current position of the user, or is not inside the image of the AR screen during step 374, the process returns to step 373.
If the virtual object is near the current position of the user and is inside the image of the AR screen during step 374, the interactive virtual object is displayed on the screen of the VR device (375).
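The two-part test of step 374 could be sketched as below. The distance threshold, the rectangle representation of the AR screen image, and the projected screen coordinates are all illustrative assumptions, not prescribed values.

    def object_ready_to_display(user_pos, obj_pos, screen_rect, obj_screen_xy,
                                max_distance=1.5):
        """Step 374 as a sketch: is the virtual object near the user AND projected
        inside the image of the AR screen?

        user_pos, obj_pos: (x, y, z) positions in the virtual environment
        screen_rect:       (left, top, right, bottom) of the AR screen image in display coords
        obj_screen_xy:     (x, y) projection of the object on the VR display
        """
        dx = obj_pos[0] - user_pos[0]
        dy = obj_pos[1] - user_pos[1]
        dz = obj_pos[2] - user_pos[2]
        near = (dx * dx + dy * dy + dz * dz) ** 0.5 <= max_distance

        left, top, right, bottom = screen_rect
        x, y = obj_screen_xy
        inside = left <= x <= right and top <= y <= bottom

        return near and inside   # otherwise the process returns to step 373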
Representation(s) of a user interface and/or AR virtual content are displayed on the screen of the VR device inside the image of the AR screen (376). Such representations of a user interface and/or AR virtual content may be previously generated and stored before step 270, and later retrieved during step 376. Examples of different AR virtual content that is represented using the VR device include stored information, videos, digital twins, documentation, or images associated with the physical thing (e.g., instructions for operating the thing, instructions for performing maintenance on the thing, options for repairing the thing, methods for performing diagnostics on the thing, options for designing the thing, information about the thing, etc.). As used herein, a digital twin is a virtual replica of a physical object, that is, a virtual representation that captures the size, characteristics, composition, color, texture, etc. of a physical object such that the user believes he/she is interacting with the physical object. Examples of a user interface include information or images that are not associated with the physical thing, but that may be used to control the operation of a program being executed by a device.
By way of example, a sub-step during step 376 of determining where to display a representation of a user interface and/or representations of virtual content may include: (i) identifying a predefined part of the interactive virtual object that is inside the image of the AR screen, and displaying a representation of virtual content at a position relative to the predefined part in the same manner the virtual content would be displayed at a position in the screen of the AR device relative to a part of the physical object that matches the predefined part of the interactive virtual object; (ii) identifying an area inside the image of the AR screen that does not block the user's view of the interactive virtual object, and displaying a representation of a user interface and/or a representation of virtual content in that area; (iii) identifying a location inside the image of the AR screen that is selected by the user, and displaying a representation of virtual content at that location; (iv) identifying a predefined area of the AR screen for displaying a representation of a user interface and/or a representation of virtual content, and displaying the representation of the user interface and/or the representation of the virtual content in that predefined area; and (v) identifying an area that replicates the same area on the AR headset in which the information would be displayed.
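As a further illustration of sub-steps (i) and (iv) above, the placement of a representation could be sketched as follows. The pixel offsets, content size, and function name are hypothetical values used only to show the anchoring and clamping idea.

    def place_content(anchor_xy, screen_rect, offset=(20, -20), size=(200, 100)):
        """Position a representation of virtual content relative to a predefined part
        of the interactive virtual object, then clamp it so that it stays inside
        the image of the AR screen (illustrative sketch only)."""
        left, top, right, bottom = screen_rect
        x = anchor_xy[0] + offset[0]
        y = anchor_xy[1] + offset[1]
        # keep the content box entirely within the image of the AR screen
        x = max(left, min(x, right - size[0]))
        y = max(top, min(y, bottom - size[1]))
        return (x, y)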
User input is detected (377), and an action is performed based on the user input (378). Examples of user input include: movement of the user's position and/or orientation within the virtual environment; interaction by the user with a displayed representation of virtual content; interaction by the user with the virtual object; the user speaking a voice command; the user using the VR controller to perform an action; and the user selecting a menu item on the AR user interface.
The AR emulation program interprets the user's action and provides a subsequent reaction, which may be: manipulation of the virtual object; an update to the user's view; an update to a displayed AR menu option; a new AR menu being displayed; new virtual objects being displayed; or new AR content being displayed within the simulated AR display/headset. An action is performed based on the user input at step 378. Examples of step 378 are provided in
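Separately from the figures referenced above, the mapping from a detected user input (377) to a reaction (378) could be sketched as a simple dispatch table. The event types and reaction labels below are hypothetical examples, not an exhaustive or required mapping.

    def interpret_input(event):
        """Map a detected user input (step 377) to a reaction (step 378); illustrative only."""
        handlers = {
            "move":          lambda e: {"reaction": "update_view", "pose": e.get("pose")},
            "voice_command": lambda e: {"reaction": "advance_step", "text": e.get("text")},
            "grab_object":   lambda e: {"reaction": "manipulate_object", "object": e.get("object")},
            "menu_select":   lambda e: {"reaction": "show_new_ar_menu", "item": e.get("item")},
        }
        handler = handlers.get(event.get("type"))
        if handler is None:
            return {"reaction": "ignore"}   # unknown inputs produce no reaction
        return handler(event)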
The predefined interactions, or training program, can further include work instructions, maintenance instructions, troubleshooting and repair instructions, operational instructions, design instructions, a set of required modifications, "how to" instructions, directional instructions, etc. Any series or set of actions or tasks that can be executed in an AR system can be transformed or otherwise converted and emulated in a VR system. The training program can include instructions that allow a user to learn to operate, for example, an AR user device 120 in an experience-based environment that mimics the real world. This can be particularly useful in situations concerning maintenance training for large equipment that is not easily transported (e.g., oil rigs, large construction equipment, etc.). Thus, the system can convert AR data or software elements intended for use with an AR user device into data or other information required to display the same elements in a VR environment viewable using a VR user device.
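One possible shape of such a conversion is sketched below. The field names ("instruction", "physical_thing", "expected_actions") are hypothetical stand-ins for whatever format an AR work-instruction program actually uses; the sketch shows only the idea of mapping AR program steps to VR-emulated steps.

    def convert_ar_program(ar_steps):
        """Convert an AR training program's steps into VR-emulated steps (sketch only)."""
        vr_steps = []
        for step in ar_steps:
            vr_steps.append({
                "instruction": step["instruction"],           # text shown inside the AR screen image
                "target": step["physical_thing"],             # becomes an interactive virtual object
                "expected_actions": step["expected_actions"]  # ordered sequence checked during step 578
            })
        return vr_steps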
In some examples, the system can be implemented as a training or evaluation tool for a remote technician, to introduce procedures or evaluate the technician's performance in situations that simulate experiences the technician would encounter in the field (e.g., the real world) while conducting maintenance. In real-world operations, the remote technician can be equipped with an AR device for use in conducting actual maintenance, inspections, etc. on equipment in the field. The system can therefore provide VR-simulated training scenarios and associated instructions for using an actual AR device (e.g., via the emulated AR device on the VR system). This implementation can provide valuable training by simulating use of the AR system in the real-world environment.
If the determination during step 578a is that the user input indicates the user otherwise manipulated the virtual object, then the virtual object is redisplayed on the screen of the VR device as manipulated (578b). Examples of manipulating a virtual object include moving the virtual object (in whole or in part) or other known manipulations. In addition to redisplaying the virtual object in its manipulated form (e.g., opening the hood of a virtual car), the system may display an updated AR menu and/or content to provide further instruction to the user. If the determination during step 578a is that the user was attempting to complete a predefined interaction with the virtual object (e.g., as directed by presented representations of virtual content), a determination is made as to whether the user successfully completed the predefined interaction (578c). If completion was determined to be successful during step 578c, the process returns to step 376 for any new representation of content, or returns to step 373 for any additional interactive virtual object (578d). If completion was determined to be unsuccessful during step 578c, the VR device may output an indication that the predefined interaction was not successfully completed, and then wait for the user to successfully complete the predefined interaction by returning to step 578c (578e).
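The branching of steps 578a through 578e could be sketched as follows. The dictionary keys and the returned labels are illustrative placeholders, assuming a user input and a predefined interaction are each described by a simple record.

    def handle_interaction(user_input, predefined_interaction):
        """Sketch of steps 578a-578e: decide whether the input is an arbitrary
        manipulation or an attempt at the predefined interaction, and report the result."""
        if user_input["kind"] != predefined_interaction["kind"]:
            # 578a/578b: the user otherwise manipulated the object; redisplay it as manipulated
            return {"next": "redisplay_manipulated", "show_updated_menu": True}
        if user_input.get("value") == predefined_interaction.get("value"):
            # 578c/578d: predefined interaction completed successfully
            return {"next": "return_to_step_376_or_373", "completed": True}
        # 578c/578e: attempted but unsuccessful; indicate failure and wait for another attempt
        return {"next": "wait_and_recheck", "completed": False}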
Illustrations of different approaches for using a VR display area of a VR device to emulate an AR display area of an AR device
Different predefined actions can be used in different embodiments, including: (i) a movement by the user relative to the physical thing or the virtual object that is respectively recognized by the AR device or the VR device (e.g., using a visual sensor and image processing capability of that AR or VR device, or using tracked movement of a tracking device worn, held or otherwise coupled to the user when the user uses the AR device or the VR device); (ii) a movement by the physical thing or a part of the physical thing that is recognized by the VR device or AR device (e.g., using a visual sensor and image processing capability of the VR device or AR device, or using tracked movement of a tracking device attached to the physical thing); (iii) a sound, a light, or other output by the physical thing that is recognized by the VR device or AR device (e.g., using an appropriate sensor); and/or (iv) an input received from the user (e.g., a voice command, an audio input, a selection of a displayed option relative to the content or representation of content, a selection of the virtual object or part of the virtual object).
In different embodiments: (i) the first and second predefined actions are the same and detected using similar technology (e.g., a visual sensor and image processing capability of the AR device and the VR device); (ii) the first and second predefined actions are the same, but the first and second predefined actions are detected using different technology (e.g., the AR device uses a visual sensor and image processing capability, and the VR device tracks the movement of the user using other technology such as movement of a device worn, held, or otherwise coupled to the user); (iii) the first and second predefined actions are not the same but similar (e.g., the first predefined action is a first recognized movement by the user of the physical thing, and the second predefined action is a second recognized movement by the user of the virtual object); or (iv) the first and second predefined actions are not the same or similar (e.g., the first predefined action is a first recognized movement by the user or the physical thing, and the second predefined action is a non-movement action such as a spoken command like "completed" or "advance to next step").
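A minimal sketch of comparing a first predefined action (detected on the AR side) with a second predefined action (detected on the VR side) is shown below. The "label" and "detector" keys and the example mapping of similar actions are hypothetical.

    def actions_equivalent(first_action, second_action):
        """Compare predefined actions regardless of detection technology (illustrative only).

        Each action is a dict with a 'label' (what the user does) and a
        'detector' (how it is detected, e.g., visual sensor or tracked controller)."""
        if first_action["label"] == second_action["label"]:
            # Cases (i) and (ii): same action, whether or not the detection
            # technology is the same.
            return True
        # Case (iii): not identical but treated as similar via an explicit mapping.
        similar = {"open_panel_physical": "open_panel_virtual"}
        return similar.get(first_action["label"]) == second_action["label"]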
In one embodiment, the representation 709 and the content 759 are removed only after respective predefined actions with the virtual object 705 and the physical thing 755 are detected, and respective predefined actions with the representation 709 and the content 759 are detected.
The following steps are carried out in an additional embodiment:
Each method of this disclosure can be used with virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) technologies. Virtual environments and virtual content may be presented using VR technologies, AR technologies, and/or MR technologies.
Methods of this disclosure may be implemented by hardware, firmware or software.
One or more non-transitory machine-readable media embodying program instructions that, when executed by one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any of the methods or operations described herein are contemplated. As used herein, machine-readable media includes all forms of machine-readable media (e.g. non-volatile or volatile storage media, removable or non-removable media, integrated circuit media, magnetic storage media, optical storage media, or any other storage media) that may be patented under the laws of the jurisdiction in which this application is filed, but does not include machine-readable media that cannot be patented under the laws of the jurisdiction in which this application is filed.
By way of example, machines may include one or more computing device(s), processor(s), controller(s), integrated circuit(s), chip(s), system(s) on a chip, server(s), programmable logic device(s), other circuitry, and/or other suitable means described herein (e.g., the platform 110, the user device 120) or otherwise known in the art. One or more machines that are configured to perform the methods or operations comprising the steps of any methods described herein are contemplated.
Systems that include one or more machines and the one or more non-transitory machine-readable media embodying program instructions that, when executed by the one or more machines, cause the one or more machines to perform or implement operations comprising the steps of any methods described herein are also contemplated.
Method steps described herein may be order independent, and can therefore be performed in an order different from that described. It is also noted that different method steps described herein can be combined to form any number of methods, as would be understood by one of skill in the art. It is further noted that any two or more steps described herein may be performed at the same time. Any method step or feature disclosed herein may be expressly restricted from a claim for various reasons like achieving reduced manufacturing costs, lower power consumption, and increased processing efficiency. Method steps can be performed at any of the system components shown in the figures.
Systems comprising one or more modules that perform, are operable to perform, or adapted to perform different method steps/stages disclosed herein are also contemplated, where the modules are implemented using one or more machines listed herein or other suitable hardware. When two things (e.g., modules or other features) are “coupled to” each other, those two things may be directly connected together, or separated by one or more intervening things. Where no lines and intervening things connect two particular things, coupling of those things is contemplated in at least one embodiment unless otherwise stated. Where an output of one thing and an input of another thing are coupled to each other, information sent from the output is received by the input even if the data passes through one or more intermediate things. Different communication pathways and protocols may be used to transmit information disclosed herein. Information like data, instructions, commands, signals, bits, symbols, and chips and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, or optical fields or particles.
The words "comprise," "comprising," "include," "including," and the like are to be construed in an inclusive sense (i.e., not limited to) as opposed to an exclusive sense (i.e., consisting only of). Words using the singular or plural number also include the plural or singular number, respectively. The word "or" and the word "and," as used in the Detailed Description, cover any of the items and all of the items in a list. The words "some," "any," and "at least one" refer to one or more. The term "may" is used herein to indicate an example, not a requirement; for example, a thing that may perform an operation or may have a characteristic need not perform that operation or have that characteristic in each embodiment, but that thing performs that operation or has that characteristic in at least one embodiment.
Although the present disclosure provides certain example embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/628,860, filed Feb. 9, 2018, entitled “SYSTEMS AND METHODS FOR USING A VIRTUAL REALITY DEVICE TO EMULATE USER EXPERIENCE OF AN AUGMENTED REALITY DEVICE,” the contents of which are hereby incorporated by reference in their entirety.