At least one embodiment of the present invention pertains to virtual reality (VR) and augmented reality (AR) display systems, and more particularly, to a device and method to combine VR, AR and/or real-world visual content in a displayed scene.
Virtual Reality (VR) is a computer-simulated environment that can simulate a user's physical presence in various real-world and imagined environments. Traditional VR display systems display three-dimensional (3D) content that has minimal correspondence with physical reality, which results in a “disconnected” (but potentially limitless) user experience. Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as video, graphics, sound, etc. Current AR systems attempt to merge 3D augmentations with real-world understanding, such as surface reconstruction for physics and occlusion.
Introduced here are a visualization method and a visualization device (collectively and individually, the “visualization technique” or “the technique”) for providing mixed-reality visual content to a user, including a combination of VR and AR content, thereby providing advantages of both types of visualization methods. The technique provides a user with an illusion of a physical window into another universe or environment (i.e., a VR environment) within a real-world view of the user's environment. The visualization technique can be implemented by, for example, a standard, handheld mobile computing device, such as a smartphone or tablet computer, or by a special-purpose visualization device, such as a head-mounted display (HMD) system.
In certain embodiments, the visualization device provides the user (or users) with a real-world, real-time view (“reality view”) of the user's (or the device's) environment on a display area of the device. The device determines a location at which a VR window, or VR “portal,” should be displayed to the user within the reality view, and displays the VR portal so that it appears to the user to be at that determined location. In certain embodiments, this is done by detecting a predetermined visual marker pattern in the reality view and locating the VR portal based on (e.g., superimposing the VR portal on) the marker pattern. The device then displays a VR scene within the VR portal and can also display one or more AR objects overlaid on the reality view, outside of the VR portal. In certain embodiments the device can detect changes in its physical location and/or orientation (or those of a user holding or wearing the device) and dynamically adjust the apparent (displayed) location and/or orientation of the VR portal, and of the content within it, accordingly. By doing so, the device provides the user with a consistent and realistic illusion that the VR portal is a physical window into another universe or environment (i.e., a VR environment).
The VR content and AR content each can be static or dynamic, or a combination of static and dynamic content; that is, the displayed content may change even when the user/device is motionless. Additionally, displayed objects can move from locations within the VR portal to locations outside the VR portal, in which case such objects essentially change from being VR objects to being AR objects, or vice versa, according to their display locations.
Other aspects of the technique will be apparent from the accompanying figures and detailed description.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
One or more embodiments of the present invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
In this description, references to “an embodiment”, “one embodiment” or the like, mean that the particular feature, function, structure or characteristic being described is included in at least one embodiment of the technique introduced here. Occurrences of such phrases in this specification do not necessarily all refer to the same embodiment. On the other hand, the embodiments referred to also are not necessarily mutually exclusive.
The technique introduced here enables the use of a conventional image display device (e.g., a liquid crystal display (LCD)), for example in an HMD or AR-enabled mobile device, to create a visual “portal” that appears as a porous interface between the real world and a virtual world, with optional AR content overlaid on the user's real-world view. The technique is advantageous for HMD devices in particular: because such devices typically have transparent or semi-transparent displays that can only add light to a scene, displayed AR content cannot occlude real-world content behind it; the dark background of the screen provides an improved contrast ratio that addresses this technical challenge.
In certain embodiments the mixed-reality visualization device includes: 1) an HMD device or handheld mobile AR-enabled device with six-degrees-of-freedom (6-DOF) position/location tracking capability and the capabilities of recognizing and tracking planar marker images and providing a mixed reality overlay that appears fixed with respect to the real world; 2) an image display system that can display a target image and present a blank or dark screen when needed; and 3) a display control interface to trigger the display of the planar marker image on a separate display system. In operation the mixed-reality visualization technique can include causing a planar marker image to be displayed on a separate image display system (e.g., an LCD monitor) separate from the visualization device, recognizing the location and orientation of the planar marker image with the visualization device, and operating the visualization device such that the image display system becomes a porous interface or “portal” between AR and VR content. At least in embodiments where the visualization device is a standard handheld mobile device, such as a smartphone or tablet computer, the mixed VR/AR content can be viewed by multiple users simultaneously.
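By way of illustration only, the per-frame flow just described can be sketched in Python as follows. The tracker, renderer, and external_display objects and their methods are hypothetical placeholders rather than the API of any particular device or library.

def run_mixed_reality_session(tracker, renderer, external_display, vr_scene, ar_objects):
    # Trigger display of the planar marker image on the separate display
    # system (e.g., an LCD monitor) via the display control interface.
    external_display.show_marker_image()

    while renderer.is_running():
        frame = renderer.capture_camera_frame()   # live "reality view"
        renderer.draw_background(frame)

        # Recognize the planar marker and estimate its 6-DOF pose relative
        # to the visualization device.
        marker_pose = tracker.locate_marker(frame)  # e.g., a 4x4 matrix, or None

        if marker_pose is not None:
            # The marker region becomes the VR "portal": the VR scene is
            # rendered so that it appears fixed at the marker's location
            # and orientation in the real world.
            renderer.draw_vr_portal(vr_scene, portal_pose=marker_pose)

        # AR objects are overlaid on the reality view outside the portal.
        for obj in ar_objects:
            renderer.draw_ar_overlay(obj)

        renderer.present()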
The visualization device also generates and displays to the user a VR window (also called a VR portal) 3 that, in at least some embodiments, appears to the user to be at a fixed location and orientation in space, as discussed below. The visualization device displays VR content within the VR window 3, representing a VR environment that includes a number of VR objects 4. The VR objects 4 may be far more diverse in appearance than shown in the accompanying figures.
In some embodiments, the location and orientation of the VR window 3, as displayed to the user, are determined by use of a predetermined planar marker image, or target image.
The visualization device uses the target image to determine where to locate and how to size and orient the VR window as displayed to the user. In certain embodiments the visualization device overlays the VR window on the target image and matches the boundaries of the VR window exactly to the boundaries of the target image, i.e., it coregisters the VR window and the target image. In other embodiments, the device may use the target image simply as a reference point, for example to center the VR window.
Additionally, the visualization device has the ability to sense its own location within its local physical environment and its motion in 6-DOF (i.e., translation along and rotation about each of three orthogonal axes). It uses this ability to modify the content displayed in the VR window as the user moves in space relative to the marker image, to reflect the change in the user's location and perspective. For example, if the user (or visualization device) moves closer to the target image, the VR window and the VR content within it grow larger on the display. In that event the content within the VR window may also be modified to show additional details of objects and/or additional objects around the edges of the VR window, just as a user would see more when looking out a real (physical) window from right up against it than when standing some distance away from it. Similarly, if the user moves away from the target image, the VR window and the VR content within it grow smaller on the display, with the VR content being modified accordingly. Further, if the user moves to the side so that the device does not have a direct (perpendicular) view of the planar target image, the visualization device adjusts the displayed shape and content of the VR window accordingly to account for the user's change in perspective, maintaining a realistic illusion that the VR window is a portal into another environment/universe.
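The perspective behavior described above can be modeled by projecting the corners of the planar target image through the device's current pose. The following self-contained Python sketch uses only numpy; the camera intrinsics, marker size, and poses are illustrative assumptions, not parameters of any particular embodiment. It shows how the portal's on-screen footprint grows as the device moves toward the marker.

import numpy as np

# Illustrative pinhole camera intrinsics (focal length in pixels, principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Corners of a 0.4 m x 0.3 m planar marker/portal in the marker's own frame (z = 0).
portal_corners = np.array([[-0.2, -0.15, 0.0],
                           [ 0.2, -0.15, 0.0],
                           [ 0.2,  0.15, 0.0],
                           [-0.2,  0.15, 0.0]])

def project_portal(R, t):
    """Project the portal corners into the image for a marker-to-camera pose (R, t)."""
    cam_pts = portal_corners @ R.T + t   # transform corners into the camera frame
    pix = cam_pts @ K.T                  # apply the intrinsics
    return pix[:, :2] / pix[:, 2:3]      # perspective divide -> pixel coordinates

# Device facing the marker head-on from 2 m away, then from 1 m away.
far = project_portal(np.eye(3), np.array([0.0, 0.0, 2.0]))
near = project_portal(np.eye(3), np.array([0.0, 0.0, 1.0]))

# Moving closer doubles the portal's on-screen extent, so more VR detail can be
# shown around its edges, analogous to approaching a physical window.
print(far)
print(near)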
In certain embodiments, the VR content within the VR window is a subset of a larger VR image maintained by the visualization device. For example, the larger VR image may be sized at least to encompass the entire displayed area or field of view of the user. In such embodiments, the visualization device uses occlusion geometry, such as a mesh or shader, to mask the portion of the VR image outside the VR window so that that portion of the VR image is not displayed to the user. An example of the occlusion geometry is shown in the accompanying figures.
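The masking step can be illustrated in image space with a simple per-pixel composite, as in the Python sketch below. The sketch is illustrative only; it assumes a rectangular portal footprint in screen coordinates, whereas a practical implementation would more likely render the occlusion geometry into a depth or stencil buffer of the rendering pipeline.

import numpy as np

H, W = 480, 640

# Stand-ins for the live camera image (reality view) and the full-frame VR image.
camera_image = np.zeros((H, W, 3), dtype=np.uint8)
vr_image = np.full((H, W, 3), 200, dtype=np.uint8)

# Portal footprint in screen space; in practice this is the projected (and
# possibly skewed) outline of the target image, not an axis-aligned rectangle.
portal_mask = np.zeros((H, W), dtype=bool)
portal_mask[120:360, 200:440] = True

# Occlusion step: VR content survives only inside the portal; everywhere else
# the reality view (plus any AR overlays drawn later) remains visible.
composite = np.where(portal_mask[:, :, None], vr_image, camera_image)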
At least some of the VR objects 41 through 43 may be animated. For example, the spaceship 41 may appear to fly out of the VR window toward the user, as shown in the accompanying figures.
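One illustrative way to realize such a transition is to classify a moving object as VR or AR content according to which side of the portal (marker) plane it currently occupies. The Python sketch below assumes the portal pose is already known from marker tracking; the coordinate values are illustrative only.

import numpy as np

def classify_object(obj_pos, portal_origin, portal_normal):
    """Return 'AR' if the object has crossed to the user's side of the portal
    plane, and 'VR' otherwise. portal_normal is assumed to point out of the
    portal, toward the user."""
    signed_dist = np.dot(obj_pos - portal_origin, portal_normal)
    return "AR" if signed_dist > 0.0 else "VR"

# Example: a spaceship animated along the portal normal, flying toward a user
# located at the origin; the portal is 2 m in front of the user.
portal_origin = np.array([0.0, 0.0, 2.0])
portal_normal = np.array([0.0, 0.0, -1.0])

for z in (2.5, 2.0, 1.5):   # behind the plane, on the plane, in front of the plane
    print(z, classify_object(np.array([0.0, 0.0, z]), portal_origin, portal_normal))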
The application & rendering module 73 generates the application context in which the mixed-reality visualization technique is applied and can be, for example, a game software application. The application & rendering module 73 receives the transformation data (R,t) from the 6-DOF tracking module 72, and based on that data as well as image data from the display camera(s) 75, generates image data which is sent to the display device(s) 76, for display to the user. The 6-DOF tracking module 72 and application rendering module 73 each can be implemented by appropriately-programmed programmable circuitry, or by specially-designed (“hardwired”) circuitry, or a combination thereof.
As mentioned above, the mixed-reality visualization device 71 can be, for example, an appropriately-configured conventional handheld mobile device, or a special-purpose HMD device. In either case, the physical components of such a visualization device can be as shown in the accompanying figures.
The physical components of the illustrated visualization device 71 include one or more instances of each of the following: a processor 81, a memory 82, a display device 83, a display video camera 84, a depth-sensing tracking video camera 85, an inertial measurement unit (IMU) 86, and a communication device 87, all coupled together (directly or indirectly) by an interconnect 88. The interconnect 88 may be or include one or more conductive traces, buses, point-to-point connections, controllers, adapters, wireless links and/or other conventional connection devices and/or media, at least some of which may operate independently of each other.
The processor(s) 81 individually and/or collectively control the overall operation of the visualization device 71 and perform various data processing functions. Additionally, the processor(s) 81 may provide at least some of the computation and data processing functionality for generating and displaying the mixed-reality visual content described above. Each processor 81 can be or include, for example, one or more general-purpose programmable microprocessors, digital signal processors (DSPs), mobile application processors, microcontrollers, application-specific integrated circuits (ASICs), programmable gate arrays (PGAs), or the like, or a combination of such devices.
Data and instructions (code) 90 that configure the processor(s) 81 to execute aspects of the mixed-reality visualization technique introduced here can be stored in the one or more memories 82. Each memory 82 can be or include one or more physical storage devices, which may be in the form of random access memory (RAM), read-only memory (ROM) (which may be erasable and programmable), flash memory, miniature hard disk drive, or other suitable type of storage device, or a combination of such devices.
The one or more communication devices 87 enable the visualization device 71 to receive data and/or commands from, and send data and/or commands to, a separate, external processing system, such as a personal computer, a game console, or a remote server. Each communication device 87 can be or include, for example, a universal serial bus (USB) adapter, Wi-Fi transceiver, Bluetooth or Bluetooth Low Energy (BLE) transceiver, Ethernet adapter, cable modem, DSL modem, cellular transceiver (e.g., 3G, LTE/4G or 5G), baseband processor, or the like, or a combination thereof.
Display video camera(s) 84 acquire a live video feed of the user's environment to produce the reality view of the user's environment, particularly in a conventional handheld mobile device embodiment. Tracking video camera(s) 85 can be used to detect movement (translation and/or rotation) of the visualization device 71 relative to its local environment (and particularly, relative to the target image). One or more of the tracking camera(s) 85 may be a depth-sensing camera, in which case the camera(s) 85 may apply, for example, time-of-flight principles to determine distances to nearby objects, including the target image. The IMU 86 can include, for example, one or more gyroscopes and/or accelerometers to sense translation and/or rotation of the device 71. In at least some embodiments, an IMU 86 is not strictly necessary in view of the presence of the tracking camera(s) 85, but it can nonetheless be employed to provide more robust motion estimation.
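By way of illustration only, one simple way to combine the two sources of motion data is a complementary-filter style blend of the IMU-derived estimate with the camera/marker-based estimate. The following position-only Python sketch uses an assumed blend factor and illustrative values; a practical tracker would typically fuse full 6-DOF poses (e.g., with a Kalman filter).

import numpy as np

def fuse_position(camera_pos, imu_pos, alpha=0.98):
    """Blend an IMU-integrated position estimate (smooth but drift-prone) with a
    camera/marker-based estimate (absolute but noisier and lower-rate).
    alpha close to 1 favors the IMU between camera updates."""
    if camera_pos is None:   # marker not visible this frame: rely on the IMU alone
        return imu_pos
    return alpha * imu_pos + (1.0 - alpha) * camera_pos

# Illustrative values: the camera says the device is 1.00 m from the marker; the IMU says 1.03 m.
fused = fuse_position(np.array([0.0, 0.0, 1.00]), np.array([0.0, 0.0, 1.03]))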
Note that any or all of the above-mentioned components may be fully self-contained in terms of their above-described functionality; however, in some embodiments, one or more processors 81 provide at least some of the processing functionality associated with the other components. For example, at least some of the data processing for depth detection associated with the tracking camera(s) 85 may be performed by the processor(s) 81. Similarly, at least some of the data processing for motion tracking associated with the IMU 86 may be performed by the processor(s) 81. Likewise, at least some of the image processing that supports the display device(s) 83 may be performed by the processor(s) 81; and so forth.
The machine-implemented operations described above can be implemented by programmable circuitry programmed/configured by software and/or firmware, or entirely by special-purpose circuitry, or by a combination of such forms. Such special-purpose circuitry (if any) can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), system-on-a-chip systems (SOCs), etc.
Software to implement the techniques introduced here may be stored on a machine-readable storage medium and may be executed by one or more general-purpose or special-purpose programmable microprocessors. A “machine-readable medium”, as the term is used herein, includes any mechanism that can store information in a form accessible by a machine (a machine may be, for example, a computer, network device, cellular phone, personal digital assistant (PDA), manufacturing tool, any device with one or more processors, etc.). For example, a machine-accessible medium includes recordable/non-recordable media (e.g., read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; etc.), etc.
Certain embodiments of the technology introduced herein are as summarized in the following numbered examples:
1. A method comprising: providing a user of a visualization device with a real-world, real-time view of an environment of the user, on a display area of the visualization device; determining, in the visualization device, a location at which a virtual reality window should be displayed within the real-world, real-time view of the environment of the user; and displaying, on the display area of the visualization device, the virtual reality window at the determined location within the real-world, real-time view of the environment of the user.
2. A method as recited in example 1, further comprising: generating, in the visualization device, a simulated scene of a second environment, other than the environment of the user; wherein said displaying the virtual reality window comprises displaying the simulated scene of the second environment within the virtual reality window.
3. A method as recited in example 1 or example 2, further comprising: detecting a physical movement of the visualization device; wherein said displaying the virtual reality window comprises modifying content of the virtual reality window, in the visualization device, in response to the physical movement of the visualization device, to simulate a change in perspective of the visualization device relative to the virtual reality window.
4. A method as recited in any of examples 1 through 3, wherein said determining a location at which a virtual reality window should be displayed comprises: identifying a predetermined pattern in the environment of the user; and setting the location at which a virtual reality window should be displayed, based on the predetermined pattern.
5. A method as recited in any of examples 1 through 4, wherein said displaying the virtual reality window comprises overlaying the virtual reality window over the predetermined pattern from a perspective of the visualization device.
6. A method as recited in any of examples 1 through 5, further comprising: detecting a location and orientation of the predetermined pattern; and determining a display location and orientation for the virtual reality window, based on the location and orientation of the predetermined pattern.
7. A method as recited in any of examples 1 through 6, further comprising: displaying, on the display area of the visualization device, an augmented reality image overlaid on the real-world, real-time view, outside of the virtual reality window.
8. A method as recited in any of examples 1 through 7, further comprising: displaying on the display area an object, generated by the device, so that the object appears to move from the virtual reality window to the real-world, real-time view of the environment of the user, or vice versa.
9. A method comprising: identifying, by a device that has a display capability, a first region located within a three-dimensional space occupied by a user of the device; enabling the user to view a real-time, real-world view of a portion of the three-dimensional space excluding the first region, on the device; causing the device to display to the user a virtual reality image in the first region, concurrently with said enabling the user to view the real-time, real-world view of the portion of the three-dimensional space excluding the first region; causing the device to display to the user an augmented reality image in a second region of the three-dimensional space from the point of view of the user, concurrently with said causing the device to display to the user the real-time, real-world view, the second region being outside of the first region; detecting, by the device, changes in a location and an orientation of the device; and adjusting a location or orientation of the virtual reality image as displayed by the device, in response to the changes in the location and orientation of the device.
10. A method as recited in example 9, wherein said identifying the first region comprises identifying a predetermined visible marker pattern in the three-dimensional space occupied by the user.
11. A method as recited in example 9 or example 10, wherein said causing the device to display the virtual reality image in the first region comprises overlaying the virtual reality image on the first region so that the first region is coextensive with the predetermined visible marker pattern.
12. A method as recited in any of examples 9 through 11, further comprising: displaying on the device an object, generated by the device, so that the object appears to move from the first region to the second region or vice versa.
13. A visualization device comprising: a display device that has a display area; a camera to acquire images of an environment in which the device is located; an inertial measurement unit (IMU); at least one processor coupled to the display device, the camera and the IMU, and configured to: cause the display device to display, on the display area, a real-world, real-time view of the environment in which the device is located; determine a location at which a virtual reality window should be displayed within the real-world, real-time view; cause the display device to display, on the display area, the virtual reality window at the determined location within the real-world, real-time view; detect a physical movement of the device based on data from at least one of the camera or the IMU; and modify content of the virtual reality window in response to the physical movement of the device, to simulate a change in perspective of the user relative to the virtual reality window.
14. A visualization device as recited in example 13, wherein the device is a hand-held mobile computing device, and the real-world, real-time view of the environment in which the device is located is acquired by the camera.
15. A visualization device as recited in example 13, wherein the device is a head-mounted AR/VR display device.
16. A visualization device as recited in any of examples 13 through 15, wherein the at least one processor is further configured to: generate a simulated scene of a second environment, other than the environment in which the device is located; wherein displaying the virtual reality window comprises displaying the simulated scene of the second environment within the virtual reality window.
17. A visualization device as recited in any of examples 13 through 16, wherein the at least one processor is further configured to: cause the display device to display, on the display area, an augmented reality image overlaid on the real-world, real-time view, outside of the virtual reality window.
18. A visualization device as recited in any of examples 13 through 17, wherein the at least one processor is further configured to: generate an object; and cause the display device to display the object on the display area so that the object appears to move from the virtual reality window to the real-world, real-time view of the environment in which the device is located, or vice versa.
19. A visualization device as recited in any of examples 13 through 18, wherein determining a location at which a virtual reality window should be displayed comprises: identifying a predetermined pattern in the environment of the user; and setting the location based on a location of the predetermined pattern.
20. A visualization device as recited in any of examples 13 through 19, wherein displaying the virtual reality window comprises overlaying the virtual reality window over the predetermined pattern from a perspective of the visualization device.
21. An apparatus comprising: means for providing a user of a visualization device with a real-world, real-time view of an environment of the user, on a display area of the visualization device; means for determining, in the visualization device, a location at which a virtual reality window should be displayed within the real-world, real-time view of the environment of the user; and means for displaying, on the display area of the visualization device, the virtual reality window at the determined location within the real-world, real-time view of the environment of the user.
22. An apparatus as recited in example 21, further comprising: means for generating, in the visualization device, a simulated scene of a second environment, other than the environment of the user; wherein said means for displaying the virtual reality window comprises means for displaying the simulated scene of the second environment within the virtual reality window.
23. An apparatus as recited in example 21 or example 22, further comprising: means for detecting a physical movement of the visualization device; wherein said means for displaying the virtual reality window comprises means for modifying content of the virtual reality window, in the visualization device, in response to the physical movement of the visualization device, to simulate a change in perspective of the visualization device relative to the virtual reality window.
24. An apparatus as recited in any of examples 21 through 23, wherein said means for determining a location at which a virtual reality window should be displayed comprises: means for identifying a predetermined pattern in the environment of the user; and means for setting the location at which a virtual reality window should be displayed, based on the predetermined pattern.
25. An apparatus as recited in any of examples 21 through 24, wherein said means for displaying the virtual reality window comprises means for overlaying the virtual reality window over the predetermined pattern from a perspective of the visualization device.
26. An apparatus as recited in any of examples 21 through 25, further comprising: means for detecting a location and orientation of the predetermined pattern; and means for determining a display location and orientation for the virtual reality window, based on the location and orientation of the predetermined pattern.
27. An apparatus as recited in any of examples 21 through 26, further comprising: means for displaying, on the display area of the visualization device, an augmented reality image overlaid on the real-world, real-time view, outside of the virtual reality window.
28. An apparatus as recited in any of examples 21 through 27, further comprising: means for displaying on the display area an object, generated by the device, so that the object appears to move from the virtual reality window to the real-world, real-time view of the environment of the user, or vice versa.
Any or all of the features and functions described above can be combined with each other, except to the extent it may be otherwise stated above or to the extent that any such embodiments may be incompatible by virtue of their function or structure, as will be apparent to persons of ordinary skill in the art. Unless contrary to physical possibility, it is envisioned that (i) the methods/steps described herein may be performed in any sequence and/or in any combination, and that (ii) the components of respective embodiments may be combined in any manner.
Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
This is a continuation of U.S. patent application Ser. No. 14/561,167, filed on Dec. 4, 2014, which is incorporated herein by reference in its entirety.
Related U.S. Application Data: Parent: Ser. No. 14/561,167, filed December 2014, US. Child: Ser. No. 14/610,992, US.