The disclosure relates generally to sensory feedback systems and methods for guiding users in virtual/augmented reality environments such as walk-around virtual reality environments and for assisting with preventing collisions with objects in the physical operating space in which the user acts.
Various augmented and/or virtual reality systems and/or environments are known. One current generation of desktop virtual reality (“VR”) experiences is created using head-mounted displays (“HMDs”), which can be tethered to a stationary computer (such as a personal computer (“PC”), laptop, or game console), or self-contained. Such desktop VR experiences generally try to be fully immersive and disconnect the users' senses from their surroundings.
Collisions with physical objects when using a walk-around virtual reality system are currently avoided in certain situations either by having a second person in the operating space (a “chaperone”) guide the user, and/or by providing physical hints (e.g., by placing a thick carpet on the floor that ends some distance from the adjacent walls).
It is desirable to address the current limitations in this art.
By way of example, reference will now be made to the accompanying drawings, which are not to scale.
Those of ordinary skill in the art will realize that the following description of the present invention is illustrative only and not in any way limiting. Other embodiments of the invention will readily suggest themselves to such skilled persons, having the benefit of this disclosure, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. Reference will now be made in detail to specific implementations of the present invention as illustrated in the accompanying drawings. The same reference numbers will be used throughout the drawings and the following description to refer to the same or like parts.
The data structures and code described in this detailed description are typically stored on a computer readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs) and DVDs (digital versatile discs or digital video discs), and computer instruction signals embodied in a transmission medium (with or without a carrier wave upon which the signals are modulated). For example, the transmission medium may include a communications network, such as the Internet.
In certain embodiments, memory 110 may include without limitation high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include without limitation non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 110 may optionally include one or more storage devices remotely located from the processor(s) 105. Memory 110, or one or more of the storage devices (e.g., one or more non-volatile storage devices) in memory 110, may include a computer readable storage medium. In certain embodiments, memory 110 or the computer readable storage medium of memory 110 may store one or more of the following programs, modules, and data structures: an operating system that includes procedures for handling various basic system services and for performing hardware dependent tasks; a network communication module that is used for connecting computing device 100 to other computers via the one or more communication network interfaces and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; and a client application that may permit a user to interact with computing device 100.
Certain embodiments of the present invention comprise a trackable head-mounted display (“HMD”) with at least three degrees of freedom in an operating space and optionally one or more sensors with at least two degrees of freedom of positional tracking. The HMD and the optional sensors provide sensory input to a controller, which in turn provides sensory feedback to the HMD or another output device. Without limitation, the HMD may be tethered to a stationary computer (such as a personal computer (“PC”), laptop, or game console), or alternatively may be self-contained (i.e., with some or all sensory inputs, controllers/computers, and outputs all housed in a single head-mounted device).
Each base station (320) according to certain embodiments contains two rotors, which sweep a linear beam (310) across the scene on orthogonal axes. At the start of each sweep cycle, the base station (320) according to certain embodiments emits an omni-directional light pulse (“sync signal”) visible to all sensors. Thus, each sensor computes a unique angular location in the swept volume by timing the duration between the sync signal and the beam signal. Sensor distance and orientation are solved using multiple sensors affixed to a single rigid body.
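The timing computation described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function name, the assumption of a constant rotor rate, and the example 60 Hz rotor period are all hypothetical.

```python
import math

def beam_angle(t_sync: float, t_beam: float, rotor_period: float) -> float:
    """Angle (radians) swept by the rotor between the omni-directional
    sync pulse and the moment the linear beam crosses this sensor.

    Assumes the rotor turns at a constant rate of one revolution per
    rotor_period seconds, so the elapsed fraction of a period maps
    linearly to a fraction of a full revolution."""
    return 2.0 * math.pi * (t_beam - t_sync) / rotor_period

# Example: a hypothetical rotor spinning at 60 Hz; the beam crosses the
# sensor 1/240 s after the sync flash, i.e. a quarter revolution.
angle = beam_angle(t_sync=0.0, t_beam=1 / 240, rotor_period=1 / 60)
```

Given several such angles from sensors at known offsets on one rigid body, the pose solve reduces to a standard perspective-style optimization over the known sensor geometry.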
Depending on the particular requirements of each implementation, various other tracking systems may be integrated, using techniques that are well known by skilled artisans.
The HMD in certain embodiments presents the user with a dynamic virtual environment (“virtual space”). It is tracked in the operating space so that the user's motions in the operating space are translated to the representation in the virtual space. When the HMD or an optional additional sensor closes in on physical obstacles in the operating space, such as walls or furniture, sensory feedback is provided to the user either in the virtual space or in the operating space in order to avoid a collision.
In certain exemplary embodiments the system is primed by the user in advance by defining the boundaries and limitations of the operating space through one or more methods of programmatic input (“Soft Bounds”). In other embodiments, the system automatically detects actual obstacles in the operating space through one or more sensor technologies (“Hard Bounds”).
A controller according to aspects of certain embodiments receives and processes detection signals from the HMD and/or external sensors and generates corresponding feedback to the user based on the proximity of Soft Bounds and/or Hard Bounds and/or based on explicit user request for feedback (e.g., an “overlay button”).
Without limitation, definition of Soft Bounds can be made in the following ways (including combinations), depending on the requirements of each particular implementation:
Without limitation, definitions of Hard Bounds can be made in the following ways (including combinations), depending on the requirements of each particular implementation:
Without limitation, sensory feedback can be given in the following ways (including combinations), depending on the requirements of each particular implementation:
Any of the above systems and methods may be implemented either as a digital warning signal that is triggered as a user crosses a predefined threshold, or as an analog warning that increases in intensity (e.g., an overlay of the room bounds fades in, growing brighter the closer a user gets to an obstacle, supported by a rumbling sensation that increases in intensity as the user gets even closer).
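The analog variant described above can be sketched as a simple distance-to-intensity ramp. This is an illustrative sketch only; the function name and the two threshold parameters are assumptions, not values from the disclosure.

```python
def warning_intensity(distance: float, fade_start: float, fade_end: float) -> float:
    """Map distance-to-obstacle (meters) to a warning intensity in [0, 1].

    Intensity is 0 while the user is farther than fade_start, ramps
    linearly as the user approaches, and saturates at 1 at fade_end
    or closer. The same value can drive both overlay brightness and
    rumble amplitude."""
    if distance >= fade_start:
        return 0.0
    if distance <= fade_end:
        return 1.0
    return (fade_start - distance) / (fade_start - fade_end)

# Hypothetical tuning: start fading at 1.5 m, full intensity at 0.25 m.
mid = warning_intensity(0.875, fade_start=1.5, fade_end=0.25)
```

A digital warning is the degenerate case where fade_start equals fade_end, collapsing the ramp into a threshold.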
One embodiment allows the user to dynamically reposition his or her virtual representation in the virtual space so as to be able to experience a larger area of the virtual environment than what is provided by his or her operating space. In one such exemplary embodiment, the operating space is a 3-meter by 3-meter square. The virtual environment is a room several times the size of this operating space. In order to experience all of it, a user could trigger a reposition of his representation in the virtual environment. In this example, the user could move around and rotate a “ghosted” virtual representation of him or herself and a 3-meter by 3-meter square projected onto the ground of the virtual environment. Upon accepting the repositioned space, the user would be “teleported” to his or her new place in the virtual environment and could continue moving around this new part of the virtual environment by physically moving in his or her operating space.
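The repositioning mechanic described above amounts to maintaining an offset between operating-space and virtual-space coordinates; accepting a ghosted placement simply rebases that offset. The sketch below is a minimal 2-D ground-plane illustration that ignores rotation; the class and method names are hypothetical.

```python
class Reposition:
    """Map operating-space positions to virtual-space positions via a
    translation offset; 'teleporting' changes only the offset, after
    which physical motion continues to map one-to-one."""

    def __init__(self):
        self.offset = (0.0, 0.0)

    def to_virtual(self, op_pos):
        # Virtual position = physical position + current offset.
        return (op_pos[0] + self.offset[0], op_pos[1] + self.offset[1])

    def teleport(self, ghost_virtual_pos, current_op_pos):
        # On accepting the ghosted placement, anchor the user's current
        # physical position to the chosen virtual position.
        self.offset = (ghost_virtual_pos[0] - current_op_pos[0],
                       ghost_virtual_pos[1] - current_op_pos[1])

r = Reposition()
r.teleport(ghost_virtual_pos=(10.0, 5.0), current_op_pos=(1.0, 1.0))
```

After the teleport, a one-meter physical step still produces a one-meter virtual step, just in the new region of the larger virtual room.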
In certain implementations, the appearance of chaperoning bounds according to aspects of the present invention may reduce the sense of immersion that a user experiences in a virtual environment. This can be addressed by the following solutions, either separately or in combination, depending on the requirements of each particular implementation:
First, chaperoning bounds are not displayed at full brightness immediately, but instead are slowly faded in as a user closes in on the actual bounds of the user's real environment (“operating space”). An independent fade value may be computed for each wall, then a fifth fade value (assuming an exemplary typical operating space in a room with four walls) is applied to a perimeter mesh that comprises the outer edges of the space in which the user is standing (e.g., this may appear as the highlighted edges of a cube). The fifth fade value in one embodiment may be implemented as the maximum of the fade values for each of the four walls. In this way, if a user is backing into a wall, the perimeter mesh will light up at full brightness. In certain embodiments, to assist a user in seeing the other walls as the user backs into a wall, the fade values may intentionally bleed slightly into their neighboring walls and slightly into the opposite wall. This technique allows a user to see the location of all walls without the chaperoning alerts becoming overwhelming. In certain embodiments, to increase the sense of immersion, after the brightest chaperoning bounds are activated and displayed at full brightness for some period of time (e.g., 4 seconds), the brightness of all chaperoning alerts is slowly faded to 20% of the original brightness.
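The per-wall fade scheme described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the bleed factor, the clockwise wall indexing, and the function name are hypothetical, not from the disclosure.

```python
def wall_fades(raw, bleed=0.15):
    """raw: per-wall fade values in [0, 1], one per wall, indexed in
    order around the room (e.g., front, right, back, left).

    Each wall's displayed fade is its own raw value boosted slightly by
    the strongest of its two neighbors and the opposite wall, so a user
    backing into one wall can still locate the others. The perimeter
    mesh uses a fifth value: the maximum of all raw wall fades."""
    n = len(raw)
    shown = []
    for i in range(n):
        others = max(raw[(i - 1) % n], raw[(i + 1) % n], raw[(i + 2) % n])
        shown.append(min(1.0, raw[i] + bleed * others))
    perimeter = max(raw)
    return shown, perimeter

# User backing directly into wall 0: it shows at full brightness, the
# other three walls receive a faint bleed, and the perimeter lights up.
shown, perimeter = wall_fades([1.0, 0.0, 0.0, 0.0])
```

The subsequent global fade to 20% after a few seconds at full brightness would simply scale every returned value by a time-dependent factor.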
Second, only the bounds of the wall closest to the user are shown at full intensity (e.g., as a glowing grid), while the other walls are only shown as their outlines/outer corners. Third, the intensity of the chaperoning bounds may be defined relative to the brightness of the virtual environment they are superimposed on. This underlying brightness can either be measured live based on the rendered virtual environment, or provided by the game driving the experience through an API. Fourth, after a user has stood still in one place for a few seconds and has gotten to understand where the bounds are, chaperone bounds may be automatically faded out so that the user can experience the VR environment undisturbed in spite of being close to a wall.
In certain implementations, chaperoning alert systems according to aspects of the present invention may show a warning too late for a user to stop before a collision if the user is moving too quickly. This can be addressed by the following solutions, either separately or in combination, depending on the requirements of each particular implementation:
First, the chaperoning warnings may be shown earlier intentionally. However, this may have the undesirable effect of making the usable space in which a user can experience VR smaller. Second, the velocity and/or acceleration of tracked objects (e.g., the user's HMD apparatus and/or related handheld controllers) may be measured, and the chaperone bounds may be shown sooner or later based on the outcome of these measurements. Third, the risk of rapid movement and therefore the speed/intensity of the display of chaperoning warnings may be derived from heuristics. For example, systems according to aspects of the present invention may measure how users generally experience a specific VR experience (e.g., is it one in which slow exploration is typical, or one in which fast movement is typical?). Also, if an exemplary system is designed to identify a user (e.g., by login, eye tracking cameras, height, typical motion patterns, etc.), it can base its warnings on how quickly this particular user typically moves and reacts to chaperone warnings. Fourth, if a game/application does not actually need a large usable space, chaperone warnings can be more aggressive, since the need for maximization of space is lower.
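The second solution above, scaling the warning threshold with measured velocity and acceleration, can be sketched as a stopping-distance-style heuristic. This is a minimal sketch; the function name, the default reaction time, and the kinematic margin are assumptions for illustration.

```python
def trigger_distance(base: float, speed: float, accel: float,
                     reaction_time: float = 0.5) -> float:
    """Distance (meters) at which chaperone warnings should appear.

    Starts from a base threshold for a stationary user and adds the
    distance the tracked object would cover during the user's reaction
    time at its current speed, plus a margin for positive acceleration
    toward the obstacle. Speed in m/s, accel in m/s^2."""
    kinematic_margin = speed * reaction_time + 0.5 * max(accel, 0.0) * reaction_time ** 2
    return base + kinematic_margin

# A user walking at 2 m/s triggers warnings a full meter earlier than
# a stationary user with these hypothetical parameters.
moving = trigger_distance(base=0.5, speed=2.0, accel=0.0)
```

The heuristic-based third solution could feed this same function, e.g. by substituting a per-title or per-user typical speed for the instantaneous measurement.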
In certain implementations, initial room setup according to aspects of the present invention may be perceived as relatively manual and unintuitive. This can be addressed by the following solutions, either separately or in combination, depending on the requirements of each particular implementation:
First, a user can simply walk around in the real operating space, holding his or her controller and moving it along some or all of the walls/floor/ceiling of the operating space. The measurements taken via this process are transmitted to the chaperoning system controller using any appropriate technique, as known to skilled artisans. Based on these absolute measured positions, the systems according to aspects of the present invention then calculate the smallest polyhedron that contains all of the positions in which the controller has been detected. Second, rangefinders of various types that are known to skilled artisans (e.g., ultrasound, laser) may be integrated into particular implementations, and these may generate the necessary information regarding the boundaries of the operating space, with little or no intervention required by a user.
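The walk-around calibration described above collects absolute controller positions and then fits an enclosing volume. The sketch below uses an axis-aligned bounding box as a deliberate simplification of the smallest enclosing polyhedron, which is adequate for rectangular rooms; the function name is hypothetical.

```python
def bounding_box(samples):
    """samples: iterable of (x, y, z) controller positions recorded as
    the user sweeps the controller along the walls, floor, and ceiling.

    Returns (min_corner, max_corner) of the axis-aligned box containing
    every sample. A general implementation would compute the convex
    hull of the samples instead, yielding the smallest polyhedron that
    contains all detected controller positions."""
    xs, ys, zs = zip(*samples)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical sweep of a 3 m x 3 m room with a 2.5 m ceiling.
lo, hi = bounding_box([(0, 0, 0), (3, 0, 0), (3, 3, 2.5),
                       (0, 3, 2.5), (1, 1, 1)])
```

The resulting corners would then be handed to the chaperoning controller as Soft Bounds, exactly as programmatically entered bounds would be.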
In certain embodiments, extending the concept of the independently controlled persistent ground perimeter, wall styles may be separated from perimeter styles, where the perimeter includes the vertical wall separators, ceiling outline, and ground outline. Perimeter styles could be a subset of:
1. Dynamic perimeter
2. Dynamic perimeter with persistent ground outline
3. Persistent ground outline only
4. Dynamic ground outline only (for the true minimalist who knows his or her space very well)
5. None
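The five perimeter styles enumerated above map naturally onto a configuration enum. The sketch below is illustrative only; the type and member names are hypothetical, not identifiers from the disclosure.

```python
from enum import Enum

class PerimeterStyle(Enum):
    """User-selectable rendering styles for the perimeter (vertical wall
    separators, ceiling outline, and ground outline), kept separate from
    the wall style, per the numbered list above."""
    DYNAMIC = 1                          # fades with proximity
    DYNAMIC_WITH_PERSISTENT_GROUND = 2   # dynamic walls, always-on ground
    PERSISTENT_GROUND_ONLY = 3           # only the ground outline, always on
    DYNAMIC_GROUND_ONLY = 4              # ground outline, proximity-faded
    NONE = 5                             # no perimeter rendering at all
```

A configuration utility of the kind described in the next paragraph could persist one such value per game or application.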
In certain embodiments, users may select the invasiveness, aggressiveness, fade distance, and/or color scheme of the chaperoning bounds that are displayed, via any suitable user interface and/or configuration utility using techniques that are known to skilled artisans. For example, in terms of color scheme selection, a suitable palette of colors may be predetermined from which a user may select, or users may be permitted to choose hue and/or saturation, while brightness is generated by systems according to aspects of the present invention. Moreover, user selections may be adjusted and/or saved, depending on particular games or applications.
While the above description contains many specifics and certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art, as mentioned above. The invention includes any combination or sub-combination of the elements from the different species and/or embodiments disclosed herein.
This application is a continuation of, and claims priority to, co-pending commonly owned U.S. patent application Ser. No. 14/933,955 entitled, “SENSORY FEEDBACK SYSTEMS AND METHODS FOR GUIDING USERS IN VIRTUAL REALITY ENVIRONMENTS” and filed on Nov. 5, 2015, which claims the benefit of Provisional Application Ser. Nos. 62/075,742, filed on Nov. 5, 2014, and 62/126,695, filed on Mar. 1, 2015, all of which are incorporated herein by reference.
Number | Date | Country | |
---|---|---|---|
20190212812 A1 | Jul 2019 | US |
Number | Date | Country | |
---|---|---|---|
62075742 | Nov 2014 | US | |
62126695 | Mar 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14933955 | Nov 2015 | US |
Child | 16353422 | US |