The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
The present disclosure is generally directed to various methods of decoupling virtual representations of objects from the physical objects they represent, in a controlled manner. In artificial reality systems, including augmented reality, virtual reality, or other similar systems, steps are often taken to map out and/or track real-world objects, and then illustrate those tracked objects in the artificial reality environment. For example, an artificial reality system may determine the current location of a user's arms or hands, and then track any movements of those objects in physical space. The movements of those physical items may then be mapped to virtual arms or hands in an artificial reality environment.
These systems may attempt to precisely map the user's physical movements to the virtual arms or hands in the artificial reality environment. In some cases, these artificial reality systems may use exterior cameras mounted to artificial reality devices or cameras mounted to controllers to determine the current location of the user's hands. This location may then be used to render a virtual representation of the user's physical movements. In other cases, these artificial reality systems may use full physics simulations to physically model the user's fingers, hand, or other body parts, and use that full physics simulation to determine how the user will interact with virtual objects in the artificial reality environment. This full physics simulation, however, is computationally very expensive. Furthermore, whether using external cameras or performing full physics modeling, neither of these approaches provides control over determining when a user has “touched” or otherwise interacted with a virtual object in the virtual environment.
In contrast to such systems, the embodiments herein provide various methods and systems that allow precise control of a user's interactions with virtual objects within an artificial environment. Moreover, at least some of these embodiments may allow virtual interactions with virtual objects to feel as though they were physical interactions with physical objects. The embodiments herein may be designed to decouple the location of physical objects (e.g., a user's hands) from the virtual representation of those physical objects. Thus, in contrast to systems that attempt to faithfully reproduce each movement of the user, the embodiments herein controllably decouple the user's movements from the virtual representation of those movements. As such, the process of virtually representing physical objects may be computationally much cheaper than full physics simulations, may provide more flexibility than physics-based solutions, and may allow each interaction with a virtual item to be unique, providing a lifelike feel to each virtual item. These embodiments will be described in greater detail below with reference to the computing environment 100 of
For example, the communications module 104 may communicate with other computer systems. The communications module 104 may include wired or wireless communication means that receive and/or transmit data to or from other computer systems. These communication means may include hardware radios including, for example, a hardware-based receiver 105, a hardware-based transmitter 106, or a combined hardware-based transceiver capable of both receiving and transmitting data. The radios may be WIFI radios, cellular radios, Bluetooth radios, global positioning system (GPS) radios, or other types of radios. The communications module 104 may interact with databases, mobile computing devices (such as mobile phones or tablets), embedded computing systems, or other types of computing systems.
The computer system 101 may also include a virtual environment generating module 107. The virtual environment generating module 107 may be configured to generate artificial reality environments including augmented reality environments, virtual reality environments, and other similar types of virtual or computer-generated environments. These virtual environments may include solely virtual items, or combinations of physical (i.e., real-world) items and virtual items 108. The virtual items 108 may include substantially any item that can be represented by a computer-generated image. In some cases, as will be shown further below with regard to
The physical object detecting module 109 of computer system 101 may be configured to detect physical objects 110 and may further determine their position and/or movements. The physical object detecting module 109 may implement any number of sensors or other electronic devices when performing such detections (please see
The virtual representation generating module 112 of computer system 101 may generate a virtual representation of a detected physical object 110. Thus, for example, if the detected physical object 110 is a user's hand, the virtual representation generating module 112 may generate a virtual representation 113 of the user's hand. This virtual representation 113 may be configured to generally track the real-world physical movements of the user's hand or other physical object 110. In many of the examples herein, the physical object 110 tracked by the physical object detecting module 109 is a user's hand or hands. As such, the virtual representation 113 may often be referred to herein as “virtual hands,” although it will be understood that the virtual representation 113 may virtually represent any physical object, and is not limited to a user's hands. Indeed, as will be explained herein, the virtual representation may represent a writing implement such as a pencil or pen, a controller such as a video game or virtual reality controller, a user's face, a user's body, or other physical objects.
The presentation module 114 of computer system 101 may present the virtual item 108 (e.g., a user interface) and the virtual representation 113 of the detected physical object 110 in a display. The display may be an artificial reality device 117, a smartphone 116, a television, a computer monitor, or other type of display visible to the user 115. In some cases, the computer system 101 may be separate from the artificial reality device 117 or the smartphone 116, etc., while in other cases, the computer system (or at least some of its components or modules) may be part of the artificial reality device 117, the smartphone 116, or other electronic device. Using the artificial reality device 117 or the smartphone 116, for example, the user 115 may provide inputs 118 to the computer system including movement inputs in relation to an artificial reality interface.
However, while the movements of the virtual representation 113 within the virtual environment are generally designed to track the physical movements of the detected physical object 110, in the embodiments herein, the presentation module 114 may controllably decouple the virtual representation from the physical object's real-world movements. That is, the presentation module 114 may present the virtual representation 113 in a position that is different than that indicated by the movement of the physical object, at least in some manner and by some degree. This difference in the virtual representation from the actual movement of the tracked physical object may be based on the determined intent of that movement. For instance, if the systems herein determine that a real-world movement by a user was intended to cause a certain effect within the virtual environment, that effect may be carried out within the virtual environment, regardless of whether the physical movement fully triggered that effect. These concepts will be explained further below with regard to method 200 of
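By way of illustration only, and not as part of the disclosed embodiments, the sketch below shows one way such intent-based decoupling could be expressed in software: the virtual hand is drawn at the interface surface whenever the inferred intent is a press, and simply mirrors the tracked input otherwise. All names and numeric thresholds (e.g., `MovementIntent`, `present_virtual_hand`, the 0.25 m/s approach speed) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MovementIntent(Enum):
    """Coarse categories of what a tracked movement appears to be trying to do."""
    PRESS_BUTTON = auto()
    HOVER = auto()
    NO_INTERACTION = auto()


@dataclass
class TrackedState:
    depth_toward_ui: float   # meters the hand has advanced along the interface normal
    speed_toward_ui: float   # meters/second toward the interface


def infer_intent(state: TrackedState, surface_depth: float) -> MovementIntent:
    """Guess the user's intent from proximity and approach speed (illustrative thresholds)."""
    if state.depth_toward_ui >= surface_depth:
        return MovementIntent.PRESS_BUTTON
    if state.speed_toward_ui > 0.25 and surface_depth - state.depth_toward_ui < 0.05:
        # Moving quickly and nearly touching: treat the movement as an intended
        # press even though the physical hand has not yet reached the surface.
        return MovementIntent.PRESS_BUTTON
    if surface_depth - state.depth_toward_ui < 0.15:
        return MovementIntent.HOVER
    return MovementIntent.NO_INTERACTION


def present_virtual_hand(state: TrackedState, surface_depth: float) -> float:
    """Return the depth at which to draw the virtual hand, decoupled from the raw input."""
    if infer_intent(state, surface_depth) is MovementIntent.PRESS_BUTTON:
        # Carry out the inferred effect: draw the hand at the surface, whether
        # the physical hand has overshot it or stopped just short of it.
        return surface_depth
    return state.depth_toward_ui  # otherwise mirror the tracked movement


# A hand 3 cm short of the interface but approaching quickly is drawn at the surface.
print(present_virtual_hand(TrackedState(0.47, 0.40), surface_depth=0.50))  # 0.5
```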
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
As illustrated in
The physical object detecting module 109 of computer system 101 may detect, at step 220 of method 200, a current position of at least one physical object 110 that is to be portrayed within the virtual environment 300. The physical object detecting module 109 may detect the physical object 110 and its position 111 and/or movements using various hardware sensors including, without limitation, cameras, accelerometers, gyroscopes, piezoelectric sensors, electromyography (EMG) sensors, or other types of sensors. The physical object(s) 110 detected by module 109 may include a user's left hand, a user's right hand, a user's fingers, a user's legs, feet, face, or other body parts, an electronic controller held by a user, a writing implement such as a pen or pencil held by the user, a paintbrush or other tool held by a user, or substantially any other physical object. While many examples herein may refer to tracking a user's hands, it will be understood that the physical object may be a single hand, multiple hands, a portion of a hand, or a portion of any other body part or other object that is detectable by the physical object detecting module 109 of computer system 101.
At step 230 of method 200, the virtual representation generating module 112 may generate a virtual representation 113 of the physical object 110 within the virtual environment 300. The virtual representation 113 of the physical object 110 may be configured to at least partially follow movements of the physical object relative to the virtual item (e.g., user interface 301). Thus, as the physical object detecting module 109 tracks movements of user 115's hands, for example, the virtual representation 113 of those hands may be shown in the virtual environment 300. Thus, virtual hand 303 may be generated by the virtual representation generating module 112 to represent the user's real-world right hand and its movements in relation to the user interface 301. However, as will be shown below, the embodiments herein may controllably decouple the physical hand from the virtual representation 113 of that hand. This controlled decoupling may provide additional sensory information that may feel to the user as though they were interacting with an actual physical object. Embodiments illustrating this controlled decoupling are described further below.
At step 240 of method 200, the presentation module 114 of computer system 101 may present the virtual item 108 and the generated virtual representation 113 of the physical object 110 within the virtual environment 300. The virtual representation 113 of the physical object 110 may be at least partially, controllably decoupled from the movements of the physical object relative to the virtual item 108. Thus, in
For instance, the user's overreach may inadvertently select a different button than intended. Or, the user may wish to point to or select a specific item on the user interface, but may then move their hand or wrist, causing the selection point to move to an undesired location. Or, still further, the user may intend to push a button on the interface, but may move their physical hand far past the back of the interface and may then withdraw their hand to release the button. However, because the user's physical hand was so far past the back of the user interface 301, the act of unselecting the button may not occur until the user has moved their hand back to a position in front of the user interface 301. These movements may be fatiguing to users, and may cause frustration if inadvertent actions are carried out on the user interface. The examples above may be partially or entirely avoided by controllably decoupling the user's physical movements from the movements of the virtual representation (e.g., virtual hand 303) in the virtual environment 300.
At step 410, the virtual environment generating module 107 may generate a virtual item (e.g., user interface 501 of
For example, as shown in
The embodiments described herein may be configured to stop (or at least temporarily stop) the virtual representation at the surface of the virtual item, regardless of how far past the surface of the virtual item the physical object travels. For instance, as shown in
Additionally or alternatively, the systems herein may be configured to apply a less rigid limit in “soft stop” embodiments. For example, in
The amount of distance needed to cause the virtual hand to move after being stopped may be controlled by policies (e.g., policies 121 stored in data store 120 of
In this manner, the controlled stopping of virtual representations (e.g., virtual hand 603) at virtual items' surfaces may control or apply limits to how items are “touched” or “contacted” within a virtual environment. This touch limiting may be used to define where a virtual surface begins and ends. In one example, if the virtual item is a light switch displayed at a specified location, the systems herein may place a hard limit on the location of the virtual hand (or other virtual representation) so that even if the user's physical hand provides an input that would take the virtual hand past the virtual light switch, the virtual hand may be “touch limited” and may be configured to controllably stop on the surface of the light switch. As such, the touch of the virtual hand may be limited and may be stopped at the light switch, even if the user's physical hand would indicate movement past the light switch.
In other cases, this limit may not be a hard stop, but may be applied proportionally, so that if the user's physical hand moves, for instance, 5 cm past the virtual light switch, the virtual hand will continue to move. In some cases, the virtual hand 603 may move more slowly past the virtual light switch or other virtual item, giving the feeling that the virtual hand is touching a thick substance like dough. If the user's physical hand moves to 10 cm, for example, past the virtual light switch, the virtual hand may move fully through the virtual light switch and into open space or onto another virtual object behind the virtual light switch. Thus, the touch limiting may be applied in an absolute manner (hard stop), or in a proportional manner (soft stop) that allows touch limiting to occur for specific movements close to the virtual item, but to cease once the movements have moved sufficiently past the virtual item.
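A minimal sketch of the hard-stop and soft-stop touch limiting described above, assuming motion is reduced to a single depth value along the push axis; the 5 cm resistance zone and 10 cm release depth mirror the light-switch example, while the function name `touch_limited_depth` and the `slow_factor` value are assumptions.

```python
def touch_limited_depth(physical_depth: float,
                        surface_depth: float,
                        hard_stop: bool = False,
                        slow_zone: float = 0.05,      # 5 cm of "dough-like" resistance
                        release_depth: float = 0.10,  # 10 cm: pass fully through
                        slow_factor: float = 0.3) -> float:
    """Return the depth at which to render the virtual hand.

    Depth increases in the push direction; the virtual item's surface sits at
    surface_depth. With a hard stop the virtual hand never passes the surface.
    With a soft stop it creeps through a resistance zone and then releases.
    """
    penetration = physical_depth - surface_depth
    if penetration <= 0.0:
        return physical_depth              # still in front of the surface
    if hard_stop:
        return surface_depth               # touch limited: stop at the surface
    if penetration <= slow_zone:
        # Proportional "soft stop": mirror only a fraction of the excess distance.
        return surface_depth + penetration * slow_factor
    if penetration <= release_depth:
        # Blend back toward the true physical depth as the hand keeps pushing.
        blend = (penetration - slow_zone) / (release_depth - slow_zone)
        slowed = surface_depth + penetration * slow_factor
        return slowed + blend * (physical_depth - slowed)
    return physical_depth                  # far enough past: fully released


# Example: a hard stop pins the hand at the surface; a soft stop lets it creep through.
print(touch_limited_depth(0.57, 0.50, hard_stop=True))   # 0.5
print(round(touch_limited_depth(0.57, 0.50), 4))          # ~0.54, a lagging "dough" feel
```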
In some cases, when a user is initially learning to interact with an artificial reality interface, the user may provide movements with their hands and/or arms, and the physical object detecting module 109 of computer system 101 may track those movements and determine how the virtual representation 113 is to be moved relative to each virtual item 108. As the user uses their hand, for example, to interact with an interface, the computer system 101 may be informed of the user's preferences directly by the user, or may learn the user's preferences regarding touch limiting over time. Thus, the amount of decoupling between the user's movements and the amount the virtual representation correspondingly moves (or doesn't move) may change and may be adaptable over time. These changes may occur based on inputs from the user, or based on machine learning algorithms, for example, that monitor the user's movements and determine the user's intention with respect to each virtual item. The user may specify the amount of decoupling (e.g., a very loose coupling between physical movements and virtual movements, or a very tight coupling that results in a nearly one-to-one correlation between physical movements and virtual representation movements), or the underlying system may specify the amount of decoupling. This amount of decoupling may be different for each user and for each virtual and/or physical object, and may be specified in a controlling policy (e.g., 121).
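Purely as an illustration of how a per-user, per-item decoupling amount might be stored in a policy and adapted over time, the following sketch keeps a coupling ratio in a policy record and nudges it based on observed overshoot; the structure and the update rule are invented for this example and are not taken from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class DecouplingPolicy:
    """Per-user, per-item record of how tightly virtual motion follows physical motion."""
    user_id: str
    item_id: str
    coupling: float = 1.0        # 1.0 = one-to-one tracking, 0.0 = fully decoupled
    learning_rate: float = 0.05  # how quickly the policy adapts to observed behavior

    def adapt(self, observed_overshoot: float):
        """Loosen the coupling for users who habitually overshoot this item.

        observed_overshoot is the distance (meters) the physical hand traveled
        past the item on the most recent interaction; larger overshoots push the
        coupling lower so the virtual hand decouples sooner next time.
        """
        target = max(0.2, 1.0 - 5.0 * observed_overshoot)
        self.coupling += self.learning_rate * (target - self.coupling)


policy = DecouplingPolicy(user_id="user-115", item_id="ui-301")
for overshoot in (0.08, 0.09, 0.07):    # the user keeps pushing ~8 cm past the UI
    policy.adapt(overshoot)
print(round(policy.coupling, 3))         # coupling drifts below 1.0 over time
```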
In the example of method 400, the systems herein may be configured to prevent a virtual representation of a physical object from moving beyond a given surface of a virtual item (i.e., the surface being contacted by the virtual representation). Controlled stops may substantially slow movement of the virtual representation at the surface of the virtual item, and then allow the virtual representation to move past the surface of the virtual item upon detecting physical movements indicating that such should occur. Additionally or alternatively, in some cases, haptic feedback may be provided to the user upon the virtual representation of the physical object reaching the surface of the virtual item within the virtual environment. This haptic feedback may be provided using any of a plurality of different haptic devices, including those described in
Turning now to
At step 710 of method 700, the virtual environment generating module 107 of
At step 730 of method 700, the virtual representation generating module 112 of
Thus, as shown in
However, in cases where pinning is applied, as shown in
In some cases, the pinning may be applied in a strict manner that leaves the virtual hand 803 pinned to the initial position regardless of further physical movements. In other cases, the pinning may be applied in a less restrictive, controlled manner that allows the virtual hand to move in a trailing fashion after the movements of the physical hand. This slower trailing motion may simulate the friction the user would feel if the system were to pin the user's virtual finger to an initial touch point on the virtual interface 801 and were to slide the affordance 804 (or other virtual touch indicator) in a delayed manner or by a proportionally reduced amount relative to the user's actual physical movements (e.g., 805). When using pinning, the user may perceive these “higher effort” movements as friction (i.e., the friction they would feel on a physical interface). As noted above, the embodiments herein may also disconnect the user's physical wrist motions from the virtual hand 803. As such, even if the user's physical hand and wrist are rotating, the virtual environment 800 may show a virtual hand 803 that is pinned to the initial touch point and may only move in the controlled, proportionally reduced manner.
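The sketch below illustrates, under simplifying assumptions, how pinning with a friction-like trailing motion could be computed for a touch point on the interface plane; the `PinnedHand` class, its `trail_ratio` value, and the two-dimensional simplification are all hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class PinnedHand:
    """Tracks a pin point on the interface plane and a trailing virtual touch position."""
    pin_x: float
    pin_y: float
    trail_ratio: float = 0.25   # fraction of physical displacement passed through
    virtual_x: float = field(init=False)
    virtual_y: float = field(init=False)

    def __post_init__(self):
        # The virtual touch point starts exactly at the initial contact location.
        self.virtual_x, self.virtual_y = self.pin_x, self.pin_y

    def update(self, physical_x: float, physical_y: float, strict: bool = False):
        """Move the virtual touch point toward the tracked physical position.

        In strict pinning the virtual point never leaves the pin; otherwise it
        trails behind the physical hand by a proportionally reduced amount,
        which reads to the user as friction. Wrist rotation is simply ignored
        because only the planar position is consumed here.
        """
        if strict:
            return
        self.virtual_x = self.pin_x + (physical_x - self.pin_x) * self.trail_ratio
        self.virtual_y = self.pin_y + (physical_y - self.pin_y) * self.trail_ratio


# Example: the physical finger slides 8 cm to the right after touching at (0, 0),
# but the affordance is dragged only 2 cm.
hand = PinnedHand(pin_x=0.0, pin_y=0.0)
hand.update(physical_x=0.08, physical_y=0.0)
print(round(hand.virtual_x, 3))  # 0.02
```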
Accordingly, as illustrated in
In addition to touch limiting embodiments and pinning embodiments, the systems herein may also apply recoil assist when interacting with a virtual user interface or with any other type of virtual item. In such cases, for example, if a user is attempting to press a virtual button in a virtual user interface and the user physically moves their hand 10 cm past the location of a virtual button on the virtual user interface, in order to complete the button press by moving their finger 2 cm backwards, the systems herein may be configured to interpret the 2 cm backwards motion from the physical location where the user begins to retract their hand (i.e., at 10 cm past the location of the virtual UI). As such, instead of forcing the user to retract their hand 12 cm from the point of turnaround to complete the button press, the user may simply retract their hand wherever the retraction naturally occurs. This retraction or “recoil” may thus occur in a variety of locations that are not necessarily tied to the virtual UI or other virtual item. By controllably decoupling the virtual representation from the tracked physical object, the systems herein may provide recoil assist to allow interaction with objects in a natural and expected manner. This may reduce latency by allowing the retraction of the virtual hand to begin wherever the user's physical hand begins to retract (even if well beyond the virtual interface). This may also reduce user fatigue by not requiring the user to retract their hand all the way back to the virtual interface plus the distance required to deselect the virtual button. This will be explained further below with regard to method 900 of
In
At step 910 of method 900, the virtual environment generating module 107 of
The computer system 101 may then determine, at step 940, that a first movement of the physical object 110 has moved the virtual representation of the physical object to a first position that is within a specified distance of a surface of the virtual item, and that a second movement of the physical object has moved the virtual representation of the physical object to a second position that is a specified distance closer to the surface of the virtual item. At step 950, the presentation module 114 may present the virtual item and the generated virtual representation of the physical object within the virtual environment 1000, where the virtual representation (e.g., 1003) is configured to recoil from the surface of the virtual item according to the determined first and second movements.
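As one possible reading of this recoil behavior, the sketch below registers a button release a fixed retraction distance from the point where the hand turns around, rather than from the interface surface; the `RecoilAssist` class and its 2 cm release distance are illustrative assumptions.

```python
from typing import Optional


class RecoilAssist:
    """Completes a press/release pair based on the turnaround point of the hand."""

    def __init__(self, surface_depth: float, release_distance: float = 0.02):
        self.surface_depth = surface_depth          # where the virtual button sits
        self.release_distance = release_distance    # e.g., 2 cm of retraction
        self.pressed = False
        self.max_depth = float("-inf")              # deepest point reached so far

    def update(self, physical_depth: float) -> Optional[str]:
        """Feed the latest tracked depth; returns 'press' or 'release' events."""
        if not self.pressed:
            if physical_depth >= self.surface_depth:
                self.pressed = True
                self.max_depth = physical_depth
                return "press"
            return None
        # While pressed, keep track of the deepest (turnaround) point.
        self.max_depth = max(self.max_depth, physical_depth)
        if self.max_depth - physical_depth >= self.release_distance:
            # Retracted 2 cm from wherever the hand turned around -- even if the
            # hand is still well behind the virtual interface -- so release now.
            self.pressed = False
            return "release"
        return None


# Example: overshoot 10 cm past the button, then retract only 2 cm to release.
assist = RecoilAssist(surface_depth=0.50)
for depth in (0.40, 0.52, 0.60, 0.59, 0.58):
    event = assist.update(depth)
    if event:
        print(event, "at depth", depth)   # press at 0.52, release at 0.58
```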
Thus, for example, as shown in
For instance,
Still further, some embodiments may provide visual cues (e.g., 1104) on the virtual interface 1100 indicating where the virtual hand 1103 is relative to the virtual interface. At least in some cases, the visual cues 1104 may be provided even if the virtual hand 1103 does not exactly correspond to the user's physical movements. This may occur as the result of controllable decoupling between the physical movements of the user and the virtual representation of those movements (e.g., virtual hand 1103). The visual cue 1104 may be configured to track the position of the virtual hand 1103, even if the physical position of the user's hand would indicate otherwise. The amount of decoupling may be controlled and may be specified in policies. These policies may provide information used by the virtual interface to determine where to place the visual cue 1104. In some cases, instead of just showing an indicator of the user's physical hand (e.g., 1105), the virtual environment may show a representation of the user's physical hand (or other physical object), along with the virtual representation that does not necessarily track the location of the physical object (e.g., in cases of touch limiting, pinning, recoil assist, or when implementing force profiles). Accordingly, in such cases, both the virtual representation of the physical object and the physical object itself may be presented in the virtual environment.
In addition to these scenarios, the embodiments herein may also provide undershot compensation for interactions with virtual items. As shown in
Any or all of these embodiments may work with a user's hands, with a tool (e.g., a pencil or paintbrush) that the user is holding, or with a controller or other object that is in the user's hand. Thus, for example, a pencil tip may be touch limited or pinned, or a paintbrush or other tool may follow certain force profiles or may implement recoil assist. Still other tools may implement undershot compensation, where various filters may be applied (including, for example, double exponential filters) that are tuned to identify the position and velocity of a tracked physical item and intentionally overshoot or undershoot to interact with a virtual item in an intended manner. Thus, if the user is using a controller and quickly motions the controller at a virtual button, for example, the system will determine that the intent was a button push, and may move the virtual hand in that manner. If, however, the user moved the controller slowly and intentionally toward a virtual button, but did not touch that button, the system may determine that the user did not intend to interact with the button and, in such cases, would depict the virtual hand as being near the virtual button, but not touching it.
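A double exponential (Holt-style) smoother is one generic way to estimate the position and velocity of a tracked item, as mentioned above; the sketch below is a textbook formulation rather than the specific filter or tuning used by the disclosed system, and the smoothing constants and prediction horizon are illustrative.

```python
class DoubleExponentialFilter:
    """Holt-style double exponential smoothing: tracks a level (position) and a
    trend (per-sample velocity) so short-term motion can be extrapolated."""

    def __init__(self, alpha: float = 0.5, beta: float = 0.3):
        self.alpha = alpha   # smoothing constant for the position estimate
        self.beta = beta     # smoothing constant for the velocity estimate
        self.level = None    # smoothed position
        self.trend = 0.0     # smoothed per-sample change (velocity proxy)

    def update(self, measurement: float) -> float:
        if self.level is None:
            self.level = measurement
            return self.level
        previous_level = self.level
        self.level = self.alpha * measurement + (1 - self.alpha) * (self.level + self.trend)
        self.trend = self.beta * (self.level - previous_level) + (1 - self.beta) * self.trend
        return self.level

    def predict(self, steps_ahead: int = 1) -> float:
        """Extrapolate a few frames ahead; a fast, sustained approach toward a
        button can then be treated as an intended press."""
        level = self.level if self.level is not None else 0.0
        return level + steps_ahead * self.trend


# Example: a controller moving steadily toward a virtual button at depth 0.5 m.
filt = DoubleExponentialFilter()
for depth in (0.30, 0.36, 0.42, 0.47):
    filt.update(depth)
intended_press = filt.predict(steps_ahead=3) >= 0.5 and filt.trend > 0.01
print(intended_press)  # True for this quick, deliberate approach
```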
During these interactions, various affordances may be depicted within the virtual environment. For example, as shown in
When the user contacts the virtual interface 1301, the affordance 1302 may change to a two-ringed circle, as shown in
In some cases, after placing the affordance 1302 indicating where the user is touching the virtual interface 1301 (as in
In addition to the methods described above, systems may be provided that include the following: at least one physical processor and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: generate a virtual item within a virtual environment, detect, using one or more hardware sensors, a current position of at least one physical object that is to be portrayed within the virtual environment, generate a virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is configured to at least partially follow movements of the physical object relative to the virtual item, and present the virtual item and the generated virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is at least partially, controllably decoupled from the movements of the physical object relative to the virtual item.
Moreover, a non-transitory computer-readable medium may be provided that includes one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: generate a virtual item within a virtual environment, detect, using one or more hardware sensors, a current position of at least one physical object that is to be portrayed within the virtual environment, generate a virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is configured to at least partially follow movements of the physical object relative to the virtual item, and present the virtual item and the generated virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is at least partially, controllably decoupled from the movements of the physical object relative to the virtual item.
Thus, in this manner, methods, systems, and computer-readable media may be provided which controllably decouple a virtual representation's movements from the movements of a physical object. The controllable decoupling may take the form of touch limiting, pinning, recoil assist, undershot assist, force profiles, or other forms of controlled decoupling. By implementing such embodiments, users may experience less fatigue and less frustration when interacting with virtual items in virtual environments, as their intended actions may be interpreted and carried out, even if their physical movements did not accurately reflect those actions.
Example 1: A computer-implemented method may include generating a virtual item within a virtual environment, detecting, using one or more hardware sensors, a current position of at least one physical object that is to be portrayed within the virtual environment, generating a virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is configured to at least partially follow movements of the physical object relative to the virtual item, and presenting the virtual item and the generated virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is at least partially, controllably decoupled from the movements of the physical object relative to the virtual item.
Example 2. The computer-implemented method of Example 1, wherein the virtual item is an interface.
Example 3. The computer-implemented method of any of Examples 1 or 2, wherein the interface comprises a floating user interface within the virtual environment.
Example 4: The computer-implemented method of any of Examples 1-3, wherein the floating user interface is displayed at a fixed position within the virtual environment.
Example 5. The computer-implemented method of any of Examples 1-4, wherein the at least one physical object comprises at least one of: a user's left hand, a user's right hand, a user's fingers, or an electronic controller.
Example 6. The computer-implemented method of any of Examples 1-5, wherein the virtual representation of the physical object is controllably decoupled from the movements of the physical object relative to the virtual item by a specified amount.
Example 7. The computer-implemented method of any of Examples 1-6, wherein the specified amount of decoupling is controlled based on a policy.
Example 8. The computer-implemented method of any of Examples 1-7, wherein the specified amount of decoupling is adaptable over time based on movements of the physical object.
Example 9. The computer-implemented method of any of Examples 1-8, wherein the controllable decoupling of the virtual representation of the physical object from the movements of the physical object relative to the virtual item includes stopping the virtual representation of the physical object at a surface of the virtual item within the virtual environment.
Example 10. The computer-implemented method of any of Examples 1-9, wherein stopping the virtual representation of the physical object at the surface of the virtual item within the virtual environment includes preventing the virtual representation of the physical object from moving beyond the surface of the virtual item.
Example 11. The computer-implemented method of any of Examples 1-10, wherein stopping the virtual representation of the physical object at the surface of the virtual item within the virtual environment includes substantially slowing movement of the virtual representation of the physical object at the surface of the virtual item, and allowing the virtual representation of the physical object to move past the surface of the virtual item upon detecting physical movements indicating such.
Example 12. The computer-implemented method of any of Examples 1-11, further comprising providing haptic feedback using one or more haptic devices upon the virtual representation of the physical object reaching the surface of the virtual item within the virtual environment.
Example 13: A system may include: at least one physical processor and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: generate a virtual item within a virtual environment, detect, using one or more hardware sensors, a current position of at least one physical object that is to be portrayed within the virtual environment, generate a virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is configured to at least partially follow movements of the physical object relative to the virtual item, and present the virtual item and the generated virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is at least partially, controllably decoupled from the movements of the physical object relative to the virtual item.
Example 14. The system of Example 13, wherein the controllable decoupling of the virtual representation of the physical object from the movements of the physical object relative to the virtual item includes moving the virtual representation of the physical object a lesser percentage of the physical movement of the physical object.
Example 15. The system of any of Examples 13 or 14, wherein the controllable decoupling of the virtual representation of the physical object from the movements of the physical object relative to the virtual item includes pinning the virtual representation of the physical object to a detected current position of the physical object relative to the virtual item.
Example 16. The system of any of Examples 13-15, wherein the virtual representation of the physical object remains pinned to the detected current position of the physical object, even if a portion of the physical object moves away from the detected current position of the physical object.
Example 17. The system of any of Examples 13-16, wherein the controllable decoupling of the virtual representation of the physical object from the movements of the physical object relative to the virtual item includes registering recoil movements from a movement endpoint.
Example 18. The system of any of Examples 13-17, wherein both the virtual representation of the physical object and the physical object are presented in the virtual environment.
Example 19. The system of any of Examples 13-18, wherein the virtual environment presents one or more affordances on the virtual item indicating where the virtual representation of the physical object is positioned to contact the virtual item.
Example 20. A non-transitory computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: generate a virtual item within a virtual environment, detect, using one or more hardware sensors, a current position of at least one physical object that is to be portrayed within the virtual environment, generate a virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is configured to at least partially follow movements of the physical object relative to the virtual item, and present the virtual item and the generated virtual representation of the physical object within the virtual environment, wherein the virtual representation of the physical object is at least partially, controllably decoupled from the movements of the physical object relative to the virtual item.
Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial-reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof. Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without near-eye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 1400 in
Turning to
In some embodiments, augmented-reality system 1400 may include one or more sensors, such as sensor 1440. Sensor 1440 may generate measurement signals in response to motion of augmented-reality system 1400 and may be located on substantially any portion of frame 1410. Sensor 1440 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented-reality system 1400 may or may not include sensor 1440 or may include more than one sensor. In embodiments in which sensor 1440 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 1440. Examples of sensor 1440 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
In some examples, augmented-reality system 1400 may also include a microphone array with a plurality of acoustic transducers 1420(A)-1420(J), referred to collectively as acoustic transducers 1420. Acoustic transducers 1420 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 1420 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in
In some embodiments, one or more of acoustic transducers 1420(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 1420(A) and/or 1420(B) may be earbuds or any other suitable type of headphone or speaker.
The configuration of acoustic transducers 1420 of the microphone array may vary. While augmented-reality system 1400 is shown in
Acoustic transducers 1420(A) and 1420(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 1420 on or surrounding the ear in addition to acoustic transducers 1420 inside the ear canal. Having an acoustic transducer 1420 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 1420 on either side of a user's head (e.g., as binaural microphones), augmented-reality system 1400 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 1420(A) and 1420(B) may be connected to augmented-reality system 1400 via a wired connection 1430, and in other embodiments acoustic transducers 1420(A) and 1420(B) may be connected to augmented-reality system 1400 via a wireless connection (e.g., a BLUETOOTH connection). In still other embodiments, acoustic transducers 1420(A) and 1420(B) may not be used at all in conjunction with augmented-reality system 1400.
Acoustic transducers 1420 on frame 1410 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 1415(A) and 1415(B), or some combination thereof. Acoustic transducers 1420 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 1400. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 1400 to determine relative positioning of each acoustic transducer 1420 in the microphone array.
In some examples, augmented-reality system 1400 may include or be connected to an external device (e.g., a paired device), such as neckband 1405. Neckband 1405 generally represents any type or form of paired device. Thus, the following discussion of neckband 1405 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
As shown, neckband 1405 may be coupled to eyewear device 1402 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 1402 and neckband 1405 may operate independently without any wired or wireless connection between them. While
Pairing external devices, such as neckband 1405, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 1400 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 1405 may allow components that would otherwise be included on an eyewear device to be included in neckband 1405 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 1405 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 1405 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 1405 may be less invasive to a user than weight carried in eyewear device 1402, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
Neckband 1405 may be communicatively coupled with eyewear device 1402 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 1400. In the embodiment of
Acoustic transducers 1420(1) and 1420(J) of neckband 1405 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of
Controller 1425 of neckband 1405 may process information generated by the sensors on neckband 1405 and/or augmented-reality system 1400. For example, controller 1425 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 1425 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 1425 may populate an audio data set with the information. In embodiments in which augmented-reality system 1400 includes an inertial measurement unit, controller 1425 may compute all inertial and spatial calculations from the IMU located on eyewear device 1402. A connector may convey information between augmented-reality system 1400 and neckband 1405 and between augmented-reality system 1400 and controller 1425. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 1400 to neckband 1405 may reduce weight and heat in eyewear device 1402, making it more comfortable to the user.
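As a generic illustration of direction-of-arrival estimation (not the specific method performed by controller 1425), the sketch below derives a far-field bearing from the time difference of arrival between a two-microphone pair with known spacing.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature


def doa_from_tdoa(time_delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the angle (degrees, relative to broadside) of an incoming sound
    from the time-difference-of-arrival between a two-microphone pair.

    For a far-field source, delay = (spacing * sin(angle)) / speed_of_sound,
    so angle = asin(delay * speed_of_sound / spacing).
    """
    ratio = time_delay_s * SPEED_OF_SOUND / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))     # clamp against measurement noise
    return math.degrees(math.asin(ratio))


# Example: microphones 15 cm apart, sound reaches one mic 0.2 ms earlier.
print(round(doa_from_tdoa(2.0e-4, 0.15), 1))  # roughly 27 degrees off broadside
```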
Power source 1435 in neckband 1405 may provide power to eyewear device 1402 and/or to neckband 1405. Power source 1435 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 1435 may be a wired power source. Including power source 1435 on neckband 1405 instead of on eyewear device 1402 may help better distribute the weight and heat generated by power source 1435.
As noted, some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual-reality system 1500 in
Artificial-reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented-reality system 1400 and/or virtual-reality system 1500 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, microLED displays, organic LED (OLED) displays, digital light projector (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay (to, e.g., the viewer's eyes) light. These optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
In addition to or instead of using display screens, some of the artificial-reality systems described herein may include one or more projection systems. For example, display devices in augmented-reality system 1400 and/or virtual-reality system 1500 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial-reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
The artificial-reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented-reality system 1400 and/or virtual-reality system 1500 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
The artificial-reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
In some embodiments, the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
By providing haptic sensations, audible content, and/or visual content, artificial-reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
As noted, artificial-reality systems 1400 and 1500 may be used with a variety of other types of devices to provide a more compelling artificial-reality experience. These devices may be haptic interfaces with transducers that provide haptic feedback and/or that collect haptic information about a user's interaction with an environment. The artificial-reality systems disclosed herein may include various types of haptic interfaces that detect or convey various types of haptic information, including tactile feedback (e.g., feedback that a user detects via nerves in the skin, which may also be referred to as cutaneous feedback) and/or kinesthetic feedback (e.g., feedback that a user detects via receptors located in muscles, joints, and/or tendons).
Haptic feedback may be provided by interfaces positioned within a user's environment (e.g., chairs, tables, floors, etc.) and/or interfaces on articles that may be worn or carried by a user (e.g., gloves, wristbands, etc.). As an example,
One or more vibrotactile devices 1640 may be positioned at least partially within one or more corresponding pockets formed in textile material 1630 of vibrotactile system 1600. Vibrotactile devices 1640 may be positioned in locations to provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 1600. For example, vibrotactile devices 1640 may be positioned against the user's finger(s), thumb, or wrist, as shown in
A power source 1650 (e.g., a battery) for applying a voltage to the vibrotactile devices 1640 for activation thereof may be electrically coupled to vibrotactile devices 1640, such as via conductive wiring 1652. In some examples, each of vibrotactile devices 1640 may be independently electrically coupled to power source 1650 for individual activation. In some embodiments, a processor 1660 may be operatively coupled to power source 1650 and configured (e.g., programmed) to control activation of vibrotactile devices 1640.
Vibrotactile system 1600 may be implemented in a variety of ways. In some examples, vibrotactile system 1600 may be a standalone system with integral subsystems and components for operation independent of other devices and systems. As another example, vibrotactile system 1600 may be configured for interaction with another device or system 1670. For example, vibrotactile system 1600 may, in some examples, include a communications interface 1680 for receiving and/or sending signals to the other device or system 1670. The other device or system 1670 may be a mobile device, a gaming console, an artificial-reality (e.g., virtual-reality, augmented-reality, mixed-reality) device, a personal computer, a tablet computer, a network device (e.g., a modem, a router, etc.), a handheld controller, etc. Communications interface 1680 may enable communications between vibrotactile system 1600 and the other device or system 1670 via a wireless (e.g., Wi-Fi, BLUETOOTH, cellular, radio, etc.) link or a wired link. If present, communications interface 1680 may be in communication with processor 1660, such as to provide a signal to processor 1660 to activate or deactivate one or more of the vibrotactile devices 1640.
Vibrotactile system 1600 may optionally include other subsystems and components, such as touch-sensitive pads 1690, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 1640 may be configured to be activated for a variety of different reasons, such as in response to the user's interaction with user interface elements, a signal from the motion or position sensors, a signal from the touch-sensitive pads 1690, a signal from the pressure sensors, a signal from the other device or system 1670, etc.
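As a loose sketch of how processor 1660 might gate activation of vibrotactile devices 1640 based on the various signal sources listed above, consider the following; the `VibrotactileController` class, its signal names, and the device mapping are invented for illustration.

```python
class VibrotactileController:
    """Activates individual vibrotactile devices in response to incoming signals."""

    # Hypothetical mapping from signal source to the devices it should drive.
    SIGNAL_TO_DEVICES = {
        "touch_pad": ("index_tip",),
        "pressure_sensor": ("thumb_tip", "index_tip"),
        "remote_system": ("wrist",),
    }

    def __init__(self, devices):
        # devices: mapping of device name -> callable accepting an intensity 0..1
        self.devices = devices

    def handle_signal(self, source: str, intensity: float = 1.0):
        for name in self.SIGNAL_TO_DEVICES.get(source, ()):
            actuator = self.devices.get(name)
            if actuator is not None:
                actuator(max(0.0, min(1.0, intensity)))


# Example with stand-in actuators that simply report what they would do.
controller = VibrotactileController({
    "index_tip": lambda level: print(f"index vibrotactor at {level:.2f}"),
    "wrist": lambda level: print(f"wrist vibrotactor at {level:.2f}"),
})
controller.handle_signal("touch_pad", 0.6)      # drives the index-tip actuator
controller.handle_signal("remote_system", 1.0)  # drives the wrist actuator
```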
Although power source 1650, processor 1660, and communications interface 1680 are illustrated in
Haptic wearables, such as those shown in and described in connection with
Head-mounted display 1702 generally represents any type or form of virtual-reality system, such as virtual-reality system 1500 in
While haptic interfaces may be used with virtual-reality systems, as shown in
One or more of band elements 1832 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 1832 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 1832 may include one or more of various types of actuators. In one example, each of band elements 1832 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.
Haptic devices 1610, 1620, 1704, and 1830 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 1610, 1620, 1704, and 1830 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 1610, 1620, 1704, and 1830 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's artificial-reality experience.
Dongle portion 2020 may include antenna 2052, which may be configured to communicate with antenna 2050 included as part of wearable portion 2010. Communication between antennas 2050 and 2052 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radiofrequency signaling and BLUETOOTH. As shown, the signals received by antenna 2052 of dongle portion 2020 may be provided to a host computer for further processing, display, and/or for effecting control of a particular physical or virtual object or objects.
Although the examples provided with reference to
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive data to be transformed, transform the data, and output a result of the transformation to control interactions with virtual objects. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”