Virtual reality (VR) systems and augmented reality (AR) systems may provide users with an immersive computing, entertainment, or gaming experience. While wearing a head-mounted display (HMD), a user can view portions of a captured scene or an artificially generated scene by orienting his or her head and eyes, just as the user naturally does to view a real-world environment. In some HMD systems, the user can also interact with and control displayed virtual features using a wired or wireless controller. However, it can be difficult to track such controllers with sufficient precision and accuracy to perform certain tasks in a VR/AR environment, such as writing, drawing, pointing, gesturing, etc.
As will be described in greater detail below, the present disclosure describes styluses for use in a virtual, augmented, or mixed reality (“VR/AR/MR”) environment and associated HMD systems. In some examples, the styluses include at least one sensor for detecting manipulation of the stylus and a tracking component that enables the stylus to be tracked.
For example, a VR/AR/MR stylus may include an elongated housing that is dimensioned to be grasped by a user's hand, at least one sensor that is configured to detect manipulation of the stylus by the user, and a tracking component that enables the stylus to be tracked in a VR/AR/MR environment. Such a stylus may also include a communication component configured to transmit sensor data generated by the sensor to an HMD system.
In some examples, manipulation of the stylus may include movement of the stylus in a shape resembling a grapheme. Manipulation of the stylus may also include a pressure exerted on the stylus. In this example, the sensor may include a pressure sensor that is configured to detect the pressure exerted on the stylus and generate sensor data based on the same. For example, the sensor may include pressure sensors that are disposed along the elongated housing and configured to detect pressure exerted by at least one finger of the user's hand. Such pressure sensors may include a matrix of pressure-sensing electrodes that is wrapped around at least a portion of the elongated housing.
The sensor of the stylus may also include a pressure-sensitive tip that is configured to detect pressure exerted on a tip of the stylus when the stylus interacts with a surface and/or a magnetic field sensor that is configured to detect rotation of a magnetic ball positioned at a tip of the stylus when the stylus interacts with a surface. The stylus may also include a haptic-feedback module that is configured to provide haptic feedback to the user in response to the manipulation.
In one example, the stylus may be configurable between a surface mode, in which the sensor detects manipulation of the stylus as the stylus interacts with a passive surface, and a non-surface mode, in which the sensor detects manipulation of the stylus within space. In addition, the sensor of the stylus may include at least one inertial measurement unit sensor that is disposed within the elongated housing and configured to generate sensor data relating to the manipulation of the stylus.
The manipulation of the stylus may also include a press of at least one mechanical button, a touch of at least a portion of the stylus, a dragging touch across at least a portion of the stylus, a tilting of the stylus, a press of a tip of the stylus against a surface of a real-world object, a movement of the tip of the stylus across the surface of the real-world object, a translation of the stylus in space, and/or a squeezing of the stylus. The tracking component may include at least one of an electrically active component or an electrically passive component.
The present disclosure also details various head-mounted display systems in which the above-described styluses may be used. For example, a head-mounted display system may include a stylus, a tracking subsystem, and a display subsystem. The stylus may include an elongated housing that is dimensioned to be grasped by a user's hand and a tracking component disposed on or within the elongated housing. The tracking subsystem may be configured to track, using at least the tracking component, manipulation of the stylus in a real-world environment. In addition, the display subsystem may be configured to display, based on tracking information received from the tracking subsystem, an image based on the manipulation of the stylus within a VR/AR/MR environment.
In some examples, the tracking subsystem may be configured to track the stylus by (1) capturing images of the stylus in a real-world environment, (2) identifying, within the images, the tracking component of the stylus, and (3) tracking, based on a position of the tracking component within the images, the stylus within the real-world environment. In these examples, the tracking component of the stylus may include at least one infrared light-emitting diode disposed on or within the elongated housing and the tracking subsystem may include an image sensor configured to capture the images of the stylus in the real-world environment.
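By way of a non-limiting illustration, the following Python sketch shows one simplified way such image-based tracking might locate an infrared tracking component within a captured frame by thresholding the frame and computing the centroid of the brightest region. The frame format, threshold value, and function name are illustrative assumptions rather than features of any particular embodiment.

    import numpy as np

    def locate_tracking_component(ir_frame, threshold=200):
        """Return the (row, col) centroid of the bright region produced by an IR
        light-emitting diode, or None if the tracking component is not visible."""
        bright = ir_frame >= threshold            # pixels likely lit by the IR LED
        if not bright.any():
            return None
        rows, cols = np.nonzero(bright)
        return rows.mean(), cols.mean()           # centroid of the detected blob

    # Synthetic 480x640 IR frame with a bright spot near (120, 300).
    frame = np.zeros((480, 640), dtype=np.uint8)
    frame[118:123, 298:303] = 255
    print(locate_tracking_component(frame))       # approximately (120.0, 300.0)

In practice, centroids found in two or more camera views, together with camera calibration data, could then be triangulated into a three-dimensional position of the stylus within the real-world environment.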
The stylus may also include at least one sensor that is configured to detect the manipulation of the stylus and a communication component configured to transmit sensor data generated by the sensor to the tracking subsystem. The tracking subsystem may be configured to identify the manipulation of the stylus based on the sensor data received from the communication component, and/or track, using both the tracking component and the sensor data received from the communication component, the stylus within the VR/AR/MR environment. In some examples, the manipulation of the stylus may include movement of the stylus in a shape resembling a grapheme, and the HMD system may also include a processing subsystem configured to identify, based on the movement of the stylus, the grapheme. In these examples, the manipulation of the stylus may also include a pressure exerted on the stylus, and the stylus may include a pressure sensor that is configured to detect the pressure exerted on the stylus and generate sensor data based on the pressure exerted. Such a processing subsystem may be configured to identify the grapheme by identifying, based on the sensor data, a pressure profile that is associated with the grapheme. Pressure exerted on the stylus may include, for example, pressure exerted on a tip of the stylus when the stylus interacts with a surface and/or pressure exerted by at least one of the user's fingers on the elongated housing.
The present disclosure also details various methods of assembling a stylus. In accordance with such methods, at least one sensor may be coupled to an elongated housing that is dimensioned to be grasped by a user's hand. In addition, a tracking component may be coupled to the elongated housing. In this example, the sensor may be configured to detect manipulation of the stylus by the user, and the tracking component may enable the stylus to be tracked in a VR/AR/MR environment.
Features from any of the above-mentioned embodiments may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The accompanying drawings illustrate a number of example embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
The present disclosure is generally directed to VR/AR/MR styluses and head-mounted display (HMD) systems that may be used in connection with the same. As will be explained in greater detail below, embodiments of the present disclosure may include a stylus with an elongated housing and a tracking component disposed on or within the elongated housing. The stylus may also have at least one sensor that is configured to detect manipulation of the stylus by the user. A corresponding HMD system may include a tracking subsystem that uses one or both of (1) the tracking component and (2) data from the sensor(s) to track the stylus in a VR/AR/MR environment. As such, HMD systems according to some embodiments of the present disclosure may enable or improve the tracking or performance of styluses or other controllers within a VR/AR/MR environment. For example, embodiments of the disclosure may improve or enable the performance of gestures for interacting with and/or manipulating a virtual feature, the formation of virtual graphemes (e.g., letters, numbers, symbols, etc.), and other accuracy-sensitive tasks.
The following will provide, with reference to
The HMD device 105 may include a depth-sensing subsystem 120, a display subsystem 125, an image capture subsystem 130, at least one position sensor 135, and an inertial measurement unit (IMU) 140. Other embodiments of the HMD device 105 may include additional or alternative subsystems and features, such as an eye-tracking or gaze-estimation system configured to track the eyes of a user of the HMD device 105. An optional adjustable optical lens assembly may be included and configured to adjust the focus of one or more images displayed on the display subsystem 125 and/or of a view of the real-world environment in front of the user. This adjustable optical lens assembly may, for example, adjust focus based on eye-tracking or gaze-estimation information. Some embodiments of the HMD device 105 may include different or additional components than those described in conjunction with the example block diagram of
The depth-sensing subsystem 120 may capture data describing depth information characterizing a local real-world area or environment near the HMD device 105 and/or characterizing a position and/or velocity of the depth-sensing subsystem 120 (and thereby of the HMD device 105) within the local area. The depth-sensing subsystem 120 may compute depth and location information using collected data (e.g., based on captured light according to one or more computer-vision schemes or algorithms, by processing a portion of a structured light pattern (e.g., a projected grid of IR light), by time-of-flight (ToF) imaging, by simultaneous localization and mapping (SLAM), etc.), or the depth-sensing subsystem 120 may transmit this data to another device, such as an external implementation of the processing subsystem 110, that may determine the depth information using the data from the depth-sensing subsystem 120.
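As a simple illustration of the time-of-flight approach mentioned above, per-pixel depth may be recovered from the round-trip travel time of emitted light; the sketch below assumes an idealized, noise-free measurement and a direct reflection.

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def tof_depth(round_trip_time_s):
        """Depth (meters) corresponding to a measured round-trip time of an
        emitted light pulse; the pulse travels out and back, so divide by two."""
        return SPEED_OF_LIGHT * round_trip_time_s / 2.0

    # A pulse returning after roughly 6.67 nanoseconds indicates a surface about 1 m away.
    print(tof_depth(6.67e-9))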
The display subsystem 125 may include an electronic display for rendering two-dimensional or three-dimensional images to the user in accordance with data received from the processing subsystem 110. The display subsystem 125 may, in some examples, include additional components for rendering and/or displaying images, such as a graphics processing unit (GPU), one or more lenses, an eye-tracking element, etc. The display subsystem 125 may include a single electronic display or multiple electronic displays (e.g., a display for each eye of the user). Examples of the display subsystem 125 may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an inorganic light-emitting diode (ILED) display, an active-matrix organic light-emitting diode (AMOLED) display, a transparent organic light-emitting diode (TOLED) display, another suitable display, or some combination thereof. The display subsystem 125 may be opaque such that the user cannot see the local environment through the display subsystem 125 (e.g., for VR or MR applications) or may be at least partially transparent to visible light such that the user can see the local environment while using the display subsystem 125 (e.g., for AR or MR applications).
The image capture subsystem 130 may include one or more optical image sensors or cameras for capturing and collecting image data from a local environment. In some embodiments, the sensors included in the image capture subsystem 130 may provide stereoscopic views of the local environment that may be used by the processing subsystem 110 to generate image data that characterizes the local environment and/or a position and orientation of the HMD device 105 and stylus 170 within the local environment. For example, the image capture subsystem 130 may include simultaneous localization and mapping (SLAM) cameras or other cameras that include a wide-angle lens system for capturing a wider field-of-view than may be captured by the eyes of the user. In some embodiments, the image capture subsystem 130 may include one or more IR sensors, such as for capturing an image of an IR tracking component of the stylus 170 to provide data for tracking the stylus 170. The image capture subsystem 130 may provide pass-through views of the real-world environment that are displayed to the user via the display subsystem 125.
The IMU 140 may, in some examples, represent an electronic subsystem that generates data indicating a position (i.e., location, elevation, orientation, direction of movement, speed of movement, translation, and/or rotation) of the HMD device 105 based on measurement signals received from one or more of the position sensors 135 and from depth information received from the depth-sensing subsystem 120 and/or the image capture subsystem 130. For example, a position sensor 135 may generate one or more measurement signals in response to motion of the HMD device 105. Examples of position sensors 135 include, without limitation, one or more accelerometers, one or more gyroscopes, one or more magnetometers, any other suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 140, or some combination thereof. The position sensors 135 may be located external to the IMU 140, internal to the IMU 140, or some combination thereof.
Based on the one or more measurement signals from one or more position sensors 135, the IMU 140 may generate data indicating an estimated current position of the HMD device 105 relative to an initial position of the HMD device 105. For example, the position sensors 135 may include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, roll). As described herein, the image capture subsystem 130 and/or the depth-sensing subsystem 120 may generate data indicating an estimated current position and/or orientation of the HMD device 105 relative to the real-world environment in which the HMD device 105 is used.
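The following sketch illustrates, in simplified form, how such an estimate might be produced by naively integrating accelerometer and gyroscope samples. The fixed gravity vector and the assumption that the device frame stays aligned with the world frame are simplifications for illustration; a real implementation would correct the resulting drift using the depth-sensing subsystem 120 and/or the image capture subsystem 130.

    import numpy as np

    def integrate_imu(accel_samples, gyro_samples, dt, gravity=(0.0, 0.0, 9.81)):
        """Naively integrate accelerometer (m/s^2) and gyroscope (rad/s) samples
        into a position offset and accumulated rotation relative to the initial
        position. Drift grows quickly, which is why optical data is also used."""
        velocity = np.zeros(3)
        position = np.zeros(3)
        orientation = np.zeros(3)
        g = np.asarray(gravity)
        for accel, gyro in zip(accel_samples, gyro_samples):
            orientation += np.asarray(gyro) * dt       # accumulate rotation (rad)
            linear_accel = np.asarray(accel) - g       # remove gravity (simplified)
            velocity += linear_accel * dt
            position += velocity * dt
        return position, orientation

    # A device at rest for 100 samples at 1 kHz stays (approximately) at the origin.
    print(integrate_imu([(0.0, 0.0, 9.81)] * 100, [(0.0, 0.0, 0.0)] * 100, dt=1e-3))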
The processing subsystem 110, whether as a part of the HMD device 105 or as a standalone system in communication with the HMD device 105, may include an application store 150, a tracking subsystem 155, and an image processing engine 160, for example. In some embodiments, the processing subsystem 110 may be configured to process images (e.g., a visible image, data associated with a captured image, etc.) from the image capture subsystem 130, such as to remove distortion caused by the lens system of the image capture subsystem 130 and/or by a separation distance between two image sensors that is noticeably larger than or noticeably less than an average separation distance between users' eyes. For example, when the image capture subsystem 130 is, or is part of, a SLAM camera system, direct images from the image capture subsystem 130 may appear distorted to a user if shown in an uncorrected format. Image correction or compensation may be performed by the processing subsystem 110 to correct and present the images to the user with a more natural appearance, so that it appears to the user as if the user is looking through the display subsystem 125 of the HMD device 105. In some embodiments, the image capture subsystem 130 may include one or more image sensors having lenses adapted (e.g., in terms of field-of-view, separation distance, etc.) to provide pass-through views of the local environment. The image capture subsystem 130 may be configured to capture color images or monochromatic images.
The I/O interface 115 may represent a subsystem or device that allows a user to send action requests and receive responses from the processing subsystem 110, the HMD device 105, and/or the stylus 170. An action request may, in some examples, represent a request to perform a particular action. For example, an action request may be an instruction to start or end capture of image or video data, an instruction to perform a particular action within an application, or an instruction to select a mode of operation (e.g., a pass-through mode for displaying an image of the surrounding real-world environment, a surface mode for using the stylus 170 to interact with a physical surface in the real-world environment, a non-surface mode for performing actions with the stylus 170 in an open space, etc., as explained in greater detail below). The I/O interface 115 may include one or more input devices or may enable communication with one or more input devices (e.g., wired or wireless communication). Exemplary input devices may include a keyboard, a mouse, a hand-held controller, the stylus 170, a button or knob on the HMD device 105, a microphone, or any other suitable device for receiving action requests and communicating the action requests to the processing subsystem 110.
An action request received by the I/O interface 115 may be communicated to the processing subsystem 110, which may perform an action corresponding to the action request. In some embodiments, the stylus 170 includes a motion sensor that captures inertial data indicating an estimated position (e.g., location and/or orientation) of the stylus 170 relative to an initial position. In some embodiments, the I/O interface 115 and/or the stylus 170 may provide haptic feedback to the user in accordance with instructions received from the processing subsystem 110 and/or the HMD device 105. For example, haptic feedback may be provided when an action request is received, or the processing subsystem 110 may communicate instructions to the I/O interface 115 that cause the I/O interface 115 to generate or direct generation of haptic feedback when the processing subsystem 110 performs an action.
The processing subsystem 110 may include one or more processing devices or physical processors for providing content to the HMD device 105 in accordance with information received from one or more of: the depth-sensing subsystem 120, the image capture subsystem 130, the I/O interface 115, and the stylus 170. In the example shown in
The application store 150 may store one or more applications for execution by the processing subsystem 110. An application may, in some examples, represent a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be generated in response to inputs received from the user via movement of the HMD device 105 or the stylus 170. Examples of such applications include, without limitation, gaming applications, document generation applications, conferencing applications, video playback applications, or other suitable applications.
The tracking subsystem 155 may calibrate the HMD system 100 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the HMD device 105 and/or the stylus 170. For example, the tracking subsystem 155 may communicate a calibration parameter to the depth-sensing subsystem 120 to adjust the focus of the depth-sensing subsystem 120 to more accurately determine positions of structured light elements captured by the depth-sensing subsystem 120. Calibration performed by the tracking subsystem 155 may also account for information received from the IMU 140 in the HMD device 105 and/or another motion sensor included in the stylus 170. Additionally, if tracking of the HMD device 105 is lost (e.g., the depth-sensing subsystem 120 loses line of sight of at least a threshold number of structured light elements), the tracking subsystem 155 may recalibrate some or all of the HMD system 100.
The tracking subsystem 155 may include a processing unit for tracking movements of the HMD device 105 and/or of the stylus 170 using information from the depth-sensing subsystem 120, the image capture subsystem 130, the position sensor(s) 135, the IMU 140, a motion sensor of the stylus 170, or some combination thereof. For example, the tracking subsystem 155 may determine a position of a reference point of the HMD device 105 and/or of the stylus 170 in a mapping of the real-world environment based on information collected with the HMD device 105. Additionally, in some embodiments, the tracking subsystem 155 may use portions of data indicating a position and/or orientation of the HMD device 105 and/or stylus 170 from the IMU 140 to predict a future position and/or orientation of the HMD device 105 and/or the stylus 170. The tracking subsystem 155 may also provide the estimated or predicted future position of the HMD device 105 or the I/O interface 115 to the image processing engine 160.
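One simple way to predict a future position and/or orientation, assuming approximately constant velocity over a short look-ahead interval (for example, one display frame), is sketched below; the look-ahead value and per-axis treatment are illustrative assumptions.

    def predict_future_pose(position, velocity, orientation, angular_velocity, lookahead_s):
        """Constant-velocity extrapolation of a position (m) and orientation (rad),
        each given as per-axis sequences, over a short look-ahead interval."""
        predicted_position = [p + v * lookahead_s for p, v in zip(position, velocity)]
        predicted_orientation = [o + w * lookahead_s
                                 for o, w in zip(orientation, angular_velocity)]
        return predicted_position, predicted_orientation

    # Predict roughly 11 ms ahead (about one frame on a 90 Hz display).
    print(predict_future_pose([0.0, 0.0, 0.5], [0.1, 0.0, 0.0],
                              [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 0.011))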
In some embodiments, the tracking subsystem 155 may track other features that can be observed by the depth-sensing subsystem 120, the image capture subsystem 130, and/or by another system. For example, the tracking subsystem 155 may track one or both of the user's hands so that the location of the user's hands within the real-world environment may be known and utilized. For example, the tracking subsystem 155 may receive and process image data in order to determine a pointing direction of a finger of one of the user's hands. The tracking subsystem 155 may also receive information from one or more eye-tracking cameras included in some embodiments of the HMD device 105 to track the user's gaze.
The image processing engine 160 may generate a three-dimensional mapping of the area surrounding some or all of the HMD device 105 (i.e., the “local area” or “real-world” environment) based on information received from the HMD device 105. In some embodiments, the image processing engine 160 may determine depth information for the three-dimensional mapping of the local area based on information received from the depth-sensing subsystem 120 that is relevant to the technique used to compute depth. The engine 160 may calculate depth information using one or more techniques for computing depth from structured light (e.g., a projected grid of IR light). In various embodiments, the engine 160 may use the depth information to, e.g., update a model of the local area and generate content based in part on the updated model.
The engine 160 may also execute applications within the HMD system 100 and receive position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of the HMD device 105 and/or stylus 170 from the tracking subsystem 155. Based on the received information, the engine 160 may determine content to provide to the HMD device 105 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the engine 160 may generate content for the HMD device 105 that corresponds to the user's movement in a virtual environment or in an environment augmenting the local area with additional content. Additionally, the engine 160 may perform an action within an application executing on the processing subsystem 110 in response to an action request received from the I/O interface 115 and/or the stylus 170 and provide feedback to the user that the action was performed. The provided feedback may include visual or audible feedback via the HMD device 105 or haptic feedback via the stylus 170, for example.
The HMD device 105 may present a variety of content to a user, including virtual views of an artificially rendered virtual-world environment and/or augmented views of a physical, real-world environment, augmented with computer-generated elements (e.g., two-dimensional (2D) or three-dimensional (3D) images, 2D or 3D video, sound, etc.). In some embodiments, the presented content includes audio that is presented via an internal or external device (e.g., speakers and/or headphones) that receives audio information from the HMD device 105, the processing subsystem 110, or both, and presents audio data based on the audio information. In some embodiments, such speakers and/or headphones may be integrated into or releasably coupled or attached to the HMD device 105. The HMD device 105 may also include one or more bodies, which may be rigidly or non-rigidly coupled together. A rigid coupling between rigid bodies may cause the coupled rigid bodies to act as a single rigid entity. In contrast, a non-rigid coupling between rigid bodies may allow the rigid bodies to move relative to each other. An embodiment of the HMD device 105 is the HMD device 200 shown in
In some embodiments, the HMD device 200 may include an imaging subsystem and a depth-sensing subsystem. For example, the HMD device 200 may include an imaging aperture 220 and an illumination aperture 225. An illumination source included in the depth-sensing subsystem 120 (
The front rigid body 205 may include or support one or more electronic display elements, one or more integrated eye-tracking systems, an IMU 230, and one or more position sensors 235. The IMU 230 may represent an electronic device that generates fast calibration data based on measurement signals received from one or more of the position sensors 235. The position sensor 235 may generate one or more measurement signals in response to motion of the HMD device 200.
As illustrated in
The tracking component 305 may facilitate tracking (e.g., identification of position and/or orientation) of the stylus 300 in a VR/AR/MR environment. In some examples, the tracking component 305 may represent or include a light source (e.g., a single light-emitting diode (“LED”) or group of LEDs that emits infrared or visible light), a light-reflective element for reflecting visible or infrared light, a magnetic field generator or sensor, or a physical feature with a known shape and orientation, etc. The tracking component 305 may be electrically active (e.g., including a power source for operation) or passive (e.g., not including a power source for operation). As described above, an HMD device (such as the HMD devices 105 or 200 of
The tip subsystem 310 may include components located at a tip end portion of the stylus 300. In some applications, a user may use the stylus 300 to draw or write within a VR/AR/MR environment, to point to a real-world or virtual object, to select a real-world or virtual object, to move a real-world or virtual object, to manipulate a virtual object, etc. In some examples, the tip subsystem 310 may be used to facilitate such actions by sensing manipulation of the tip, including sensing pressure on the tip (imparted, for example, by a user's hand or an object), sensing motion of the tip in space, sensing that the tip has been dragged across a surface, sensing the orientation and/or position of the tip, etc. The tip subsystem 310 may include a variety of components or features to facilitate or accomplish such tasks. By way of example, the tip subsystem 310 may include a pressure sensor, a touch sensor, a magnetic field sensor, a rotatable ball with a rotation sensor (e.g., at least one physical roller abutting the rotatable ball, a magnetic field sensor, etc.), a proximity sensor, a thermal sensor, or an ultrasonic emitter and sensor.
The haptic-feedback module 315 may, in some embodiments, be used to provide haptic feedback to the user through the stylus 300 in response to certain actions by the user or as instructed by the processing subsystem to provide an indication to the user. For example, the haptic-feedback module 315 may include a vibrator mechanism to provide constant, intermittent, or patterned vibrations to indicate that a user has successfully performed an action, such as placing the stylus 300 into a particular mode, interacting with a virtual object, starting a virtual drawing or writing operation, positioning the stylus 300 out of a field of view of the HMD device, etc. Different applications may use the haptic-feedback module 315 to provide various indications to the user.
The motion sensor(s) 320 may include an IMU or other device for sensing a position, an orientation, and/or movements of the stylus 300 within a real-world or VR/AR/MR environment. By way of example and not limitation, the motion sensor(s) 320 may include an accelerometer, a gyroscope, a rotation sensor, a magnetometer, an electromagnetic tracking system, etc. Data from the motion sensor(s) 320 may be processed by the processing subsystem 110 of the HMD system 100 (
The touch strip(s) 325 may be an input element on the stylus 300 for sensing touch by the user or by a physical object. For example, the touch strip(s) 325 may include a capacitive touch element, a resistive touch element, or other touch sensor to enable the user to interact with and provide input (e.g., selections, gestures, etc.) to the stylus 300. Data from the touch strip(s) 325 may be processed by the processing subsystem 110 of the HMD system 100 (
The stylus 300 may include at least one mechanical button 330 as another input element for the user to manipulate (e.g., interact with) the stylus 300. The user may press the mechanical button 330 to perform various actions. By way of example and not limitation, the mechanical button 330 may be used to make a selection of a virtual object or command, to change a mode of operation of the stylus 300 (e.g., between a surface mode for interacting with a physical surface in the real world and a non-surface mode for performing actions with the stylus 300 in an open space, between on and off modes, between passive and active modes, etc.), or to manipulate (e.g., virtually grab, move, etc.) a virtual object.
The pressure sensor(s) 335 may obtain data representative of pressure on the stylus 300. The pressure sensor(s) 335 may include, for example, a solitary pressure sensor, a matrix of pressure sensing electrodes, a pressure sensor configured to detect pressure on a tip of the stylus 300, a pressure sensor configured to detect pressure on a back end portion of the stylus 300, a pressure sensor or matrix of pressure sensors associated with (e.g., positioned beneath) the touch strip(s) 325, a pressure sensor associated with (e.g., positioned beneath) the mechanical button(s) 330, etc. The pressure sensor(s) 335 may provide the sensed data representative of pressure to the processing subsystem 110 of the HMD system 100 (
The communication component 340 may be configured to communicate data from the various other components of the stylus 300 to the I/O interface 115, the processing subsystem 110, and/or the HMD device 105 (
In some embodiments, the tracking component 408 may be positioned at any location along the elongated housing 402 of the stylus 400, including at multiple locations (e.g., at the back end portion 410, at the tip end portion 406, at an intermediate location between the back end portion 410 and the tip end portion 406, any combination thereof, etc.). For example, the tracking component 408 may be positioned in one or more locations where at least a portion of the tracking component 408 can be viewed by an image sensor of an HMD device or system, to facilitate and/or enable tracking of the stylus 400 within a real-world environment and, in turn, within a VR/AR/MR environment. Multiple tracking components 408 may be employed to provide improved (e.g., redundant) tracking of the stylus 400, such as to enable views of at least one tracking component 408 while another is fully or partially covered by the user's hand.
A touch strip 412 may be positioned along an external surface of the elongated housing 402. The touch strip 412 may include a touch-sensitive surface including, for example, a capacitive touch element, a resistive touch element, or other touch sensor to enable the user to interact with and provide input (e.g., selections, gestures, etc.) to the stylus 400 and ultimately to an associated HMD system. As shown in
At least one mechanical button 414 may be positioned under the touch strip 412, as shown in
A motion sensor 416 disposed within the elongated housing 402 may record data relevant to the position, orientation, and/or movement of the stylus 400. Example motion sensors suitable for use in the stylus 400 are described above with reference to
At least one pressure sensor 418 may be disposed on or in the elongated housing 402 and may be configured to detect a pressure exerted on the stylus 400, such as from a hand or finger of a user, from an object in the real-world environment, and so forth. The pressure sensor(s) 418 may be positioned at a location along the elongated housing 402 that facilitates user interaction with the pressure sensor(s) 418. For example, the pressure sensor(s) 418 may be located proximate the tip end portion 406, at a location where an average user might grip the stylus 400 during use, such as during a virtual writing, drawing, or object selection action. The pressure sensor(s) 418 may additionally or alternatively be positioned under the touch strip 412 to enable the sensors of the stylus 400 to detect simultaneous touch and pressure inputs from the user. The pressure sensor(s) 418 may also be positioned in association with the tip subsystem 404 to detect pressure on the tip end portion 406 during use of the stylus 400 in a surface mode configured for interaction with (e.g., virtual writing or drawing on) a physical surface in the real-world environment. In some examples, the physical surface may be a passive surface with no stylus-tracking capabilities. The pressure sensor(s) 418 may additionally or alternatively be positioned at the back end portion 410 of the stylus 400 to detect additional pressure exerted, such as for a virtual erase action, virtual or real-world object selection, etc. Example pressure sensor(s) that are suitable for use in the stylus 400 are described above with reference to
The stylus 400 may also include a communication component 420 that is configured to transmit sensor data generated by one or more of the tip subsystem 404, tracking component 408, touch strip 412, mechanical button(s) 414, motion sensor 416, and pressure sensor(s) 418 to the associated HMD system (e.g., to a processing subsystem, an HMD display, or an I/O interface, etc.).
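For illustration only, the sketch below shows one possible way the communication component 420 might serialize a single sensor sample for transmission; the packet layout, field names, and units are hypothetical assumptions and do not describe a defined protocol.

    import struct
    import time

    # Hypothetical packet layout: timestamp (double), tip pressure and grip pressure
    # (floats, normalized 0-1), button bitmask (uint8), 3-axis gyroscope and 3-axis
    # accelerometer readings (six floats), little-endian with no padding.
    PACKET_FORMAT = "<dffB6f"

    def pack_sensor_sample(timestamp, tip_pressure, grip_pressure, buttons, gyro, accel):
        """Serialize one stylus sensor sample for transmission to the HMD system."""
        return struct.pack(PACKET_FORMAT, timestamp, tip_pressure, grip_pressure,
                           buttons, *gyro, *accel)

    def unpack_sensor_sample(payload):
        """Inverse operation, as a receiving tracking subsystem might perform."""
        fields = struct.unpack(PACKET_FORMAT, payload)
        return {"timestamp": fields[0], "tip_pressure": fields[1],
                "grip_pressure": fields[2], "buttons": fields[3],
                "gyro": fields[4:7], "accel": fields[7:10]}

    packet = pack_sensor_sample(time.time(), 0.8, 0.3, 0b01,
                                (0.0, 0.1, 0.0), (0.0, 0.0, 9.81))
    print(unpack_sensor_sample(packet))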
A haptic-feedback module 422 may be disposed within the elongated housing 402 to provide haptic feedback to the user for various reasons, such as to indicate that a command has been accepted, more information is needed, a virtual object has been interacted with, an incoming message has been received, etc. Example haptic-feedback modules suitable for use with the stylus 400 are described above with reference to
Although
Referring to
In some embodiments, the pressure sensor matrix 518 may be configured to detect pressure(s) exerted on the stylus 500 during a writing or drawing operation, for example, to provide data for identifying a pressure profile associated with certain user actions, such as writing graphemes. In some examples, “grapheme” may refer to a letter, number, punctuation mark, pictograph, or other symbol. As a user writes a grapheme, the fingers and hand of the user may move in a way that results in the fingers and thumb pressing against a writing instrument (e.g., the stylus 500) with a predictable or measurable pressure profile, including periods of relatively higher or lower pressure against the writing instrument with each finger or thumb. By obtaining pressure data from the pressure sensor matrix 518 during a writing operation with the stylus 500, an HMD system (e.g., the processing subsystem 110 of HMD system 100 in
In some embodiments, the pressure sensor matrix 518 may extend closer to a tip of the stylus 500 than is depicted in
The tip subsystem 504 of the stylus 500 may include a magnetic ball 524 and a magnetic field sensor 526 used to identify and detect rotation of the magnetic ball 524. In some examples, “magnetic ball” may refer to a ball or other roller that alters a surrounding magnetic field as it rotates. The magnetic ball 524 may be or include a ferromagnetic material, an electromagnet, a rare earth magnet, or another material that affects and/or responds to a magnetic field. As the magnetic ball 524 rotates, the magnetic field sensor 526 may detect a change in a magnetic field based on rotation of the magnetic ball 524. Data from the magnetic field sensor 526 may be used to estimate rotation of the magnetic ball 524 as the magnetic ball 524 interacts with a real-world object, such as by virtually writing on, virtually drawing on, or otherwise touching a surface of the real-world object. Data corresponding to rotation of the magnetic ball 524 may be used to generate an image for display in an HMD system, such as a line, a grapheme, or a drawing, etc.
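The following sketch illustrates, under simplified assumptions, how sensed rotation of such a ball might be converted into the two-dimensional path of the tip across a surface (arc length equals ball radius times rotation angle); the ball radius and axis conventions are illustrative assumptions.

    import math

    def rotation_to_displacement(rot_x_rad, rot_y_rad, ball_radius_m=0.002):
        """Convert sensed rotation of the tip ball about two axes into the
        distance the tip rolled across the surface (arc length = radius * angle)."""
        dx = ball_radius_m * rot_y_rad   # rotation about y rolls the tip along x
        dy = ball_radius_m * rot_x_rad   # rotation about x rolls the tip along y
        return dx, dy

    def accumulate_stroke(rotation_samples, ball_radius_m=0.002):
        """Accumulate per-sample ball rotations into a 2-D stroke (list of points)."""
        x = y = 0.0
        points = [(x, y)]
        for rot_x, rot_y in rotation_samples:
            dx, dy = rotation_to_displacement(rot_x, rot_y, ball_radius_m)
            x += dx
            y += dy
            points.append((x, y))
        return points

    # Rolling a 2 mm ball a quarter turn about the y axis moves the tip about 3.1 mm in x.
    print(accumulate_stroke([(0.0, math.pi / 2)]))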
The stylus 600 may also include a power supply system 630 to provide electrical power to the various sensors and other electrical components of the stylus 600. The power supply system 630 may include a power storage element (e.g., a battery and/or a capacitor, etc.). In some embodiments, the power supply system 630 also includes a recharging module for supplying electrical energy to the power storage element. The recharging module, if present, may be configured for wired or wireless charging, and may be internal to the stylus 600 and/or may be external to the stylus 600 (e.g., a charging station, an AC adapter, a charging pad, etc.). Alternatively, the power storage element may be removable and replaceable.
Similar to other embodiments described in the present disclosure, the stylus 700 may include an elongated housing 702, a tip subsystem 704 at a tip end portion 706, a tracking component 708 at a back end portion 710, a touch strip 712 along a portion of a length of the stylus 700, a first mechanical button 714A positioned under the touch strip 712 at or near the tip end portion 706, a second mechanical button 714B positioned under the touch strip 712 at or near the back end portion 710, at least one motion sensor 716 disposed within the elongated housing 702, and a pressure sensor matrix 718 positioned at least partially around a circumference of the elongated housing 702 and at or near the tip end portion 706.
The elements of the stylus 700 shown in
A user may manipulate the stylus 700 by pressing the mechanical buttons 714A, 714B. Depending on the application, the mechanical buttons 714A, 714B may be used for many different actions in a VR/AR/MR environment or in the real-world environment. By way of example, a user may press the mechanical buttons 714A, 714B to select or switch modes of the stylus 700 or an associated HMD system, such as a surface mode, a non-surface mode, an “on” state, an “off” state, a low-power state, a pass-through mode, a video capture mode, an image capture mode, etc. In another example, the mechanical buttons 714A, 714B may be used to make a selection or perform an action in a VR/AR/MR environment, such as to interact with a virtual object, to begin a writing or drawing operation, to erase or delete a virtual object, to display a virtual image (e.g., a menu, object, selection tool, etc.), to enlarge or shrink a virtual object, to activate a virtual mechanism, to make a selection of a virtual or real-world object, etc.
The motion sensor(s) 716 may be configured to detect axial rotation 734 (i.e., rotation about a longitudinal axis). A user may axially rotate the stylus 700 for a variety of reasons depending on a particular application, such as to manipulate a virtual object. The motion sensor(s) 716 may also be configured to detect tilting 736 (i.e., rotation about an axis orthogonal to the longitudinal axis), axial translation 738, and lateral translation 740 of the stylus 700. The motion sensor(s) 716 may, in some embodiments, be configured to detect both a distance and speed of motion, in linear and/or angular terms. Data associated with the detected motion may be used for many different actions in a VR/AR/MR environment or in the real-world environment, such as the actions described above or others. The data from the motion sensor(s) 716 may also be used to assist in tracking the stylus 700 in a real-world environment and/or in a VR/AR/MR environment, including to estimate a position and/or an orientation of the stylus 700 or to predict an estimated future position and/or orientation of the stylus 700.
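As a simplified illustration, motion sensor samples might be coarsely labeled as axial rotation, tilting, axial translation, or lateral translation as sketched below; the axis assignments and thresholds are illustrative assumptions only.

    def classify_manipulation(gyro_rad_s, accel_m_s2, rot_threshold=1.0, accel_threshold=0.5):
        """Coarsely label one motion sample from the stylus's motion sensor(s).

        gyro_rad_s : rotation rates (about the longitudinal axis, about two lateral axes)
        accel_m_s2 : gravity-compensated (axial, lateral, lateral) acceleration
        """
        axial_rot, tilt_a, tilt_b = gyro_rad_s
        axial_acc, lat_a, lat_b = accel_m_s2
        labels = []
        if abs(axial_rot) > rot_threshold:
            labels.append("axial rotation")
        if abs(tilt_a) > rot_threshold or abs(tilt_b) > rot_threshold:
            labels.append("tilting")
        if abs(axial_acc) > accel_threshold:
            labels.append("axial translation")
        if abs(lat_a) > accel_threshold or abs(lat_b) > accel_threshold:
            labels.append("lateral translation")
        return labels or ["at rest"]

    print(classify_manipulation((2.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # ['axial rotation']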
The touch strip 712 may be configured to detect a user touching the touch strip 712. In some embodiments, a user may manipulate the stylus 700 with a dragging touch 742 on the touch strip 712. The dragging touch 742 may be a gesture that results in different actions, depending on a particular application. By way of example and not limitation, a dragging touch 742 on the touch strip 712 may result in scrolling through virtual objects or options displayed on an associated HMD device, swiping across a virtual object, moving a virtual object, enlarging or shrinking a virtual object, adjusting a brightness of a virtual image or portion thereof, changing a sensitivity of a control operation, etc.
The pressure sensor matrix 718 may be configured to detect a pressure exerted on a side of the elongated housing 702, such as by a hand or fingers of a user. The pressure sensor matrix 718 may, in some embodiments, detect both a presence and a magnitude of pressure in one or multiple locations on the pressure sensor matrix 718. Exerting pressure on the pressure sensor matrix 718 may provide various indications to an associated HMD system, such as readiness for a writing or drawing operation, activation of a virtual mechanism, selection of a virtual or real-world object, manipulation of a virtual object, etc. In addition, data from the pressure sensor matrix 718 may be used by an associated HMD system to generate a pressure profile for an action or intended action, such as a user writing a grapheme.
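For illustration, the sketch below summarizes one possible reading of such a pressure sensor matrix, reporting how many circumferential positions are in contact and the peak pressure at each; the matrix layout, normalization, and contact threshold are illustrative assumptions.

    import numpy as np

    def summarize_grip(pressure_matrix, contact_threshold=0.05):
        """Summarize a pressure matrix reading (rows = positions along the housing,
        columns = positions around its circumference, values normalized 0-1).
        Returns the number of circumferential columns in contact (a rough proxy
        for fingers touching the stylus) and the peak pressure in each column."""
        contact = pressure_matrix > contact_threshold
        columns_in_contact = np.flatnonzero(contact.any(axis=0))
        peak_per_column = pressure_matrix.max(axis=0)
        return len(columns_in_contact), peak_per_column

    # A 4x8 matrix with two fingers pressing at different circumferential positions.
    matrix = np.zeros((4, 8))
    matrix[1, 2] = 0.7   # e.g., index finger
    matrix[2, 5] = 0.4   # e.g., thumb
    print(summarize_grip(matrix))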
In addition, the tracking component 708 may be used by an associated HMD system to detect manipulations of the stylus 700, including the axial rotation 734, tilting 736, axial translation 738, lateral translation 740, etc. Data from detection of the tracking component 708 in a real-world environment, such as by at least one image sensor of an associated HMD system or HMD device, may be used alone or in combination with data from the various components and sensors of the stylus 700 to track the stylus 700 in the real-world environment and in a VR/AR/MR environment.
Data representative of the various manipulations of the stylus 700 may be communicated from the various components and sensors of the stylus 700 to an associated HMD system for processing and for use (e.g., to generate a displayed image, to perform an action, to select a virtual or real-world object, etc.) in an associated HMD device.
In one example, a stylus 800 may be manipulated by a user to form a grapheme 850 in a VR/AR/MR environment, as illustrated in
The grapheme 850 shown in
To form the grapheme 850, the user may manipulate the stylus 800 to perform a first motion 852 upward and to the right with the tip end portion 806 of the stylus 800, then a second motion 854 downward and to the left. A third motion 856 upward and to the left may be made by the user to position the tip end portion 806 for further writing or drawing, but not with the intent to write or draw during the third motion 856. A fourth motion 858 to the right may complete the grapheme 850.
The stylus 800 and an associated HMD system may use sensor data from the stylus 800 to identify the intended grapheme 850. The user may initially indicate whether the stylus is to be used in a surface mode in which the tip end portion 806 is pressed against a surface (e.g., a passive surface) of a real-world object, or in a non-surface mode in which the stylus 800 is to be manipulated in an open space while forming the grapheme 850.
If the surface mode is indicated by the user, then pressure exerted on the stylus 800 against the surface may be sensed by the tip subsystem 804, such as during the first, second, and fourth motions 852, 854, and 858. The lack of pressure exerted against the tip subsystem 804 during the third motion 856 may indicate that a writing or drawing segment is not intended. In addition, in embodiments where the tip subsystem 804 employs a roller or other device for tracking interaction with a surface, rolling or dragging the tip end portion 806 across a surface may provide additional data for indicating an intended writing or drawing operation.
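A minimal sketch of this surface-mode segmentation is shown below: runs of tip-pressure samples above a threshold are treated as writing strokes, and gaps below the threshold (such as the third motion 856) are treated as repositioning. The normalized pressure values and the threshold are illustrative assumptions.

    def segment_strokes(tip_pressure_samples, pen_down_threshold=0.1):
        """Split a sequence of tip-pressure readings (normalized 0-1) into strokes,
        where each stroke is a list of sample indices during which the tip was
        pressed against the passive surface."""
        strokes, current = [], []
        for index, pressure in enumerate(tip_pressure_samples):
            if pressure >= pen_down_threshold:
                current.append(index)
            elif current:
                strokes.append(current)
                current = []
        if current:
            strokes.append(current)
        return strokes

    # Pressure during two writing motions, a lift for repositioning, then a final motion.
    samples = [0.6, 0.7, 0.65, 0.0, 0.02, 0.5, 0.55]
    print(segment_strokes(samples))  # [[0, 1, 2], [5, 6]]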
In a non-surface mode, the user may manipulate the stylus 800 to indicate that a writing or drawing segment is or is not intended. For example, the user may press against a mechanical button 814 during an intended writing or drawing segment (e.g., during the first, second, and fourth motions 852, 854, and 858 in the example of
In addition, the pressure sensor matrix 818 may obtain data to create a pressure profile representative of a capital letter “A” of
Data obtained by the motion sensor(s) 816 during a writing or drawing operation may also be used by the stylus 800 or the associated HMD system to form the grapheme 850 in a VR/AR/MR environment. For example, data representative of axial rotation, tilting, axial translation, and lateral translation of the stylus 800 may be used in both a surface mode and a non-surface mode to track movement of the tip end portion 806 to determine (e.g., predict, estimate, track) an intended grapheme 850. Similarly, the tracking component 808 may be used by the HMD system to track movement of the stylus 800, alone or in conjunction with the data from the motion sensor(s) 816, to determine or estimate an intended grapheme.
Data from any combination of the sensors (e.g., the tip subsystem 804, the touch strip 812, the mechanical button 814, the motion sensor 816, the pressure sensor matrix 818, etc.) in the stylus 800 and from tracking of the tracking component 808 by the associated HMD system may be used to determine a grapheme 850 or drawing intended by the user to be formed in the VR/AR/MR environment. In some embodiments, data from the respective sensors and tracking component 808 may provide redundant ways for the HMD system to determine the intended grapheme 850 or drawing, such as for improved accuracy and precision. By way of example, pressure profiles generated for two different graphemes may be similar and therefore ambiguous, but data obtained from the motion sensor(s) 816, tip subsystem 804, tracking component 808, and/or mechanical button 814 may be used to distinguish between the two different graphemes. Moreover, text recognition technology may be employed to identify the intended grapheme 850 based on data representative of manipulation of the stylus 800.
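The sketch below illustrates one simple way a sensed pressure profile might be compared against stored per-grapheme templates using normalized correlation. The templates, the requirement that profiles be resampled to a common length, and the scoring rule are illustrative assumptions; a real system might instead (or additionally) apply the text recognition and sensor fusion approaches described above.

    import numpy as np

    def match_grapheme(observed_profile, templates):
        """Return the grapheme whose stored pressure-profile template best matches
        the observed profile (both assumed resampled to a common length), along
        with its normalized correlation score."""
        observed = np.asarray(observed_profile, dtype=float)
        observed = (observed - observed.mean()) / (observed.std() + 1e-9)
        best, best_score = None, -np.inf
        for grapheme, template in templates.items():
            t = np.asarray(template, dtype=float)
            t = (t - t.mean()) / (t.std() + 1e-9)
            score = float(np.dot(observed, t) / len(t))   # normalized cross-correlation
            if score > best_score:
                best, best_score = grapheme, score
        return best, best_score

    templates = {"A": [0.2, 0.8, 0.8, 0.2, 0.6], "O": [0.5, 0.4, 0.5, 0.4, 0.5]}
    print(match_grapheme([0.25, 0.75, 0.9, 0.3, 0.55], templates))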
Referring to
The HMD device 904 may display a virtual image 910 for viewing by the user 906. The virtual image 910 may appear to the user 906 to be some distance in front of the user 906, such as near arm's length, as illustrated in
As discussed above, in some examples the user 906 may interact with the stylus 902 to directly create or manipulate virtual objects or selections in a VR/AR/MR environment. In other examples, the user 906 may interact with the stylus 902 to indirectly manipulate a virtual or real object. For example, the user 906 may use the stylus 902 to remotely control or manipulate a remote robot, such as a telepresence robot that is physically separated from the user 906 and configured to receive and execute commands via a network. Specifically, the user 906 may, by manipulating the stylus 902, cause the remote robot to perform an action, such as moving, physically or virtually drawing or writing a design or grapheme, grasping a real or virtual object, etc.
In operation 1120, an image of a tracking component of the stylus may be captured, such as by one or more image sensors of an HMD device. Position information, including location and orientation, in the real-world environment and in a VR/AR/MR environment, may be recorded by the image sensor(s). Operations 1110 and 1120 may be performed in any order or simultaneously.
In operation 1130, the stylus may be tracked in a VR/AR/MR environment based on at least one of the sensed manipulation of operation 1110 and the image of the tracking component captured in operation 1120. For example, a processing subsystem of the HMD system may receive and use data representative of the manipulation of the stylus and of the captured image to identify the tracking component of the stylus within the image, and to determine a position (e.g., location and orientation) of the stylus in the real-world environment. Actions may be performed in the VR/AR/MR environment based on the manipulation of the stylus, such as rendering and displaying an image or altering an image displayed to the user by the HMD device.
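A minimal sketch of one way operation 1130 might combine the two data sources is shown below: the previous estimate is advanced by the IMU-derived displacement (dead reckoning) and then corrected toward the drift-free, lower-rate optical fix. Treating each axis independently and using a fixed correction gain are simplifying assumptions.

    def fuse_position(previous_estimate, imu_delta, optical_position, gain=0.2):
        """Per-axis fusion of motion-sensor and image-based tracking data: advance
        the previous position estimate by the IMU-integrated displacement, then
        nudge the prediction toward the optical position fix."""
        fused = []
        for prev, delta, optical in zip(previous_estimate, imu_delta, optical_position):
            predicted = prev + delta                                # dead-reckoned prediction
            fused.append(predicted + gain * (optical - predicted))  # optical correction
        return fused

    # The stylus was at x = 0.300 m; the IMU reports +0.010 m; the camera sees 0.312 m.
    print(fuse_position([0.300, 0.100, 0.500], [0.010, 0.000, 0.000], [0.312, 0.100, 0.500]))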
Accordingly, disclosed are styluses and related HMD systems and methods that may, in some examples, improve usability of styluses in a VR/AR/MR environment. Accurate and precise detection of manipulation of styluses in the VR/AR/MR environment may be enabled by sensors and/or components of the styluses, such as motion sensors, tip subsystems, pressure sensors, tracking components, mechanical buttons, and other sensors and components described herein. The styluses may be used in a surface mode while interacting with a physical surface in a real-world environment or in a non-surface mode while being manipulated in an open space. In some examples, the formation of graphemes in a VR/AR/MR environment may be facilitated by the sensor(s) and components of the styluses.
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules and subsystems described herein. In a basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), Flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the modules and subsystems described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules and subsystems may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
In addition, one or more of the modules or subsystems described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive sensor data to be transformed, transform the sensor data, output a result of the transformation to alter a displayed image, use the result of the transformation to determine a position of a stylus in a real-world environment, and store the result of the transformation to perform actions in a VR/AR/MR environment. Additionally or alternatively, one or more of the modules and subsystems recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and Flash media), and other distribution systems.
Embodiments of the instant disclosure may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the instant disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the instant disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”