The present application relates to technically inventive, non-routine solutions that are necessarily rooted in computer technology and that produce concrete technical improvements.
As recognized herein, current augmented reality systems are limited in their functionality in that they only provide certain limited ways for user interaction. There are currently no adequate solutions to the foregoing computer-related, technological problem and the present application recognizes the need for an improved computer-based user interface that improves the functionality and ease of use of such augmented reality systems.
Accordingly, in one aspect a headset includes a housing, at least one processor in the housing, a transparent display accessible to the at least one processor and coupled to the housing, and at least first and second vibrators accessible to the at least one processor and coupled to the housing. The first and second vibrators are located at different positions with respect to the housing. The headset also includes storage accessible to the at least one processor and coupled to the housing. The storage includes instructions executable by the at least one processor to track a person as the person moves through an environment. The instructions are also executable to, based on tracking the person, actuate one of the first and second vibrators to indicate a direction in which the person is to travel and/or to alert the person of an object that is within a threshold distance to the person. The person may be tracked at least in part using computer vision.
In some examples, the instructions may indeed be executable to, based on tracking the person, actuate one of the first and second vibrators to indicate a direction in which the person is to travel. Furthermore, one of the first and second vibrators may be actuated to indicate the direction in which the person is to travel as part of a navigational assistance application executed by the headset to navigate the person to a destination. In some of these examples, the instructions may be executable to, based on a determination that the person is deviating from a route to the destination, actuate one of the first and second vibrators to indicate the direction in which the person is to travel.
Also in some examples, the instructions may indeed be executable to, based on tracking the person, actuate one of the first and second vibrators to alert the person of an object that is within a threshold distance to the person. In some examples, the instructions may be executable to both use computer vision to determine that the object is within the threshold distance to the person and to use augmented reality (AR) processing to present a graphic on the transparent display that indicates the location of the object. If desired, in some implementations the instructions may also be executable by the at least one processor to actuate one of the first and second vibrators to alert the person of the object that is within the threshold distance to the person based on an accessibility setting for the headset being enabled. Further, the instructions may be executable to receive user input indicating the object as one for which to alert the person in the future when the person is within the threshold distance of the object and, based on the user input, perform the actuation of one of the first and second vibrators to alert the person of the object that is within the threshold distance to the person.
Still further, in some implementations the instructions may be executable to, based on tracking the person and at a first time, actuate one of the first and second vibrators to indicate a direction in which the person is to travel. Based on tracking the person and at a second time different from the first time, the instructions may also be executable to actuate one of the first and second vibrators to indicate a direction in which the person is to look prior to taking another action related to travel.
Additionally, in some examples the instructions may be executable to access data related to vibration to apply at the headset in conformance with physical therapy for the person and use at least one of the first and second vibrators to apply vibration at the headset according to the data.
Still further, in some examples the instructions may be executable to receive user input indicating an amount of vibration to apply to indicate the direction in which the person is to travel and/or to alert the person of objects that are within the threshold distance to the person. The instructions may then be executable to actuate one of the first and second vibrators in conformance with the user input.
As another example, in some implementations the instructions may be executable to receive user input indicating that another notification type is to be used along with vibration and, based on tracking the person and based on the user input, provide a notification of the other notification type to indicate the direction in which the person is to travel and/or to alert the person of the object that is within the threshold distance to the person. The instructions may then be executable to, concurrently with providing the notification of the other notification type, actuate one of the first and second vibrators to indicate the direction in which the person is to travel and/or to alert the person of the object that is within the threshold distance to the person.
In another aspect, a method includes tracking, using a headset and computer vision, a person as the person moves through an environment. The method also includes, based on tracking the person, actuating at least one vibrator on the headset to indicate a direction in which the person is to travel and/or to alert the person of an object that is within a threshold distance to the person.
In some examples, the method may include receiving user input indicating the object as one for which to alert the person and receiving user input indicating a particular distance for the headset to use as the threshold distance. The method may then include actuating the at least one vibrator on the headset to alert the person of the object when within the threshold distance to the person based on both the user input indicating the object as one for which to alert the person and the user input indicating the particular distance.
In another aspect, at least one computer readable storage medium (CRSM) that is not a transitory signal includes instructions executable by at least one processor of a headset to track, using computer vision, a person as the person moves through an environment. The instructions are also executable to, based on tracking the person, actuate at least one vibrator on the headset to indicate a direction in which the person is to travel and/or to alert the person of an object that is within a threshold distance to the person.
In some examples, the instructions may also be executable to, based on tracking the person in a first instance, actuate a first vibrator on the headset to alert the person of an object that the person has tagged as a first object for which to be alerted. Then based on tracking the person in a second instance at a later time than the first instance, the instructions may be executable to actuate a second vibrator on the headset to alert the person of a second object that the person has tagged as an object for which to be alerted. In these examples, the second object may be different from the first object and the second vibrator may be different from the first vibrator.
The details of present principles, both as to their structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts.
Among other things, the present application discloses systems and methods for using haptic and tactile feedback in augmented reality hardware and software to improve the functionality and computer-based user interfaces of AR devices while also improving their ease of use. For instance, haptic feedback at an AR device may be used to provide navigational and directional feedback so that, e.g., users that may want to navigate somewhere can receive a vibration when they go off-course.
Haptic feedback at an AR device may also be used to provide accessibility solutions for navigation and wayfinding by people with special needs or disabilities, such as to assist a visually-impaired user to navigate through space. This is based on the understanding that users with visual impairments often have difficulty maintaining a straight path when walking, increasing travel times and their risk of injury. Auditory notifications to help these users are often insufficient as the audio cannot always be deciphered in an environment with a lot of background noise. Thus, the present application discloses using haptic feedback for people such as visually impaired users to communicate potential obstacles to them that might be occluding the users' pathways.
For example, a user might receive a slight vibration on one side of their AR headset when they are approaching an object in that direction. The haptic feedback vibration may thus give the user a warning that they are coming close to the object.
As another example, a first user might be walking down a sidewalk and encounter a tree that has fallen on the sidewalk. The first user's headset may not only alert the first user of the tree via headset vibration after recognizing it using object recognition, but may also map the tree to its current physical geolocation. Then when another user is walking down the same sidewalk at a later time, that other user's headset may have already been provided with or otherwise have access to the geolocation data for the fallen tree to know the tree is blocking that user's path as well. Based on that data, the other user's headset may therefore provide directions to divert the other user to another sidewalk and away from the tree, and/or to pre-alert the other user via vibrations or other alerts before the other user's headset even “sees” the tree via its own computer vision. For example, a vibration may be provided at the other user's headset along with an auditory notification indicating “tree is blocking the sidewalk in 10 meters” even if the other user's headset has not yet recognized the tree via its own camera input and object recognition.
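While the disclosure itself does not specify an implementation, the obstacle-sharing idea above might be sketched as follows; the in-memory store, the helper names, and the ten-meter alert radius are assumptions for illustration only.

```python
import math

# Hypothetical shared store of geotagged obstacles; in practice this
# might live on a server that the headsets sync with.
shared_obstacles = []  # (label, latitude, longitude)

def tag_obstacle(label, lat, lon):
    # First user's headset maps a recognized obstacle (e.g., the fallen
    # tree) to its current physical geolocation.
    shared_obstacles.append((label, lat, lon))

def meters_between(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate over sidewalk-scale distances.
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(dx, dy)

def pre_alerts(user_lat, user_lon, radius_m=10.0):
    # Second user's headset checks the shared data before its own camera
    # "sees" the obstacle, yielding alert strings to speak or display.
    for label, lat, lon in shared_obstacles:
        d = meters_between(user_lat, user_lon, lat, lon)
        if d <= radius_m:
            yield f"{label} is blocking the sidewalk in {d:.0f} meters"
```

Here the first headset would call tag_obstacle when it recognizes the fallen tree, and a later headset would periodically pass its own coordinates to pre_alerts and present whatever alert strings come back.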
In addition, the present application provides for users such as those with memory impairments to place haptic-based reminders in specific geo-spatial locations to create location-based haptic notifications, with location being tracked using simultaneous localization and mapping (SLAM) for example.
Accordingly, computer vision using an AR headset's camera may be employed in one or more of these examples to track users as they move about a space and/or navigate through an environment.
Additionally, the present application also provides for using haptic feedback for force simulation. For example, physical therapy applications may simulate resistance during head and neck related motor movements to rehabilitate a person from an injury or improve their neck strength.
Haptic feedback as disclosed herein may also be customized by users. For example, users can select multi-sensory feedback pairings (e.g., haptic and audio, haptic and visual, or all of haptic, visual, and audio) to increase immersion and sensory reinforcement. Users may also be allowed to select or adjust the Hertz (Hz) rate at which vibrotactile messages are conveyed.
Furthermore, present principles may be used in conjunction with other technologies such as artificial intelligence (using artificial neural networks), machine learning, and computer vision. For an artificial intelligence (AI) model, for example, a user can communicate with an AR headset to create predictive behavior that can be later inferred by the AI model, such as setting specific types of notifications (e.g., “Alert me when the food is finished cooking in the oven”) for the AI model to automatically set those types of notifications in the future without user input each time.
In terms of machine learning, a user's behaviors can be learned and inferred over time, and an AI model employing machine learning can thereby adapt to the user's behaviors and act accordingly. For example, where a person is training to perform a particular task or activity, AR technology as disclosed herein may adapt as the user gets better at the task or activity. Take golf, for example. A user may receive a vibration when their vision shifts from the ball to the fairway too early as they swing a golf club to strike the ball. As the user's vision shifts less with each swing while the user learns a proper golf swing, the vibration may become less intense and/or be triggered by a lower sensitivity threshold, with the AI model thereby adapting as the user improves over time to further refine the user's golf swing. To this end, supervised or unsupervised training of one or more deep or recurrent neural networks in the AI model may occur to optimize the neural network(s) used for inferring a proper golf swing (or other metric) given the user's unique swing characteristics, unique body dimensions, and objective parameters of an acceptable golf swing. For example, optimization/training may occur using one or more classification algorithms and/or regression algorithms along with inputs of video and/or motion data of the user's golf swing and body characteristics themselves. Training of an AI model may occur in other situations as well, such as training a person to drive a vehicle as will be discussed further below.
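As a rough sketch of that adaptation (the function names, the 0.8/0.2 blending weights, and the use of a gaze-shift angle in degrees are all illustrative assumptions rather than the disclosed method):

```python
def updated_gaze_threshold(base_threshold_deg, recent_errors_deg):
    # As the user's gaze shifts less with each swing, tighten the
    # sensitivity threshold toward the user's demonstrated skill level.
    avg_error = sum(recent_errors_deg) / len(recent_errors_deg)
    return 0.8 * base_threshold_deg + 0.2 * avg_error

def swing_vibration(gaze_shift_deg, threshold_deg, max_intensity=1.0):
    # No vibration until the gaze shift exceeds the adapted threshold;
    # intensity grows with the size of the violation, so alerts get
    # weaker (and rarer) as the user's swing improves.
    if gaze_shift_deg <= threshold_deg:
        return 0.0
    return min(max_intensity, (gaze_shift_deg - threshold_deg) / threshold_deg)
```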
As far as computer vision goes and as referenced above, it may utilize what one or more cameras on the headset “see”, and thus haptics may alert the user of objects coming too close to the camera and therefore too close to the user. As an example, vibrations could start small and build as an object gets progressively closer to the user to provide a form of alert about the object.
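A minimal sketch of such a ramp, assuming the computer-vision pipeline already supplies the object's distance and that vibration intensity is normalized to the range 0 to 1:

```python
def proximity_intensity(distance_m, threshold_m=3.0, max_intensity=1.0):
    # Vibration starts small at the threshold distance and builds
    # linearly as the object gets progressively closer to the user.
    if distance_m >= threshold_m:
        return 0.0  # object not yet within the alert threshold
    return max_intensity * (1.0 - distance_m / threshold_m)
```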
Prior to delving into the details of the instant techniques, with respect to any computer systems discussed herein, a system may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including televisions (e.g., smart TVs, Internet-enabled TVs), computers such as desktops, laptops, and tablet computers, so-called convertible devices (e.g., having a tablet configuration and laptop configuration), and other mobile devices including smart phones. These client devices may employ, as non-limiting examples, operating systems from Apple Inc. of Cupertino, Calif., Google Inc. of Mountain View, Calif., or Microsoft Corp. of Redmond, Wash. A Unix® or similar operating system such as Linux® may be used. These operating systems can execute one or more browsers such as a browser made by Microsoft or Google or Mozilla or another browser program that can access web pages and applications hosted by Internet servers over a network such as the Internet, a local intranet, or a virtual private network.
As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware, or combinations thereof and include any type of programmed step undertaken by components of the system; hence, illustrative components, blocks, modules, circuits, and steps are sometimes set forth in terms of their functionality.
A processor may be any general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. Moreover, any logical blocks, modules, and circuits described herein can be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can also be implemented by a controller or state machine or a combination of computing devices. Thus, the methods herein may be implemented as software instructions executed by a processor, suitably configured application specific integrated circuit (ASIC) or field programmable gate array (FPGA) modules, or in any other convenient manner as would be appreciated by those skilled in the art. Where employed, the software instructions may also be embodied in a non-transitory device that is vended and/or provided and that is not a transitory, propagating signal and/or a signal per se (such as a hard disk drive, CD-ROM, or flash drive). The software code instructions may also be downloaded over the Internet. Accordingly, it is to be understood that although a software application for undertaking present principles may be vended with a device such as the system 100 described below, such an application may also be downloaded from a server to a device over a network such as the Internet.
Software modules and/or applications described by way of flow charts and/or user interfaces herein can include various sub-routines, procedures, etc. Without limiting the disclosure, logic stated to be executed by a particular module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
Logic when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium (that is not a transitory, propagating signal per se) such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc.
In an example, a processor can access information over its input lines from data storage, such as the computer readable storage medium, and/or the processor can access information wirelessly from an Internet server by activating a wireless transceiver to send and receive data. Data typically is converted from analog signals to digital by circuitry between the antenna and the registers of the processor when being received and from digital to analog when being transmitted. The processor then processes the data through its shift registers to output calculated data on output lines, for presentation of the calculated data on the device.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
The term “circuit” or “circuitry” may be used in the summary, description, and/or claims. As is well known in the art, the term “circuitry” includes all levels of available integration, e.g., from discrete logic circuits to the highest level of circuit integration such as VLSI, and includes programmable logic components programmed to perform the functions of an embodiment as well as general-purpose or special-purpose processors programmed with instructions to perform those functions.
Now specifically in reference to FIG. 1, an example block diagram of an information handling system and/or computer system 100 is shown that is understood to have a housing for the components described below.
As shown in FIG. 1, the system 100 includes a chipset whose particular architecture may vary to some extent depending on brand or manufacturer.
In the example of FIG. 1, that architecture includes a core and memory control group 120 and an I/O hub controller 150 that exchange information (e.g., data, signals, commands, etc.) via, for example, a direct management interface or link controller.
The core and memory control group 120 includes one or more processors 122 (e.g., single core or multi-core, etc.) and a memory controller hub 126 that exchange information via a front side bus (FSB) 124. As described herein, various components of the core and memory control group 120 may be integrated onto a single processor die, for example, to make a chip that supplants the “northbridge” style architecture.
The memory controller hub 126 interfaces with memory 140. For example, the memory controller hub 126 may provide support for DDR SDRAM memory (e.g., DDR, DDR2, DDR3, etc.). In general, the memory 140 is a type of random-access memory (RAM). It is often referred to as “system memory.”
The memory controller hub 126 can further include a low-voltage differential signaling interface (LVDS) 132. The LVDS 132 may be a so-called LVDS Display Interface (LDI) for support of a display device 192 (e.g., a CRT, a flat panel, a projector, a touch-enabled light emitting diode display or other video display, etc.). A block 138 includes some examples of technologies that may be supported via the LVDS interface 132 (e.g., serial digital video, HDMI/DVI, display port). The memory controller hub 126 also includes one or more PCI-express interfaces (PCI-E) 134, for example, for support of discrete graphics 136. Discrete graphics using a PCI-E interface has become an alternative approach to an accelerated graphics port (AGP). For example, the memory controller hub 126 may include a 16-lane (×16) PCI-E port for an external PCI-E-based graphics card (including, e.g., one or more GPUs). An example system may include AGP or PCI-E for support of graphics.
In examples in which it is used, the I/O hub controller 150 can include a variety of interfaces. The example of FIG. 1 includes a SATA interface 151, one or more PCI-E interfaces 152, and a USB interface 153, each of which is discussed below.
The interfaces of the I/O hub controller 150 may provide for communication with various devices, networks, etc. For example, where used, the SATA interface 151 provides for reading, writing, or reading and writing information on one or more drives 180 such as HDDs, SSDs, or a combination thereof, but in any case the drives 180 are understood to be, e.g., tangible computer readable storage mediums that are not transitory, propagating signals. The I/O hub controller 150 may also include an advanced host controller interface (AHCI) to support one or more drives 180. The PCI-E interface 152 allows for wireless connections 182 to devices, networks, etc. The USB interface 153 provides for input devices 184 such as keyboards (KB), mice, and various other devices (e.g., cameras, phones, storage, media players, etc.).
In the example of FIG. 1, the interfaces of the I/O hub controller 150 also include a serial peripheral flash memory/controller interface (SPI Flash) 166 that includes the BIOS 168 and boot code 190 for use as described below.
The system 100, upon power on, may be configured to execute boot code 190 for the BIOS 168, as stored within the SPI Flash 166, and thereafter process data under the control of one or more operating systems and application software (e.g., stored in system memory 140). An operating system may be stored in any of a variety of locations and accessed, for example, according to instructions of the BIOS 168.
Still further, the system 100 may include one or more vibrators 191 consistent with present principles. Each of the vibrators 191 may be established by an electric motor connected to an off-center and/or off-balanced weight via the motor's rotatable shaft. The shaft may then rotate under control of the motor to create vibration.
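For illustration, a single such vibrator might be modeled as follows, where pwm_write stands in for whatever PWM/GPIO facility the actual motor driver exposes; it is a hypothetical callback, not a real API.

```python
class Vibrator:
    # Minimal model of one eccentric-rotating-mass (ERM) vibrator, i.e.,
    # an electric motor spinning an off-center weight via its shaft.
    def __init__(self, pwm_channel, pwm_write):
        self.channel = pwm_channel
        self.pwm_write = pwm_write  # hypothetical board-support function

    def vibrate(self, intensity):
        # intensity in [0, 1]; duty cycle controls motor speed, and a
        # faster-spinning off-center weight is felt as stronger vibration.
        duty = max(0.0, min(1.0, intensity))
        self.pwm_write(self.channel, duty)

    def stop(self):
        self.pwm_write(self.channel, 0.0)
```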
The system 100 may further include an audio receiver/microphone 195 that provides input from the microphone 195 to the processor 122 based on audio that is detected, such as via a user/person providing audible input to the microphone 195. Still further, the system 100 may include a camera 193 that gathers one or more images and provides input related thereto to the processor 122. The camera 193 may be a thermal imaging camera, an infrared (IR) camera, a digital camera such as a webcam, a three-dimensional (3D) camera, and/or a camera otherwise integrated into the system 100 and controllable by the processor 122 to gather pictures/images and/or video.
Additionally, though not shown for simplicity, in some embodiments the system 100 may include a gyroscope that senses and/or measures the orientation of the system 100 and provides input related thereto to the processor 122, as well as an accelerometer that senses acceleration and/or movement of the system 100 and provides input related thereto to the processor 122. Also, the system 100 may include a GPS transceiver that is configured to communicate with at least one satellite to receive/identify geographic position information and provide the geographic position information to the processor 122. However, it is to be understood that another suitable position receiver other than a GPS receiver may be used in accordance with present principles to determine the location of the system 100.
It is to be understood that an example client device or other machine/computer may include fewer or more features than shown on the system 100 of FIG. 1.
Turning now to FIG. 2, example devices are shown communicating over a network such as the Internet consistent with present principles, including a headset 216 discussed further below. It is to be understood that each of the devices described in reference to FIG. 2 may include at least some of the features, components, and/or elements of the system 100 described above.
Now describing FIG. 3, it shows an example headset consistent with present principles, such as the headset 216. As shown, the headset 216 may include a transparent display, first and second vibrators 304, 306 located at different positions with respect to the headset's housing, and one or more cameras 310, 312 for gathering images of the environment.
Still further, note that the headset 216 may include still other components not shown for simplicity, such as a network interface for communicating over a network such as the Internet and a battery for powering components of the headset 216 such as the vibrators 304, 306. Additionally, note that while the headset 216 is illustrated as computerized smart glasses, the headset 216 may also be established by another type of augmented reality (AR) headset, or even by a virtual reality (VR) headset that may not have a transparent display but may still be able to present virtual AR objects along with a real-world, real-time camera feed of an environment imaged by one or more of the cameras 310, 312 to provide an AR experience to the user. Also note that electronic contact lenses with their own respective heads-up displays may also be used consistent with present principles.
Now in reference to FIG. 4, it shows an example illustration of a person moving down a street 401 while wearing a headset consistent with present principles, with scaffolding erected along the street. As illustrated in FIG. 4, as the person comes within a threshold distance of objects along the street, respective vibrators on the corresponding sides of the headset may be actuated to alert the person of those objects.
Then at a later time as the person progresses down the street 401, the person may come within the threshold distance of a third object to the upper left relative to the perspective of the person. Thus, the vibrator 412 may be actuated to provide a vibration as illustrated by element 402 to indicate the person is within the threshold distance to yet another object (an upright bar 407 of the scaffolding). At about the same time, another upright bar 409 of the scaffolding on the opposite side of the street 401 may also come within the threshold distance to the person on the upper right relative to the person's perspective, and accordingly vibrator 416 may be actuated to provide a vibration as illustrated by element 404.
It may therefore be appreciated based on FIG. 4 that different vibrators located at different positions on the headset may be actuated to alert the person of objects in different directions relative to the person's perspective. Continuing now in reference to FIG. 5, it shows an example in which vibrations are provided at a headset 500 to indicate directions in which the person is to travel while being navigated to a destination.
Also note that in some examples other notification types may be presented concurrently with vibration notifications to indicate upcoming turns to follow directions to the destination. For example, text 510 indicating “right” may be presented along with a graphical arrow 512 indicating a right turn. Audio may also be provided through one or more speakers on the headset 500, such as audio for the person to “turn right at the next intersection”.
Referring now to FIG. 6, it shows example logic that may be executed by a device such as the headset 216 consistent with present principles. Beginning at block 600, the device may receive input establishing a destination and, in response, begin executing a navigational assistance application to navigate the person to the destination.
The logic may then move to block 602 where the device may receive input from one or more cameras coupled to the device (or even disposed elsewhere in the environment) and use the input and computer vision to track the person as the person moves through the environment to the destination on foot, by car, etc. In terms of computer vision, it is to be understood that in at least some examples computer vision may include the type of computer/machine vision used in augmented reality (AR) processing to determine the real world location of real world objects relative to each other and relative to the headset. Thus, computer vision may include image registration, and/or receiving and analyzing digital images to extract three dimensional data from the images for location reference. To this end, artificial intelligence models employing one or more neural networks may be used for making inferences about and/or mapping the real world locations of objects with respect to each other as shown in images from one or more cameras. Simultaneous localization and mapping (SLAM) algorithms may also be used.
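A simplified sketch of such a tracking loop is shown below; camera.read_frame() and pose_estimator.update() are hypothetical stand-ins for the headset's camera driver and its SLAM/AR pose pipeline rather than any particular library's API.

```python
import time

def track_person(camera, pose_estimator, on_update, period_s=0.1):
    # `camera` and `pose_estimator` are assumed objects supplied by the
    # headset platform; this loop only shows the overall control flow.
    while True:
        frame = camera.read_frame()
        # SLAM-style update: estimate the headset's pose and nearby
        # object positions in a shared world coordinate frame.
        pose, objects = pose_estimator.update(frame)
        on_update(pose, objects)  # e.g., run route/threshold checks
        time.sleep(period_s)
```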
From block 602 the logic may then proceed to decision diamond 604. At diamond 604 the device may determine whether the person is deviating from a route to the destination that has been determined by the navigational assistance application and that is being followed by the person. The determination may be made, for instance, by tracking the person via the computer vision to determine that the person has veered off the route, although input from a GPS transceiver on the headset may also be used.
Responsive to an affirmative determination at diamond 604, the logic may proceed to block 606. At block 606 the device may actuate a vibrator on a side of the headset in the direction in which the person is to travel to get back on course (e.g., the right side). Also note that in some examples where the person is driving and should make a turn to get back on course, at a different time prior to actuating the vibrator to indicate the direction in which the person is to travel, the headset may provide a different vibration using the vibrator on the right side of the headset to indicate that the person should look in his or her vehicle's blind spot before making the right turn.
Different vibration patterns to indicate different things may therefore be used consistent with present principles, such as a constant vibration for a certain length of time to indicate the turn itself, and periodic vibrations separated by equal lengths of time but also for the same particular total length of time to indicate to look in the vehicle's blind spot. Yet another vibration pattern may be used if the driver takes his or her eyes off the road, such as by looking down at his or her cell phone, as may be determined based on the computer vision and/or eye tracking. Still another vibration pattern (or even a higher vibration intensity) may be used where the device determines, using biometric sensor data and/or sleep tracking, that the driver (e.g., a semi-trailer truck driver) is falling or has fallen asleep while driving, to thus alert the driver to keep awake. Also note that the vibrations provided at block 606 may be provided in conformance with user input and/or configured settings, such as a particular vibration intensity selected by the person as will be described further below in reference to FIG. 16.
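Those conventions might be encoded along the following lines, reusing the Vibrator sketch from above; the specific timings and pattern names are illustrative assumptions.

```python
import time

# Each pattern is a sequence of (on_seconds, off_seconds) pairs. The
# assignments of patterns to meanings mirror the examples above and
# are illustrative only.
PATTERNS = {
    "turn": [(1.5, 0.0)],                 # constant vibration for a set time
    "blind_spot": [(0.3, 0.3)] * 3,       # periodic pulses, equally spaced
    "eyes_off_road": [(0.15, 0.15)] * 5,  # faster periodic pulses
    "drowsy": [(1.0, 0.2)] * 3,           # long, insistent pulses
}

def play_pattern(vibrator, pattern, intensity=1.0):
    # `vibrator` is any object with vibrate()/stop() methods, e.g. the
    # Vibrator sketch shown earlier.
    for on_s, off_s in pattern:
        vibrator.vibrate(intensity)
        time.sleep(on_s)
        vibrator.stop()
        time.sleep(off_s)
```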
However, still in reference to FIG. 6, a negative determination at diamond 604 may instead cause the logic to proceed to decision diamond 608. At diamond 608 the device may determine, e.g., using the computer vision, whether an object in the environment is within a threshold distance to the person.
A negative determination at diamond 608 may cause the logic to revert back to block 602 and proceed therefrom. However, an affirmative determination at diamond 608 may instead cause the logic to proceed to block 610.
At block 610 the device may actuate a vibrator on a side of the headset in the direction of the object to alert the person to the presence of the object within the threshold distance. Different vibration patterns may even be used to indicate different object types or sizes, such as a constant vibration for a certain length of time to indicate inanimate objects and/or objects above a threshold size, and periodic vibrations separated by equal lengths of time but also for the same particular total length of time to indicate living objects and/or objects below the threshold size. Different vibration intensities may also be used so that, for example, a more intense vibration may be provided for an object above a threshold size while a lesser vibration may be provided for an object below the threshold size.
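One possible selection rule consistent with these examples, using the (on, off) pattern encoding from the earlier sketch and an assumed one-meter size threshold:

```python
def object_alert_params(is_living, size_m, size_threshold_m=1.0):
    # Constant vibration for inanimate and/or large objects; periodic
    # pulses for living and/or small ones. Larger objects also get a
    # stronger vibration. All thresholds here are illustrative.
    if not is_living or size_m >= size_threshold_m:
        pattern = [(1.0, 0.0)]        # constant
    else:
        pattern = [(0.25, 0.25)] * 4  # periodic, equally spaced
    intensity = 1.0 if size_m >= size_threshold_m else 0.5
    return pattern, intensity
```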
Also note that the vibrations provided at block 610 may be provided in conformance with user input and/or configured settings. For example, vibrations may be provided when the person comes within the threshold distance to particular objects already tagged by the person as objects for which to provide alerts when the person is within the threshold distance, as will be described further below in reference to FIG. 11.
From block 610 the logic may then proceed to block 612. At block 612 the device may, concurrent with actuating a vibrator to vibrate the headset at block 610, also present a graphic on the headset's display that indicates the current location of the object that has been determined at diamond 608 to be within the threshold distance to the person. For instance, an arrow may be presented on the headset's display pointing to the object. Also at block 612, audio indicating the location of the object may be presented, such as “scaffolding pole three feet to your left”. Which types of notifications the headset is to present at block 612 may be based on user input specifying notification types, as will be described further below in relation to FIG. 16.
Now describing FIG. 7, it shows example logic that may be executed by the headset to apply vibrations in conformance with physical therapy for the person consistent with present principles. Beginning at block 700, the device may receive input to begin physical therapy, such as input to the GUI 1200 of FIG. 12 described below. From block 700 the logic may then proceed to block 702, where the device may access data related to vibration to apply at the headset in conformance with the physical therapy itself.
Accordingly, from block 702 the logic may proceed to block 704 where the headset may actuate the respective vibrators to apply vibration at the headset in conformance with the data so that the person may perform his or her physical therapy using vibrations from the headset.
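A minimal sketch of block 704, in which therapy_plan stands in for the data accessed at block 702 and the step field names are assumptions for the example:

```python
import time

def run_therapy(vibrators, therapy_plan):
    # `therapy_plan` is assumed to be a list of steps, each naming a
    # vibrator, an intensity, and vibrate/rest durations, in conformance
    # with the person's physical therapy.
    for step in therapy_plan:
        v = vibrators[step["vibrator_id"]]
        v.vibrate(step["intensity"])
        time.sleep(step["vibrate_s"])
        v.stop()
        time.sleep(step.get("rest_s", 0.0))
```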
Beginning first with FIG. 8, it shows example logic for actuating headset vibrators as part of video game play consistent with present principles.
Accordingly, at block 800 the headset may receive one or more commands to actuate vibrators on the headset, with the commands being received from a server or video game console or other device executing a video game. Which vibrators to actuate, at which intensity, and using which vibration patterns may all be specified by the video game's developer and indicated by the server/console to the headset via the command(s) received at block 800 as the person plays the video game. From block 800 the logic may then proceed to block 802 where the headset may actuate one or more vibrators coupled to it in conformance with the received command(s).
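For illustration, handling such a command might look as follows, with an assumed command format and the play_pattern helper from the earlier sketch:

```python
def handle_game_command(vibrators, command):
    # `command` stands in for a command received at block 800 from the
    # device executing the video game, e.g.:
    #   {"vibrator_id": 0, "intensity": 0.8, "pattern": [(0.2, 0.1)]}
    # play_pattern is from the earlier sketch.
    play_pattern(vibrators[command["vibrator_id"]],
                 command["pattern"],
                 command.get("intensity", 1.0))
```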
Turning to FIG. 9, it shows example logic for actuating headset vibrators to provide other types of alerts and notifications consistent with present principles, such as alerts for incoming text messages, electronic calendar events, and Internet of things (IoT) device events.
Accordingly, at block 900 the headset may receive or access data indicating that a text message has been received, that an event indicated in the electronic calendar is about to transpire, that an IoT oven's timer has expired, etc. From block 900 the logic may then proceed to block 902 where the headset may actuate one or more vibrators to provide an associated vibration alert or notification. Note that different vibrators, vibration intensity, and/or vibration patterns may be used for different types of notifications or alerts that the headset is to provide. For example, light periodic vibrations from a vibrator located up and to the right of a right lens of the headset may be used for providing notifications of incoming text messages while more intense, constant vibrations from a vibrator located down and to the left of a left lens of the headset may be used for providing calendar event alerts.
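That per-notification mapping might be represented along these lines; the vibrator locations, intensities, and timings are illustrative assumptions, and play_pattern is reused from the earlier sketch.

```python
# Illustrative mapping from notification type to vibrator location,
# intensity, and pattern, per the examples above.
NOTIFICATION_STYLES = {
    "text_message":   {"vibrator_id": "upper_right", "intensity": 0.3,
                       "pattern": [(0.2, 0.2)] * 2},   # light, periodic
    "calendar_event": {"vibrator_id": "lower_left", "intensity": 1.0,
                       "pattern": [(1.5, 0.0)]},       # intense, constant
    "oven_timer":     {"vibrator_id": "upper_right", "intensity": 0.7,
                       "pattern": [(0.5, 0.3)] * 3},
}

def notify(vibrators, kind):
    style = NOTIFICATION_STYLES[kind]
    play_pattern(vibrators[style["vibrator_id"]],
                 style["pattern"], style["intensity"])
```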
Now in reference to FIG. 11, it shows an example field of view of a person wearing a headset 1102 consistent with present principles. As shown in FIG. 11, the person may provide user input to tag an object within the field of view as one for which to alert the person in the future when the person comes within the threshold distance of the object.
Furthermore, in some examples the person may even set the threshold distance for the headset 1102 to use by first selecting the input box 1110 (e.g., via staring or voice command) and then providing input to it to establish the threshold distance (e.g., by speaking the desired distance as detected by a microphone on the headset 1102). In some examples, the person may even specify via voice command or other input the vibration intensity, vibration pattern, and even particular vibrator on the headset 1102 to use to provide an alert in reference to a particular tagged object so that different intensities, patterns, and/or vibrators may be used to alert the person when within the threshold distance to different tagged objects.
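A sketch of such a per-object registry follows, with assumed field names and the play_pattern helper from above:

```python
tagged_objects = {}  # object_id -> per-object alert parameters

def tag_object(object_id, threshold_m, vibrator_id="left",
               intensity=0.5, pattern=((0.3, 0.3), (0.3, 0.3))):
    # Store the per-object settings the person specified by voice
    # command or other input; all field names are illustrative.
    tagged_objects[object_id] = {"threshold_m": threshold_m,
                                 "vibrator_id": vibrator_id,
                                 "intensity": intensity,
                                 "pattern": list(pattern)}

def maybe_alert(vibrators, object_id, distance_m):
    # Alert only for tagged objects, using that object's own threshold
    # distance, vibrator, intensity, and pattern.
    tag = tagged_objects.get(object_id)
    if tag and distance_m <= tag["threshold_m"]:
        play_pattern(vibrators[tag["vibrator_id"]],
                     tag["pattern"], tag["intensity"])
```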
Continuing the detailed description in reference to FIG. 12, it shows an example GUI 1200 that may be presented on the display of the headset consistent with present principles. The GUI 1200 may include a selector 1202 that is selectable to begin physical therapy using vibrations at the headset.
Then responsive to selection of the selector 1202, the GUI 1300 of FIG. 13 may be presented to step the person through the physical therapy while vibration is applied at the headset.
Now in reference to FIG. 14, it shows an example related to training a person to drive a vehicle consistent with present principles, as referenced above.
The person may then begin driving the vehicle and, as shown in FIG. 15, vibrations may be provided at the headset while the person drives, such as a vibration on one side of the headset to indicate that the person should look toward the vehicle's blind spot before making a turn in that direction. As also shown in FIG. 15, other notification types may be presented concurrently with the vibrations if the headset has been configured to do so.
Now in reference to FIG. 16, it shows an example GUI 1600 that may be presented on the display of the headset or another device to configure one or more vibration settings of the headset consistent with present principles.
A first option 1602 is shown on the GUI 1600 and it may be selectable to configure the headset to provide vibrations useful for navigation about an environment consistent with present principles. The GUI 1600 also shows a second option 1604 that may be selectable to configure the headset to provide vibrations at the headset while a person plays video games consistent with present principles. As also shown in FIG. 16, the GUI 1600 may include one or more additional options for configuring other vibration uses described above, such as object alerts and physical therapy.
Additionally, the GUI 1600 may include still other options such as an option 1616 to configure the headset to provide audio notifications along with vibration alerts/notifications for a given event or item. An option 1618 may also be selected to configure the headset to provide visual graphics alerts/notifications along with vibration notifications for a given event or item.
Still further, in some examples the GUI 1600 may include various selectors 1620, 1622, and 1624 to select particular vibration intensities for the headset to use by default or for a certain circumstance or determination. As shown, the GUI 1600 may indicate the intensities in terms of low, medium, and high, as well as by vibration frequency in Hertz (Hz). A selector 1626 may even be presented and be selectable to configure the headset to use a progressive vibration intensity in which vibration starts off with low intensity and progressively gets more intense as time goes on for a given alert/notification that is to be provided.
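The progressive option might be computed along these lines, with illustrative ramp parameters:

```python
def progressive_intensity(elapsed_s, ramp_s=3.0, start=0.1, end=1.0):
    # Progressive vibration: starts at low intensity and grows linearly
    # to full intensity over `ramp_s` seconds; defaults are illustrative.
    if elapsed_s >= ramp_s:
        return end
    return start + (end - start) * (elapsed_s / ramp_s)
```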
It may now be appreciated that present principles provide for an improved computer-based user interface that improves the functionality and ease of use of the devices disclosed herein. The disclosed concepts are rooted in computer technology for computers to carry out their functions.
It is to be understood that whilst present principles have been described with reference to some example embodiments, these are not intended to be limiting, and that various alternative arrangements may be used to implement the subject matter claimed herein. Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.