The present application relates to human/computer interfaces and more particularly to a headset computing display device that accepts voice commands and tracks head motions to provide command inputs to and receive display information from a software application executed on a remote host computing device.
Mobile computing devices, such as notebook personal computers (PC's), Smartphones, and tablet computing devices, are now common tools used for producing, analyzing, communicating, and consuming data in both business and personal life. Consumers continue to embrace a mobile digital lifestyle as the ease of access to digital information increases with high-speed wireless communications technologies becoming ubiquitous. Popular uses of mobile computing devices include displaying large amounts of high-resolution computer graphics information and video content, often wirelessly streamed to the device. While these devices typically include a display screen, the preferred visual experience of a high-resolution, large format display cannot be easily replicated in such mobile devices because the physical size of such devices is limited to promote mobility. Another drawback of the aforementioned device types is that the user interface is hands-dependent, typically requiring a user to enter data or make selections using a keyboard (physical or virtual) or touch-screen display. As a result, consumers are now seeking a hands-free high-quality, portable, color display solution to augment or replace their hands-dependent mobile devices.
Recently developed micro-displays can provide large-format, high-resolution color pictures and streaming video in a very small form factor. One application for such displays is integration into a wireless headset computer worn on the head of the user with a display positioned within the field of view of the user, similar in format to eyeglasses, an audio headset, or video eyewear. A “wireless computing headset” device includes one or more small high-resolution micro-displays and optics to magnify the image. The micro-displays can provide super video graphics array (SVGA) (800×600) resolution, extended graphics array (XGA) (1024×768) resolution, or even higher resolutions. A wireless computing headset contains one or more wireless computing and communication interfaces, enabling data and streaming video capability, and provides greater convenience and mobility than hands-dependent devices.
For more information concerning such devices, see co-pending U.S. application Ser. No. 12/348,646 entitled “Mobile Wireless Display Software Platform for Controlling Other Systems and Devices,” by Parkinson et al., filed Jan. 5, 2009, PCT International Application No. PCT/US09/38601 entitled “Handheld Wireless Display Devices Having High Resolution Display Suitable For Use as a Mobile Internet Device,” by Jacobsen et al., filed Mar. 27, 2009, and U.S. Application No. 61/638,419 entitled “Improved Headset Computer,” by Jacobsen et al., filed Apr. 25, 2012, each of which is incorporated herein by reference in its entirety.
Example embodiments of the present invention include a method of, and corresponding system for, operating a Smartphone or PC application, including executing an application on a Smartphone or PC, the executed application being native to the Smartphone or PC. The method and system generate an output of the native application for simultaneous display through the Smartphone or PC screen and a headset computing display device. In one embodiment, the display output for the headset computing device is in a markup language. In response to requesting and receiving the display output generated by the Smartphone or PC, the headset computing device translates the received display output for rendering through the headset display. The headset computing device, operating in a speech recognition and head tracking user interface mode, monitors for recognized user speech (voice commands) and head tracking commands (head motions) from an end-user wearing the headset computing device. In response to received speech recognition and head tracking commands (end-user inputs), the headset computing device translates them to equivalent Smartphone or PC commands (e.g., touch-screen, keyboard, and/or mouse commands) and transmits the equivalent commands to the Smartphone or PC to control the native application.
A further example embodiment of the present invention includes a method of, and corresponding system for, operating a Smartphone (or PC), including executing a native image and/or video viewing application on the Smartphone. The embodiment operates a headset computing device in a speech recognition and head tracking user interface mode, monitoring for user speech recognition and head tracking commands from an end-user at the headset computing device. In response to received speech recognition and head tracking commands, the headset computing device translates them to equivalent Smartphone commands. The equivalent Smartphone commands include capturing an image or video on a display of the headset computing device, transmitting the equivalent Smartphone command to the Smartphone to control the native image and video viewing application, and displaying the captured image and/or video through the headset computing display device and through the Smartphone display, simultaneously.
The example embodiments may further include the use of a markup language included with the generated display output, such as Hyper Text Markup Language 5 (HTML5). Example embodiments may present menu selections and prompts to the end-user in terms of speech recognition and head tracking commands, such as audible and visual prompts. A wireless communications link between the host Smartphone or PC and the headset computing device may use wireless standards such as Bluetooth or Wi-Fi.
Still further example embodiments may effectively enable a speech recognition and hands-free user interface and control of the native application executed on a Smartphone or PC.
Further example methods of, and corresponding devices for, displaying on a headset computer output from a Smartphone application are provided. Embodiments execute an application on the Smartphone, generating output of the executed application for display through a headset computer. The Smartphone configures and transmits instructions in a description language indicating actions for the headset computer to perform in displaying the output of the executed application. The headset computer receives the configured instructions over a low-bandwidth link and, in response thereto, forms a display of the generated output based on the indicated actions. The actions include any of on-screen notifications, messages, graphical elements, requests to play one of a plurality of predefined sound sequences, and requests to control a component of the headset computer.
The actions can be of respective element types, and for each element type the instructions can indicate one of a plurality of styles predefined for the headset computer. The description language can be HTML5 or another markup language. The formed display, rendered at the headset in the headset domain, can include menu selections and prompts presented in terms of a speech recognition/head tracking user interface. The menu selections and outputs can be audibly and visually presented to a user. The receiving by the headset computer and the transmitting by the Smartphone can be over a wireless communications link. The wireless communications link can be any of Bluetooth, Wi-Fi, or another protocol.
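By way of illustration only, the following sketch (in Python) shows one possible shape for such a description-language instruction and a simple dispatcher on the headset computer. The field names, element types, and style identifiers are assumptions made for this example; as noted above, the description language itself can be HTML5 or another markup language.

    # Illustrative sketch only: a possible shape for one low-bandwidth
    # instruction and a dispatcher on the headset computer. Field names
    # ("element", "style", "text", "sequence") are hypothetical.

    PREDEFINED_STYLES = {
        "notification": ["banner", "toast"],
        "message": ["plain", "boxed"],
    }

    def handle_instruction(instruction: dict) -> str:
        """Form a display (or other) action from one received instruction."""
        element = instruction["element"]
        if element == "sound":
            # Request to play one of the predefined sound sequences.
            return "play sound sequence #%d" % instruction["sequence"]
        if element == "component":
            # Request to control a component of the headset computer.
            return "set %s to %s" % (instruction["name"], instruction["value"])
        style = instruction.get("style", PREDEFINED_STYLES[element][0])
        return "render %s in style '%s': %s" % (element, style, instruction.get("text", ""))

    # Example: an on-screen notification as it might arrive from the Smartphone.
    print(handle_instruction({"element": "notification", "style": "banner", "text": "New e-mail"}))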
Example methods can further include monitoring for input from a speech recognition/head tracking user interface at the headset computer. In response to a received speech recognition/head tracking command input, the headset computer translates the received speech recognition/head tracking command to an equivalent Smartphone command and transmits the equivalent Smartphone command to the Smartphone to control the executed application. The display by the headset computer can effectively enable speech and hands-free user interaction with the Smartphone.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
A description of example embodiments of the invention follows.
A Headset Computer (HSC), having computational and operational capabilities sufficient at least for use as an auxiliary display and end-user input device, is operationally connected to a “host” (or alternatively “main” or “controller”) computing device, enabling a conventional software application executed on the host to be displayed simultaneously through the host and the HSC and controlled using end-user inputs received at the HSC. Host computing devices may include Smartphones, tablets, or personal computers (PC's). The HSC and host can be operatively coupled using wires or, preferably, wirelessly.
According to an example method of the present invention, a Smartphone or PC can be the primary (or main) computing device, and the HSC can be an auxiliary display and user interface to the Smartphone or PC computing device. The Smartphone or PC can run its own native software using its own display in a conventional and customary manner. The HSC alone cannot execute such a software application because it lacks traditional user interface hardware, such as a touch screen, keyboard, and/or mouse. Therefore, in order to accommodate hands-free computing, the Smartphone or PC can execute a second class of application that, while running directly on the Smartphone or PC, is specifically designed to use the HSC as a display output for the native application and to use the Automatic Speech Recognizer (ASR) and Head-Tracker (HT) functionality built into the HSC as user input devices.
In example embodiments of the present invention, the HSC contains at least sufficient computational and operational capabilities to: (i) display a screen-full of information; (ii) monitor head-gesture tracking and feed head-movement information back to the main computer (e.g., Smartphone or PC); and, (iii) monitor and process speech, feeding speech recognition results back to the main computer. By pairing the HSC with the main computer (Smartphone or PC), the main computer may effectively hold specially created hands-free applications that primarily run on the main computer, but use the display of and augmented input from the HSC.
In one embodiment the HSC may take the form of the HSC described in a co-pending U.S. patent application Ser. No. 13/018,999, entitled “Wireless Hands-Free Computing Headset With Detachable Accessories Controllable By Motion, Body Gesture And/Or Vocal Commands” by Jacobsen et al., filed Feb. 1, 2011, which is hereby incorporated by reference in its entirety.
Example embodiments of the HSC 100 can receive user input through recognizing voice commands, sensing head movements 110, 111, 112 and hand gestures 113, or any combination thereof. Microphone(s) operatively coupled to, or preferably integrated into, the HSC 100 can be used to capture speech commands, which are then digitized and processed using automatic speech recognition (ASR) techniques. Gyroscopes, accelerometers, and other micro-electromechanical system sensors can be integrated into the HSC 100 and used to track the user's head movement to provide user input commands. Cameras or other motion tracking sensors can be used to monitor a user's hand gestures for user input commands. The voice command, automatic speech recognition, and head motion tracking features of such a user interface overcome the hands-dependent formats of other mobile devices.
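As a non-limiting illustration of the head-tracking portion of such a user interface, the sketch below converts fused orientation readings from the gyroscopes and accelerometers into discrete head-motion commands. The threshold value, axis names, and command labels are assumptions chosen for the example, not parameters specified herein.

    from typing import Optional

    # Illustrative sketch: map a fused yaw/pitch reading from the head-tracking
    # sensors into a discrete head-motion command. Threshold and labels are
    # assumptions for this example only.

    GESTURE_THRESHOLD_DEG = 15.0  # how far the head must turn to register a gesture

    def head_gesture(yaw_deg: float, pitch_deg: float) -> Optional[str]:
        """Return a head-motion command, or None if the head is near center."""
        if yaw_deg <= -GESTURE_THRESHOLD_DEG:
            return "HEAD_LEFT"
        if yaw_deg >= GESTURE_THRESHOLD_DEG:
            return "HEAD_RIGHT"
        if pitch_deg >= GESTURE_THRESHOLD_DEG:
            return "HEAD_UP"
        if pitch_deg <= -GESTURE_THRESHOLD_DEG:
            return "HEAD_DOWN"
        return None

    # Example: a 20 degree turn to the left registers as a HEAD_LEFT command.
    assert head_gesture(-20.0, 0.0) == "HEAD_LEFT"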
The headset computing device 100 can be used as a remote auxiliary display for streaming video signals received from a remote host computing device 200 (shown in
A head worn frame 1000 and strap 1002 are generally configured so that a user can wear the headset computer device 100 on the user's head. A housing 1004 is generally a low profile unit which houses the electronics, such as the microprocessor, memory or other storage device, low power wireless communications device(s), along with other associated circuitry. Speakers 1006 provide audio output to the user so that the user can hear information, such as the audio portion of a multimedia presentation, or audio prompt, alert, or feedback signaling recognition of a user command.
Micro-display subassembly 1010 is used to render visual information, such as images and video, to the user. Micro-display 1010 is coupled to the arm 1008. The arm 1008 generally provides physical support such that the micro-display subassembly is able to be positioned within the user's field of view, preferably in front of the eye of the user or within the user's peripheral vision, preferably slightly below or above the eye. Arm 1008 also provides the electrical or optical connections between the micro-display subassembly 1010 and the control circuitry housed within housing unit 1004.
According to aspects that will be explained in more detail below, the HSC display device 100 with micro-display 1010 can enable an end-user to select a field of view 300 (
While the example embodiments of an HSC 100 shown in
In an example embodiment of the present invention, an external “smart” device 200 (also referred to herein as a “host” device or “controller”) is used in conjunction with a HSC 100 to provide information and hands-free control to a user. The example embodiment uses the transmission of small amounts of data between the host 200 and HSC 100, and, thus, provides a more reliable data transfer method and control method for real-time control.
A preferred embodiment involves the collaboration between two devices, one of which is an HSC 100. The other device (controller or host) is a smart device 200, which is a device that executes an application, processes data and provides functionality to the user. Example controllers include, but are not limited to, Smartphones, tablets and laptops.
The controller 200 and HSC 100 can be paired over a suitable transport, typically a wireless bi-directional communications link, such as Bluetooth. Because the amount of data transferred by such a suitable protocol is low, the transport can be shared with other activities requiring higher bandwidth.
The controller 200 can run software, such as a second class of application, running at a top layer of a software stack, that enables it to send instructions to the HSC 100. In order to avoid having to send data outlining the entire screen contents on the HSC 100, the controller 200 instead can send instructions that the HSC 100 can interpret using software functioning on the HSC 100. The instructions sent from the controller 200 can describe actions to perform, including on-screen notifications, messages and other graphical elements. Such instructions can also include requests to play one of a set of pre-set (predefined) sound sequences or control other components of the HSC 100.
The instructions can further include pre-set “styled elements”. Styled elements can include short-hand instructions or code relating to how to lay out a display screen, which text to display, text font, font size, and other stylistic element information such as drawing arrows, arrow styles and sizes, background and foreground colors, images to include or exclude, etc. Therefore, for each element type (notification, message, textbox, picturebox, etc.) there are multiple display styles. The styled elements can allow the controller 200 great flexibility in how information is displayed on the HSC 100. In this way, for given information, visual elements displayed on the HSC 100 can differ from the visual elements displayed on the display of controller 200.
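The sketch below shows one way a styled element could be encoded for the low-bandwidth link. The keys mirror the stylistic information described above (text, font, arrows, colors, images), but the exact encoding is an assumption for illustration only.

    # Illustrative encoding of one pre-set "styled element". The key names and
    # values are assumptions; only this compact description crosses link 150,
    # so the HSC 100 can lay the element out differently from controller 200.

    styled_element = {
        "type": "textbox",                       # notification, message, textbox, picturebox, ...
        "style_id": 2,                           # one of several styles predefined on the HSC
        "text": "Zoom In",
        "font": {"face": "sans", "size": 18},
        "arrows": [{"direction": "left", "size": "small"}],
        "colors": {"foreground": "#FFFFFF", "background": "#000000"},
        "images": [],                            # identifiers of images to include, if any
    }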
Higher bandwidth activities can be requested while the bi-directional communications protocol is transferring display instructions. Higher bandwidth traffic can be separate from the low bandwidth traffic of the present invention. For example, the HSC 100 can be utilized for a video call, whereby a request is sent to acquire access to and control of the display, camera and audio communications peripherals (microphone and speaker) and to display live video and play live audio on the HSC 100. Such requests can be part of the accessory protocol (e.g., low bandwidth instructions for HSC 100), but the high bandwidth video call traffic can be outside of it.
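For instance, the peripheral-acquisition request for such a video call might travel over the accessory protocol as a small message like the one sketched below (the message shape and field names are assumptions for illustration), while the live audio and video are carried on a separate, higher-bandwidth channel.

    # Illustrative accessory-protocol request only; the message shape and field
    # names are assumptions. The video-call media itself is not carried here.

    video_call_request = {
        "action": "acquire_peripherals",
        "peripherals": ["display", "camera", "microphone", "speaker"],
        "purpose": "video_call",
    }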
Through the transmission of styled elements, the amount of data to be transmitted over the connection 150 is small: simple instructions on how to lay out a screen, which text to display, and other stylistic information, such as drawing arrows, background colors, and images to include. Additional data can be streamed over the same link 150, or another connection, and displayed on screen 1010, such as a video stream if required by the controller 200.
In another embodiment, after the bi-directional communications link is established, a software application native to the host computing device can be executed by the host 200 processor. The native application can generate a feed of output images for display. So that the output images may be used for multiscreen display, a second class of application can be executed by the host to append the output images with a markup language, such as Hyper Text Markup Language 5 (HTML5). The host can communicate the marked-up display data to the HSC 100 for simultaneous display of application output through the displays of both the host device 200 and the HSC 100, but in each device's respective format, layout, and user interface. The HSC 100 may process the marked-up display data to present available menu selections and prompts to the end-user of HSC 100 so that the end-user can interface through HSC 100 with the application running on the host 200 in a hands-free manner. For example, a visual prompt may include one or more text boxes indicating recognized verbal commands and/or arrows or other motion indicators indicating head tracking (and/or hand motion) commands. Audio prompts may, for example, include a text-to-speech machine recitation of recognized verbal commands.
The HSC device 100 can receive vocal input from the user via the microphone, hand movements or body gestures via positional and orientation sensors, the camera or optical sensor(s), and head movement inputs via the head tracking circuitry, such as 3-axis to 9-axis degrees-of-freedom orientational sensing. These command inputs can be translated by software in the HSC device 100 into equivalent host device 200 commands (e.g., touch gesture, keyboard and/or mouse commands) that are then sent over the Bluetooth or other wireless interface 150 to the host 200. The host 200 then can interpret the translated equivalent commands in accordance with the host operating system and executed native application software to perform various functions.
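A minimal sketch of this translation step follows. The specific mappings (which spoken words or head motions become which touch gestures or key events) are defined by the applications involved; the table below is purely illustrative.

    from typing import Optional

    # Illustrative HSC-side translation of recognized speech and head-tracking
    # input into equivalent host (Smartphone/PC) commands. The mappings shown
    # are assumptions for this example.

    EQUIVALENT_COMMANDS = {
        ("speech", "Zoom In"):  {"type": "touch", "gesture": "pinch_out"},
        ("speech", "Quit"):     {"type": "key", "code": "ESC"},
        ("head", "HEAD_LEFT"):  {"type": "touch", "gesture": "swipe_right"},
        ("head", "HEAD_RIGHT"): {"type": "touch", "gesture": "swipe_left"},
    }

    def translate(source: str, command: str) -> Optional[dict]:
        """Return the equivalent host command for a recognized HSC input."""
        return EQUIVALENT_COMMANDS.get((source, command))

    def forward_to_host(link, source: str, command: str) -> None:
        """Send the equivalent command to the host 200 over interface 150."""
        equivalent = translate(source, command)
        if equivalent is not None:
            link.send(equivalent)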
Among the equivalent commands may be one to select a field of view 300 (
In the present example embodiment, at least for purposes of illustration, the HSC can be equipped with sufficient processing equipment and capabilities to only process display data, monitor user speech and motion input and translate such input into equivalent commands. In the illustrative example embodiment of
The host computing device 200 (e.g., Smartphone or other primary computing device equipped with a GPS receiver) can generate a map screen image based on its current coordinates. Typically, a Smartphone 200 is equipped with a GPS receiver and associated processing software such that the Smartphone 200 can provide its current location. Further, the Smartphone 200 usually is equipped with Internet connectivity to enable the downloading of a relevant map graphic. The map graphic can be post-processed by a second class of application running on the Smartphone such that the post-processed marked-up graphic data is sent to the HSC 100 for rendering and display to the user through field of view 300. The post-processed data can include a set of recognized speech commands 3030 sent in text form to the HSC 100, such as “Satellite”, “Pan”, “Zoom In”, and “Quit”, as illustrated in
While the map is being displayed on the HSC 100 through field of view 300, head-gesture movements can be monitored by the HSC 100, translated to equivalent touch-screen or keyboard (physical or virtual) commands and fed directly back to the Smartphone 200 via the bi-directional communications link 150 for processing in accordance with the GPS map-based application. For example, if the user's head is moved left, then the Smartphone 200 may pan the map left and send an updated graphic to the HSC display 1010. In this way Smartphone 200 can perform the processing work, while the HSC 100 can provide an auxiliary display and user interface.
At the same time the user may speak a valid recognized command 3030, such as “Satellite”. The user may be made aware that “Satellite” is a valid command by being visually or audibly prompted through the HSC 100 processing the marked-up data produced from the Smartphone application, which, when processed by the HSC 100, instructs the HSC 100 to listen for valid commands 3030. Upon recognizing the command word “Satellite”, the HSC 100 can turn the spoken language into an equivalent digital command and send the digital command back to the Smartphone 200. The Smartphone 200 can then respond to the received equivalent digital command by generating a new map view and sending the new generated view to the HSC 100 for display through the field of view 300 of the micro-display 1010, as is shown in
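The host-side portion of this map example can be sketched as follows. The payload schema and helper functions are assumptions used to make the sequence concrete: the second class of application packages the native map output together with the valid speech commands, and regenerates the view when an equivalent digital command arrives from the HSC 100.

    # Illustrative host-side sketch for the map example. The payload schema and
    # the native_map_app helper are assumptions, not a defined interface.

    VALID_COMMANDS = ["Satellite", "Pan", "Zoom In", "Quit"]

    def build_hsc_payload(map_image_id: str) -> dict:
        """Post-process the native map output into marked-up data for the HSC."""
        return {
            "screen": {"image": map_image_id},
            "speech_commands": VALID_COMMANDS,   # presented to the user as prompts 3030
            "head_tracking": {"pan": True},      # head motion pans the displayed map
        }

    def on_equivalent_command(native_map_app, command: str) -> dict:
        """React to an equivalent digital command returned by the HSC 100."""
        if command == "Satellite":
            native_map_app.set_layer("satellite")
        elif command == "Zoom In":
            native_map_app.zoom(1)
        # Regenerate the view and return a fresh payload for the HSC display.
        return build_hsc_payload(native_map_app.render())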
It is further envisioned that an end-user of the HSC 100 can select from an array of native applications stored in the memory or other storage device of the Smartphone 200 for hands-free operation. The HSC 100 user will be able to select which application to operate via an onscreen (HSC screen 300) menu and associated voice commands. Such a menu can be generated by the second class of application and sent to the HSC from the Smartphone 200 in the manner described above. Once executed, the second class of application can determine which native applications are compatible and can be accessed with the HSC 100.
During all of the HSC 100 operations, normal functionality of the host computing device 200 (Smartphone or PC) may continue. Such an approach allows the HSC 100 to be a much simpler device than in previously known (HSC) embodiments without giving up any functionality as perceived by the end-user. Computational intelligence is provided mainly by the host 200. However, it should be understood that all of the advances of the HSC 100 can remain available, such as providing hands-free operation and navigation using head tracking and input commands via combination of head tracking and speech commands, etc.
In the example of
The host computing device 200, here depicted as an iPhone®, generates a screen image based on its native e-mail application. (iPhone is a registered trademark of Apple Inc., Cupertino, Calif.) Typically, an iPhone (or Smartphone) 200 is equipped with an e-mail application and associated processing and communications capabilities such that the iPhone 200 can access an e-mail server via an Internet connection. The e-mail application display output 2010a graphic feed is post-processed by a second class of application running on the Smartphone 200, such that the post-processed marked-up graphic data is sent to the HSC 100 for rendering and display to the user through field of view 300. The post-processed data can include a set of recognized speech commands 3030 sent in text form to the HSC 100, such as “Reply”, “Reply All”, “Discard”, “Next”, “Inbox”, and “Quit”, as illustrated in
While the e-mail application is being displayed on the HSC 100 through field of view 300, head-gesture movements can be monitored by the HSC 100, translated to equivalent touch-screen or keyboard (physical or virtual) commands, and fed directly back to the Smartphone 200 for processing in accordance with the e-mail application. For example, if the user's head is moved left, as prompted by arrow 3031, then the HSC 100 generates and transmits a corresponding Smartphone 200 compatible command. In response, the Smartphone 200 may return to the “Inbox” and send an updated graphic to the HSC 100 for presentation at display 1010. In this way Smartphone 200 can perform the processing work, while the HSC 100 can provide an auxiliary display and user interface.
At the same time, the user may speak a valid recognized command 3030, such as “Reply”, as previously described. Upon recognizing the command words, such as “Reply”, the HSC 100 can turn the spoken language into an equivalent digital command and send the generated command back to the Smartphone 200. The Smartphone 200 can then respond to the received equivalent command by generating an updated graphic representing a reply message and sending the reply message screen view 2010b (modified for the HSC 100 domain) to the HSC 100 for display through the field of view 300 of the micro-display 1010.
In parallel with steps 511 and 513 (or in series—before or after), a bi-directional communications link can be established between host device 200 and HSC 100 (step 505). Also in parallel with steps 511, 513, and 505 (or in series—before or after), the HSC 100 can be operated in an automatic speech recognition (ASR) and head tracking (HT) mode of operation during which end-user input (e.g., spoken verbal commands and/or head motion) is monitored and recognized as user commands 512. After beginning operation in ASR/HT mode 512, HSC 100 can request display output from host 200 (step 514) of the executing native application on host 200.
The HSC Auxiliary application, a second class of application, can receive the input request from the HSC 100 and the display output generated by the executed native application, and then append mark-up instructions to the display output data (step 515). The appended marked-up display data can then be transmitted to the HSC 100 via the established communications link (shown as 150 in
Next, the translated data is rendered and displayed through the HSC display (for example, micro-display 1010) (step 522). Displaying the output generated by the native application through the host display (step 519) and through the HSC display (step 522) can occur substantially simultaneously, but in respective domains/formats. After displaying the output generated by the native application through the HSC display (step 522), HSC 100 determines whether user input (ASR/HT) at the HSC has been recognized (step 524). If end-user input (ASR/HT) at HSC 100 has been detected (at step 524), then such input can be translated to equivalent host device 200 commands and transmitted to the host device as user input (step 526). If no end-user input at HSC 100 is determined, then HSC 100 determines whether the HSC 100 process is complete (step 528). If the process at HSC 100 is not yet complete, the process can continue to operate according to the ASR/HT mode (step 512). If the HSC 100 process is complete, then the method can end (step 530).
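The HSC-side portion of this flow can be summarized in the following sketch, which assumes hypothetical helper objects (host_link, display, asr, head_tracker, translator) simply to make the sequence concrete; the step numbers in the comments refer to the flow just described.

    import time

    # Illustrative HSC-side loop for the flow described above. The helper
    # objects are assumptions used only to make the step sequence concrete.

    def hsc_main_loop(host_link, display, asr, head_tracker, translator):
        host_link.send({"request": "display_output"})               # step 514
        while True:                                                 # ASR/HT mode (step 512)
            marked_up = host_link.receive()                         # marked-up data from step 515
            display.render(translator.from_markup(marked_up))       # translate, render, display (step 522)
            user_input = asr.poll() or head_tracker.poll()          # recognized user input? (step 524)
            if user_input is not None:
                host_link.send(translator.to_host_command(user_input))   # step 526
            if display.session_complete():                          # HSC process complete? (step 528)
                break                                               # end (step 530)
            time.sleep(0.05)                                        # continue monitoring in ASR/HT mode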
Applications running within the context of the HSC Display Application 6104 may include a Speech Recognition input 6121, Head Tracking input 6122, translation module 6123 for translating marked-up display output (from Host 200) and for translating ASR/HT input (at HSC 100) to equivalent Host 200 commands, and Virtual Network Connection 6124, which allows a bi-directional communications link between the HSC 100 and host device 200 to be established.
A host software stack 6200 can include a kernel of an operating system (OS), such as a Linux kernel 6201; libraries and runtime libraries for implementing functions built into the programming language during the execution of an application, such as those of Libraries/Runtime stack 6202; an application framework for implementing the standard structure of an application, such as Application Framework 6203; an application which can run on top of the OS kernel, libraries and framework, such as Native Application 6204; and a second class of application that runs on top of the stack and Native Application 6204, such as HSC Auxiliary Display Interface Application 6205. As described above in detail, the HSC Auxiliary Display Interface Application 6205 can allow the end-user to use the HSC 100 as an auxiliary display and hands-free user interface to control the Native Application 6204 executing on host 200. Applications running within the context of the HSC Auxiliary Display Interface Application 6205 may include a Virtual Network Connection 6124, which allows a bi-directional communications link between the host device 200 and HSC 100 to be established. The host software stack 6200 can be more extensive and complex than the HSC software stack 6100.
The actions can be of respective element types and for each element type the instructions can indicate one of a plurality of styles predefined for the headset computer. The styles can be style elements. The formed display rendered in the headset domain can include menu selections and prompts presented in terms of a speech recognition/head tracking user interface. Such menu selections and outputs can be visually and audibly presented to a user. The communications link between the Smartphone and headset computer can be a wireless communications link, for example Bluetooth, Wi-Fi or other communications protocol.
The alternative example process illustrated in
Further example embodiments of the present invention may be configured using a computer program product; for example, controls may be programmed in software for implementing example embodiments of the present invention. Further example embodiments of the present invention may include a non-transitory computer readable medium containing instructions that may be executed by a processor, and, when executed, cause the processor to complete methods described herein. It should be understood that elements of the block and flow diagrams described herein may be implemented in software, hardware, firmware, or other similar implementation determined in the future. In addition, the elements of the block and flow diagrams described herein may be combined or divided in any manner in software, hardware, or firmware. If implemented in software, the software may be written in any language that can support the example embodiments disclosed herein. The software may be stored in any form of computer readable medium, such as random access memory (RAM), read only memory (ROM), compact disk read only memory (CD-ROM), and so forth. In operation, a general purpose or application specific processor loads and executes the software in a manner well understood in the art. It should be understood further that the block and flow diagrams may include more or fewer elements, be arranged or oriented differently, or be represented differently. It should be understood that implementation may dictate the block, flow, and/or network diagrams and the number of block and flow diagrams illustrating the execution of embodiments of the invention.
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application is a divisional of U.S. application Ser. No. 13/799,570, filed Mar. 13, 2013, which claims the benefit of U.S. Provisional Application No. 61/638,419 by Jacobsen et al., entitled “Improved Headset Computer” filed on Apr. 25, 2012, U.S. Provisional Application No. 61/653,474 by Jacobsen et al., entitled “Headset Computer (HSC) As Auxiliary Display With ASR And HT Input” filed on May 31, 2012, and U.S. Provisional Application No. 61/748,444 by Parkinson et al., entitled “Smartphone API For Processing VGH Input” filed on Jan. 2, 2013 and U.S. Provisional Application No. 61/749,155 by Parkinson et al., entitled “Smartphone Application Programming Interface (API) For Controlling HC Differently From Smartphone Display” filed on Jan. 4, 2013. This application is related to U.S. application Ser. No. 12/774,179 by Jacobsen et al., entitled “Remote Control Of Host Application Using Motion And Voice Commands” filed May 5, 2010, which claims the benefit of U.S. Provisional Application No. 61/176,662, filed on May 8, 2009 entitled “Remote Control of Host Application Using Tracking and Voice Commands” and U.S. Provisional Application No. 61/237,884, filed on Aug. 28, 2009 entitled “Remote Control of Host Application Using Motion and Voice Commands”. The entire teachings of the above application(s) are incorporated herein by reference.