Training/coaching system for a voice-enabled work environment

Information

  • Patent Grant
  • Patent Number
    8,386,261
  • Date Filed
    Thursday, November 12, 2009
  • Date Issued
    Tuesday, February 26, 2013
Abstract
A voice assistant system is disclosed which directs the voice Prompts delivered to a first user of a voice assistant to also be communicated wirelessly to the voice assistant of a second user so that the second user can hear the voice Prompts as delivered to the first user.
Description

This invention generally relates to the use of speech or voice technology in a voice-enabled work environment to facilitate a variety of tasks, and more specifically for a method of allowing one user of a voice assistant system to train or coach another user of the system.


BACKGROUND OF THE INVENTION

Speech or voice technology, in the form of speech recognition, is used in a variety of different environments to facilitate the completion of work or various tasks. Such voice-enabled work environments, for example, include voice-directed work environments and voice-assisted work environments.


In a typical voice-enabled work environment, the worker wears a mobile computer having voice or speech capabilities. The mobile computer is worn on the body of a user or otherwise carried, such as around the waist, and a headset device connects to the mobile computer, such as with a cable or possibly in a wireless fashion. In another embodiment, the mobile computer might be implemented directly in the headset. The headset includes one or more speakers for playing voice instructions or prompts and other audio that are generated or synthesized by the mobile computer to direct or assist the work of the user and to confirm the spoken words of the user. The headset also has a microphone for capturing the speech of the user, such as speech commands and other audio, to process the commands spoken by the user and to allow the entry of data and other system feedback using the user's speech and speech recognition.


One example of such a voice-enabled work environment is generally referred to as voice-directed work, as the user takes specific direction from the central system and their mobile computer like they might take direction from a manager or supervisor or from reading a work order or to-do list. One such voice-directed work system, for example, is provided by the Talkman® system that is available from Vocollect, Inc. of Pittsburgh, Pa. The mobile and/or wearable computers allow the users that wear or use them to maintain mobility at a worksite, while providing the users with the necessary directions or instructions and the desirable computing and data-processing functions. Such mobile computers often provide a wireless communication link to a larger, more centralized computer system that directs the work activities of a user within the system and processes any user speech inputs, such as collected data, in order to facilitate the work. An overall integrated system may utilize a central system that runs a variety of programs, such as a program for directing a plurality of mobile computers and their users in their day-to-day tasks. The users perform manual tasks and enter data according to voice instructions and information they receive from the central system, via the mobile computers. Through the headset and speech recognition and text-to-speech capabilities of the mobile computer, workers are able to receive voice instructions or questions about their tasks, to receive information about their tasks, to ask and answer questions, to report the progress of their tasks, and to report various working conditions, for example.


Another example of a voice-enabled work environment is referred to as voice-assisted work. Such a work environment is involved in situations where flexibility is required and specific task direction is not necessary. In a voice-assisted work environment, users engage in a selective speech-dialog with the system when they need to. The voice-assisted work system is designed to accommodate various prompts, instructions, and information as selectively directed by the user and their voiced commands, rather than issuing continuous instructions in a set order as with a voice-directed work system. One such voice-assisted system is provided by the AccuNurse® system available from the assignee of this application, Vocollect Healthcare Systems, Inc. (VHS) of Pittsburgh, Pa.


One of the main challenges in a voice-enabled system centers around the training of new users. The voice user interface (VUI) that is part of the voice-enabled system requires a user to know what to say and when to say it. The problem that the trainer or coach or other supervisor faces is that it is very difficult to tell a user what to do with respect to the interface when the trainer or coach cannot hear what the user is hearing or where they are in an ongoing speech dialog. The same problem surfaces with regard to ongoing training/coaching of existing users as well as when new users join the organization and need to learn how to use the system or a new feature is implemented in an existing system.


To overcome this challenge, hardware solutions have been used. For example, a trainer or coach might connect a separate piece of hardware, such as a small loudspeaker, to the mobile device or personalized headset that the user is using in order to be able to hear what the user is hearing. These hardware solutions, although they successfully accomplish the task, are cumbersome to use and require direct (and obtrusive) interaction with the user being helped, trained, or coached.


A need still therefore exists for a voice-enabled system in which a trainer or coach can more effectively coach another user. There is also a need for a coach or trainer to know the voice prompts as delivered to the user being coached or know where in the speech dialog the user is so that better training may be facilitated without the need for additional intrusive coaching-specific hardware on a user's computer or other inconveniences to the user.


SUMMARY OF THE INVENTION

A voice assistant system is disclosed which directs the voice prompts delivered to a first user of a voice assistant device to also be communicated wirelessly to the voice assistant device of a second user so that the second user can hear the voice prompts as delivered to the first user.


When a device in the system activates a coaching mode, one device (the coach device) makes a connection to another device (the coached device) to receive system prompts from the coached device, and thus hears what the person who is being coached would hear. The normal voice-enabled work functions of the coach device are suspended while the coach device instead plays, as speech, the system prompts received from the other device.


The coached device includes a coach support module configured to forward system prompts to the coach device when the coaching mode is activated without otherwise altering the functioning of the coached device. In one embodiment, a voice device of the invention may be used to either coach another user or to be coached by a user.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the Detailed Description given below, serve to explain the invention.



FIG. 1 is a block diagram of a distributed implementation of a voice assistant system consistent with the principles of the present invention.



FIG. 2 is a side perspective view of one embodiment of a voice assistant of the voice assistant system of FIG. 1 consistent with the principles of the present invention.



FIG. 3 is a diagrammatic view of two voice assistants interacting in a coaching relationship according to the present invention.



FIG. 4 is an exemplary coaching routine executed by the voice assistant system of FIG. 1 consistent with the principles of the present invention.





DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

A training or coaching system is described wherein one user coaches another user in the proper use of a voice-enabled device 45 within a voice-enabled environment. Both the coaching user and the user being coached have voice-enabled devices 45 associated with a voice-enabled work system. One device may selectively be placed in a coaching mode, such as by a voice command or other input (e.g., a button or manual input) to the device. The coach device forms a connection with a selected coached device as part of the coaching mode. With a connection established, the coached device monitors the connection and forwards its system prompts to the connected and activated coach device. The present invention allows the coach or coaching user (coach device) to hear the same system prompts and tones that the coached user (coached device) hears, while each of the parties uses his or her own device with no additional hardware. No separate listening kits or disruptive processes are necessary at the coached device.


Turning now to the drawing Figures, wherein like numbers denote like parts throughout the several Figures, as illustrated in FIG. 1, the present invention may be incorporated within a suitable voice-enabled work environment. Although a voice-assisted work environment is discussed herein for a patient-care application, it should be understood that the invention would have applicability in most any voice-enabled work environment, including voice-directed work environments.



FIG. 1 illustrates a voice-enabled system 5 in the form of a distributed computing system, with computing activities associated with at least one facility 10 and activities associated with an offsite company and/or an onsite enterprise customer IT department 15, such as an offsite Vocollect Healthcare Systems, Inc. Department. The system users are physically located at the facility 10, while centralized support and management capabilities for the voice-enabled system 5, on the other hand, may be provided by the offsite department and/or by the onsite enterprise customer IT department 15 which is coupled to the facility 10 with an appropriate network 70, such as a wide area network (WAN).


A workstation 20 at each facility 10 may interface with one or more portable computers in the form of voice-enabled devices 45. The voice-enabled devices 45 execute work plans and provide a user or worker a voice user interface (VUI) using speech and a speech dialog with the user.


The information associated with at least one work plan may be transmitted (e.g., in digital form) from the workstation 20 (e.g., using the network interface) via local area network (LAN) 30 to a voice transaction manager 35. Each facility 10 may have at least one voice transaction manager 35 to store and manage the work plans for the patients and patient care providers and facility configuration information. Specifically, the voice transaction manager 35 may represent and/or include practically any networked appliance, device, or computer as described hereinabove in connection with the workstation 20. The voice transaction manager 35 may be similar to a server computer in some embodiments. The voice transaction manager 35 may also include at least one database for storing the data. Data may also be transmitted from the voice transaction manager 35 to the workstation 20 through the network 30.


The information and data associated with at least one of the care plans in the voice transaction manager 35 may be transmitted (e.g., in digital form) from the voice transaction manager 35 (e.g., using the network interface) via wireless network 40 (e.g., a WLAN) to at least one voice-enabled device 45. Data may also be transmitted from the voice-enabled device 45 to the voice transaction manager 35, for example, for storage at the voice transaction manager 35 or at work station 20 and for additional processing.


The voice-enabled device 45 may include a number of separate portions or elements. In the embodiment illustrated in FIG. 2, a headset portion 50 (with a microphone, earpieces, and speakers) interfaces with a device portion 55 using a connecting portion 60. In some embodiments, the connecting portion 60 may be a cable or may be a wireless link. The device 55 might be a portable or wearable computer device. In another embodiment, not shown, all necessary components of the voice-enabled device 45 may be contained in the headset portion 50 alone. That is, the functionality of the device 45 might be completely implemented in the headset portion 50.


The voice-enabled device 45 (or headset 50) also includes suitable processing and memory hardware and software to store and utilize the data received from the voice transaction manager 35. The voice-enabled device 45 is utilized to maintain a speech dialog with a user by utilizing certain speech commands and system Prompts.


The voice-enabled device 45 may be a wearable computer and/or a personal digital assistant (PDA) that includes WLAN capabilities in some embodiments. In particular, the voice-enabled device 45 may be a client, and more specifically a “thick client” that may allow speech recognition and speech synthesis to occur on the actual voice-enabled device 45, rather than remotely. One suitable embodiment of a voice-enabled device is set forth in U.S. patent application Ser. No. 12/536,696 filed Aug. 6, 2009, and entitled, “Voice Assistant System”, which application is incorporated by reference herein in its entirety.


In accordance with the principles of voice-enabled work, each user at the facility 10 may have their own voice-enabled device 45 that they wear or carry. The user may log on to the system 5 and data may be transferred from the voice transaction manager 35 to the voice-enabled device 45. The data may include the various elements of the user's work plan for that day for use in the voice-enabled work environment. The work plan and information and the data associated therewith may be accessed and utilized using speech in a speech dialog, as discussed further herein. For the disclosed example herein, the data may be associated with a care plan for one or more patients and will be used as a basis for the speech dialog carried on between a user and the system 5. However, it will be appreciated that the invention might be used with any number of different voice-enabled systems and environments.


The voice-enabled device 45 may support real time paging. For example, multiple devices 45 may communicate with each other via the wireless network 40 to send the pages directly. Alternatively, the pages may be first sent to the voice transaction manager 35, and then the pages may be relayed to the final destinations.


The speech dialog that is provided through the voice-enabled devices 45 may focus on various commands, and may include requiring the user to speak at least one input command with the device responding to the command and providing data or asking questions. The speech dialog may be based upon the data in the voice-enabled device 45 (FIG. 1), including the various patient care plans in one example. Such speech dialogs may be carried on with a voice-user interface (VUI) that includes speech recognition and text-to-speech (TTS) technology as would be understood by a person of ordinary skill in the art.


The speech dialog may be implemented through the VUI in a number of different ways and the application is not limited to a particular speech dialog or its progression. As noted above, in a voice-directed work environment, the speech dialog would include a constant stream of directions to a worker or user interspersed with spoken commands or spoken data entry by the user at appropriate junctures. This is generally implemented as a continual back-and-forth speech dialog for directing the user in the work environment. In a voice-assisted environment, the speech dialog is less intrusive, and may be selectively engaged by a user. Generally, a user will speak a command, such as to request information or a work task, and the voice-enabled device will provide directions, data, or other synthesized speech output to the user in response. Herein, the spoken utterances or speech of a user, which will be utilized to engage in the speech dialog, will be referred to generally as voice commands or speech commands. In the VUI, the voice commands are subject to speech recognition technology to convert the voice command into a system command, such as text or data, in a form that may be utilized in the overall speech-enabled system. Alternatively, the system may provide its own data or text back to a user in what will be referred to herein as a system Prompt. Such system Prompts are in a data form to be processed through the system and are then converted into understandable speech by the text-to-speech features of the VUI to form what is referred to herein as a voice Prompt that may be played and heard by the user. That is, the speech dialog involves voice commands from the user to the device and voice Prompts from the device to the user. In the present invention, the Prompts that are directed or routed from a coached device to a coach device are generally referred to herein as system Prompts. The Prompts may be in any suitable data form to allow the data to be synthesized into speech and heard or listened to by a coach or trainer in accordance with the principles of the invention. Therefore, the terminology utilized herein to categorize the speech dialog is not limiting to the invention.
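
By way of illustration only, the conversions described above might be sketched as follows; the class, helper names, and stub functions are assumptions for readability and are not part of the disclosure.

```python
# Minimal sketch of the speech-dialog terminology above. All helper names are
# illustrative; the patent does not prescribe a particular speech engine or API.

class VoiceUserInterface:
    def __init__(self, recognize, synthesize, play):
        self.recognize = recognize    # speech recognition: audio -> text/data
        self.synthesize = synthesize  # text-to-speech: text/data -> audio
        self.play = play              # output to the headset speaker

    def on_voice_command(self, audio):
        """Voice command -> speech recognition -> system command (text or data)."""
        return self.recognize(audio)

    def on_system_prompt(self, system_prompt):
        """System Prompt (text or data) -> text-to-speech -> voice Prompt (audio)."""
        self.play(self.synthesize(system_prompt))

# Stand-in functions so the sketch runs end to end:
vui = VoiceUserInterface(
    recognize=lambda audio: "coach user b",      # stub recognizer
    synthesize=lambda text: f"<audio: {text}>",  # stub TTS
    play=print,                                  # stub speaker
)
vui.on_system_prompt("Coaching User B. Connected.")
```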


The speech dialog will depend on the specific voice commands of the user and the data that is needed by the voice-enabled device 45, or the information to be provided by the device 45. As may be appreciated, in the disclosed example, the speech dialog could take various different forms to provide, in the example, the information about a resident or a care plan to the user, or to obtain information and data about a resident pursuant to their care plan. The invention is not limited to the specific questions or format of any given speech dialog. The invention is directed to helping a user to learn how to interface through a speech dialog and also to assist another party in coaching or training a user in such an endeavor.


The voice-enabled device may also be utilized to provide the user with audible tones that assist in the interaction with the device 45. The audible tones provide an audible indication about various information or events without directly interrupting the user with a voice dialog. For example, an “all clear” tone may be provided when there are no active pages or reminders in the speech dialog, and an “incoming page” tone may be provided when the user has one or more active pages to listen to. The incoming page may be from another user and may include a recorded voice message similar to a conventional voicemail. However, the page is a silent page in that a public address system (i.e., PA system) need not be utilized, leading to less disruption. Those of ordinary skill in the art will appreciate that other tones may also be supported, and that many variations are consistent with the principles of the present invention in implementing a speech dialog.
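
For illustration only, the tone behavior might be modeled as a simple event-to-tone mapping; only the “all clear” and “incoming page” tones come from the description above, and the event keys and tone identifiers below are hypothetical.

```python
# Illustrative event-to-tone mapping. Only the "all clear" and "incoming page"
# tones are named in the text; the keys and identifiers here are hypothetical.
TONES = {
    "no_active_pages_or_reminders": "all_clear_tone",
    "incoming_page": "incoming_page_tone",
}

def tone_for(event):
    # Return the tone to play for an event, or None to remain silent.
    return TONES.get(event)
```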


As noted earlier, additional features and variations of an exemplary voice-assisted work system that might be utilized to implement the present invention are disclosed in U.S. Patent Application No. 61/114,920, assigned to the same assignee as the present application, which application is incorporated by reference herein as if fully set forth herein, and are also disclosed in the AccuNurse® System available from the assignee.



FIG. 3 illustrates one exemplary embodiment of the invention wherein one voice-enabled device 100 (the coach) and its user can enter a Coaching mode in order to listen to another voice-enabled device 200 (the coached) in accordance with coaching or training the user of that device 200. In this embodiment, the voice-enabled devices 100, 200 are configured generally similarly if not identically. Therefore, no additional or special set-up hardware or software is necessary for training. In fact, the Coaching mode could be configured in the opposite direction (the device 200 listening to the device 100) through a symmetrical process to the one described. Herein, for discussion purposes, the coaching user will be “User A” or “Coach”, and the user that is being coached will be “User B”, or “coached” user.


The voice-enabled device 100 of the coach user (User A) that is coaching another user (User B) includes a voice user interface (VUI) 110 that implements the speech dialog with the coach user. The voice user interface 110 converts the coach user's spoken utterances or voice commands to system data or system commands through speech recognition technology, and converts system data or system Prompts to voice Prompts through text-to-speech technology. The voice Prompts are then played for the user through the headset. In one embodiment, the speakers and microphone associated with playing and receiving speech in the speech dialog are found in the user's headset, shown as portion 50 in FIG. 2. In addition to speech commands, in one embodiment, the VUI 110 also processes manual inputs, such as button commands associated with the one or more buttons 56 on the device portion 55 as shown in FIG. 2. The input/output streams, whether speech (SPK, MIC) or manual (BUTTONS), are indicated as being provided by the user block 130 in FIG. 3. The VUI 110 also sends and receives data and Prompts from the database 115, which acts as a local storage medium for information relevant to the voice-enabled device 100 and the speech dialog for that device 100.


In order to initiate Coaching mode, several options might be used. In one embodiment, the coach user 130 associated with the voice-enabled device 100 issues a spoken voice command, such as “Coach [username]”. For the illustrated example, the user may speak, “Coach User B”. As part of processing this command, the VUI 110 examines a list of available user names on database 115, which may be the same set of names that is available for other user-to-user commands such as paging using the voice-enabled devices. In an alternative embodiment, the list of user names may also be accessible by saying the command “coach” and then using one or more buttons 56 to scroll through the list of available users on database 115. In still another alternative, the coaching may be initiated by button presses only. For example, one or more of the buttons 56 may be used to access a menu wherein “coach” may be selected from the menu with the buttons 56. The buttons may then be used to scroll through a list of available users and select a user. Alternatively, once Coaching mode is selected manually, the user might then use speech to select a user to coach. If the identified user name is in fact the user attempting to initiate coaching, the VUI 110 responds with the speech dialog response, “You are not permitted to coach yourself” and returns to a main menu, aborting the Coaching mode.
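
A minimal sketch of this command handling, under assumed object and method names (the handling of an unrecognized name is also an assumption, since only the self-coaching case is specified above), might look like:

```python
# Sketch of handling a spoken "Coach <username>" command, per the description
# above. Object and method names are illustrative, not from the patent.

def handle_coach_command(spoken_name, current_user, local_db, coaching_module, play):
    names = local_db.get_available_user_names()   # same list used for paging commands
    if spoken_name not in names:
        play("Connection cannot be made at this time.")   # assumed fallback wording
        return
    if spoken_name == current_user:
        play("You are not permitted to coach yourself")   # quoted response above
        return                                            # back to the main menu
    coaching_module.activate(spoken_name)                 # Activate() on module 120
```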


In some situations, the list of available users may not immediately update on device 100. A new or unexpected user may take time, for example up to five minutes, to appear on the list in database 115 and be available for coaching. The device 100 may need to retrieve an updated list from the voice transaction manager server 35.


Assuming a valid username is identified by the spoken “Coach” command, an Activate( ) method 112 is run on a Coaching mode module 120 of device 100. In one embodiment, the Coaching mode module 120 is implemented utilizing a suitable LUA script.


It should be understood that the implementation, as shown in FIG. 3, is illustrative or representative of the functionality of the various voice-enabled devices. Therefore, the figure is not an exact representation of the various hardware and software components of a device that may be used to implement the present invention. As such, the devices 100, 200 will utilize appropriate processing hardware and software for implementing the functionality of the devices in accordance with the principles of the invention. Accordingly, the present invention is not limited to a particular hardware and software configuration, and the various blocks and components in FIG. 3 do not necessarily correspond to, and are not limited to, specific hardware or software components. A person of ordinary skill in the art will understand that the functionality of the present invention might be implemented in a number of different ways in a portable computer device with a suitable processor and appropriate hardware and software components.


The Coaching mode module 120 sends a look-up table request (workerinfo.get( )) to the voice transaction manager 35 to obtain the local network IP address associated with the valid username of the user to be coached, User B. If no local network address is returned, an error message is played to the user 130 and Coaching mode is aborted. If the server 35 returns a network IP address, the Coaching mode module 120 opens or establishes a direct socket connection 150 over the wireless network 40 to the voice-enabled device 200 of the user that is being coached (User B). If a direct socket connection cannot be established, an error message is played to user 130 and Coaching mode is aborted.
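
One possible sketch of this lookup and connection step follows; the workerinfo_get call mirrors the workerinfo.get( ) request described above, while the port number and the specific socket API usage are assumptions.

```python
import socket

# Sketch of the connection setup described above. The lookup mirrors the
# workerinfo.get( ) request to the voice transaction manager; the port number
# is hypothetical, and the generic error wording is taken from the text below.

COACHING_PORT = 5555  # hypothetical port for coaching connections

def connect_to_coached_device(username, transaction_manager):
    address = transaction_manager.workerinfo_get(username)  # local network IP lookup
    if address is None:
        return None, "Connection cannot be made at this time."
    try:
        sock = socket.create_connection((address, COACHING_PORT), timeout=15)
    except OSError:
        return None, "Connection cannot be made at this time."
    return sock, None
```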


The voice-enabled device 100 gives status Prompts to the coaching user, User A, as the connection is sought and established. In one embodiment, the user 130, as shown in FIG. 3, hears a spoken voice Prompt from the VUI 110 at five-second intervals until the socket connection is established or the process times out. If the connection is successfully established within five seconds, a message is played: “Coaching [user name]. Connected. Press the STOP button to exit coaching session.” If establishing the connection takes longer than five seconds, then after five seconds the device 100 will produce the status Prompt, “Coaching [user name]. Connecting, please wait.” After each additional five seconds, the status Prompt “Please wait” is heard. When the connection is thereafter established, another status Prompt is played: “Connected. Press the STOP button to exit coaching session.” If a connection cannot be established after a set period of time, for example 15 seconds, an error message is played through VUI 110 and Coaching mode is aborted.
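
The timing of these status Prompts might be sketched as follows, with the five-second interval and 15-second limit taken from the example above; running the connection attempt on a background thread is an implementation assumption.

```python
import threading

# Sketch of the status-Prompt timing described above (five-second intervals,
# 15-second limit). The background-thread approach is an assumption; the
# patent does not specify how the wait is implemented.

def connect_with_status_prompts(connect, play, username, timeout=15, interval=5):
    result = {}
    worker = threading.Thread(target=lambda: result.update(sock=connect()))
    worker.start()
    worker.join(interval)                      # wait up to five seconds
    if result.get("sock"):
        play(f"Coaching {username}. Connected. "
             "Press the STOP button to exit coaching session.")
        return result["sock"]
    play(f"Coaching {username}. Connecting, please wait.")
    waited = interval
    while waited < timeout:
        worker.join(interval)                  # wait another five seconds
        waited += interval
        if result.get("sock"):
            play("Connected. Press the STOP button to exit coaching session.")
            return result["sock"]
        if waited < timeout:
            play("Please wait.")
    play("Connection cannot be made at this time.")  # generic error, then main menu
    return None
```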


In one embodiment, a generic error message might be played that is the same regardless of the reason for the lack of connection: “Connection cannot be made at this time.” The error message is always followed by return to the main menu of the VUI 110 with the appropriate main menu Prompt or tone as appropriate for the VUI.


The VUI 110 of voice-enabled device 100 continues to process voice commands and manual inputs from the associated user 130 during the establishment of the connection. In one embodiment, if the user 130 presses a STOP or CANCEL button of the voice-enabled device 100, or gives an abort voice command such as the spoken voice command, “Cancel”, the VUI 110 runs a Deactivate( ) method 114 of the Coaching mode module 120 and aborts Coaching mode with a spoken message: “Exiting coaching session”, followed by a return to the main VUI menu with the playing of an appropriate system Prompt or tone. The user 130 might then continue using their voice-enabled device 100 in an appropriate manner for the voice-enabled work.


Once the socket connection 150 is established and Coaching mode is running via Coaching mode module 120, the VUI 110 on the coach device 100 continues to monitor the database 115 as well as monitoring the manual inputs or buttons. In one embodiment, voice recognition capabilities are generally deactivated in device 100 while Coaching mode is active, but the VUI 110 performs a program loop, waiting for a signal from a manual input, such as a CANCEL or STOP button of device 100, that will deactivate Coaching mode. This allows the coach user to speak to the coached user, such as to instruct the coached user on how to interface in the voice dialog or to discuss what responses to give and what words to say, without the speech recognition features of the coach device trying to perform speech recognition on the coach user's speech. While in Coaching mode, VUI 110 might also disable other features of device 100 so as not to interrupt User A (user 130) while User A is coaching and listening to User B (user 230). For example, the coaching device 100 might be configured to not play the audible tones associated with pages or reminders sent to the coach User A as part of the voice-enabled system. Such tones might confuse User A as to whether the page or reminder was intended for coach User A or whether User A is hearing a system Prompt, in the form of an audible tone, forwarded from the coached user (user 230). Instead, VUI 110 processes pages and reminders and plays the appropriate tones for User A (user 130) when Coaching mode is deactivated.
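
A minimal sketch of this coach-side loop, with assumed method names and polling interval, is shown below; the behavior (speech recognition off, buttons still monitored, forwarded Prompts spoken, coach-directed tones deferred) follows the description above.

```python
import time

# Sketch of the coach-side loop while Coaching mode is active. Method names and
# the polling interval are assumptions; the behavior follows the text above.

def coaching_loop(db, buttons, play, deactivate):
    while True:
        if buttons.stop_or_cancel_pressed():
            deactivate()                        # runs Deactivate( ) on module 120
            play("Exiting coaching session")
            return                              # back to the main VUI menu
        prompt = db.pop_forwarded_prompt()      # written by Coaching mode module 120
        if prompt is not None:
            play(prompt)                        # coach hears what the coached user hears
        # Pages and reminders addressed to the coach are held, not sounded,
        # until Coaching mode is deactivated (per the description above).
        time.sleep(0.1)
```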


In an alternative embodiment, some limited speech recognition capabilities might continue to operate in coach mode to allow the coach user to exit Coaching mode with a spoken command such as “Cancel” rather than requiring a manual input. The speech recognition feature in that scenario would then only recognize a limited vocabulary for cancellation purposes.


As part of its operation, the voice-enabled device 200 includes an appropriate coaching support module 220 that receives a notification 212 whenever the VUI 210 handles a Prompt in order to then convey the Prompt to the user 230. This coaching support module 220 processes each notification of a system Prompt, as well as handling the establishment of a connection 150 through the wireless network with a coaching device 100, such as by providing a blocked thread waiting for new socket connections. The coaching support module 220 may serve a limited number of connections by including a limit on the number of open socket connections it will maintain, for example ten. If so, the coaching support module 220 may be configured to decline to activate any additional socket connections once the maximum is reached, which would cause the additional unit to fail to connect as detailed above.


Whenever the coaching support module 220 receives a system Prompt notification 212 through the VUI 210 of device 200, it checks to determine if there are any active socket connections that have been established, which indicate that there are coaching or listening users. If there are one or more coaching users, the module 220 sends information about the system Prompt to each connected coach device, such as device 100, as shown in FIG. 3. If there are no established connections, the system Prompt will be discarded by the module 220.
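
One possible sketch of the coaching support module 220 follows; the port, message framing, and thread structure are assumptions, while the connection limit and the forward-or-discard behavior follow the description above.

```python
import socket
import threading

# Sketch of the coached device's coaching support module (module 220). Port,
# framing, and thread structure are assumptions; the connection limit and the
# forward-or-discard behavior follow the description above.

class CoachingSupportModule:
    MAX_CONNECTIONS = 10  # example limit from the text

    def __init__(self, port=5555):
        self.connections = []
        self.server = socket.create_server(("", port))
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            conn, _addr = self.server.accept()       # blocked thread waiting for coaches
            if len(self.connections) >= self.MAX_CONNECTIONS:
                conn.close()                          # decline extra coaching connections
            else:
                self.connections.append(conn)

    def on_prompt_notification(self, system_prompt):
        if not self.connections:
            return                                    # no coaches: discard the Prompt
        for conn in list(self.connections):
            try:
                conn.sendall(system_prompt.encode() + b"\n")
            except OSError:
                self.connections.remove(conn)         # drop a broken connection
```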


When the Coaching mode module 120 of the coach device 100 receives the notification of a system Prompt over the socket connection 150 from the coaching support module 220 of the coached device 200, the Coaching mode module 120 communicates the system Prompt to the local database 115. The coach VUI 110, which is looping in a software fashion to monitor the database 115, will process the new system Prompt, convert it to a voice Prompt or tone as appropriate, and play it to the coach user 130. In this way, the coach user 130 hears the system Prompts that are also played by the coached device 200 and heard by the coached user 230 (User B). The coach User A thus knows what the coached User B is hearing. This facilitates better training with minimal disruption to User B.


The Coaching mode module 120 continually monitors the socket connection 150 to make sure that it is open. Coaching mode can be discontinued in several ways. If the socket connection 150 is disengaged, such as by network failure or deactivation of the device 200, the Coaching mode module 120 communicates the disconnected status to the database 115. When the VUI 110 queries the database and processes this change in status, it will terminate the Coaching mode and return to the main menu. It may play a message to User A: “Connection lost. Exiting coaching session.” It will then be out of coaching mode and may return to its normal operation.
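
A corresponding sketch of the coach-side receive path and disconnection handling is shown below; the newline framing matches the coached-side sketch above and, like the method names, is an assumption.

```python
# Sketch of the receive path on the coach device: forwarded Prompts arrive over
# the socket as text and are handed to the local database 115 for the VUI loop
# to pick up and speak. Framing and method names are assumptions.

def receive_prompts(sock, db):
    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:                      # socket closed or network failure
            db.mark_connection_lost()     # VUI will exit Coaching mode
            return
        buffer += data
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            db.push_forwarded_prompt(line.decode())
```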


If the coach device 100 is deactivated, such as by being disconnected or put to sleep, Coaching mode is also terminated such that when a user 130 next activates the device 100, it will be at the main VUI menu and not within Coaching mode.


During Coaching mode, in one possible embodiment, the VUI 110 of the coach device 100 is not receiving voice commands from the user 130, but continues to receive manual inputs, such as button inputs from the user 130. That is, the speech recognition feature of the VUI 110 might be disabled. The VUI 110 may continue to adjust the volume of the speakers in response to volume adjustment through use of the buttons 56. If the user 130 presses an appropriate STOP or CANCEL button, the VUI 110 runs the Deactivate( ) method 114 on the Coaching mode module 120 which deactivates the socket connection 150 and terminates Coaching mode with the message: “Exiting coaching session.” The VUI 110 then returns to the main VUI menu of the voice-enabled device 100 and can then provide speech recognition and a speech dialog.


The data or Prompts transferred over the socket connection 150 between the coaching support module 220 and the Coaching mode module 120 are not audio or sound data, but are instead text or the equivalent. Once received, the system Prompts are converted to audible speech for the user 130 by the local VUI 110 of the coach device 100, thus reducing the load on the wireless network. Other sounds associated with the coached device 200, such as prerecorded pages or the commands spoken by the coached user 230, are not transferred to or played by the coach device 100 in one embodiment.


As explained above, the operation of the coached device 200 is, in one embodiment, almost entirely unaltered by being coached in accordance with one feature of the invention. That is, the coaching features provided by the invention do not disrupt the user that is being coached. Whenever the VUI 210 issues a system Prompt to the user 230, a notification of that system Prompt is also automatically sent to the coaching support module 220, which further sends the system Prompts to other units if applicable as noted above. However, the coached user 230 (User B) receives no notice as to whether the system Prompts that they are hearing are being sent, and has no direct or obvious way to tell if the coached device 200 is being coached or being listened to. The coaching process does not affect the coached user's ability to use the coached device 200 in its normal fashion for the various voice-enabled work tasks.


In one embodiment of the invention, a voice-enabled device will only enter Coaching mode if it is not currently being coached itself. If the coaching support module of the device includes any active socket connections (i.e., it is being coached and thus acting as a coached device), the device aborts Coaching mode, plays the generic error message (“Connection cannot be made at this time.”), and returns to the main menu instead. Because this error message is generic and not specific, the user of the voice-enabled device cannot become a coach himself and put his device into Coaching mode, yet may remain unaware that he is being coached. This invisibility of the Coaching mode to the coached user that is provided by the present invention may be desirable in certain training situations.
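
A minimal sketch of this guard, with assumed names, might be:

```python
# Sketch of the guard described above: a device declines to enter Coaching mode
# while its own coaching support module has active (being-coached) connections.
# Names are illustrative.

def try_enter_coaching_mode(support_module, play, activate):
    if support_module.connections:        # this device is itself being coached
        play("Connection cannot be made at this time.")  # generic, non-revealing
        return False                      # back to the main menu
    activate()
    return True
```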


In another embodiment, the user may be notified that the user's voice assistant is being coached by having the coached VUI 210 include some additional output such as a specific prompt or tone, background noises in the audio channel, or a visual indicator that the device is being coached. In training situations, visibility or awareness of the use of coaching may be desirable.


In one embodiment, a device that is currently in Coaching mode (original coach device) may subsequently be contacted by one or more additional coach devices with appropriate connections established. Thus, a coach user may be coached by other coach users in turn. In such a situation, Prompts received by the Coaching mode module of the original coach device and played for the original coach user would also be relayed by the coaching support module of the original coach device to the Coaching mode modules of the additional coach devices. This “layered” use of the Coaching mode would successfully allow additional users to hear the prompts relayed to the original coach device, and may be useful in situations where there is a need to have multiple users involved in coaching or training another user, or there is a need to exceed the established limit (e.g., 10 sockets) on coaching connections as described above. Therefore, a coach may hear the Prompts and part of the dialog of a coached user either by connecting directly to a coached user or by connecting (via a coaching session) to another coach who is connected to the coached user.
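
This layered relay might be sketched, with assumed names, as:

```python
# Sketch of the "layered" relay described above: a Prompt received by a coach
# device is queued for local playback and also handed to that device's own
# coaching support module, so coaches of the coach hear it too (names assumed).

def on_forwarded_prompt(prompt, db, support_module):
    db.push_forwarded_prompt(prompt)               # played to this coach user by the VUI
    support_module.on_prompt_notification(prompt)  # relay to any further coach devices
```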



FIG. 4 illustrates one example of a use of the Coaching mode. In this example, User A is the coach user, and their device is configured to coach or listen to the device of User B, the coached user. Block 300 refers to the system Prompts heard by User A, while block 310 refers to the Prompts and ongoing speech dialog of User B as User B works and performs various tasks.


Although the terms “coaching” and “coached” are used above with respect to one user listening to the Prompts given to another user, it is to be understood that the device may be used during a coaching process in reverse, with the person to be coached listening to the Prompts of the coach as part of the training process. This feature may also have applications outside of the coaching process, and no such usage restriction is intended.


This invention provides a variety of benefits over the training kits of the prior art. A coach or trainer can initiate a coaching session without having to disrupt the user being coached, because nothing needs to be connected to the coached user's device. The coach or trainer does not have to locate, assemble, and wear any sort of listening kit, and can initiate a coaching session in a matter of seconds making use of the equipment already being used as part of the voice-enabled work environment. The coach is able to teach and reinforce best practices of using the mobile device and personalized headset because the coach is using the same equipment that the user is using to navigate through the VUI for the voice-enabled system. Because the connection occurs over a wireless network, the coach does not even need to be in the same location as the user they are coaching, but can connect and listen remotely, which is not possible with loudspeaker-based training kits.


Although the embodiment described above uses two identical voice-enabled devices functioning on the same local area network, the coaching function could also be performed at a distant site and with different equipment, and may be a direct communication between the devices as disclosed above or may be through a server or other intermediary.


The above embodiments are intended to be illustrative and not limiting on the scope of the invention.

Claims
  • 1. A device for a voice-enabled work environment comprising: a network interface operable to communicate with a wireless network; a voice user interface operable to, upon receiving a system Prompt, convert the system Prompt into speech in the form of a voice Prompt for a user, the voice user interface further operable to generate a notification regarding the system Prompt; and upon receiving a voice command from the user, use speech recognition technology to convert the voice command into a system command; and a coaching support module coupled with the voice user interface and configured to monitor the system Prompt notifications generated by the voice user interface, the coaching support module configured to be selectively activated into a mode for being coached by the establishment of a connection of the device to at least one separate coaching device in the wireless network so that the coaching support module, upon receiving notification of a system Prompt, is further operable to automatically forward system Prompts of the device to a separate connected coaching device as the system Prompts are received.
  • 2. The device of claim 1 wherein the coaching support module is configured to serve a limited number of connections to other devices for forwarding the system Prompts.
  • 3. The device of claim 1 wherein the coaching support module is further operable, upon receiving a system Prompt, to determine if a connection has been established to one other device before forwarding the system Prompt.
  • 4. The device of claim 1 wherein the connection is a socket connection to the at least one other device through the wireless network.
  • 5. The device of claim 1 wherein the establishment of a connection of the device to at least one other device is initiated by the at least one other device through the wireless network.
  • 6. A device for a voice-enabled work environment comprising: a network interface operable to communicate with a wireless network; a voice user interface operable to, upon receiving a system Prompt, convert the system Prompt into speech in the form of a voice Prompt for a user; and, upon receiving a voice command from the user, use speech recognition technology to convert the voice command into a system command; and the device having a coaching mode module coupled with the voice user interface and operable to be activated into a coaching mode to coach the user of at least one other device, the coaching mode module, when the coaching mode module is activated, further operable to obtain a network address for the at least one other device and to establish a connection to the at least one other device over a wireless network for coaching the other device; the coaching mode module operable to receive system Prompts from the at least one other device that are sent over the wireless network connection from the at least one other device and to provide the system Prompts for the voice user interface to be output as a voice Prompt for the user of the device.
  • 7. The device of claim 6 wherein the voice user interface selectively deactivates the speech recognition technology for a voice command when the coaching mode is activated.
  • 8. The device of claim 6 wherein, when the coaching mode module is activated, the voice user interface only converts system Prompts that are received from the at least one other device into speech in the form of a voice Prompt.
  • 9. The device of claim 6 wherein the coaching mode module is activated through a voice command from the user to the voice user interface.
  • 10. The device of claim 6 further comprising a manual input component wherein the coaching mode module is activated through a manual input from the user.
  • 11. The device of claim 6 wherein the coaching mode module is operable to obtain information for establishing the connection through interfacing with a wireless network.
  • 12. The device of claim 6 wherein the device is further operable for terminating the connection to the at least one other device when the coaching mode module is deactivated.
  • 13. The device of claim 6 further comprising a manual input component, the coaching mode module being deactivated through at least one of a voice command or a manual input from the user.
  • 14. The device of claim 6 wherein the connection is a socket connection to the at least one other device through the wireless network.
  • 15. A voice-enabled work system, comprising: a wireless network; at least two voice-enabled devices, a first device for a user to coach with and a second device for a user to be coached, the first and second devices configured for communicating over the wireless network, each of the at least two devices including: a voice user interface operable to, upon receiving a system Prompt, convert the Prompt into speech in the form of a voice Prompt for a user, and, upon receiving a voice command from the user, use speech recognition technology to convert the voice command into a system command; the first device having a coaching mode module that is operable for being selectively activated into a coaching mode and further operable, when the coaching mode module is activated, to obtain a network address for the second device and to establish a connection with the second device over the wireless network, the first device operable to receive system Prompts from the second device to be output as voice Prompts to the user of the first device; the second device including a coaching support module operable for detecting a connection with the first device and, upon detecting a connection, operable for being activated into a mode for being coached and automatically forwarding system Prompts to a connected first device as the system Prompts are received by the voice user interface and coaching support module of the second device.
  • 16. The system of claim 15 wherein the second device is configured to detect connections with multiple devices and to forward system Prompts to multiple connected devices.
  • 17. The system of claim 15 wherein the connection is a socket connection through the wireless network.
  • 18. The system of claim 15 wherein the first device is operable to convert a received system Prompt from the second device into speech as a voice Prompt for the user.
  • 19. The system of claim 15 wherein the first device selectively deactivates the speech recognition technology of the first device for a voice command when the coaching mode module is activated into coaching mode.
  • 20. The system of claim 15 wherein, when the coaching mode module is activated into coaching mode, the first device only converts system Prompts that are received from the second device into speech in the form of a voice Prompt.
  • 21. The system of claim 15 wherein the coaching mode module of the first device is activated through a voice command from the user to the voice user interface.
  • 22. The system of claim 15 wherein the first device includes a manual input component wherein the coaching mode module is activated into coaching mode through a manual input to the first device from the user.
  • 23. The system of claim 15 wherein the coaching mode module is operable to obtain information for establishing the connection through interfacing with a wireless network.
  • 24. The system of claim 15 wherein the first device is further operable for terminating the connection to the second device when the coaching mode module is deactivated.
  • 25. The system of claim 15 wherein the first device includes a manual input component, the coaching mode module being deactivated through at least one of a voice command or a manual input to the first device from the user.
  • 26. The system of claim 15 wherein the connection is a socket connection to the second device through the wireless network.
  • 27. A method for training a user in a voice-enabled work environment, comprising: establishing communication over a wireless network between at least two voice-enabled devices, a first device for a user to coach with and a second device for a user to be coached, each of the at least two devices operable to, upon receiving a system Prompt, convert the system Prompt into speech, and, upon receiving a voice command from the user, use speech recognition technology to convert the voice command; selectively activating the first device into a coaching mode and establishing a coaching connection with the second device over the wireless network; at the second device, detecting a coaching connection with the first device; if a coaching connection is detected, activating the second device into a mode for being coached and automatically forwarding system Prompts of the second device to a connected first device as the system Prompts are received by the second device so the system Prompts might be converted to speech at both devices; at the first device, receiving system Prompts from the second device and converting them to speech at the first device for the user of the first device.
  • 28. The method of claim 27 further comprising at the second device, detecting a coaching connection with multiple devices and forwarding system Prompts to multiple connected devices.
  • 29. The method of claim 27 further comprising selectively deactivating the speech recognition technology of the first device when the coaching mode is activated.
  • 30. The method of claim 27 further comprising, when the coaching mode is activated, converting into speech at the first device only system Prompts that are received from the second device.
  • 31. The method of claim 27 further comprising activating the coaching mode through a voice command from the user.
  • 32. The method of claim 27 further comprising activating the coaching mode through a manual input to the first device from the user.
  • 33. The method of claim 27 further comprising terminating the connection to the second device when the coaching mode is deactivated.
  • 34. The method of claim 27 further comprising deactivating the coaching mode through at least one of a voice command or a manual input to the first device from the user.
RELATED APPLICATION

This Application is related to and claims the benefit of U.S. Provisional Patent Application Ser. No. 61/114,820, entitled “TRAINING/COACHING SYSTEM FOR A VOICE-ENABLED WORK ENVIRONMENT”, filed on Nov. 14, 2008, which application is incorporated by reference herein.

6373942 Braund Apr 2002 B1
6374126 MacDonald, Jr. et al. Apr 2002 B1
6376942 Burger Apr 2002 B1
D457133 Yoneyama May 2002 S
6384591 Estep May 2002 B1
6384712 Goldman May 2002 B1
6384982 Spitzer May 2002 B1
6386107 Rancourt May 2002 B1
6394278 Reed May 2002 B1
6404325 Heinrich Jun 2002 B1
6422476 Ackley Jul 2002 B1
6424357 Frulla Jul 2002 B1
6429775 Martinez et al. Aug 2002 B1
6434251 Jensen et al. Aug 2002 B1
6434526 Cilurzo Aug 2002 B1
6438523 Oberteuffer et al. Aug 2002 B1
6445175 Estep Sep 2002 B1
6454608 Kitahara Sep 2002 B1
D463784 Taylor et al. Oct 2002 S
6486769 McLean Nov 2002 B1
D466497 Wikel Dec 2002 S
D467592 Hussaini Dec 2002 S
6496799 Pickering Dec 2002 B1
6500581 White et al. Dec 2002 B2
6501807 Chieu Dec 2002 B1
D468730 Wong et al. Jan 2003 S
D469080 Kohli Jan 2003 S
6504914 Brademann et al. Jan 2003 B1
6509546 Egitto Jan 2003 B1
6511770 Chang Jan 2003 B2
6523752 Nishitani et al. Feb 2003 B2
6525648 Kubler Feb 2003 B1
6529880 McKeen Mar 2003 B1
6532148 Jenks Mar 2003 B2
6560092 Itou et al. May 2003 B2
D475996 Skulley Jun 2003 S
D476297 Schwimmer Jun 2003 S
6574672 Mitchell et al. Jun 2003 B1
6581782 Reed Jun 2003 B2
6595420 Wilz, Sr. et al. Jul 2003 B1
6597465 Jarchow Jul 2003 B1
6607134 Bard et al. Aug 2003 B1
6608551 Anderson Aug 2003 B1
D480074 Tuhkanen Sep 2003 S
6628509 Kono Sep 2003 B2
6639509 Martinez Oct 2003 B1
D483281 Cobigo Dec 2003 S
D483369 Klemettila Dec 2003 S
D483370 Klemettila Dec 2003 S
6658130 Huang Dec 2003 B2
6660427 Hukill Dec 2003 B1
6663410 Revis Dec 2003 B2
6677852 Landt Jan 2004 B1
D487064 Stekelenburg Feb 2004 S
6697465 Goss Feb 2004 B1
D487276 Cobigo Mar 2004 S
D487470 Cobigo Mar 2004 S
6710701 Leatherman Mar 2004 B2
D488146 Minto Apr 2004 S
D488461 Okada Apr 2004 S
6731771 Cottrell May 2004 B2
D491917 Asai Jun 2004 S
D491953 Arakaki et al. Jun 2004 S
D492295 Glatt Jun 2004 S
6743535 Yoneyama Jun 2004 B2
6745014 Seibert Jun 2004 B1
6749960 Takeshita Jun 2004 B2
6754361 Hall Jun 2004 B1
6754632 Kalinowski et al. Jun 2004 B1
D494571 Polito Aug 2004 S
6769762 Saito et al. Aug 2004 B2
6769767 Swab et al. Aug 2004 B2
6772454 Barry Aug 2004 B1
6778676 Groth et al. Aug 2004 B2
6811088 Lanzaro et al. Nov 2004 B2
6812852 Cesar Nov 2004 B1
6816063 Kubler Nov 2004 B2
6826532 Casby et al. Nov 2004 B1
6830181 Bennett Dec 2004 B1
6847336 Lemelson Jan 2005 B1
6853294 Ramamurthy Feb 2005 B1
6859134 Heiman et al. Feb 2005 B1
6872080 Pastrick Mar 2005 B2
6890273 Perez May 2005 B1
D506065 Sugino et al. Jun 2005 S
6909546 Hirai Jun 2005 B2
6910911 Mellott et al. Jun 2005 B2
D507523 Resch et al. Jul 2005 S
6915258 Kontonassios Jul 2005 B2
6934675 Glinski Aug 2005 B2
6965681 Almqvist Nov 2005 B2
D512417 Hirakawa et al. Dec 2005 S
D512718 Mori Dec 2005 S
D512985 Travers et al. Dec 2005 S
6971716 DePaulis et al. Dec 2005 B2
6982640 Lindsay Jan 2006 B2
7003464 Ferrans et al. Feb 2006 B2
D517556 Cho Mar 2006 S
D518451 Nussberger Apr 2006 S
D519497 Komiyama Apr 2006 S
7028265 Kuromusha et al. Apr 2006 B2
7052799 Zatezalo et al. May 2006 B2
D522897 Kellond Jun 2006 S
7063263 Swartz et al. Jun 2006 B2
D524794 Kim Jul 2006 S
D525237 Viduya et al. Jul 2006 S
7082393 Lahr Jul 2006 B2
7085543 Nassimi Aug 2006 B2
7099464 Lucey et al. Aug 2006 B2
D528031 Kellond Sep 2006 S
7110800 Nagayasu et al. Sep 2006 B2
7110801 Nassimi Sep 2006 B2
D529438 Viduya et al. Oct 2006 S
D529447 Greenfield Oct 2006 S
7117159 Packingham et al. Oct 2006 B1
D531586 Poulet Nov 2006 S
7143041 Sacks et al. Nov 2006 B2
D533184 Kim Dec 2006 S
7145513 Cohen Dec 2006 B1
7146323 Guenther et al. Dec 2006 B2
D535974 Alwicker et al. Jan 2007 S
D536692 Alwicker et al. Feb 2007 S
D537978 Chen Mar 2007 S
7194069 Jones et al. Mar 2007 B1
D539816 Aoki Apr 2007 S
7216351 Maes May 2007 B1
D543994 Kurihara Jun 2007 S
7228429 Monroe Jun 2007 B2
D548220 Takagi Aug 2007 S
D549216 Viduya Aug 2007 S
D549217 Viduya Aug 2007 S
D549694 Viduya et al. Aug 2007 S
7257537 Ross et al. Aug 2007 B2
D551615 Wahl Sep 2007 S
D552595 Viduya et al. Oct 2007 S
D558761 Viduya et al. Jan 2008 S
D558785 Kofford Jan 2008 S
7319740 Engelke Jan 2008 B2
7346175 Hui et al. Mar 2008 B2
D565569 Viduya et al. Apr 2008 S
D567218 Viduya et al. Apr 2008 S
D567219 Viduya et al. Apr 2008 S
D567799 Viduya et al. Apr 2008 S
D567806 Viduya et al. Apr 2008 S
D568881 Hsiau May 2008 S
D569358 Devenish, III et al. May 2008 S
D569876 Griffin May 2008 S
7369991 Manabe et al. May 2008 B2
D571372 Brefka et al. Jun 2008 S
D572655 Osiecki Jul 2008 S
D573577 Huang Jul 2008 S
7398209 Kennewick et al. Jul 2008 B2
7413124 Frank et al. Aug 2008 B2
D583827 Wahl Dec 2008 S
D587269 Keeports Feb 2009 S
7487440 Gergic et al. Feb 2009 B2
7496387 Byford et al. Feb 2009 B2
7519196 Bech Apr 2009 B2
D593066 Sheba et al. May 2009 S
7604765 Sugimoto et al. Oct 2009 B2
D609246 Wahl Feb 2010 S
D612856 Wahl et al. Mar 2010 S
D626949 Wahl et al. Nov 2010 S
8011327 Mainini et al. Sep 2011 B2
8086463 Ativanichayaphong et al. Dec 2011 B2
8128422 Mellott et al. Mar 2012 B2
20010017926 Viamini Aug 2001 A1
20010046305 Muranami Nov 2001 A1
20020003889 Fischer Jan 2002 A1
20020015008 Kishida Feb 2002 A1
20020021551 Kashiwagi Feb 2002 A1
20020044058 Heinrich Apr 2002 A1
20020076060 Hall Jun 2002 A1
20020131616 Bronnikov Sep 2002 A1
20020178344 Bourguet Nov 2002 A1
20030095525 Lavin May 2003 A1
20030130852 Tanaka Jul 2003 A1
20030233165 Hein Dec 2003 A1
20040024586 Andersen Feb 2004 A1
20040063475 Weng Apr 2004 A1
20040091129 Jensen May 2004 A1
20040220686 Cass Nov 2004 A1
20050010418 McNair Jan 2005 A1
20050095899 Mellott May 2005 A1
20050230388 Wu Oct 2005 A1
20050272401 Zatezalo Dec 2005 A1
20060044112 Bridgelall Mar 2006 A1
20070080930 Logan Apr 2007 A1
20070221138 Mainini Sep 2007 A1
20080072847 Liao Mar 2008 A1
20090134226 Stobbe May 2009 A1
Foreign Referenced Citations (9)
Number Date Country
04138886 Apr 1993 DE
00732817 Sep 1996 EP
1383029 Jan 2004 EP
1531418 May 2005 EP
02242099 Sep 1991 GB
WO0041543 Jul 2000 WO
WO02069320 Sep 2002 WO
WO2005008476 Jan 2005 WO
WO2007044755 Jan 2007 WO
Non-Patent Literature Citations (13)
US 6,335,860, 01/2002, Shin (withdrawn)
Two-page Retail Technology article entitled "Vocollect is the perfect pick at Nisa-Today's," published Dec. 31, 2004 by Business Media Ltd.; retrieved from Internet www.retailtechnology.co.uk/CaseStudies/vocollect.htm on Jan. 9, 2009.
Seven-page International Search Report and Written Opinion mailed Jul. 6, 2010 for PCT/US2009/064344.
One-page Peripheral PDF's: http://cgi.ebay.com/USB-wireless-MOUSE-w-receiver-pocket-USB-HUB-PS-2-si—W0QQitemZ110007231278QQihZ001QQcategoryZ60264QQcmDZViewItem, Nov. 16, 2006.
Six-page www.vocollect.com—Vocollect PDF brochure, Nov. 2005.
Four-page Vocollect Speech Recognition Headsets brochure: Clarity and comfort. Reliable performance. Copyright Sep. 2005.
Four-page Vocollect Speech Recognition Headsets brochure—SR 30 Series Talkman High-Noise Headset. Copyright 2005.
Two-page Vocollect SR 20 Talkman Lightweight Headset Product Information Sheet. Copyright Aug. 2004.
Photographs 1-7 SR Talkman Headset Aug. 2004—Prior art.
Two-page Supplemental Vocollect SR 20, Talkman Lightweight Headset Product Information Sheet. Copyright Aug. 2004.
Fifteen-page Takebayashi, "Spontaneous Speech Dialogue System TOSSBURG II: The User-Centered Multimodal Interface," published Nov. 15, 1995.
Four-page Wang, “SALT: A Spoken Language Interface for Web-based Multimodal Dialog Systems,” In Proc. ICSLP, 2002, pp. 2241-2244.
Three-page Bers, et al. “Designing Conversational Interfaces with Multimodal Interaction”, DARPA Workshop on Broadcast News Understanding Systems, 1998, pp. 319-321.
Related Publications (1)
Number Date Country
20100125460 A1 May 2010 US
Provisional Applications (1)
Number Date Country
61114920 Nov 2008 US