The systems and methods described herein relate to speech systems, and more particularly to canceling a speech interaction session.
Computer operating systems and the user interfaces associated with them have evolved over several years into very complex software programs that are difficult to learn and master, which makes it hard for users to leverage the programs' full potential. Many operating systems include a speech interface through which people can communicate and express ideas and commands.
Most operating systems that utilize a speech interface provide a low-level interface that allows speech-enabled applications to work with the operating system. Such a low-level interface provides only basic speech functionality to the speech-enabled applications. Consequently, each speech-enabled application must provide a higher-level interface to the user. As a result, each speech-enabled application may differ from other speech-enabled applications from the user's perspective, and the user may have to interact differently with each one. This makes it difficult for the user to work with multiple speech-enabled applications and limits the user's computing experience.
In addition, speech interaction systems may permit users to initiate an interaction session using electromechanical mechanisms such as, e.g., pushing a button, or by making a spoken request to the system to initiate a session. Most speech interaction systems require a user to terminate a session by issuing a voice command such as, e.g., “cancel”, “goodbye”, or “finished”. Alternate mechanisms for terminating a session are desirable.
Described herein are systems and methods for canceling a speech interaction session. The systems and methods permit a speech interaction session to be canceled using physical mechanisms such as, e.g., pressing a button on a keyboard or an electronic device for a predetermined time period or according to a predetermined sequence.
In an exemplary implementation a method of canceling a speech interaction session is provided. The exemplary method comprises receiving a signal indicating that a predetermined switch has been set to a first state; monitoring a time parameter indicative of a time the switch remains in the first state; and canceling the speech interaction session if the time parameter exceeds a threshold.
Described herein are exemplary systems and methods for canceling a speech interaction session. The methods described herein may be embodied as logic instructions on a computer-readable medium. When executed on a processor, the logic instructions cause a general purpose computing device to be programmed as a special-purpose machine that implements the described methods. The processor, when configured by the logic instructions to execute the methods recited herein, constitutes structure for performing the described methods.
Exemplary Speech System
The speech system 100 further includes a speech engine 110 that has an input device such as a microphone 112 and an output device such as a speaker 114. Various other hardware components 116 utilized by the speech system 100 but not specifically mentioned herein are also included.
The speech system 100 also includes memory 118 typically found in computer systems, such as random access memory (RAM). The memory 118 stores an operating system 120. A speech object 122 that is stored in the memory 118 is shown separate from the operating system 120. However, it is noted that the speech object 122 and its components may also be a part of the operating system 120.
The memory 118 may store a first speech-enabled application, Application A 124 and a second speech-enabled application, Application B 126. Application A 124 is associated with a first listener object, Listener A 128 and Application B 126 is associated with a second listener object, Listener B 130. Listener A 128 includes a listener interface 132 by which Listener A 128 communicates with the speech object 122. Listener A 128 also includes a listener grammar 133 that is a unique speech grammar local to Listener A 128. Listener B 130 also includes the listener interface 132 through which Listener B 130 communicates with the speech object 122. Listener B 130 also includes a listener grammar 135 that is a unique speech grammar local to Listener B 130.
Application A 124 includes a communication path 125 that Application A 124 utilizes to communicate with Listener A 128. Similarly, Application B 126 includes a communication path 127 that Application B 126 utilizes to communicate with Listener B 130. The communication paths 125, 127 may comprise a common interface between the speech-enabled applications 124, 126 and the listener objects 128, 130, or they may comprise a private communication path accessible only by the respective speech-enabled application 124, 126 and listener object 128, 130. The communication paths 125, 127 may remain inactive until the speech-enabled applications 124, 126 activate them and request attention from the corresponding listener objects 128, 130. Additionally, the communication paths 125, 127 may provide either one-way or two-way communication between the speech-enabled applications 124, 126 and the listener objects 128, 130.
A speech manager 134 is stored in the memory 118 and is the main speech desktop object; it controls the main thread of the speech object 122. The speech manager 134 controls communications with the listener objects, including dispatching appropriate events. The speech manager 134 exposes a speech manager interface 136 to speech-enabled applications and a speech site interface 140 to the listener objects. A system grammar 138 included in the speech manager 134 provides a global speech grammar for the speech system 100. A listener table 142 stored in the speech manager 134 maintains a list of currently loaded and executing listeners (in this example, Listener A 128 and Listener B 130).
The speech object 122 also includes a “What Can I Say?” (WCIS) manager 144 and a configuration manager 146. The WCIS manager 144 provides access to a “What Can I Say?” (WCIS) user interface 148 and includes a Speech WCIS interface 150 that the WCIS manager 144 uses to communicate with the speech object 122.
It is noted that the elements depicted in
Speech Manager Interface
As previously noted, the speech manager 134 exposes the speech manager interface 136 to one or more speech-enabled applications, such as Application A 124 and Application B 126. The following discussion of the speech manager interface 136 refers to the speech manager interface 136 as (interface) ISpDesktop 136. ISpDesktop 136 is the nomenclature utilized in one or more versions of the WINDOWS family of operating systems provided by MICROSOFT CORP. Such a designation in the following discussion is for exemplary purposes only and is not intended to limit the platform described herein to a WINDOWS operating system.
The following is an example of the ISpDesktop 136 interface.
Interface ISpDesktop
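(The original code listing does not survive in this text. What follows is a minimal C++/COM sketch reconstructed from the method descriptions below; the exact signatures, the parameter to “Run”, and the IUnknown base are assumptions rather than the actual WINDOWS declarations.)

#include <windows.h>  // HRESULT, BOOL, IUnknown

// Hypothetical reconstruction of the ISpDesktop 136 interface.
struct ISpDesktop : public IUnknown
{
    // Sets up the listener connections to the speech engine 110 and
    // initializes each active listener via ISpDesktopListener::Init( ).
    virtual HRESULT Init() = 0;
    // Activates (fRun = TRUE) or deactivates (fRun = FALSE) the speech
    // system 100 functionality.
    virtual HRESULT Run(BOOL fRun) = 0;
    // Instructs the speech system 100 to display the configuration
    // user interface 152.
    virtual HRESULT Configure() = 0;
};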
The “Init” (initialization) method first sets up the listener connections to the speech engine 110. Once this connection is established, each listener object that is configured by the user to be active (listeners can be inactive if the user has decided to “turn off” a listener via the configuration mechanism) is initialized via a call to the ISpDesktopListener::Init( ) method. Each listener object is given a connection to the speech engine 110 so that it can load its speech grammars and set up the notification system.
The “Run” method activates and/or deactivates the speech system 100 functionality. The “Run” method is typically associated with a graphical user interface element or a hardware button to put the system in an active or inactive state.
The “Configure” method instructs the speech system 100 to display the configuration user interface 152, an example of which is shown in
Listener Interface
As previously noted, each listener object 128, 130 in the speech system 100 exposes the listener interface 132. An exemplary listener interface 132 is shown and described below. The following discussion of the listener interface 132 refers to the listener interface 132 as (interface) ISpDesktopListener 132. ISpDesktopListener 132 is the nomenclature utilized in one or more versions of the WINDOWS family of operating systems provided by MICROSOFT CORP. Such a designation in the following discussion is for exemplary purposes only and is not intended to limit the platform described herein to a WINDOWS operating system.
The following is an example of the ISpDesktopListener 132 interface.
Interface ISpDesktopListener
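(Again the original listing is not reproduced here; this continues the C++ sketch above, assembled from the method descriptions that follow. The parameter types, in particular the recognition-context type passed to Init, are assumptions.)

struct ISpDesktopListenerSite;  // speech site interface 140, described below
struct ISpWCIS;                 // WCIS interface 150, described below
struct ISpRecoContext;          // connection to the speech engine 110 (assumed type)

// Hypothetical reconstruction of the ISpDesktopListener 132 interface.
struct ISpDesktopListener : public IUnknown
{
    // Gives the listener a connection to the speech engine 110 so it can
    // load its speech grammars, plus a site pointer for communicating
    // back with the speech manager 134.
    virtual HRESULT Init(ISpDesktopListenerSite *pSite,
                         ISpRecoContext *pRecoCtxt) = 0;
    // The speech system 100 has been deactivated or reactivated.
    virtual HRESULT Suspend() = 0;
    virtual HRESULT Resume() = 0;
    // A new speech-enabled application has received focus; the listener
    // activates or deactivates its grammar accordingly.
    virtual HRESULT OnFocusChanged(HWND hwndFocus) = 0;
    // The WCIS manager 144 requests the phrases this listener can accept.
    virtual HRESULT WhatCanISay(ISpWCIS *pWCIS) = 0;
};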
The “Suspend” method notifies the listeners 128, 130 that the speech system 100 is deactivated. Conversely, the “Resume” method notifies the listeners 128, 130 that the speech system 100 is activated. The listeners 128, 130 can use this information to tailor their particular behavior (e.g., don't update the speech grammars if the speech system 100 is not active).
The “OnFocusChanged” method informs a particular listener 128, 130 that a new speech-enabled application 124 has focus (i.e., a user has highlighted the new speech-enabled application 124). The listener 128 associated with the newly focused speech-enabled application 124 uses this information to activate its grammar. Conversely, a previously active listener (e.g., Listener B 130) that loses focus when focus changes to the newly focused speech-enabled application 124 uses the information to deactivate its grammar.
The “What Can I Say” method is used by the WCIS manager 144 to notify each listener 128, 130 that a user has requested the WCIS user interface 148 to be displayed. As previously mentioned, the WCIS user interface 148 is shown in
WCIS Interface
The “What Can I Say?” (WCIS) interface 150 is implemented by the “What Can I Say?” user interface 148 and is used by the listeners 128, 130 to update their WCIS information in that dialog. An exemplary WCIS interface 150 is shown and described below. The following discussion of the WCIS interface 150 refers to the WCIS interface 150 as (interface) ISpWCIS 150. ISpWCIS 150 is the nomenclature utilized in one or more versions of the WINDOWS family of operating systems provided by MICROSOFT CORP. Such a designation in the following discussion is for exemplary purposes only and is not intended to limit the platform described herein to a WINDOWS operating system.
The following is an example of the ISpWCIS 150 interface.
Interface ISpWCIS
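(A minimal sketch only: the description below names the final three parameters, so any leading parameters, the method name, and the enumerator type used here are placeholders rather than the actual declaration.)

// Hypothetical reconstruction of the ISpWCIS 150 interface.
struct ISpWCIS : public IUnknown
{
    // Updates one category of the WCIS user interface 148. Any leading
    // parameters are not recoverable from the description and are omitted.
    virtual HRESULT UpdateWCIS(BSTR bstrTitle,             // category title
                               ULONG cWCISInfo,            // number of phrases
                               IEnumString *pEnumWCISInfo  // phrases to display
                               ) = 0;
};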
The final three parameters (bstrTitle, cWCISInfo, pEnumWCISInfo) are used to display a category title in the WCIS user interface 148 (bstrTitle) and to retrieve the actual phrases to be displayed under this category (cWCISInfo and pEnumWCISInfo).
Speech Site Interface
The speech site interface 140 is implemented by the speech manager 134 and provides the listeners 128, 130 (in ISpDesktopListener::Init( )) a way to communicate back with the speech manager 134. An exemplary speech site interface 140 is shown and described below. The following discussion of the speech site interface 140 refers to the speech site interface 140 as (interface) ISpDesktopListenerSite 140. ISpDesktopListenerSite 140 is the nomenclature utilized in one or more versions of the WINDOWS family of operating systems provided by MICROSOFT CORP. Such a designation in the following discussion is for exemplary purposes only and is not intended to limit the platform described herein to a WINDOWS operating system.
The following is an example of the ISpDesktopListenerSite 140 interface.
Interface ISpDesktopListenerSite
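(A minimal sketch assembled from the parameter descriptions below. TfLBBalloonStyle is the Cicero balloon-style enumeration from the WINDOWS Text Services Framework headers; its use as the first parameter here is an assumption.)

// Stand-in for the Cicero balloon-style enumeration; the real
// enumerators are defined in the WINDOWS SDK headers.
enum TfLBBalloonStyle { /* values defined in the SDK headers */ };

// Hypothetical reconstruction of the ISpDesktopListenerSite 140 interface.
struct ISpDesktopListenerSite : public IUnknown
{
    // Displays feedback text informing the user of a pending action.
    // pszFeedback and cch are the feedback text and its length in
    // characters.
    virtual HRESULT TextFeedback(TfLBBalloonStyle style,
                                 const WCHAR *pszFeedback,
                                 ULONG cch) = 0;
};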
The TextFeedback method is used by the listeners 128, 130 to inform a user of pending actions. For example, a “Program Launch” listener uses this method to inform the user that it is about to launch an application. This is very useful when starting up a new application takes some time, as it assures the user that an action was taken. The TfLBBalloonStyle parameter is used by a WINDOWS component (called Cicero) to communicate the text to any display object that is interested in this information. The pszFeedback and cch parameters are the feedback text and its length in characters, respectively.
Additional information about the speech system 100 is disclosed in U.S. Patent Application Publication No. 2003/0235818, entitled SPEECH PLATFORM ARCHITECTURE, assigned to Microsoft Corporation of Redmond, Wash., USA, the disclosure of which is incorporated herein by reference in its entirety.
The user-input devices 206 can include any device allowing a computer to receive a developer's input, such as a keyboard 210, other device(s) 212, and a mouse 214. The other device(s) 212 can include a touch screen, a voice-activated input device, a track ball, and any other device that allows the system 200 to receive input from a user. The computer 208 includes a processing unit 216 and random access memory and/or read-only memory 218. Memory 218 includes an operating system 220 for managing operations of computer 208 and one or more application programs, such as speech interaction module 222, speech interaction cancellation module 224, and other application modules 226. Memory 218 may further include XML data files 228 and an operation log 230. The computer 208 communicates with a user and/or a developer through the screen 204 and the user-input devices 206. Operation of the speech interaction cancellation module 224 is explained in greater detail below.
Exemplary Operations
At operation 310 a key signal is received indicating the state of the input device. For the purposes of this description, the input device functions as a switch that can assume one of two logical states, i.e., pressed or not pressed. The key signal indicates the state of the input device. In a computer-based implementation, the key may be a key on the keyboard 210, a button on the mouse 214 or on another input device, a “soft” button on a touch screen, or a dialog button activated by a mouse click. The signal generated by the input device is passed to the operating system 220, which ultimately passes the signal to the speech interaction cancellation module 224.
At operation 315 the key signal is monitored to determine whether the input device, designated as a key in the drawing, is in the “down” position, i.e., whether the key or button is depressed. It will be appreciated that the designation of “down” is arbitrary and is based on conventional input device design, in which buttons are normally biased in an “up” direction and are depressed by a user to generate a signal indicating that the input device has been activated. If the input device is not in the down position, then the speech interaction cancellation module 224 implements a loop that monitors the state of the input device.
By contrast, if the input device is in the down position, then control passes to operation 320 and a flag is set indicating that the input device is being held, i.e., is in the down position. At operation 325 a timestamp reflecting the time at which the key was depressed is recorded. The timestamp may be stored in a suitable memory location in either volatile or non-volatile memory.
Optionally, at operation 330 a timer is started. Operation 330 is unnecessary in a computer system that has a system clock. In such a system, recording the timestamp at operation 325 effectively starts a timer.
If the key remains down for a time period that exceeds the threshold, then control passes to operation 430 and the current speech interaction session is canceled. In an exemplary implementation, canceling the speech interaction session includes canceling all operations executed by the user since the beginning of the session. For example, assume that the speech interaction module interacts with one or more application modules 226, which permit a user to manipulate one or more data files 228. Upon cancellation of the speech interaction session, any changes made to the data files 228 are “undone”. This may be accomplished by maintaining an operation log 230 in the system memory 218 that records any changes made to the data files 228 during the speech interaction session, and reversing the operations recorded in the log when a session is canceled. At operation 435 the keyheld flag is set to FALSE and operations of the speech interaction cancellation module 224 can terminate or return to the monitoring operations of
Referring back to operation 415, if the key has not yet been held in the down position for the time duration required to invoke cancellation operations 430-435, then control passes back to operation 415 and monitoring continues. If the key is no longer in the down position, then control passes to operation 440 and the timer is stopped. Operation 445 implements a redundant check to determine whether the key remained down for a time period that exceeds the threshold; if so, control passes to operations 430-435 and the current speech interaction session is canceled.
By contrast, if at operation 445 the time period for which the key was down did not exceed the threshold, then control passes to optional operation 450 and the timer is reset. If the system clock is used, then operation 450 is unnecessary because the timestamp will be reset in a subsequent operation. At operation 455 a new speech interaction session may be initiated.
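As a concrete illustration, the hold-to-cancel flow of operations 310-330 and 410-455 might be sketched in C++ as follows. This is a minimal sketch under stated assumptions, not the patented implementation: the threshold value and the helpers IsKeyDown, UndoLoggedOperations, CancelSession, and StartNewSession are hypothetical.

#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

constexpr auto kHoldThreshold = std::chrono::seconds(2);  // assumed threshold

bool IsKeyDown();             // key signal from the operating system 220 (assumed)
void UndoLoggedOperations();  // reverse changes recorded in the operation log 230
void CancelSession();         // cancel the current speech interaction session
void StartNewSession();       // operation 455

void MonitorCancelKey()
{
    // Operation 315: loop until the input device is in the down position.
    while (!IsKeyDown())
        std::this_thread::sleep_for(std::chrono::milliseconds(10));

    // Operations 320-325: set the key-held flag and record a timestamp.
    // On a system with a clock, the timestamp itself serves as the timer.
    bool keyHeld = true;
    const auto pressedAt = Clock::now();

    while (keyHeld) {
        if (Clock::now() - pressedAt > kHoldThreshold) {
            // Operations 430-435: cancel the session, undo the logged
            // changes, and clear the key-held flag.
            UndoLoggedOperations();
            CancelSession();
            keyHeld = false;
        } else if (!IsKeyDown()) {
            // Operations 440-445: key released before the threshold; stop
            // timing and make the redundant threshold check.
            if (Clock::now() - pressedAt > kHoldThreshold) {
                UndoLoggedOperations();
                CancelSession();
            } else {
                StartNewSession();  // operations 450-455
            }
            keyHeld = false;
        }
    }
}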
The operations of
In alternate implementations the operations of
At operation 510 a key signal is received indicating the state of the input device. For the purposes of this description, the input device functions as a switch that can assume one of two logical states, i.e., pressed or not pressed. The key signal indicates the state of the input device. In a computer-based implementation, the key may be a key on the keyboard 210, a button on the mouse 214 or on another input device, a “soft” button on a touch screen, or a dialog button activated by a mouse click. The signal generated by the input device is passed to the operating system 220, which ultimately passes the signal to the speech interaction cancellation module 224.
At operation 515 the key signal is monitored to determine whether the input device, designated as a key in the drawing, is in the “down” position, i.e., whether the key or button is depressed. It will be appreciated that the designation of “down” is arbitrary and is based on conventional input device design, in which buttons are normally biased in an “up” direction and are depressed by a user to generate a signal indicating that the input device has been activated. If the input device is not in the down position, then the speech interaction cancellation module 224 implements a loop that monitors the state of the input device.
By contrast, if the input device is in the down position, then control passes to operation 520 and it is determined whether the user is logged into the device and/or application. In an exemplary implementation this may be determined by setting a flag to a specific value when the user logs into the device and/or application; the flag may then be checked to determine whether its value indicates that the user is logged in. If, at operation 520, the user is not logged in, then control passes to operation 525 and returns to the calling routine.
By contrast, if at operation 520 the user is logged into the device and/or application, then control passes to operation 530 and a flag is set indicating that the input device is being held, i.e., is in the down position. At operation 535 a timestamp reflecting the time at which the key was depressed is recorded. The timestamp may be stored in a suitable memory location in either volatile or non-volatile memory.
Optionally, at operation 540 a timer is started. Operation 540 is unnecessary in a computer system that has a system clock. In such a system, recording the timestamp at operation 535 effectively starts a timer.
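Continuing the C++ sketch above, the login gate of operations 520-535 might look like the following; the flag and helper names are hypothetical.

// Set to TRUE when the user logs into the device and/or application and
// to FALSE at logout, as described for operation 520.
bool g_userLoggedIn = false;

// Returns false when no user is logged in (operation 525: return to the
// calling routine); otherwise performs operations 530-535.
bool BeginHoldTracking(bool &keyHeld, Clock::time_point &pressedAt)
{
    if (!g_userLoggedIn)
        return false;           // operation 525
    keyHeld = true;             // operation 530: set the key-held flag
    pressedAt = Clock::now();   // operation 535: record the timestamp
    return true;
}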
In an implementation adapted for a device and/or application that implements log-in procedures and/or a standby mode, following operation 655 a test is implemented at operation 660 to determine whether the device is in the power-on state, as opposed to a standby state. If the device is in the power-on state, then control passes to operation 665, and it is determined whether the user is logged into the device and/or application. If the user is logged in, then control passes to operation 670 and new session routines are initiated.
By contrast, if at operation 660 the device is not in a power-on state, then control passes to operation 675, and a test is implemented to determine whether the time period between the key release and the current time exceeds a threshold for the device remaining in the power-on state without key activity. If the elapsed time exceeds this threshold, the device has slipped into a standby state, and control returns to the calling routine at operation 680. By contrast, if the threshold is not exceeded, then control may pass back to operation 660.
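Under the same assumptions, the power-state test of operations 660-680 might be sketched as follows; IsPoweredOn, IsUserLoggedIn, and kStandbyThreshold are hypothetical names, and the behavior when the device is powered on but no user is logged in is an assumption.

bool IsPoweredOn();      // power-on vs. standby state (assumed)
bool IsUserLoggedIn();   // login state (assumed)

constexpr auto kStandbyThreshold = std::chrono::seconds(30);  // assumed value

// Called after the key is released (operation 655).
void AfterKeyRelease(Clock::time_point releasedAt)
{
    for (;;) {
        if (IsPoweredOn()) {          // operation 660
            if (IsUserLoggedIn())     // operation 665
                StartNewSession();    // operation 670: new session routines
            return;
        }
        // Operation 675: if the elapsed time since the key release exceeds
        // the threshold for remaining powered on without key activity, the
        // device has slipped into standby; return to the caller (operation
        // 680). Otherwise control passes back to operation 660.
        if (Clock::now() - releasedAt > kStandbyThreshold)
            return;
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}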
The operations of
In alternate implementations the operations of
Exemplary Computer Environment
The various components and functionality described herein are implemented with a number of individual computers.
Generally, various different general purpose or special purpose computing system configurations can be used. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The functionality of the computers is embodied in many cases by computer-executable instructions, such as program modules, that are executed by the computers. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Tasks might also be performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media.
The instructions and/or program modules are stored at different times in the various computer-readable media that are either part of the computer or that can be read by the computer. Programs are typically distributed, for example, on floppy disks, CD-ROMs, DVDs, or on some form of communication media such as a modulated signal. From there, they are installed or loaded into the secondary memory of a computer. At execution, they are loaded at least partially into the computer's primary electronic memory. The invention described herein includes these and other various types of computer-readable media when such media contain instructions, programs, and/or modules for implementing the steps described herein in conjunction with a microprocessor or other data processors. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
For purposes of illustration, programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computer, and are executed by the data processor(s) of the computer.
With reference to
Computer 700 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computer 700 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. “Computer storage media” includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 700. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
The system memory 706 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 710 and random access memory (RAM) 712. A basic input/output system 714 (BIOS), containing the basic routines that help to transfer information between elements within computer 700, such as during start-up, is typically stored in ROM 710. RAM 712 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 704. By way of example, and not limitation,
The computer 700 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer may operate in a networked environment using logical connections to one or more remote computers, such as a remote computing device 750. The remote computing device 750 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer 700. The logical connections depicted in
When used in a LAN networking environment, the computer 700 is connected to the LAN 752 through a network interface or adapter 756. When used in a WAN networking environment, the computer 700 typically includes a modem 758 or other means for establishing communications over the Internet 754. The modem 758, which may be internal or external, may be connected to the system bus 708 via the I/O interface 742, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 700, or portions thereof, may be stored in the remote computing device 750. By way of example, and not limitation,
Although the arrangements and procedures have been described in language specific to structural features and/or methodological operations, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or operations described. Rather, the specific features and operations are disclosed as preferred forms of implementing the claimed subject matter.