1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for synchronizing distributed speech recognition.
2. Description of Related Art
User interaction with applications running on small devices through a keyboard or stylus has become increasingly limited and cumbersome as those devices have grown smaller. In particular, small handheld devices like mobile phones and PDAs serve many functions and contain sufficient processing power to support user interaction through other modes, such as multimodal access. Devices that support multimodal access combine multiple user input modes or channels in the same interaction, allowing a user to interact with applications on the device through several input modes or channels simultaneously. The methods of input include speech recognition, keyboard, touch screen, stylus, mouse, handwriting, and others. Multimodal input often makes using a small device easier.
Multimodal applications often run on servers that serve up multimodal web pages for display on a multimodal browser. A ‘multimodal browser,’ as the term is used in this specification, generally means a web browser capable of receiving multimodal input and interacting with users with multimodal output. Multimodal browsers typically render web pages written in XHTML+Voice (‘X+V’). X+V provides a markup language that enables users to interact with a multimodal application, often running on a server, through spoken dialog in addition to traditional means of input such as keyboard strokes and mouse pointer action. X+V adds spoken interaction to standard web content by integrating XHTML (eXtensible Hypertext Markup Language) and speech recognition vocabularies supported by VoiceXML. For visual markup, X+V includes the XHTML standard. For voice markup, X+V includes a subset of VoiceXML. For synchronizing the VoiceXML elements with corresponding visual interface elements, X+V uses events. X+V includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific events. Voice interaction features are integrated with XHTML and can consequently be used directly within XHTML content.
The performance of speech recognition systems receiving speech that has been transmitted over voice channels, particularly mobile channels, can be significantly degraded when compared to using an unmodified signal. The degradation results from both relatively low bit-rate speech coding and channel transmission errors. A Distributed Speech Recognition (‘DSR’) system addresses these problems by eliminating the speech channel and instead using an error-protected data channel to send a parameterized representation of the speech, which is suitable for recognition. The processing is distributed between the terminal (the ‘DSR client’) and a voice server. The DSR client performs the feature parameter extraction, that is, the front end of the speech recognition function. The speech features are then transmitted over a data channel to a remote ‘back-end’ recognizer on the voice server. This architecture substantially reduces transmission channel effects on speech recognition performance.
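By way of illustration only, the following Python sketch shows the shape of the client-side front end: raw PCM audio is reduced to compact per-frame parameter vectors suitable for transmission over a data channel. The windowing and the coarse 12-bin log-spectral envelope here are simplifying assumptions, not the ETSI mel-cepstral algorithms actually used in DSR front-end codecs.

```python
import numpy as np

def dsr_front_end(pcm: np.ndarray, frame_len: int = 400, frame_shift: int = 160) -> np.ndarray:
    """Toy DSR front end: split 16 kHz PCM into 25 ms frames (10 ms shift)
    and reduce each frame to log energy plus a coarse log-spectral envelope.
    Stands in for the ETSI front-end codecs (ES 201 108 et al.)."""
    pcm = np.append(pcm[0], pcm[1:] - 0.97 * pcm[:-1])      # pre-emphasis
    window = np.hamming(frame_len)
    n_frames = max(0, 1 + (len(pcm) - frame_len) // frame_shift)
    features = []
    for i in range(n_frames):
        frame = pcm[i * frame_shift:i * frame_shift + frame_len] * window
        log_energy = np.log(np.sum(frame ** 2) + 1e-10)
        spectrum = np.abs(np.fft.rfft(frame))               # 201 bins for a 400-point frame
        envelope = np.log(spectrum[:192].reshape(12, 16).mean(axis=1) + 1e-10)
        features.append(np.concatenate(([log_energy], envelope)))
    return np.array(features)   # one 13-element vector per frame, sent on the data channel
```

Each 10 ms of audio thus becomes a 13-element vector rather than 160 raw samples, which hints at the bandwidth saving motivating the DSR architecture; the real codecs additionally quantize and error-protect the parameters.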
Naively connecting a DSR client to a voice server, however, leaves challenges unresolved. One troubling issue is that data communications connections for voice, such as SIP/RTP connections or H.323 connections, typically experience significant delay, and that delay itself varies over time. Such delay produces echo effects and also results in an echo from a voice prompt being fed back to the voice server, thereby degrading speech recognition accuracy. Perfect echo removal on the DSR client would mitigate this issue, but perfect echo removal is very difficult to achieve. Estimating the delay and starting recognition only after it has elapsed is also problematic because of the stochastic nature of packet switching: the delay varies from packet to packet of speech data.
In addition, when accessing the service over limited-bandwidth networks such as cellular networks, it is useful to send a minimal amount of data so as to conserve bandwidth. Voice Activity Detection (‘VAD’) and discontinuous transmission (‘DTX’) are prior art methods of conserving bandwidth, but neither is effective unless the voice server is waiting for speech data. A common solution to both problems, echo prevention and bandwidth conservation, is push-to-talk (‘PTT’) switching, which sends speech data only while a switch is manually pressed. This solution is not acceptable in many multimodal applications because of user experience issues. X+V provides a <sync> markup element for use in synchronizing the receipt of spoken information and visual elements. The <sync> element, however, is of no use in synchronizing voice server readiness with the beginning of speech from a user for purposes of echo elimination and bandwidth conservation. For all these reasons, there is an ongoing need for improvement in synchronizing distributed speech recognition.
Methods, apparatus, and computer program products are disclosed for synchronizing distributed speech recognition (‘DSR’) that include receiving in a DSR client notification from a voice server of readiness to conduct speech recognition and, responsive to the receiving, transmitting by the DSR client, from the DSR client to the voice server, speech for recognition.
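The disclosure leaves the wire protocol open (an improved version of HTTP, SIP, RTP, VOIP, DMSP, WAP, or HDTP is suggested later), so the following minimal Python sketch uses a plain TCP connection and an invented three-byte message tag purely to make the ordering concrete: no speech leaves the DSR client until the voice server's notification of readiness arrives.

```python
import socket

READY = b"RDY"  # hypothetical message tag; the actual protocol is an implementation choice

def recognize(host: str, port: int, parameterized_speech: bytes) -> None:
    """DSR-client side of the synchronization: hold all speech data until
    the voice server has given notification of readiness to conduct
    speech recognition, then transmit speech for recognition."""
    with socket.create_connection((host, port)) as conn:
        if conn.recv(len(READY)) != READY:      # block for the readiness notification
            raise RuntimeError("voice server did not signal readiness")
        conn.sendall(parameterized_speech)      # only now is speech transmitted
```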
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, apparatus, and products for synchronizing distributed speech recognition according to embodiments of the present invention are described below with reference to the accompanying drawings.
The system of
Each of the example DSR clients (152) in the system of
The system of
Such a network may implement:
The system of
The arrangement of the voice server, the DSR clients, and the network making up the exemplary system illustrated in
Synchronizing distributed speech recognition in accordance with the present invention is generally implemented with one or more voice servers, computers, that is, automated computing machinery, that carry out speech recognition in a distributed speech recognition system. For further explanation, therefore,
Stored in RAM (168) is a DSR server application (188), a module of computer program instructions capable of carrying out synchronized distributed speech recognition according to embodiments of the present invention by transmitting to a DSR client notification of readiness to conduct speech recognition and, after transmitting the notification of readiness, receiving speech for recognition from the DSR client. DSR server application (188) typically is a user-level, multimodal, server-side computer program. DSR server application (188) may, for example, be implemented with a set of VoiceXML documents which, taken together, comprise a VoiceXML application. DSR server application (188) may alternatively be implemented, for example, as a web server that supports X+V by providing XHTML responses to HTTP requests from XHTML clients.
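A matching server-side sketch, under the same assumed TCP transport and message tag as the client sketch above, would transmit the readiness notification first and only then accept speech for recognition:

```python
import socket

READY = b"RDY"  # same illustrative tag as in the client sketch

def serve_recognition(port: int, recognize) -> None:
    """Voice-server side: transmit notification of readiness to the DSR
    client first, then receive speech for recognition and hand it to the
    speech recognition engine (here an injected callable)."""
    with socket.create_server(("", port)) as server:
        conn, _addr = server.accept()
        with conn:
            conn.sendall(READY)                  # notification of readiness
            speech = bytearray()
            while chunk := conn.recv(4096):      # parameterized speech from the client
                speech.extend(chunk)
            print(recognize(bytes(speech)))      # text representing recognized speech
```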
Also stored in RAM is a VoiceXML interpreter (192), a module of computer program instructions that parses and executes VoiceXML. VoiceXML input to VoiceXML interpreter (192) may originate from VoiceXML applications or from X+V applications. In this example, VoiceXML interpreter (192) interprets and executes VoiceXML segments provided to VoiceXML interpreter (192) through DSR server application (188).
Also stored in RAM (168) is a speech recognition engine (193), a module of computer program instructions that accepts parameterized speech for recognition as preprocessed by a DSR client, converts the parameterized speech to text, parses the converted speech against a vocabulary or grammar, and returns text representing recognized speech. Also stored in RAM (168) is a Text To Speech (‘TTS’) Engine (194), a module of computer program instructions that accepts text as input and returns the same text in the form of digitally encoded speech, for use in providing speech as prompts for users of DSR systems.
Also stored in RAM (168) is an operating system (154). Operating systems useful in voice servers according to embodiments of the present invention include UNIX™, Linux™, Microsoft NT™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154), DSR server application (188), VoiceXML interpreter (192), speech recognition engine (193), and Text To Speech Engine (194) in the example of
Voice server (151) of
Voice server (151) of
The example voice server of
The exemplary voice server (151) of
Synchronizing distributed speech recognition in accordance with the present invention is generally implemented with one or more DSR clients, computers, that is, automated computing machinery. In the system of
Stored in RAM (168) is a DSR client application (195), a module of computer program instructions capable of synchronizing distributed speech recognition according to embodiments of the present invention by receiving notification from a voice server of readiness to conduct speech recognition and, responsive to receiving the notification of readiness, transmitting speech for recognition to the voice server. Also stored in RAM (168) is a voice activity detection (‘VAD’) engine (184), a module of computer program instructions that accepts digitally encoded audio signals from a sound card, employs statistical techniques to filter out portions of the audio signals that represent mere noise or non-voice audio, and provides the filtered speech to the DSR client for transmission to a voice server. Also stored in RAM (168) is a speech parameter extraction engine (190), a module of computer program instructions that accepts digitally encoded speech that has been filtered for voice activity by a VAD engine, extracts from the encoded filtered speech parameters that describe the speech sufficiently to support speech recognition, and provides the parameterized speech to the DSR client for transmission to a voice server.
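The division of labor among these modules can be sketched as a simple pipeline. The energy threshold below is an illustrative constant; production VAD engines use the statistical techniques described above.

```python
import numpy as np

def energy_vad(frames: np.ndarray, threshold: float = 1e-4) -> np.ndarray:
    """Crude stand-in for the VAD engine (184): keep only frames whose
    mean-square energy exceeds a fixed threshold."""
    return frames[(frames ** 2).mean(axis=1) > threshold]

def client_pipeline(frames: np.ndarray, extract, transmit) -> None:
    """Mirrors the modules stored in RAM (168): VAD filtering, then speech
    parameter extraction, then hand-off for transmission to the voice server."""
    voiced = energy_vad(frames)   # VAD engine (184)
    params = extract(voiced)      # speech parameter extraction engine (190)
    transmit(params)              # DSR client application (195)
```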
Also stored in RAM (168) is an operating system (154). Operating systems useful in DSR clients according to embodiments of the present invention include UNIX™, Linux™, Microsoft NT™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154), DSR client application (195), VAD engine (184), and speech parameter extraction engine (190) in the example of
DSR client (152) of
DSR client (152) of
The example DSR client of
The example DSR client of
The exemplary DSR client (152) of
For further explanation,
The method of
“You may choose an airline for this flight. At the tone, please speak the name of an airline.”
“At the tone, please state the number of passengers traveling.”
“When the speaker icon turns yellow, please speak the name of your preferred hotel.”
The tones and the icon changing color are examples of signals for the user to actually begin speaking. The DSR client may provide the prompts at any time; there is no need to wait for the voice server's notification of readiness for speech recognition. Such prompts typically are prompts from a VoiceXML application, or from VoiceXML segments of an X+V application, that have been converted to speech by TTS and provided from the voice server to the DSR client as voice prompts. The DSR client signals the user to speak, however, only after receiving from the voice server notification of readiness to conduct speech recognition.
The method of
The method of
The method of
VOIP is a generic term for routing of speech over an IP-based network. The speech data flows over a general-purpose packet-switched network, instead of traditional dedicated, circuit-switched voice transmission lines. Protocols used to carry voice signals over the IP network are commonly referred to as ‘Voice over IP’ or ‘VOIP’ protocols. VOIP traffic may be deployed on any IP network, including networks lacking a connection to the rest of the Internet, for instance on a private building-wide local area network or ‘LAN.’
Many protocols are used to effect VOIP. The two most popular are effected with the IETF's Session Initiation Protocol (‘SIP’) and the ITU's protocol known as ‘H.323.’ SIP clients use TCP and UDP port 5060 to connect to SIP servers. SIP itself is used to set up and tear down calls for speech transmission; VOIP with SIP then uses RTP for transmitting the actual encoded speech. Similarly, H.323 is an umbrella recommendation from the standards branch of the International Telecommunication Union that defines protocols to provide audio-visual communication sessions on any packet network.
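For concreteness, the fixed twelve-byte RTP header of RFC 3550 that precedes each chunk of encoded speech can be packed as shown below; the payload type for a DSR codec would be negotiated dynamically at call setup, so the default here is only a placeholder.

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 96) -> bytes:
    """Pack the fixed 12-byte RTP header of RFC 3550: version 2, no padding,
    no extension, zero CSRCs, marker bit clear."""
    byte0 = 2 << 6                               # V=2, P=0, X=0, CC=0
    byte1 = payload_type & 0x7F                  # M=0, 7-bit payload type
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF,             # 16-bit sequence number
                       timestamp & 0xFFFFFFFF,   # 32-bit media timestamp
                       ssrc & 0xFFFFFFFF)        # synchronization source identifier
```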
Methods for ‘COding/DECoding’ speech are referred to as ‘codecs.’ The European Telecommunications Standards Institute (‘ETSI’) provides several codecs for encoding speech for use in DSR, including, for example, the ETSI ES 201 108 DSR Front-end Codec, the ETSI ES 202 050 Advanced DSR Front-end Codec, the ETSI ES 202 211 Extended DSR Front-end Codec, and the ETSI ES 202 212 Extended Advanced DSR Front-end Codec. In standards such as RFC 3557, entitled ‘RTP Payload Format for European Telecommunications Standards Institute (ETSI) European Standard ES 201 108 Distributed Speech Recognition Encoding,’ the IETF provides standard RTP payload formats for these codecs.
The method of
For further explanation,
In this example, DSR client (152) next provides to a user (128) a speech prompt (306) advising the user of information to be provided. Then the DSR client (152) signals (308) the user (128) to speak. The DSR client (152) discards (316) speech received prior to signaling the user to speak.
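Discarding early audio can be as simple as filtering captured frames by timestamp against the moment the user was signaled; this sketch assumes frames arrive as (capture_time, data) pairs, which is an illustrative convention rather than anything prescribed by the disclosure.

```python
def discard_early_speech(frames, signal_time: float):
    """Implements step (316): drop any audio captured before the user was
    signaled to speak, keeping prompt echo and pre-signal noise out of the
    speech submitted for recognition."""
    return [(t, data) for (t, data) in frames if t >= signal_time]
```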
DSR client (152) next receives user speech (312). That is, DSR client (152) receives audio signals through a sound card, like the one illustrated and described above with regard to reference (174).
DSR client (152) then transmits speech for recognition (320) to voice server (151). Synchronizing distributed speech recognition is accomplished here because the DSR client (152) transmits speech for recognition only after receiving from the voice server (151) notification of readiness to conduct speech recognition. In this example, DSR client (152) transmits speech for recognition according to VOIP.
DSR client (152) then receives from the voice server (151) notification (324) to cease transmitting speech. Message (324) is implemented in an application-level data communications protocol improved to include messages for synchronizing distributed speech recognition, such as, for example, an improved version of HTTP, SIP, RTP, VOIP, DMSP, WAP, or HDTP.
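The calling sequence just described reduces, on the client side, to a small state machine. The event names below ('connected', 'ready', 'signaled', 'cease') are illustrative labels for the protocol messages and local actions, not names taken from any published standard.

```python
from enum import Enum, auto

class DsrClientState(Enum):
    CONNECTING = auto()
    WAIT_READY = auto()      # connection up, awaiting server readiness
    PROMPTING = auto()       # prompting the user; early speech discarded (316)
    TRANSMITTING = auto()    # speech for recognition flowing to the server
    DONE = auto()

TRANSITIONS = {
    (DsrClientState.CONNECTING,   "connected"): DsrClientState.WAIT_READY,
    (DsrClientState.WAIT_READY,   "ready"):     DsrClientState.PROMPTING,     # notification of readiness
    (DsrClientState.PROMPTING,    "signaled"):  DsrClientState.TRANSMITTING,  # tone sounded / icon changed
    (DsrClientState.TRANSMITTING, "cease"):     DsrClientState.DONE,          # notification (324)
}

def step(state: DsrClientState, event: str) -> DsrClientState:
    """Advance the calling sequence by one event; unknown events leave the
    state unchanged."""
    return TRANSITIONS.get((state, event), state)
```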
In view of the explanations set forth above in this paper, readers will recognize that the benefits of synchronizing distributed speech recognition according to various embodiments of the present invention typically include:
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for synchronizing distributed speech recognition. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.