1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise.
2. Description of Related Art
User interaction with applications running on small devices through a keyboard or stylus has become increasingly limited and cumbersome as those devices have become increasingly smaller. In particular, small handheld devices like mobile phones and PDAs serve many functions and contain sufficient processing power to support user interaction through multimodal access, that is, by interaction in non-voice modes as well as voice mode. Devices which support multimodal access combine multiple user input modes or channels in the same interaction allowing a user to interact with the applications on the device simultaneously through multiple input modes or channels. The methods of input include speech recognition, keyboard, touch screen, stylus, mouse, handwriting, and others. Multimodal input often makes using a small device easier.
Multimodal applications are often formed by sets of markup documents served up by web servers for display on multimodal browsers. A ‘multimodal browser,’ as the term is used in this specification, generally means a web browser capable of receiving multimodal input and interacting with users with multimodal output, where modes of the multimodal input and output include at least a speech mode. Multimodal browsers typically render web pages written in XHTML+Voice (‘X+V’). X+V provides a markup language that enables users to interact with a multimodal application, often running on a server, through spoken dialog in addition to traditional means of input such as keyboard strokes and mouse pointer action. Visual markup tells a multimodal browser what the user interface is to look like and how it is to behave when the user types, points, or clicks. Similarly, voice markup tells a multimodal browser what to do when the user speaks to it. For visual markup, the multimodal browser uses a graphics engine; for voice markup, the multimodal browser uses a speech engine. X+V adds spoken interaction to standard web content by integrating XHTML (eXtensible Hypertext Markup Language) and speech recognition vocabularies supported by VoiceXML. For visual markup, X+V includes the XHTML standard. For voice markup, X+V includes a subset of VoiceXML. For synchronizing the VoiceXML elements with corresponding visual interface elements, X+V uses events. XHTML includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific events. Voice interaction features are integrated with XHTML and can consequently be used directly within XHTML content.
In addition to X+V, multimodal applications also may be implemented with Speech Application Language Tags (‘SALT’). SALT is a markup language developed by the SALT Forum. Both X+V and SALT are markup languages for creating applications that use voice input/speech recognition and voice output/speech synthesis. Both SALT applications and X+V applications use underlying speech recognition and synthesis technologies or ‘speech engines’ to do the work of recognizing and generating human speech. As markup languages, both X+V and SALT provide markup-based programming environments for using speech engines in an application's user interface. Both languages have language elements, markup tags, that specify what the speech-recognition engine should listen for and what the synthesis engine should ‘say.’ Whereas X+V combines XHTML, VoiceXML, and the XML Events standard to create multimodal applications, SALT does not provide a standard visual markup language or eventing model. Rather, it is a low-level set of tags for specifying voice interaction that can be embedded into other environments. In addition to X+V and SALT, multimodal applications may be implemented in Java with a Java speech framework, in C++, for example, and with other technologies and in other environments as well.
Current multimodal applications support a voice mode of user interaction using a speech engine. A speech engine provides speech recognition through use of a grammar. A grammar communicates to the speech engine the potential words or sequences of words that the speech engine may recognize when processing a user's speech. That is, the grammar narrows the set of potential results returned by the speech engine when performing speech recognition to reduce the amount of processing performed by the speech engine. Rather than having to determine which of all possible words in a language matches the user's speech, the speech engine may utilize a grammar to reduce the determination to which of a subset of those words in a language matches the user's speech.
Deployment of such multimodal applications onto multimodal devices generally includes extensive testing and tuning of the speech recognition grammars in the ambient noise environment where the application will be used. Because multimodal devices generally operate in a variety of different environments, each grammar must be tested and tuned for each operating environment in which the grammar may be utilized. For example, if there are m number of grammars that need to be tested in n number of operating environments, completely testing the grammars in all of the operating environments requires m×n recordings of the user's response to application prompts using the grammars for recognition in the different operating environments. The drawback to current methods of testing a grammar is that performing m×n tests across the operating environments is often prohibitively expensive.
Methods, systems, and products are disclosed for testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise that include: receiving recorded background noise for each of the plurality of operating environments; generating a test speech utterance for recognition by a speech recognition engine using a grammar; mixing the test speech utterance with each recorded background noise, resulting in a plurality of mixed test speech utterances, each mixed test speech utterance having different background noise; performing, for each of the mixed test speech utterances, speech recognition using the grammar and the mixed test speech utterance, resulting in speech recognition results for each of the mixed test speech utterances; and evaluating, for each recorded background noise, speech recognition reliability of the grammar in dependence upon the speech recognition results for the mixed test speech utterance having that recorded background noise.
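For explanation only, the overall flow of these steps can be sketched in a few lines of Python. The ‘recognize’ callable, the ‘expected’ recognition text, and the default of five recognition attempts per background noise are stand-ins assumed for the sketch rather than features of any particular embodiment.

```python
from array import array

def mix(utterance, noise):
    # Mix 16-bit PCM samples of the test speech utterance with one recorded
    # background noise; zip() truncates to the shorter of the two signals.
    return array('h', (max(-32768, min(32767, s + n))
                       for s, n in zip(utterance, noise)))

def evaluate_grammar(grammar, utterance, noises, recognize, expected, attempts=5):
    # For each operating environment's recorded background noise, build a mixed
    # test speech utterance, recognize it repeatedly with the grammar, and score
    # reliability as the fraction of attempts matching the expected text.
    reliability = {}
    for noise_id, noise in noises.items():
        mixed = mix(utterance, noise)
        results = [recognize(grammar, mixed) for _ in range(attempts)]
        reliability[noise_id] = sum(r == expected for r in results) / attempts
    return reliability
```

A reliability value near 1.0 for a given background noise suggests the grammar is adequate for that operating environment; a low value suggests it is not.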
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, apparatus, and products for testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with
Each operating environment (140) of
In the example of
The laboratory server (136) of
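A grammar of the kind tested here may be expressed, for example, in the Java Speech Grammar Format (‘JSGF’). The listing below is only a sketch whose rule names and alternatives follow the description in the next paragraph; the exact markup of any particular grammar is assumed for illustration.

```
#JSGF V1.0;
grammar command;
public <command> = (call | phone | telephone) <name> <when>;
<name> = bob | martha | joe | pete | chris | john | artoush | tom;
<when> = today | this afternoon | tomorrow | next week;
```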
In this example, the elements named <command>, <name>, and <when> are rules of the grammar. Rules are a combination of a rule name and an expansion of a rule that advises a speech engine or a voice interpreter which words presently can be recognized. In this example, expansion includes conjunction and disjunction, and the vertical bars ‘|’ mean ‘or.’ A speech engine or a voice interpreter processes the rules in sequence, first <command>, then <name>, then <when>. The <command> rule accepts for recognition ‘call’ or ‘phone’ or ‘telephone’ plus, that is, in conjunction with, whatever is returned from the <name> rule and the <when> rule. The <name> rule accepts ‘bob’ or ‘martha’ or ‘joe’ or ‘pete’ or ‘chris’ or ‘john’ or ‘artoush’ or ‘tom’, and the <when> rule accepts ‘today’ or ‘this afternoon’ or ‘tomorrow’ or ‘next week.’ The command grammar as a whole matches utterances like these, for example:
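‘call bob today’
‘phone martha this afternoon’
‘telephone joe tomorrow’
‘call chris next week’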
In the example of
Speech recognition reliability represents a measure of a speech engine's ability to use a particular grammar in accurately recognizing a speech utterance in the presence of a particular operating environment's background noise. A grammar may be adequate for use in some operating environments while being inadequate for other operating environments. For example, consider a multimodal application operating in a vehicle that provides directions to a particular city from the vehicle's current location. When the operating environment is the inside of the vehicle traveling at thirty miles per hour, the background noise may be so negligible that a speech engine can obtain accurate recognition results with a grammar that specifies all of the cities in a particular state, regardless that many cities have names that are similarly pronounced. When the operating environment changes as the vehicle speeds up to ninety miles per hour, the background noise may also have increased to the point that the speech engine cannot distinguish between the cities having similarly pronounced names. Rather, a grammar that includes only the cities in a particular county may be better suited for speech recognition when the vehicle is operating at ninety miles per hour.
In the example of
The multimodal device (152) of
In other embodiments, the multimodal application (195) may include computer program instructions implemented in a higher level, non-machine language that requires runtime translation into machine code. In such embodiments, the multimodal application (195) is supported by a multimodal execution environment (202). The multimodal execution environment (202) may support execution of the multimodal application (195) by processing the multimodal application (195) itself or coordinating with other local or remote components such as, for example, a voice interpreter (192) to process portions of the multimodal application (195). The multimodal execution environment (202) may translate the multimodal application (195) into platform-specific machine code directly executable on the processors of the multimodal device (152), perform memory management for the multimodal application (195) during execution, control access to platform hardware, and so on. The implementation of the multimodal execution environment (202) typically depends on the implementation of the multimodal application (195). When the multimodal application (195) is implemented using X+V or SALT tags, then the multimodal execution environment may be implemented as a multimodal browser. When the multimodal application (195) is implemented using Java, then the multimodal execution environment may be implemented as a Java Virtual Machine. Readers will note that the implementations described above are for explanation only and not for limitation.
The multimodal device (152) of
In the exemplary system of
Each multimodal device (152) of
The system of
Each of the example multimodal devices (152) in the system of
As mentioned, a multimodal device according to embodiments of the present invention is capable of providing speech for recognition to a speech engine (153). A speech engine is a functional module, typically a software module, although it may include specialized hardware also, that does the work of recognizing and generating or ‘synthesizing’ human speech. The speech engine implements speech recognition by use of a further module referred to in this specification as an automated speech recognition (‘ASR’) engine, and the speech engine carries out speech synthesis by use of a further module referred to in this specification as a text-to-speech (‘TTS’) engine. As shown in
As shown in
In a thin client architecture, the speech engine (153) and the voice interpreter (192) are located remotely from the multimodal client device in a voice server (151); the API for the voice interpreter is still implemented in the multimodal device, with the API modified to communicate voice dialog instructions, speech for recognition, and text and voice prompts to and from the voice interpreter on the voice server. For ease of explanation, only one (112) of the multimodal devices (152) in the system of
The use of these three example multimodal devices (152) is for explanation only, not for limitation of the invention. Any automated computing machinery capable of accepting speech from a user, providing the speech digitized to a speech engine through a voice interpreter, and receiving and playing speech prompts and responses from the voice interpreter may be improved to function as a multimodal device for adjusting a speech engine based on background noise according to embodiments of the present invention.
The system of
The system of
The system of
The arrangement of the multimodal devices (152), the web server (147), the voice server (151), laboratory server (136), and the data communications network (100) making up the exemplary system illustrated in
Testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise according to embodiments of the present invention may be implemented with one or more computers, that is, automated computing machinery. For further explanation, therefore,
Stored in RAM (168) is a grammar analysis module (132), a set of computer program instructions capable of testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise according to embodiments of the present invention. The grammar analysis module (132) of
The speech engine (153) of
The acoustic models (108) associate speech waveform data representing recorded pronunciations of speech with textual representations of those pronunciations, which are referred to as ‘phonemes.’ The speech waveform data may be implemented as a Speech Feature Vector (‘SFV’) that may be represented, for example, by the first twelve or thirteen Fourier or frequency domain components of a sample of digitized speech waveform. Accordingly, the acoustic models (108) may be implemented as data structures or tables in a database, for example, that associates these SFVs with phonemes representing, to the extent that it is practically feasible to do so, all pronunciations of all the words in various human languages, each language having a separate acoustic model (108). The lexicons (106) are associations of words in text form with phonemes representing pronunciations of each word; the lexicon effectively identifies words that are capable of recognition by an ASR engine. Each language has a separate lexicon (106). Also stored in RAM (168) is a Text To Speech (‘TTS’) Engine (194), a module of computer program instructions that accepts text as input and returns the same text in the form of digitally encoded speech, for use in providing speech as prompts for and responses to users of multimodal systems.
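As a rough illustration of the SFV representation just described, the sketch below takes the magnitudes of the first thirteen frequency-domain components of one frame of digitized speech. The frame length and the use of a plain FFT, rather than any particular engine's feature pipeline, are assumptions made only for the example.

```python
import numpy as np

def speech_feature_vector(frame, n_components=13):
    # 'frame' is a short window of digitized speech samples (for example,
    # 160 samples, or 10 ms at a 16 kHz sampling rate). The SFV here is simply
    # the magnitude of the first n_components Fourier (frequency-domain) components.
    spectrum = np.fft.rfft(np.asarray(frame, dtype=float))
    return np.abs(spectrum[:n_components])

# A 160-sample frame yields a thirteen-element feature vector.
sfv = speech_feature_vector([0.0] * 160)
```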
The grammars (104) communicate to the ASR engine (150) the words and sequences of words that currently may be recognized. For precise understanding, readers will distinguish the purpose of the grammar and the purpose of the lexicon. The lexicon associates with phonemes all the words that the ASR engine can recognize. The grammar communicates the words currently eligible for recognition. The set of words currently eligible for recognition and the set of words capable of recognition may or may not be the same.
Also stored in RAM (168) is an operating system (154). Operating systems useful in voice servers according to embodiments of the present invention include UNIX™, Linux™, Microsoft Vista™, IBM's AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154), speech engine (153), grammar analysis module (132), grammar reliability table (134), and background noises (142) in the example of
Laboratory server (136) of
Laboratory server (136) of
The example laboratory server of
The exemplary laboratory server (136) of
As mentioned above, the grammars tested by a grammar analysis module may be utilized to perform speech recognition for a multimodal application operating on a multimodal device. In a thin client architecture, a voice server may provide speech recognition services using the grammars tested according to embodiments of the present application and provided to the voice server by a multimodal application in the form, for example, of VoiceXML dialogs. For further explanation, therefore,
The example voice server (151) of
Stored in RAM (168) is a voice server application (188), a module of computer program instructions capable of operating a voice server according to embodiments of the present invention. Voice server application (188) provides voice recognition services for multimodal devices by accepting requests for speech recognition and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and text as string representations of scripts for semantic interpretation. Voice server application (188) also includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal applications such as, for example, X+V applications, SALT applications, or Java Speech applications.
Voice server application (188) may be implemented as a web server, implemented in Java, C++, or another language, that supports X+V, SALT, VoiceXML, or other multimodal languages, by providing responses to HTTP requests from X+V clients, SALT clients, Java Speech clients, or other multimodal clients. Voice server application (188) may, for a further example, be implemented as a Java server that runs on a Java Virtual Machine and supports a Java voice framework by providing responses to HTTP requests from Java client applications running on multimodal devices. And voice server applications that support embodiments of the present invention may be implemented in other ways as may occur to those of skill in the art, and all such ways are well within the scope of the present invention.
Also stored in RAM is a voice interpreter (192), a module of computer program instructions that supports the voice mode of user interaction with a multimodal application operating on a multimodal device. The voice interpreter (192) provides speech engine input such as grammars, speech for recognition, and text prompts for speech synthesis to the speech engine (153) and returns to the multimodal application speech engine output in the form of recognized speech, semantic interpretation results, and digitized speech for voice prompts. Input to voice interpreter (192) may originate, for example, from VoiceXML clients running remotely on multimodal devices, from X+V clients running remotely on multimodal devices, from SALT clients running on multimodal devices, or from Java client applications running remotely on multimodal devices. In this example, voice interpreter (192) interprets and executes VoiceXML segments representing voice dialog instructions received from remote multimodal devices and provided to voice interpreter (192) through voice server application (188).
When implemented in X+V, a multimodal application in a thin client architecture may provide voice dialog instructions, VoiceXML segments, VoiceXML <form> elements, and the like, to voice interpreter (192) through data communications across a network with the multimodal application. The voice dialog instructions include one or more grammars, data input elements, event handlers, and so on, that advise the voice interpreter how to administer voice input from a user and voice prompts and responses to be presented to a user. The voice interpreter (192) administers such dialogs by processing the dialog instructions sequentially in accordance with a VoiceXML Form Interpretation Algorithm (‘FIA’). The voice interpreter (192) interprets VoiceXML dialogs provided to the voice interpreter (192) by a multimodal application.
For further explanation regarding the thin client architecture,
In some embodiments, the multimodal application (195) of
The multimodal application (195) of
In the example of
The example of
In the example of
Many protocols are used to effect VOIP. The two most popular types of VOIP are effected with the IETF's Session Initiation Protocol (‘SIP’) and the ITU's protocol known as ‘H.323.’ SIP clients use TCP and UDP port 5060 to connect to SIP servers. SIP itself is used to set up and tear down calls for speech transmission. VOIP with SIP then uses RTP for transmitting the actual encoded speech. Similarly, H.323 is an umbrella recommendation from the standards branch of the International Telecommunications Union that defines protocols to provide audio-visual communication sessions on any packet data communications network.
The apparatus of
Voice server application (188) provides voice recognition services for multimodal devices by accepting dialog instructions, VoiceXML segments, and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and output from execution of semantic interpretation scripts as well as voice prompts. Voice server application (188) supports text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal applications such as, for example, X+V applications, SALT applications, or Java Speech applications.
The voice server application (188) receives speech for recognition from a user and passes the speech through API calls to voice interpreter (192) which in turn uses an ASR engine (150) for speech recognition. The ASR engine receives digitized speech for recognition, uses frequency components of the digitized speech to derive an SFV, uses the SFV to infer phonemes for the word from the language-specific acoustic model (108), and uses the phonemes to find the speech in the lexicon (106). The ASR engine then compares speech found as words in the lexicon to words in a grammar (104) to determine whether words or phrases in speech are recognized by the ASR engine.
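The lookup chain just described can be pictured with toy data structures. The entries below are purely illustrative stand-ins for an acoustic model, a lexicon, and a grammar, and matching is reduced to exact dictionary lookups for the sake of the sketch.

```python
# Illustrative stand-ins: an acoustic model mapping feature vectors to phonemes,
# a lexicon mapping phoneme sequences to words, and the set of words the
# currently active grammar makes eligible for recognition.
acoustic_model = {'sfv_ao': 'AO', 'sfv_l': 'L', 'sfv_t': 'T', 'sfv_ah': 'AH', 'sfv_n': 'N'}
lexicon = {('AO', 'L', 'T', 'AH', 'N'): 'alton'}
grammar_words = {'alton', 'austin'}

def recognize(sfv_sequence):
    # Infer a phoneme for each SFV, look the phoneme sequence up in the lexicon,
    # and return the word only if the grammar currently allows it.
    phonemes = tuple(acoustic_model.get(sfv) for sfv in sfv_sequence)
    word = lexicon.get(phonemes)
    return word if word in grammar_words else None

print(recognize(['sfv_ao', 'sfv_l', 'sfv_t', 'sfv_ah', 'sfv_n']))  # prints 'alton'
```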
The multimodal application (195) is operatively coupled to the speech engine (153). In this example, the operative coupling between the multimodal application (195) and the speech engine (153) is implemented with a VOIP connection (216) through a voice services module (130), then through the voice server application (188) and the voice interpreter (192). Depending on whether the multimodal application is implemented in X+V, Java, or SALT, the voice interpreter (192) may be implemented using a VoiceXML interpreter, a VoiceXML interpreter exposing a Java interface, a SALT interpreter, or any other implementation as will occur to those of skill in the art. The voice services module (130) is a thin layer of functionality, a module of computer program instructions, that presents an API (316) for use by an application level program in providing dialog instructions and speech for recognition to a voice server application (188) and receiving in response voice prompts and other responses. In this example, application level programs are represented by multimodal application (195) and the multimodal execution environment (202).
The voice services module (130) provides data communications services through the VOIP connection and the voice server application (188) between the multimodal device (152) and the voice interpreter (192). The API (316) of
The explanation above with reference to
The example multimodal device (152) of
The speech engine (153) in this kind of embodiment, a thick client architecture, often is implemented as an embedded module in a small form factor device such as a handheld device, a mobile phone, PDA, and the like. An example of an embedded speech engine that may be improved utilizing grammars tested for reliability in a plurality of operating environments having different background noise according to embodiments of the present invention is IBM's Embedded ViaVoice Enterprise. The example multimodal device of
Also stored in RAM (168) in this example is a multimodal application (195), a module of computer program instructions capable of operating a multimodal device as an apparatus that supports multiple modes of user interaction, including a voice mode and one or more non-voice modes. The multimodal application (195) implements speech recognition by accepting speech for recognition from a user and sending the speech for recognition through API calls to the ASR engine (150). The multimodal application (195) implements speech synthesis generally by sending words to be used as prompts for a user to the TTS engine (194). As an example of thick client architecture, the multimodal application (195) in this example does not send speech for recognition across a network to a voice server for recognition, and the multimodal application (195) in this example does not receive synthesized speech, TTS prompts and responses, across a network from a voice server. All grammar processing, voice recognition, and text to speech conversion in this example is performed in an embedded fashion in the multimodal device (152) itself.
More particularly, multimodal application (195) in this example is a user-level, multimodal, client-side computer program that provides a speech interface through which a user may provide oral speech for recognition through microphone (176), have the speech digitized through an audio amplifier (185) and a coder/decoder (‘codec’) (183) of a sound card (174) and provide the digitized speech for recognition to ASR engine (150). The multimodal application (195) may be implemented as a set or sequence of X+V documents executing in a multimodal execution environment (202) implemented as a multimodal browser or microbrowser that passes VoiceXML grammars and digitized speech by calls through an API (316) directly to an embedded voice interpreter (192) for processing. The embedded voice interpreter (192) may in turn issue requests for speech recognition through API calls directly to the embedded ASR engine (150). Multimodal application (195) also can provide speech synthesis, TTS conversion, by API calls to the embedded TTS engine (194) for voice prompts and voice responses to user input.
In a further class of exemplary embodiments, the multimodal application (195) may be implemented as a Java voice application that executes in a multimodal execution environment (202) implemented as a Java Virtual Machine and issues calls through an API of the voice interpreter (192) for speech recognition and speech synthesis services. In further exemplary embodiments, the multimodal application (195) may be implemented as a set or sequence of SALT documents executed in a multimodal execution environment (202) implemented as a multimodal browser or microbrowser that issues calls through an API of the voice interpreter (192) for speech recognition and speech synthesis services. In addition to X+V, SALT, and Java implementations, multimodal application (195) may be implemented in other technologies as will occur to those of skill in the art, and all such implementations are well within the scope of the present invention.
The multimodal application (195) of
The multimodal application (195) may also operate generally according to embodiments of the present invention by altering flow of execution for the multimodal application in dependence upon the identified current background noise for the current operating environment in which the multimodal application operates. In such a manner, the multimodal application (195) may utilize different dialogs to interact with the user depending on the multimodal device's current operating environment.
The multimodal application (195) of
The multimodal application (195) in this example, running on a multimodal device (152) that contains its own voice interpreter (192) and its own speech engine (153) with no network or VOIP connection to a remote voice server containing a remote VoiceXML interpreter or a remote speech engine, is an example of a so-called ‘thick client architecture,’ so-called because all of the functionality for processing voice mode interactions between a user and the multimodal application is implemented on the multimodal device itself.
For further explanation,
The method of
The method of
The method of
The method of
When a voice interpreter stores the recognition results in an ECMAScript field variable array for a field specified in the multimodal application, the recognition results (506) may be stored in the field variable array using shadow variables similar to the application variable ‘application.lastresult$.’ For example, a field variable array may represent a possible recognition result through the following shadow variables:
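name$[i].confidence, which specifies the confidence level for this recognition result, where a value of ‘1.0’ indicates maximum confidence and a value of ‘0.0’ indicates minimum confidence;
name$[i].utterance, which is the raw string of words recognized for this result;
name$[i].inputmode, which specifies the mode in which the utterance was provided, typically ‘voice’ for speech recognition; and
name$[i].interpretation, which is an ECMAScript variable containing the output of any semantic interpretation of the utterance.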
The grammar analysis module (132) may often perform speech recognition with each mixed test speech utterance (608) multiple times to test the reliability of the grammar with a particular background noise (142). Accordingly, the grammar analysis module (132) of
Further consider that each mixed test speech utterance (608) has a unique identifier and specifies the city ‘Alton’ with the background noise (142) for a different operating environment. The grammar analysis module (132) may store the exemplary speech recognition results in the following table:
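TABLE 1. Exemplary Speech Recognition Results
Mixed Test Speech Utterance ID | Recognition Attempts | Successful Recognitions of ‘Alton’
0 | 5 | 5
1 | 5 | 3
2 | 5 | 1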
Table 1 above illustrates recognition results provided by a speech engine using the exemplary grammar above and three mixed test speech utterances having identifiers 0, 1, and 2. The exemplary speech recognition results above indicate that the speech engine successfully recognizes the test speech utterance ‘Alton’ mixed with background noise for a first operating environment five times using the exemplary grammar above. The exemplary speech recognition results above indicate that the speech engine successfully recognizes the test speech utterance ‘Alton’ mixed with background noise for a second operating environment three times using the exemplary grammar above. The exemplary speech recognition results above indicate that the speech engine successfully recognizes the test speech utterance ‘Alton’ mixed with background noise for a third operating environment only one time using the exemplary grammar above. Readers will note that the exemplary recognition results table above is for explanation only and not for limitation.
The method of
For example consider again the exemplary grammar above, the exemplary test speech utterance ‘Alton,’ and the exemplary recognition results from Table 1 above. The reliability indicator for the exemplary grammar and the background noise for the first operating environment—that is, the background noise embedded in the mixed test speech utterance having an identifier of ‘0’—is one hundred percent (or 1.00). The reliability indicator for the exemplary grammar and the background noise for the second operating environment—that is, the background noise embedded in the mixed test speech utterance having an identifier of ‘1’—is sixty percent (or 0.60). The reliability indicator for the exemplary grammar and the background noise for the third operating environment—that is, the background noise embedded in the mixed test speech utterance having an identifier of ‘2’—is twenty percent (or 0.20).
As part of evaluating (614) speech recognition reliability according to the method of
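For further explanation, consider the following exemplary grammar reliability table:
Grammar | Background Noise ID | Reliability
‘cities_in_TX’ | 0 | 100%
‘cities_in_TX’ | 1 | 60%
‘cities_in_TX’ | 2 | 20%
‘counties_in_TX’ | 0 | 100%
‘counties_in_TX’ | 1 | 80%
‘counties_in_TX’ | 2 | 100%
‘cities_in_Hilidalgo_TX’ | 0 | 100%
‘cities_in_Hilidalgo_TX’ | 1 | 100%
‘cities_in_Hilidalgo_TX’ | 2 | 100%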
The exemplary grammar reliability table above specifies that the exemplary ‘cities_in_TX’ grammar above has a reliability of one hundred percent when utilized to recognize speech recorded in an operating environment with a background noise that matches the background noise having an identifier of ‘0,’ a reliability of sixty percent when utilized to recognize speech recorded in an operating environment with a background noise that matches the background noise having an identifier of ‘1,’ and a reliability of twenty percent when utilized to recognize speech recorded in an operating environment with a background noise that matches the background noise having an identifier of ‘2.’ The exemplary grammar reliability table above also specifies that an exemplary ‘counties_in_TX’ grammar has a reliability of one hundred percent when utilized to recognize speech recorded in an operating environment with a background noise that matches the background noise having an identifier of either ‘0’ or ‘2,’ and a reliability of eighty percent when utilized to recognize speech recorded in an operating environment with a background noise that matches the background noise having an identifier of ‘1.’ The exemplary grammar reliability table above also specifies that an exemplary ‘cities_in_Hilidalgo_TX’ grammar has a reliability of one hundred percent when utilized to recognize speech recorded in an operating environment with a background noise that matches any of the background noises having an identifier of ‘0,’ ‘1,’ or ‘2.’ Readers will note that the exemplary grammar reliability table above is for explanation only and not for limitation. Other values and formats for associating a reliability indicator with a grammar and a background noise or operating environment as will occur to those of skill in the art may also be useful in embodiments of the present invention.
As mentioned above, a multimodal application may utilize grammars tested according to embodiments of the present invention to perform reliable speech recognition. For further explanation,
The method of
In some embodiments, the multimodal application (195) may sample the background noise while a user is not interacting with the multimodal device to avoid having additional noise from the user interaction included in the current background noise (702) for the current operating environment. In other embodiments, the multimodal application (195) may sample the current background noise (702) while the user is interacting with the device. For example, the multimodal application (195) may sample the current background noise (702) immediately before or after the user provides a voice utterance for speech recognition.
The method of
For further example of altering (704) the multimodal application's flow of execution, consider a multimodal application that operates in a user's vehicle to provide driving directions to any city in Texas from the vehicle's current location. Further consider that the multimodal application provides two different interactions with a user depending on the vehicle's speed. The first user interaction allows the user to speak the name of the city in Texas, and the multimodal application in turn provides directions to that city. In this example, the grammar used to recognize the names of all of the cities in Texas has a high reliability in the presence of background noise as long as the current operating environment is such that the vehicle is traveling less than forty miles per hour. For operating environments above forty miles per hour, the grammar's reliability diminishes as the noise makes distinguishing between cities with similar-sounding names difficult. The multimodal application therefore provides a second user interaction when the current background noise indicates that the user is in an operating environment where the vehicle is traveling above forty miles per hour. This second user interaction prompts the user for the county in which the desired city is located. The multimodal application utilizes a grammar that specifies all of the counties in Texas to recognize the county spoken by the user. The multimodal application then dynamically builds a grammar for recognizing the cities in that county and prompts the user to provide the name of the city. Because a grammar that lists the cities for only a single county typically has far fewer cities with similar-sounding names than a grammar listing all of the cities in Texas, the additional noise from operating the vehicle at a higher speed does not impair the multimodal application's ability to reliably perform speech recognition in the second user interaction in the way that it impairs speech recognition reliability during the first user interaction.
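To make this kind of grammar selection concrete, the sketch below matches the sampled current background noise to the closest recorded noise by RMS level and then picks the grammar with the highest tested reliability for that noise. Both the RMS matching rule and the ‘pick the highest reliability’ policy are assumptions made for the example, not requirements of any embodiment.

```python
import math

def rms(samples):
    # Root-mean-square level of sampled background noise, used here as a crude
    # signature for matching the current noise to one of the recorded noises.
    return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

def select_grammar(reliability_table, recorded_noise_levels, current_noise_samples):
    # reliability_table: {grammar_name: {noise_id: reliability}}
    # recorded_noise_levels: {noise_id: RMS level of that recorded background noise}
    level = rms(current_noise_samples)
    noise_id = min(recorded_noise_levels,
                   key=lambda n: abs(recorded_noise_levels[n] - level))
    return max(reliability_table,
               key=lambda g: reliability_table[g].get(noise_id, 0.0))
```

Using the exemplary grammar reliability table above, for instance, a current noise matched to the background noise having an identifier of ‘2’ would steer selection away from the ‘cities_in_TX’ grammar, whose tested reliability for that noise is only twenty percent.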
In the method of
Altering (704) the multimodal application's flow of execution according to the method of
In the method of
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on computer readable media for use with any suitable data processing system. Such computer readable media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
This application is a continuation of application Ser. No. 12/109,204, filed on Apr. 24, 2008, which application is incorporated herein by reference in its entirety.