1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for ordering recognition results produced by an automatic speech recognition (‘ASR’) engine for a multimodal application.
2. Description of Related Art
User interaction with applications running on small devices through a keyboard or stylus has become increasingly limited and cumbersome as those devices have become increasingly smaller. In particular, small handheld devices like mobile phones and PDAs serve many functions and contain sufficient processing power to support user interaction through multimodal access, that is, by interaction in non-voice modes as well as voice mode. Devices which support multimodal access combine multiple user input modes or channels in the same interaction allowing a user to interact with the applications on the device simultaneously through multiple input modes or channels. The methods of input include speech recognition, keyboard, touch screen, stylus, mouse, handwriting, and others. Multimodal input often makes using a small device easier.
Multimodal applications are often formed by sets of markup documents served up by web servers for display on multimodal browsers. A ‘multimodal browser,’ as the term is used in this specification, generally means a web browser capable of receiving multimodal input and interacting with users with multimodal output, where modes of the multimodal input and output include at least a speech mode. Multimodal browsers typically render web pages written in XHTML+Voice (‘X+V’). X+V provides a markup language that enables users to interact with a multimodal application, often running on a server, through spoken dialog in addition to traditional means of input such as keyboard strokes and mouse pointer action. Visual markup tells a multimodal browser what the user interface is to look like and how it is to behave when the user types, points, or clicks. Similarly, voice markup tells a multimodal browser what to do when the user speaks to it. For visual markup, the multimodal browser uses a graphics engine; for voice markup, the multimodal browser uses a speech engine. X+V adds spoken interaction to standard web content by integrating XHTML (eXtensible Hypertext Markup Language) and speech recognition vocabularies supported by VoiceXML. For visual markup, X+V includes the XHTML standard. For voice markup, X+V includes a subset of VoiceXML. For synchronizing the VoiceXML elements with corresponding visual interface elements, X+V uses events. X+V includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific events. Voice interaction features are integrated with XHTML and can consequently be used directly within XHTML content.
In addition to X+V, multimodal applications also may be implemented with Speech Application Language Tags (‘SALT’). SALT is a markup language developed by the SALT Forum. Both X+V and SALT are markup languages for creating applications that use voice input/speech recognition and voice output/speech synthesis. Both SALT applications and X+V applications use underlying speech recognition and synthesis technologies or ‘speech engines’ to do the work of recognizing and generating human speech. As markup languages, both X+V and SALT provide markup-based programming environments for using speech engines in an application's user interface. Both languages have language elements, markup tags, that specify what the speech-recognition engine should listen for and what the synthesis engine should ‘say.’ Whereas X+V combines XHTML, VoiceXML, and the XML Events standard to create multimodal applications, SALT does not provide a standard visual markup language or eventing model. Rather, it is a low-level set of tags for specifying voice interaction that can be embedded into other environments. In addition to X+V and SALT, multimodal applications may be implemented in Java with a Java speech framework, in C++, for example, and with other technologies and in other environments as well.
Currently, a multimodal application performs speech recognition by submitting digitized speech to an automatic speech recognition (‘ASR’) engine. The ASR engine receives digitized speech from the multimodal application and matches the digitized speech with a set of recognized words or phrases. The matched words or phrases are then returned to the multimodal application. Often the ASR engine returns more than one word or phrase for each clip of digital speech submitted by the multimodal application for recognition because the audible characteristics of the results are similar. Consider, for example, that a multimodal application provided an ASR engine with digitized speech for the song title “That Girl.” The ASR engine may return the following matched phrases:
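‘Bad Girl,’ ‘Dad's Girl,’ ‘That Girl,’ and ‘Third World’ (an illustrative set of results, consistent with the song-title examples discussed later in this specification, each of which sounds similar to the others when spoken).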
The order in which the ASR engine returns each of the matched results is typically based on the confidence level that the ASR engine calculates for each result, which specifies how confident the ASR engine is that each result matches the digitized speech. The result with the highest confidence level is first in order, the result with the second highest confidence level is second in order, the result with the third highest confidence level is third in order, and so on. The drawback to this current method of ordering recognition results is that the current method does not take into account other available information that might be used to order the recognition results in a manner that more accurately reflects the probability that each recognition result matches the digitized speech. As such, readers will appreciate that room for improvement exists in ordering recognition results produced by an ASR engine for a multimodal application.
Ordering recognition results produced by an automatic speech recognition (‘ASR’) engine for a multimodal application implemented with a grammar of the multimodal application in the ASR engine, with the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to the ASR engine through a VoiceXML interpreter, includes: receiving, in the VoiceXML interpreter from the multimodal application, a voice utterance; determining, by the VoiceXML interpreter using the ASR engine, a plurality of recognition results in dependence upon the voice utterance and the grammar; determining, by the VoiceXML interpreter according to semantic interpretation scripts of the grammar, a weight for each recognition result; and sorting, by the VoiceXML interpreter, the plurality of recognition results in dependence upon the weight for each recognition result.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, apparatus, and products for ordering recognition results produced by an automatic speech recognition (‘ASR’) engine for a multimodal application according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with
Ordering recognition results produced by an ASR engine for a multimodal application (195) is implemented with a grammar (104) of the multimodal application (195) in the ASR engine (150). The grammar (104) of
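By way of illustration only, a command grammar of the kind described in the next paragraph might be expressed in an X+V page as the following sketch, in JSGF-style notation:

    <grammar><![CDATA[
      #JSGF V1.0;
      grammar command;
      <command> = (call | phone | telephone) <name> <when>;
      <name> = bob | martha | joe | pete | chris | john | artoush | tom;
      <when> = today | this afternoon | tomorrow | next week;
    ]]></grammar>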
In this example, the elements named <command>, <name>, and <when> are rules of the grammar. Rules are a combination of a rulename and an expansion of a rule that advises an ASR engine or a VoiceXML interpreter which words presently can be recognized. In the example above, rule expansions include conjunction and disjunction, and the vertical bars ‘|’ mean ‘or.’ An ASR engine or a VoiceXML interpreter processes the rules in sequence, first <command>, then <name>, then <when>. The <command> rule accepts for recognition ‘call’ or ‘phone’ or ‘telephone’ plus, that is, in conjunction with, whatever is returned from the <name> rule and the <when> rule. The <name> rule accepts ‘bob’ or ‘martha’ or ‘joe’ or ‘pete’ or ‘chris’ or ‘john’ or ‘artoush’ or ‘tom,’ and the <when> rule accepts ‘today’ or ‘this afternoon’ or ‘tomorrow’ or ‘next week.’ The command grammar as a whole matches utterances like these, for example:
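‘phone bob next week,’ ‘telephone martha this afternoon,’ ‘call chris tomorrow,’ and ‘phone pete today’ (illustrative utterances, assuming the sketch grammar above).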
As mentioned above, the multimodal application (195) operates in the multimodal browser (196), which provides an execution environment for the multimodal application (195). To support the multimodal browser (196) in processing the multimodal application (195), the system of
The VoiceXML interpreter (192) of
A multimodal device on which a multimodal application operates is an automated device, that is, automated computing machinery or a computer program running on an automated device, that is capable of accepting from users more than one mode of input, keyboard, mouse, stylus, and so on, including speech input—and also providing more than one mode of output, such as graphics, speech, and so on. A multimodal device is generally capable of accepting speech input from a user, digitizing the speech, and providing digitized speech to a speech engine for recognition. A multimodal device may be implemented, for example, as a voice-enabled browser on a laptop, a voice browser on a telephone handset, an online game implemented with Java on a personal computer, and with other combinations of hardware and software as may occur to those of skill in the art. Because multimodal applications may be implemented in markup languages (X+V, SALT), object-oriented languages (Java, C++), procedural languages (the C programming language), and in other kinds of computer languages as may occur to those of skill in the art, a multimodal application may refer to any software application, server-oriented or client-oriented, thin client or thick client, that administers more than one mode of input and more than one mode of output, typically including visual and speech modes.
The system of
Each of the example multimodal devices (152) in the system of
As mentioned, a multimodal device according to embodiments of the present invention is capable of providing speech to a speech engine for recognition. The speech engine (153) of
A multimodal application (195) in this example provides speech for recognition and text for speech synthesis to a speech engine through the VoiceXML interpreter (192). As shown in
The VoiceXML interpreter (192) provides grammars, speech for recognition, and text prompts for speech synthesis to the speech engine (153), and the VoiceXML interpreter (192) returns to the multimodal application output from the speech engine (153) in the form of recognized speech, semantic interpretation results, and digitized speech for voice prompts. In a thin client architecture, the VoiceXML interpreter (192) is located remotely from the multimodal client device in a voice server (151); the API for the VoiceXML interpreter is still implemented in the multimodal device (152), with the API modified to communicate voice dialog instructions, speech for recognition, and text and voice prompts to and from the VoiceXML interpreter on the voice server (151). For ease of explanation, only one (107) of the multimodal devices (152) in the system of
The use of these four example multimodal devices (152) is for explanation only, not for limitation of the invention. Any automated computing machinery capable of accepting speech from a user, providing the speech digitized to an ASR engine through a VoiceXML interpreter, and receiving and playing speech prompts and responses from the VoiceXML interpreter may be improved to function as a multimodal device according to embodiments of the present invention.
The system of
The system of
The system of
The arrangement of the multimodal devices (152), the web server (147), the voice server (151), and the data communications network (100) making up the exemplary system illustrated in
Ordering recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention in a thin client architecture may be implemented with one or more voice servers, computers, that is, automated computing machinery, that provide speech recognition and speech synthesis. For further explanation, therefore,
Stored in RAM (168) is a voice server application (188), a module of computer program instructions capable of operating a voice server in a system that is configured to order recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention. Voice server application (188) provides voice recognition services for multimodal devices by accepting requests for speech recognition and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and text as string representations of scripts for semantic interpretation. Voice server application (188) also includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal applications such as, for example, X+V applications, SALT applications, or Java Speech applications. Voice server application (188) may be implemented as a web server, implemented in Java, C++, or another language, that supports ordering recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention.
The voice server (151) in this example includes a speech engine (153). The speech engine is a functional module, typically a software module, although it may include specialized hardware also, that does the work of recognizing and synthesizing human speech. The speech engine (153) includes an automated speech recognition (‘ASR’) engine (150) for speech recognition and a text-to-speech (‘TTS’) engine (194) for generating speech. The speech engine (153) also includes a grammar (104), a lexicon (106), and a language-specific acoustic model (108). The language-specific acoustic model (108) is a data structure, a table or database, for example, that associates Speech Feature Vectors with phonemes representing, to the extent that it is practically feasible to do so, all pronunciations of all the words in a human language. The lexicon (106) is an association of words in text form with phonemes representing pronunciations of each word; the lexicon effectively identifies words that are capable of recognition by an ASR engine. Also stored in RAM (168) is a Text To Speech (‘TTS’) Engine (194), a module of computer program instructions that accepts text as input and returns the same text in the form of digitally encoded speech, for use in providing speech as prompts for and responses to users of multimodal systems.
The voice server application (188) in this example is configured to receive, from a multimodal client located remotely across a network from the voice server, digitized speech for recognition from a user and pass the speech along to the ASR engine (150) for recognition. ASR engine (150) is a module of computer program instructions, also stored in RAM in this example. In carrying out ordering recognition results produced by an ASR engine for a multimodal application, the ASR engine (150) receives speech for recognition in the form of at least one digitized word and uses frequency components of the digitized word to derive a Speech Feature Vector (‘SFV’). An SFV may be defined, for example, by the first twelve or thirteen Fourier or frequency domain components of a sample of digitized speech. The ASR engine can use the SFV to infer phonemes for the word from the language-specific acoustic model (108). The ASR engine then uses the phonemes to find the word in the lexicon (106).
In the example of
The VoiceXML interpreter (192) of
Also stored in RAM (168) is an operating system (154). Operating systems useful in voice servers according to embodiments of the present invention include UNIX™, Linux™, Microsoft NT™, IBM's AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154), voice server application (188), VoiceXML interpreter (192), speech engine (153), including ASR engine (150), and TTS Engine (194) in the example of
Voice server (151) of
Voice server (151) of
The example voice server of
The exemplary voice server (151) of
For further explanation,
The multimodal device (152) supports multiple modes of interaction including a voice mode and one or more non-voice modes. The example multimodal device (152) of
In addition to the voice server application (188), the voice server (151) also has installed upon it a speech engine (153) with an ASR engine (150), a grammar (104), a lexicon (106), a language-specific acoustic model (108), and a TTS engine (194), as well as a VoiceXML interpreter (192) that includes a form interpretation algorithm (193). VoiceXML interpreter (192) interprets and executes VoiceXML dialog (522) received from the multimodal application and provided to VoiceXML interpreter (192) through voice server application (188). VoiceXML input to VoiceXML interpreter (192) may originate from the multimodal application (195) implemented as an X+V client running remotely in a multimodal browser (196) on the multimodal device (152). The VoiceXML interpreter (192) administers such dialogs by processing the dialog instructions sequentially in accordance with a VoiceXML Form Interpretation Algorithm (‘FIA’) (193).
VOIP stands for ‘Voice Over Internet Protocol,’ a generic term for routing speech over an IP-based data communications network. The speech data flows over a general-purpose packet-switched data communications network, instead of traditional dedicated, circuit-switched voice transmission lines. Protocols used to carry voice signals over the IP data communications network are commonly referred to as ‘Voice over IP’ or ‘VOIP’ protocols. VOIP traffic may be deployed on any IP data communications network, including data communications networks lacking a connection to the rest of the Internet, for instance on a private building-wide local area data communications network or ‘LAN.’
Many protocols are used to effect VOIP. The two most popular types of VOIP are effected with the IETF's Session Initiation Protocol (‘SIP’) and the ITU's protocol known as ‘H.323.’ SIP clients use TCP and UDP port 5060 to connect to SIP servers. SIP itself is used to set up and tear down calls for speech transmission. VOIP with SIP then uses RTP for transmitting the actual encoded speech. Similarly, H.323 is an umbrella recommendation from the standards branch of the International Telecommunications Union that defines protocols to provide audio-visual communication sessions on any packet data communications network.
The apparatus of
Voice server application (188) provides voice recognition services for multimodal devices by accepting dialog instructions, VoiceXML segments, and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and output from execution of semantic interpretation scripts—as well as voice prompts. Voice server application (188) includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal applications providing responses to HTTP requests from multimodal browsers running on multimodal devices.
The voice server application (188) receives speech for recognition from a user and passes the speech through API calls to VoiceXML interpreter (192) which in turn uses an ASR engine (150) for speech recognition. The ASR engine receives digitized speech for recognition, uses frequency components of the digitized speech to derive an SFV, uses the SFV to infer phonemes for the word from the language-specific acoustic model (108), and uses the phonemes to find the speech in the lexicon (106). The ASR engine then compares speech found as words in the lexicon to words in a grammar (104) to determine whether words or phrases in speech are recognized by the ASR engine.
The multimodal application (195) is operatively coupled to the ASR engine (150) through the VoiceXML interpreter (192). In this example, the operative coupling to the ASR engine (150) through a VoiceXML interpreter (192) is implemented with a VOIP connection (216) through a voice services module (130). The voice services module is a thin layer of functionality, a module of computer program instructions, that presents an API (316) for use by an application level program in providing dialogs (522) and speech for recognition to a VoiceXML interpreter and receiving in response voice prompts and other responses, including action identifiers according to embodiments of the present invention. The VoiceXML interpreter (192), in turn, utilizes the speech engine (153) for speech recognition and generation services.
The VoiceXML interpreter (192) of
In the example of
Ordering recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention in thick client architectures is generally implemented with multimodal devices, that is, automated computing machinery or computers. In the system of
The example multimodal device (152) of
The speech engine (153) in this kind of embodiment, a thick client architecture, often is implemented as an embedded module in a small form factor device such as a handheld device, a mobile phone, PDA, and the like. An example of an embedded speech engine useful for ordering recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention is IBM's Embedded ViaVoice Enterprise. The example multimodal device of
Also stored in RAM (168) in this example is a multimodal application (195), a module of computer program instructions capable of operating a multimodal device as an apparatus that supports ordering recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention. The multimodal application (195) implements speech recognition by accepting speech utterances for recognition from a user and sending the utterance for recognition through VoiceXML interpreter API calls to the ASR engine (150). The multimodal application (195) implements speech synthesis generally by sending words to be used as prompts for a user to the TTS engine (194). As an example of thick client architecture, the multimodal application (195) in this example does not send speech for recognition across a network to a voice server for recognition, and the multimodal application (195) in this example does not receive synthesized speech, TTS prompts and responses, across a network from a voice server. All grammar processing, voice recognition, and text to speech conversion in this example is performed in an embedded fashion in the multimodal device (152) itself.
More particularly, multimodal application (195) in this example is a user-level, multimodal, client-side computer program that provides a speech interface through which a user may provide oral speech for recognition through microphone (176), have the speech digitized through an audio amplifier (185) and a coder/decoder (‘codec’) (183) of a sound card (174) and provide the digitized speech for recognition to ASR engine (150). The multimodal application (195) may be implemented as a set or sequence of X+V pages (124) executing in a multimodal browser (196) or microbrowser that passes VoiceXML grammars and digitized speech by calls through a VoiceXML interpreter API directly to an embedded VoiceXML interpreter (192) for processing. The embedded VoiceXML interpreter (192) may in turn issue requests for speech recognition through API calls directly to the embedded ASR engine (150). The embedded VoiceXML interpreter (192) may then issue requests to the action classifier (132) to determine an action identifier in dependence upon the recognized result provided by the ASR engine (150). Multimodal application (195) also can provide speech synthesis, TTS conversion, by API calls to the embedded TTS engine (194) for voice prompts and voice responses to user input.
The multimodal application (195) is operatively coupled to the ASR engine (150) through a VoiceXML interpreter (192). In this example, the operative coupling through the VoiceXML interpreter is implemented using a VoiceXML interpreter API (316). The VoiceXML interpreter API (316) is a module of computer program instructions for use by an application level program in providing dialog instructions, speech for recognition, and other input to a VoiceXML interpreter and receiving in response voice prompts and other responses. The VoiceXML interpreter API presents the same application interface as is presented by the API of the voice services module (130) in the thin client architecture described above.
The VoiceXML interpreter (192) of
The multimodal application (195) in this example, running in a multimodal browser (196) on a multimodal device (152) that contains its own VoiceXML interpreter (192) and its own speech engine (153) with no network or VOIP connection to a remote voice server containing a remote VoiceXML interpreter or a remote speech engine, is an example of a so-called ‘thick client architecture,’ so-called because all of the functionality for processing voice mode interactions between a user and the multimodal application—as well as all or most of the functionality for ordering recognition results produced by an ASR engine for a multimodal application according to embodiments of the present invention—is implemented on the multimodal device itself.
For further explanation,
The multimodal application is operatively coupled to the ASR engine (150) through a VoiceXML interpreter (192). The operative coupling provides a data communications path from the multimodal application (195) to the ASR engine (150) for grammars, speech for recognition, and other input. The operative coupling also provides a data communications path from the ASR engine (150) to the multimodal application (195) for recognized speech, semantic interpretation results, and other results. The operative coupling may be effected with a VoiceXML interpreter (192).
The method of
Ordering recognition results produced by an ASR engine for a multimodal application (124) according to the method of
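For example, the multimodal application might reference such a grammar from an X+V page with a VoiceXML <grammar> element along the following lines (a sketch; the grammar URI shown here is hypothetical):

    <vxml:grammar src="http://www.example.com/songs.grxml"/>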
The source attribute ‘src’ specifies the URI of the definition of the exemplary grammar. Although the above example illustrates how a grammar may be referenced externally, a grammar's definition may also be expressed in-line in an X+V page.
The method of
As mentioned above, the ASR engine often returns more than one recognition result (506) for each voice utterance (502) submitted by the multimodal application (195) for recognition because the audible characteristics of each result (506) are similar. Consider, for example, that a multimodal application operates to play a song when a user speaks the song's title. In such an example, the VoiceXML interpreter (192) receives a voice utterance comprising the song title “That Girl.” The VoiceXML interpreter (192) may receive the following recognition results from an ASR engine:
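‘Bad Girl,’ ‘Dad's Girl,’ ‘That Girl,’ and ‘Third World’ (an illustrative set of results, consistent with the weighted grammar examples discussed below).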
As mentioned above, the VoiceXML interpreter (192) of
When the VoiceXML interpreter (192) stores the recognition results in an ECMAScript field variable array for a field specified in the multimodal application (195), the recognition results (506) may be stored in a field variable array using shadow variables similar to the application variable ‘application.lastresult$.’ For example, a field variable array may represent a possible recognition result through the following shadow variables:
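(A sketch, modeled on VoiceXML 2.0 shadow-variable conventions and assuming a field variable array named ‘songfield$’; the ‘weight’ shadow variable is particular to embodiments of the present invention.)

    songfield$[i].utterance       // the words recognized for result i
    songfield$[i].confidence      // the confidence level calculated by the ASR engine
    songfield$[i].inputmode       // the mode of input, for example ‘voice’
    songfield$[i].interpretation  // the semantic interpretation of the result
    songfield$[i].weight          // the weight determined from semantic interpretation scripts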
Because the VoiceXML interpreter (192) typically receives the recognition results (506) from an ASR engine in the order of each result's confidence level, the VoiceXML interpreter (192) stores the plurality of recognition results (506) in the ‘application.lastresult$’ array such that the result with the highest confidence level is ordered first in the ‘application.lastresult$’ array, the result with the second highest confidence level is ordered second in the ‘application.lastresult$’ array, the result with the third highest confidence level is ordered third in the ‘application.lastresult$’ array, and so on. For further explanation, consider that the VoiceXML interpreter (192) receives the following recognition results from an ASR engine in the following order according to each result's confidence level:
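‘Bad Girl’ (highest confidence), ‘Dad's Girl,’ ‘That Girl,’ and ‘Third World’ (lowest confidence). Only the position of ‘Bad Girl’ is dictated by the discussion below; the relative order of the remaining results is assumed here for purposes of illustration.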
The VoiceXML interpreter (192) may store the exemplary recognition results above in the ‘application.lastresult$’ array as illustrated below in the following table:
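Array element | ‘utterance’ shadow variable | ‘confidence’ shadow variable (hypothetical)
---|---|---
application.lastresult$[0] | Bad Girl | 0.90
application.lastresult$[1] | Dad's Girl | 0.80
application.lastresult$[2] | That Girl | 0.70
application.lastresult$[3] | Third World | 0.60

(The results and their order follow the illustration above; the confidence values are hypothetical and are shown for purposes of explanation only.)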
Although the text above describes using the ‘application.lastresult$’ array to store the recognition results (506) returned to the VoiceXML interpreter (192) by an ASR engine, readers will note that other ECMAScript data structures may also be used to store the recognition results (506). For example, the VoiceXML interpreter (192) may store the recognition results in a field variable array similar to the application variable array described above.
The method of
In the example of
In the example of
In the example of
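By way of illustration only, a grammar segment of the kind described in the next paragraph, with semantic interpretation scripts that statically define the weights, might be sketched as follows in JSGF-style notation, with the semantic interpretation scripts enclosed in curly brackets and the ‘songfield’ field variable array as named below:

    <grammar><![CDATA[
      #JSGF V1.0;
      grammar songs;
      <song> = Bad Girl    { songfield.weight = 100; }
             | Dad's Girl  { songfield.weight = 55; }
             | That Girl   { songfield.weight = 203; }
             | Third World { songfield.weight = 21; };
    ]]></grammar>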
The exemplary grammar segment above illustrates four semantic interpretation scripts, each semantic interpretation script enclosed in curly brackets ‘{’ and ‘}.’ A multimodal application may utilize the exemplary grammar segment above to specify a set of valid responses from a user prompted by the multimodal application to provide a song title. The exemplary grammar segment above communicates various phrases to an ASR engine for use in recognizing a voice utterance containing the name of a song. A VoiceXML interpreter stores recognized results from an ASR engine in the ‘songfield’ field variable array. In the example above, each semantic interpretation script statically defines the weight for each recognition result by instructing the VoiceXML interpreter to assign a static value to the shadow variable ‘weight’ of the ‘songfield’ field variable array. Therefore, when the recognition result ‘songfield.utterance’ contains ‘Bad Girl,’ the weight ‘songfield.weight’ is statically defined as ‘100.’ When the recognition result ‘songfield.utterance’ contains ‘Dad's Girl,’ the weight ‘songfield.weight’ is statically defined as ‘55.’ When the recognition result ‘songfield.utterance’ contains ‘That Girl,’ the weight ‘songfield.weight’ is statically defined as ‘203.’ When the recognition result ‘songfield.utterance’ contains ‘Third World,’ the weight ‘songfield.weight’ is statically defined as ‘21.’ The static values ‘100,’ ‘55,’ ‘203,’ and ‘21’ may represent the popularity of a particular song, with higher numbers representing more popular songs and lower numbers representing less popular songs. Readers will note that the example above is for explanation and not for limitation.
Rather than statically defining the weight (510) for each recognition result (506) in the example of
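The semantic interpretation scripts (516) of the grammar (104) may instead define the weight (510) for each recognition result (506) dynamically. A sketch of such a grammar segment (the ‘getSongRank’ ECMAScript function and its song data repository are the exemplary names described in the next paragraph):

    <grammar><![CDATA[
      #JSGF V1.0;
      grammar songs;
      <song> = ( Bad Girl | Dad's Girl | That Girl | Third World )
               { songfield.weight = getSongRank(songfield.utterance); };
    ]]></grammar>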
Again, the semantic interpretation script in the exemplary grammar segment above is enclosed in curly brackets ‘{’ and ‘}.’ In the example above, the semantic interpretation script dynamically defines the weight for each recognition result by instructing the VoiceXML interpreter to call the ECMAScript function ‘getSongRank’ using the song name for the recognized result as a parameter. The semantic interpretation script then instructs the VoiceXML interpreter to store the value returned from the ‘getSongRank’ function in the weight ‘songfield.weight’ for the recognized result for a song. The ‘getSongRank’ function may access the rank for the recognized song name from a song data repository. Readers will note that the example above is for explanation and not for limitation. Regardless of whether the semantic interpretation scripts (516) of the grammar (104) statically or dynamically define the weight (510) for each recognition result (506), the VoiceXML interpreter (192) may determine (508) a weight (510) for each recognition result (506) according to the method of
The method of FIG. 5 also includes sorting (512), by the VoiceXML interpreter (192), the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506). Sorting (512), by the VoiceXML interpreter (192), the plurality of recognition results (506) according to the method of FIG. 5 may be carried out by sorting (514) the recognition results (506) in dependence upon the weight (510) for each recognition result (506) and a sorting attribute (518) for the grammar (104).
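For example, the multimodal application might specify such a sorting attribute on the grammar element as follows (a sketch; the ‘sort-weight’ attribute and the grammar URI ‘grammar.le’ are the exemplary names discussed in the next paragraph):

    <vxml:grammar src="grammar.le" sort-weight="true"/>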
The source attribute ‘src’ specifies the URI of the definition of an exemplary grammar ‘grammar.le’ that includes semantic interpretation scripts defining the weights for each recognition result statically or dynamically as explained above. The exemplary sorting attribute ‘sort-weight’ specifies whether to sort the plurality of recognition results (506) according to a value of the weight (510) for each recognition result (506). A value of ‘true’ for the exemplary sorting attribute ‘sort-weight’ specifies sorting the recognition results according to a value of the weight (510) for each recognition result (506) from highest to lowest. A value of ‘false,’ the default value, for the exemplary sorting attribute ‘sort-weight’ specifies sorting the recognition results according to the confidence level calculated by the ASR engine. Although in the example above, a value of ‘true’ for the weight sorting attribute ‘sort-weight’ specifies sorting the recognition results according to a value of the weight (510) for each recognition result (506) from highest to lowest, readers will note that in some embodiments, a value of true may specify sorting the recognition results from lowest to highest.
For further explanation of sorting (514), by the VoiceXML interpreter (192), the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and a sorting attribute (518) for the grammar (104), consider the following table containing exemplary recognition results returned by the ASR engine and the corresponding weights (510) for each recognition result (506) determined by the VoiceXML interpreter (192) according to semantic interpretation scripts (516) of the grammar (104):
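Recognition result (‘utterance’) | Confidence level (‘confidence’) | Weight (‘weight’)
---|---|---
Bad Girl | 0.90 | 100
Dad's Girl | 0.80 | 55
That Girl | 0.70 | 203
Third World | 0.60 | 21

(The results and weights follow the static-weight example above; the confidence values are hypothetical and are shown for purposes of explanation only.)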
In the exemplary table above, each of the exemplary recognition results is matched by an ASR engine to the voice utterance received in a VoiceXML interpreter. The confidence level for each recognition result is calculated by the ASR engine and returned to the VoiceXML interpreter along with each recognition result. As mentioned above, the VoiceXML interpreter may store the recognition results in the ECMAScript ‘application.lastresult$’ array in the order returned by the ASR engine, which is from highest confidence level to lowest confidence level. The recognition result may be stored in the ‘application.lastresult$.utterance’ shadow variable and the confidence level may be stored in the ‘application.lastresult$.confidence’ shadow variable. The weights are determined by the VoiceXML interpreter according to the semantic interpretation scripts in the grammar for the rules of the grammar used to match the recognition results. The weight for each recognition result may be stored in the ‘application.lastresult$.weight’ shadow variable.
When the sorting attribute (518) specifies sorting the plurality of recognition results (506) according to a value of the weight (510) for each recognition result (506) such as, for example, ‘sort-weight=“true”,’ then the VoiceXML interpreter may sort (514) the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and the sorting attribute (518) to produce the following results in the ‘application.lastresult$’ array:
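Array element | ‘utterance’ | ‘weight’
---|---|---
application.lastresult$[0] | That Girl | 203
application.lastresult$[1] | Bad Girl | 100
application.lastresult$[2] | Dad's Girl | 55
application.lastresult$[3] | Third World | 21

(An illustration, continuing the exemplary results and weights above, with the results ordered by weight from highest to lowest.)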
Readers will note that after the VoiceXML interpreter sorts (514) the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and the sorting attribute (518), the song title ‘That Girl’ is now in the first element of the ‘application.lastresult$’ array because ‘That Girl’ has the highest weight. In the recognition results (506) returned to the VoiceXML interpreter by the ASR engine, however, the song title ‘Bad Girl’ was the first element in the ‘application.lastresult$’ array because it had the highest confidence level. The VoiceXML interpreter may sort (514) the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and the sorting attribute (518) according to the method of
In the example of
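The multimodal application may, for example, specify such a sorting attribute on the grammar element as follows (a sketch; ‘sort-expr’ and the ‘reorderWeights( )’ ECMAScript function are the exemplary attribute and script discussed in the next paragraph):

    <vxml:grammar src="grammar.le" sort-expr="reorderWeights()"/>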
The source attribute ‘src’ specifies the URI of the definition of an exemplary grammar ‘grammar.le’ that includes semantic interpretation scripts defining the weights for each recognition result statically or dynamically as explained above. The exemplary sorting attribute ‘sort-expr’ specifies sorting the plurality of recognition results (506) according to an ECMAScript script ‘reorderWeights( ).’ For further explanation, again consider the exemplary table above containing recognition results returned by the ASR engine and the corresponding weights (510) for each recognition result (506) determined by the VoiceXML interpreter (192) according to semantic interpretation scripts (516) of the grammar (104).
As in the earlier discussion of that table, each recognition result is matched by the ASR engine to the voice utterance, the confidence level for each result is calculated by the ASR engine, and the results, confidence levels, and weights may be stored in the ‘application.lastresult$.utterance,’ ‘application.lastresult$.confidence,’ and ‘application.lastresult$.weight’ shadow variables, respectively. After the VoiceXML interpreter determines weights for each recognition result in the ‘application.lastresult$’ array, the VoiceXML interpreter may execute the ‘reorderWeights( )’ script to reorder the ‘application.lastresult$’ array based on the weight of each recognition result.
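A minimal sketch of such a script, assuming the recognition results are held in the ‘application.lastresult$’ array with ‘confidence’ and ‘weight’ shadow variables as described above, and that the array supports the standard ECMAScript sort method:

    function reorderWeights() {
      // Reorder the recognition results in descending order of the product
      // of each result's ASR confidence level and its weight.
      application.lastresult$.sort(function (a, b) {
        return (b.confidence * b.weight) - (a.confidence * a.weight);
      });
    }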
In the example above, the ‘reorderWeights( )’ script sorts the recognition results (506) based on both the weight (510) for each result and the confidence level for each result. In particular, the exemplary ‘reorderWeights( )’ script sorts the recognition results (506) based on the product of the confidence level for each recognition result (506) and the weight (510) for each recognition result (506). Readers will note, however, that the exemplary ‘reorderWeights( )’ script is used for explanation only and not for limitation. When the sorting attribute (518) specifies sorting the plurality of recognition results (506) according to the exemplary ECMAScript script ‘reorderWeights( ),’ then the VoiceXML interpreter may sort (514) the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and the sorting attribute (518) to produce the following results in the ‘application.lastresult$’ array:
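Array element | ‘utterance’ | confidence × weight
---|---|---
application.lastresult$[0] | That Girl | 0.70 × 203 = 142.1
application.lastresult$[1] | Bad Girl | 0.90 × 100 = 90
application.lastresult$[2] | Dad's Girl | 0.80 × 55 = 44
application.lastresult$[3] | Third World | 0.60 × 21 = 12.6

(An illustration using the hypothetical confidence values introduced above.)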
Readers will note that after the VoiceXML interpreter sorts (514) the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and the sorting attribute (518), the song title ‘That Girl’ is now in the first element of the ‘application.lastresult$’ array because ‘That Girl’ has the highest product for the weight and confidence level. In the recognition results (506) returned to the VoiceXML interpreter by the ASR engine, however, the song title ‘Bad Girl’ was the first element in the ‘application.lastresult$’ array because it had the highest confidence level. The VoiceXML interpreter may sort (514) the plurality of recognition results (506) in dependence upon the weight (510) for each recognition result (506) and the sorting attribute (518) according to the method of
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for ordering recognition results produced by an ASR engine for a multimodal application. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.