1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for speech-enabled web content searching using a multimodal browser.
2. Description of Related Art
User interaction with applications running on small devices through a keyboard or stylus has become increasingly limited and cumbersome as those devices have become increasingly smaller. In particular, small handheld devices like mobile phones and PDAs serve many functions and contain sufficient processing power to support user interaction through multimodal access, that is, by interaction in non-voice modes as well as voice mode. Devices which support multimodal access combine multiple user input modes or channels in the same interaction allowing a user to interact with the applications on the device simultaneously through multiple input modes or channels. The methods of input include speech recognition, keyboard, touch screen, stylus, mouse, handwriting, and others. Multimodal input often makes using a small device easier.
Multimodal applications are often formed by sets of markup documents served up by web servers for display on multimodal browsers. A ‘multimodal browser,’ as the term is used in this specification, generally means a web browser capable of receiving multimodal input and interacting with users with multimodal output, where modes of the multimodal input and output include at least a speech mode. Multimodal browsers typically render web pages written in XHTML+Voice (‘X+V’). X+V provides a markup language that enables users to interact with a multimodal application, often running on a server, through spoken dialog in addition to traditional means of input such as keyboard strokes and mouse pointer action. Visual markup tells a multimodal browser what the user interface is to look like and how it is to behave when the user types, points, or clicks. Similarly, voice markup tells a multimodal browser what to do when the user speaks to it. For visual markup, the multimodal browser uses a graphics engine; for voice markup, the multimodal browser uses a speech engine. X+V adds spoken interaction to standard web content by integrating XHTML (eXtensible Hypertext Markup Language) and speech recognition vocabularies supported by VoiceXML. For visual markup, X+V includes the XHTML standard. For voice markup, X+V includes a subset of VoiceXML. For synchronizing the VoiceXML elements with corresponding visual interface elements, X+V uses events. X+V includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific events. Voice interaction features are integrated with XHTML and can consequently be used directly within XHTML content.
In addition to X+V, multimodal applications also may be implemented with Speech Application Language Tags (‘SALT’). SALT is a markup language developed by the SALT Forum. Both X+V and SALT are markup languages for creating applications that use voice input/speech recognition and voice output/speech synthesis. Both SALT applications and X+V applications use underlying speech recognition and synthesis technologies, or ‘speech engines,’ to do the work of recognizing and generating human speech. As markup languages, both X+V and SALT provide markup-based programming environments for using speech engines in an application's user interface. Both languages have language elements, markup tags, that specify what the speech-recognition engine should listen for and what the synthesis engine should ‘say.’ Whereas X+V combines XHTML, VoiceXML, and the XML Events standard to create multimodal applications, SALT does not provide a standard visual markup language or eventing model. Rather, it is a low-level set of tags for specifying voice interaction that can be embedded into other environments. In addition to X+V and SALT, multimodal applications may be implemented in Java with a Java speech framework, in C++, and with other technologies in other environments as well.
As smaller, handheld devices have become increasingly popular, more and more users are accessing web content through multimodal browsers that operate on these small, handheld devices. In order to aid users in finding relevant information, web pages may be designed to provide web content searching capabilities using a multimodal markup language such as X+V. Such speech-enabled searching of web content, however, is currently available only on web pages that include speech-enabled web content, that is, web content implemented using voice markup such as X+V. This drawback occurs because speech-enabled web content searching is currently implemented in the web page that contains the web content. Much of the current content on the web, however, is not speech-enabled because the web pages containing this content do not exploit the voice capabilities provided by markup languages that include voice markup such as X+V. As such, readers will appreciate that room for improvement exists in speech-enabled web content searching.
Speech-enabled web content searching using a multimodal browser implemented with one or more grammars in an automatic speech recognition (‘ASR’) engine, with the multimodal browser operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal browser operatively coupled to the ASR engine, includes: rendering, by the multimodal browser, web content; searching, by the multimodal browser, the web content for a search phrase, including yielding a matched search result, the search phrase specified by a first voice utterance received from a user and a search grammar; and performing, by the multimodal browser, an action in dependence upon the matched search result, the action specified by a second voice utterance received from the user and an action grammar.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, apparatus, and products for speech-enabled web content searching using a multimodal browser according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with
The multimodal device (152) of
In the exemplary system of
Speech-enabled web content searching using a multimodal browser (196) is implemented with one or more grammars in the ASR engine (150). A grammar communicates to the ASR engine (150) the words and sequences of words that currently may be recognized. A grammar typically includes grammar rules that advise an ASR engine or a voice interpreter which words and word sequences presently can be recognized. Grammars for use according to embodiments of the present invention may be expressed in any format supported by an ASR engine, including, for example, the Java Speech Grammar Format (‘JSGF’), the format of the W3C Speech Recognition Grammar Specification (‘SRGS’), the Augmented Backus-Naur Form (‘ABNF’) from the IETF's RFC2234, in the form of a stochastic grammar as described in the W3C's Stochastic Language Models (N-Gram) Specification, and in other grammar formats as may occur to those of skill in the art. Grammars typically operate as elements of dialogs, such as, for example, a VoiceXML <menu> or an X+V <form>. A grammar's definition may be expressed in-line in a dialog, or the grammar may be implemented externally in a separate grammar document and referenced from within a dialog by URI. Here is an example of a grammar expressed in JSGF:
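```
<grammar scope="dialog"><![CDATA[
  #JSGF V1.0;
  grammar command;
  // a reconstruction of the stripped example; the optional
  // 'remind me to' phrase and the X+V wrapper are illustrative
  <command> = [remind me to] (call | phone | telephone) <name> <when>;
  <name> = bob | martha | joe | pete | chris | john | artoush | tom;
  <when> = today | this afternoon | tomorrow | next week;
]]></grammar>
```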
In this example, the elements named <command>, <name>, and <when> are rules of the grammar. Rules are a combination of a rulename and an expansion of a rule that advises an ASR engine or a VoiceXML interpreter which words presently can be recognized. In the example above, rule expansions include conjunction and disjunction, and the vertical bars ‘|’ mean ‘or.’ An ASR engine or a VoiceXML interpreter processes the rules in sequence, first <command>, then <name>, then <when>. The <command> rule accepts for recognition ‘call’ or ‘phone’ or ‘telephone’ plus, that is, in conjunction with, whatever is returned from the <name> rule and the <when> rule. The <name> rule accepts ‘bob’ or ‘martha’ or ‘joe’ or ‘pete’ or ‘chris’ or ‘john’ or ‘artoush’ or ‘tom,’ and the <when> rule accepts ‘today’ or ‘this afternoon’ or ‘tomorrow’ or ‘next week.’ The command grammar as a whole matches utterances like these, for example:
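“phone bob next week,”
“telephone martha this afternoon,”
“remind me to call chris tomorrow,” and
“remind me to phone pete today.”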
The exemplary grammar above is implemented using static grammar rules. Readers will note, however, that grammars useful according to embodiments of the present invention may also be implemented using dynamically defined grammar rules that are specified by the grammar as rules that are not to be processed by the ASR until after at least one static rule has been matched. Such dynamic rules are dynamically defined at run time as a new static rule in dependence upon a matched value of a previously matched static rule. The following grammar, for example:
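```
<grammar id="exampleGrammar">
  <!-- a reconstruction; the command phrasing, the JSP path, and the
       query syntax in the expansions below are illustrative -->
  <command> = add <item> <<brand>>(<item>) to my shopping list
  <item> = peppers | tomatoes | toothpaste
  <<brand>> = http://groceries.com/brand.jsp
</grammar>
```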
uses a double-bracket syntax and a parameter list to specify the <<brand>> rule as a dynamic rule that is not to be processed by an ASR until after the <item> rule has been matched. In this <<brand>> example, the static <command> rule contains a rule reference:
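```
<<brand>>(<item>)
```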
The dynamic <<brand>> rule is initially defined in this example grammar only by a URL:
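```
<<brand>> = http://groceries.com/brand.jsp
```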
The URL identifies a computer resource capable of dynamically defining at run time the dynamic <<brand>> rule of the grammar as a new static rule in dependence upon the matched value from the <item> rule, the rule required to be matched before the dynamic rule is processed. In this example, the computer resource so identified is a Java Server Page (‘JSP’) located at http://groceries.com. The JSP is a computer resource that is programmed to define the dynamic <<brand>> rule of the grammar as a new static rule in dependence upon the matched value from the <item> rule. The ASR engine expands the definition of the <<brand>> rule with the results of the match from the <item> rule and provides the expansion to the JSP page to return a new static rule. In this way, the ASR engine may dynamically define the dynamic rule at run time as a new static rule by expanding the definition of the dynamic rule with a matched value of the referenced static <item> rule. If the <item> rule were matched with “peppers,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
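```
<<brand>> = http://groceries.com/brand.jsp?item="peppers"
```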
And the new static rule returned from the JSP page may be, for example:
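```
<brand> = brand a | brand b | brand c
```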
If the <item> rule were matched with “tomatoes,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
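```
<<brand>> = http://groceries.com/brand.jsp?item="tomatoes"
```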
And the new static rule returned from the JSP page may be, for example:
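```
<brand> = brand d | brand e | brand f
```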
If the <item> rule were matched with “toothpaste,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
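```
<<brand>> = http://groceries.com/brand.jsp?item="toothpaste"
```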
And the new static rule returned from the JSP page may be, for example:
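```
<brand> = brand g | brand h | brand i
```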
And so on—with a different definition of the new static rule possible for each matched value of the referenced static <item> rule.
Note that in this example, the dynamic <<brand>> rule occurs in document order after the static <item> rule whose match value is required before the dynamic rule can be processed. In this example, the ASR engine typically will match the <item> rule in document order before processing the <<brand>> rule. This document order, however, is not a limitation of the present invention. The static and dynamic rules may occur in any document order in the grammar, and, if a dynamic rule is set forth in the grammar ahead of a static rule upon which the dynamic rule depends, then the ASR engine is configured to make more than one pass through the grammar, treating the dynamic rule in the meantime as a rule that matches any speech in the utterance until a next rule match, a next token match, or the end of processing of the pertinent user utterance.
As mentioned above, the multimodal browser (196) provides an execution environment for the web page (195). To support the multimodal browser (196) in speech-enabled web page searching, the system of
A multimodal device on which a multimodal browser operates is an automated device, that is, automated computing machinery or a computer program running on an automated device, that is capable of accepting from users more than one mode of input (keyboard, mouse, stylus, and so on, including speech input) and also providing more than one mode of output, such as graphic, speech, and so on. A multimodal device is generally capable of accepting speech input from a user, digitizing the speech, and providing digitized speech to a speech engine for recognition. A multimodal device may be implemented, for example, as a voice-enabled browser on a laptop, a voice browser on a telephone handset, an online game implemented with Java on a personal computer, and with other combinations of hardware and software as may occur to those of skill in the art.
The system of
Each of the example multimodal devices (152) in the system of
As mentioned, a multimodal device according to embodiments of the present invention is capable of providing speech to a speech engine for recognition. The speech engine (153) of
The multimodal browser (196) in this example provides speech for recognition and text for speech synthesis to a speech engine through the VoiceXML interpreter (192). As shown in
The VoiceXML interpreter (192) provides grammars, speech for recognition, and text prompts for speech synthesis to the speech engine (153), and the VoiceXML interpreter (192) returns to the multimodal browser the speech engine (153) output in the form of recognized speech, semantic interpretation results, and digitized speech for voice prompts. In a thin client architecture, the VoiceXML interpreter (192) is located remotely from the multimodal client device in a voice server (151); the API for the VoiceXML interpreter is still implemented in the multimodal device (152), with the API modified to communicate voice dialog instructions, speech for recognition, and text and voice prompts to and from the VoiceXML interpreter on the voice server (151). For ease of explanation, only one (107) of the multimodal devices (152) in the system of
The use of these four example multimodal devices (152) is for explanation only, not for limitation of the invention. Any automated computing machinery capable of accepting speech from a user, providing the speech digitized to an ASR engine, and receiving and playing speech prompts and responses from the ASR engine may be improved to function as a multimodal device according to embodiments of the present invention.
The system of
The system of
The system of
The arrangement of the multimodal devices (152), the web server (147), the voice server (151), and the data communications network (100) making up the exemplary system illustrated in
Speech-enabled web content searching using a multimodal browser according to embodiments of the present invention in a thin client architecture may be implemented with one or more voice servers, computers, that is, automated computing machinery, that provide speech recognition and speech synthesis. For further explanation, therefore,
The exemplary voice server (151) of
Stored in RAM (168) is a voice server application (188), a module of computer program instructions capable of operating a voice server in a system that is configured for speech-enabled web content searching using a multimodal browser according to embodiments of the present invention. Voice server application (188) provides voice recognition services for multimodal devices by accepting requests for speech recognition and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and text as string representations of scripts for semantic interpretation. Voice server application (188) also includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal browsers that provide an execution environment for web pages implemented using, for example, HTML, XHTML, or X+V. Voice server application (188) may be implemented as a web server, implemented in Java, C++, or another language, that supports speech-enabled web content searching using a multimodal browser according to embodiments of the present invention.
The voice server (151) in this example includes a speech engine (153). The speech engine is a functional module, typically a software module, although it may include specialized hardware also, that does the work of recognizing and synthesizing human speech. The speech engine (153) includes an automated speech recognition (‘ASR’) engine (150) for speech recognition and a text-to-speech (‘TTS’) engine (194) for generating speech. The speech engine (153) also includes a grammar (104), a lexicon (106), and a language-specific acoustic model (108). The language-specific acoustic model (108) is a data structure, a table or database, for example, that associates Speech Feature Vectors with phonemes representing, to the extent that it is practically feasible to do so, all pronunciations of all the words in a human language. The lexicon (106) is an association of words in text form with phonemes representing pronunciations of each word; the lexicon effectively identifies words that are capable of recognition by an ASR engine. Also stored in RAM (168) is a Text To Speech (‘TTS’) Engine (194), a module of computer program instructions that accepts text as input and returns the same text in the form of digitally encoded speech, for use in providing speech as prompts for and responses to users of multimodal systems.
The voice server application (188) in this example is configured to receive, from a multimodal client located remotely across a network from the voice server, digitized speech for recognition from a user and pass the speech along to the ASR engine (150) for recognition. ASR engine (150) is a module of computer program instructions, also stored in RAM in this example. In carrying out speech-enabled web content searching using a multimodal browser, the ASR engine (150) receives speech for recognition in the form of at least one digitized word and uses frequency components of the digitized word to derive a Speech Feature Vector (‘SFV’). An SFV may be defined, for example, by the first twelve or thirteen Fourier or frequency domain components of a sample of digitized speech. The ASR engine can use the SFV to infer phonemes for the word from the language-specific acoustic model (108). The ASR engine then uses the phonemes to find the word in the lexicon (106).
In the example of
Also stored in RAM (168) is an operating system (154). Operating systems useful in voice servers according to embodiments of the present invention include UNIX™, Linux™, Microsoft NT™, IBM's AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154), voice server application (188), VoiceXML interpreter (192), speech engine (153), including ASR engine (150), and TTS engine (194) in the example of
Voice server (151) of
Voice server (151) of
The example voice server of
The exemplary voice server (151) of
For further explanation,
The multimodal browser (196) of
The multimodal device (152) of
In addition to the multimodal server application (188), the voice server (151) also has installed upon it a speech engine (153) with an ASR engine (150), a grammar (104), a lexicon (106), a language-specific acoustic model (108), and a TTS engine (194), as well as a VoiceXML interpreter (192) that includes a form interpretation algorithm (193). VoiceXML interpreter (192) interprets and executes VoiceXML dialog (522) received from the multimodal browser (196) and provided to VoiceXML interpreter (192) through voice server application (188). VoiceXML input to VoiceXML interpreter (192) may originate from the multimodal browser operating on the multimodal device (152) for speech-enabled web content searching according to embodiments of the present invention. The VoiceXML interpreter (192) administers such dialogs by processing the dialog instructions sequentially in accordance with a VoiceXML Form Interpretation Algorithm (‘FIA’) (193).
VOIP stands for ‘Voice Over Internet Protocol,’ a generic term for routing speech over an IP-based data communications network. The speech data flows over a general-purpose packet-switched data communications network, instead of traditional dedicated, circuit-switched voice transmission lines. Protocols used to carry voice signals over the IP data communications network are commonly referred to as ‘Voice over IP’ or ‘VOIP’ protocols. VOIP traffic may be deployed on any IP data communications network, including data communications networks lacking a connection to the rest of the Internet, for instance on a private building-wide local area data communications network or ‘LAN.’
Many protocols are used to effect VOIP. The two most popular types of VOIP are effected with the IETF's Session Initiation Protocol (‘SIP’) and the ITU's protocol known as ‘H.323.’ SIP clients use TCP and UDP port 5060 to connect to SIP servers. SIP itself is used to set up and tear down calls for speech transmission. VOIP with SIP then uses RTP for transmitting the actual encoded speech. Similarly, H.323 is an umbrella recommendation from the standards branch of the International Telecommunications Union that defines protocols to provide audio-visual communication sessions on any packet data communications network.
The apparatus of
Voice server application (188) provides voice recognition services for multimodal devices by accepting dialog instructions, VoiceXML segments, and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and output from execution of semantic interpretation scripts—as well as voice prompts. Voice server application (188) includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal browsers and that provide responses to HTTP requests from multimodal browsers running on multimodal devices.
The voice server application (188) receives speech for recognition from a user and passes the speech through API calls to VoiceXML interpreter (192) which in turn uses an ASR engine (150) for speech recognition. The ASR engine receives digitized speech for recognition, uses frequency components of the digitized speech to derive an SFV, uses the SFV to infer phonemes for the word from the language-specific acoustic model (108), and uses the phonemes to find the speech in the lexicon (106). The ASR engine then compares speech found as words in the lexicon to words in a grammar (104) to determine whether words or phrases in speech are recognized by the ASR engine.
In the example of
In the example of
Speech-enabled web content searching using a multimodal browser according to embodiments of the present invention in thick client architectures is generally implemented with multimodal devices, that is, automated computing machinery or computers. In the system of
The example multimodal device (152) of
The speech engine (153) in this kind of embodiment, a thick client architecture, often is implemented as an embedded module in a small form factor device such as a handheld device, a mobile phone, PDA, and the like. An example of an embedded speech engine useful for speech-enabled web content searching using a multimodal browser according to embodiments of the present invention is IBM's Embedded ViaVoice Enterprise. The example multimodal device of
Also stored in RAM (168) in this example are a web page (195) and a multimodal browser (196). The web page (195) contains web content implemented according to HTML, XHTML, or X+V. The multimodal browser (196) provides an execution environment for the web page (195). In the example of
The multimodal browser (196) implements speech recognition by accepting speech utterances for recognition from a user and sending the utterance for recognition through a VoiceXML interpreter API (316) to the ASR engine (150). The multimodal browser (196) implements speech synthesis generally by sending words to be used as prompts for a user to the TTS engine (194). As an example of thick client architecture, the multimodal browser (196) in this example does not send speech for recognition across a network to a voice server for recognition, and the multimodal browser (196) in this example does not receive synthesized speech, TTS prompts and responses, across a network from a voice server. All grammar processing, voice recognition, and text to speech conversion in this example is performed in an embedded fashion in the multimodal device (152) itself.
More particularly, multimodal browser (196) in this example is a user-level, multimodal, client-side computer program that provides a speech interface through which a user may provide oral speech for recognition through microphone (176), have the speech digitized through an audio amplifier (185) and a coder/decoder (‘codec’) (183) of a sound card (174) and provide the digitized speech for recognition to ASR engine (150). The multimodal browser (196) may pass VoiceXML grammars and digitized speech by calls through a VoiceXML interpreter API directly to an embedded VoiceXML interpreter (192) for processing. The embedded VoiceXML interpreter (192) may in turn issue requests for speech recognition through API calls directly to the embedded ASR engine (150). The embedded VoiceXML interpreter (192) may then issue requests to the action classifier (132) to determine an action identifier in dependence upon the recognized result provided by the ASR engine (150). The multimodal browser (196) also can provide speech synthesis, TTS conversion, by API calls to the embedded TTS engine (194) for voice prompts and voice responses to user input.
In the example of
In the example of
For further explanation,
The method of
In the example of
Speech-enabled web content searching using a multimodal browser (196) according to the method of
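The search grammar may be specified, for example, with a <grammar> element along these lines (a sketch; the URI is illustrative):

```
<vxml:grammar src="http://www.example.com/search.grammar.jsp"/>
```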
The source attribute ‘src’ specifies the URI of the definition of the exemplary grammar. Although the above example illustrates how a grammar may be referenced externally, a grammar's definition may also be expressed in-line in the <grammar> element.
The method of
The method of
The method of
For further explanation,
The method of
Searching (506), by the multimodal browser (196), the web content (504) for a search phrase, including yielding a matched search result (508) according to the method of
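A search grammar sufficient for this purpose might be sketched as follows; the command token and the particular content words shown are illustrative:

```
<grammar xml:lang="en-US" root="search" version="1.0">
  <rule id="search" scope="public">
    search <ruleref uri="#words"/>
  </rule>
  <rule id="words">
    <one-of>
      <!-- one alternative for each word of the rendered web content -->
      <item> multimodal </item>
      <item> interaction </item>
      <item> browser </item>
      <!-- ... -->
    </one-of>
  </rule>
</grammar>
```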
In the exemplary search grammar above, the grammar rule <words> is formed using an alternative list of all the words comprising the exemplary web content illustrated in
In the example method of
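The multimodal browser may collect the utterance through a VoiceXML dialog along these lines (a sketch consistent with the description below; the form id and grammar URI are illustrative):

```
<vxml:form id="vSearchForm">
  <vxml:field name="vSearchField">
    <vxml:grammar src="http://www.example.com/search.grammar.jsp"/>
    <vxml:filled>
      <vxml:throw event="do.search"/>
    </vxml:filled>
  </vxml:field>
</vxml:form>
```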
The multimodal browser (196) may provide the exemplary VoiceXML <form> element to a VoiceXML interpreter upon rendering (500) the web content (504) to the user so as to receive any utterance provided by the user in response to the rendered web content. The exemplary VoiceXML <form> element above activates the multimodal browser (196) to receive (604) the first voice utterance (606) from a user because the VoiceXML <form> element is a dialog for presenting information and collecting data. The exemplary VoiceXML <form> element above specifies that data is to be collected in the VoiceXML field identified as “vSearchField” using a grammar specified in the VoiceXML <grammar> element. If data is successfully collected, or ‘filled,’ in the “vSearchField” field, then the event “do.search” is triggered, or ‘thrown.’ The event “do.search” specifies that the multimodal browser (196) is to perform a search of the web content (504) based on the data collected in the “vSearchField” field. The manner in which the multimodal browser (196) collects data into the “vSearchField” field and the manner in which the multimodal browser (196) searches the web content are discussed in more detail below.
Searching (506), by the multimodal browser (196), the web content (504) for a search phrase, including yielding a matched search result (508) according to the method of
In a thin client architecture, the multimodal browser (196) may provide the first voice utterance (606) and the search grammar (602) to the ASR engine as part of a call by the multimodal browser (196) to a voice services module (130 on
As mentioned above, the multimodal browser (196) of
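In VoiceXML, the ‘application.lastresult$’ variable holds the most recent recognition result through shadow variables such as:

application.lastresult$.confidence, the confidence level of the recognition result, a value from 0.0 to 1.0;
application.lastresult$.utterance, the raw string of words recognized;
application.lastresult$.inputmode, the mode in which the input was provided, such as ‘voice’; and
application.lastresult$.interpretation, the result of processing the utterance with any semantic interpretation scripts.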
Using the ‘application.lastresult$’ variable above, the multimodal browser (196) may identify the recognition results stored in ‘application.lastresult$.utterance’ as the search phrase (610). When the multimodal browser (196) stores the recognition results in an ECMAScript field variable for a field specified by the multimodal browser (196), the recognition results may be stored in a field variable array using shadow variables similar to the application variable ‘application.lastresult$.’ For example, a field variable array may represent a possible recognition result through the following shadow variables:
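name$[i].confidence;
name$[i].utterance;
name$[i].inputmode; and
name$[i].interpretation,

where ‘name$’ is a placeholder for the field identifier, ‘vSearchField$’ in this example (a sketch of the conventional VoiceXML shadow-variable form).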
In the example method of
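A minimal sketch of such a function, consistent with the description that follows (the details of the DOM traversal are illustrative):

```
function findText(text, nodes, results) {
  // Recursively walk the DOM, recording each node whose text
  // contains the search phrase.
  for (var i = 0; i < nodes.length; i++) {
    var node = nodes[i];
    if (node.nodeType == 3 && node.nodeValue.indexOf(text) != -1) {
      // a text node containing the search phrase
      results.push(node.parentNode);
    } else if (node.childNodes && node.childNodes.length > 0) {
      findText(text, node.childNodes, results);
    }
  }
}
```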
The exemplary ECMAScript function ‘findText’ above receives parameters ‘text,’ ‘nodes,’ and ‘results.’ The ‘text’ parameter stores the search phrase (610). The ‘nodes’ parameter stores the address of a list of DOM nodes that contains the web content (504). The ‘results’ parameter stores a pointer to a list of DOM nodes that represent the web content (504) that contains the search phrase (610). The ‘findText’ function above recursively traverses through each node of the DOM that represents the web content (504), determines whether each node contains the search phrase (610), and stores the identifiers for each node that contains the search phrase (610) in the ‘results’ list.
The multimodal browser (196) may match the search phrase (610) to at least one portion of the web content (504) using the exemplary ‘findText’ function above in a ECMAScript script that executes after the multimodal browser (196) determines (608) the search phrase (610). For further explanation, consider again that the exemplary VoiceXML <form> element above used to receive (604) the first voice utterance (606) from a user and to determine (608) the search phrase (610). The exemplary VoiceXML <form> element throws a ‘do.search’ event if the multimodal browser (196) successfully determines (608) the search phrase (610) using the search grammar (602). In response to a ‘do.search’ event being thrown, the following exemplary ECMAScript script may be run to execute the ‘findText’ function:
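```
<script type="text/javascript" declare="declare" ev:event="do.search"
        ev:observer="body">
  // the XML Events attributes binding this handler to the
  // 'do.search' event are one plausible form (illustrative)
  var searchResults = new Array();
  findText(application.lastresult$.utterance, document.body.childNodes,
           searchResults);
</script>
```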
The exemplary ECMAScript script above is executed by the multimodal browser (196) when the ‘do.search’ event is thrown. The ‘findText’ function receives the search phrase (610) through the ‘application.lastresult$’ variable. The ‘document.body.childNodes’ variable represents a list of DOM nodes through which the ‘findText’ function traverses to identify nodes that contain the search phrase (610). The ‘searchResults’ variable is an array used to store the list of nodes that contain the search phrase (610). Each node listed in the ‘searchResults’ array, therefore, contains the matched search result (508).
In the method of
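One plausible form of such a function (the hyperlink URL pattern, the highlighting style, and the superscript markup are illustrative):

```
function augmentContent(text, results) {
  // For each node containing the matched search result, replace the
  // matched text with a highlighted hyperlink to its definition,
  // followed by a highlighted, superscripted numeral.
  for (var i = 0; i < results.length; i++) {
    var node = results[i];
    var markup = "<a style='background-color: yellow' " +
        "href='http://www.thefreedictionary.com/" + text + "'>" + text +
        "</a><sup style='background-color: yellow'>1</sup>";
    node.innerHTML = node.innerHTML.replace(text, markup);
  }
}
```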
The exemplary ECMAScript function ‘augmentContent’ above receives parameters ‘text’ and ‘results.’ The ‘text’ parameter stores the search phrase (610). The ‘results’ parameter stores a pointer to a list of DOM nodes that represent the web content (504) that contains the search phrase (610). The ‘augmentContent’ function above parses the nodes in the ‘results’ list and appends exemplary additional web content to the matched search result (508), which is the text contained in a node of the ‘results’ list that matches the search phrase (610). The exemplary additional web content contained in the ‘augmentContent’ function above turns the matched search result (508) into a hyperlink to the definition of the matched search result (508) provided by TheFreeDictionary. The exemplary additional web content above also appends a superscripted numeral ‘1’ to the matched search result hyperlink along with metadata used to highlight the matched search result hyperlink and the superscripted numeral ‘1.’ When rendered by the multimodal browser (196), the exemplary additional content may resemble the additional web content on
The multimodal browser (196) of
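The ‘do.search’ handler sketched above may be extended, for example, to invoke both functions:

```
<script type="text/javascript" declare="declare" ev:event="do.search"
        ev:observer="body">
  var searchResults = new Array();
  findText(application.lastresult$.utterance, document.body.childNodes,
           searchResults);
  augmentContent(application.lastresult$.utterance, searchResults);
</script>
```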
The exemplary ECMAScript script above is executed by the multimodal browser (196) when the ‘do.search’ event is thrown. After executing the ‘findText’ function as described above, the multimodal browser (196) executes the ‘augmentContent’ function to parse the nodes in the ‘searchResults’ list and append additional web content to the matched search result (508) contained in the parsed nodes.
In the method of
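An action grammar might be sketched as follows; the rule structure and the semantic interpretation tags follow the description below, while the exact markup is illustrative:

```
<grammar xml:lang="en-US" root="action" version="1.0">
  <rule id="action" scope="public">
    <one-of>
      <item> click <ruleref uri="#word"/> <tag>$.search-action="click"</tag> </item>
      <item> map <ruleref uri="#word"/> <tag>$.search-action="map"</tag> </item>
      <item> google <ruleref uri="#word"/> <tag>$.search-action="google"</tag> </item>
    </one-of>
  </rule>
  <rule id="word">
    <one-of>
      <item> interaction </item>
    </one-of>
  </rule>
</grammar>
```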
In the exemplary action grammar above, the exemplary grammar rule <word> is formed from the matched search result ‘interaction’ and added to a grammar template used to recognize phrases such as:
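“click interaction,”
“map interaction,” and
“google interaction.”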
The multimodal browser (196) of
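The ‘do.search’ handler may then grow a third step, for example:

```
<script type="text/javascript" declare="declare" ev:event="do.search"
        ev:observer="body">
  var searchResults = new Array();
  findText(application.lastresult$.utterance, document.body.childNodes,
           searchResults);
  augmentContent(application.lastresult$.utterance, searchResults);
  // build the action grammar from the matched search result (sketch)
  createActionGrammar(application.lastresult$.utterance, searchResults);
</script>
```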
The exemplary ECMAScript script above is executed by the multimodal browser (196) when the ‘do.search’ event is thrown. After executing the ‘findText’ function and the ‘augmentContent’ function as described above, the multimodal browser (196) executes the ‘createActionGrammar’ function, which creates an action grammar by adding the matched search result (508) to a grammar rule in a grammar template.
Performing (516), by the multimodal browser (196), an action in dependence upon the matched search result (508) according to the method of
In the method of
The semantic interpretation scripts of the exemplary action grammar above store an exemplary action identifier, such as ‘click,’ ‘map,’ or ‘google,’ in the application variable:
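```
application.lastresult$.interpretation.search-action
```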
Performing (516), by the multimodal browser (196), an action in dependence upon the matched search result (508) according to the method of
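The multimodal browser may use a VoiceXML <link> element along these lines (a sketch; the grammar URI is illustrative):

```
<vxml:link eventexpr="application.lastresult$.interpretation.search-action">
  <vxml:grammar src="http://www.example.com/action.grammar.jsp"/>
</vxml:link>
```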
The exemplary VoiceXML <link> element above throws the event specified in ‘eventexpr’ when the action grammar is matched, thereby determining an action identifier (624) using the semantic interpretation scripts embedded in the grammar. The action identifier (624) is stored in the ‘interpretation.search-action’ shadow variable for the ‘application.lastresult$’ variable.
The multimodal browser (196) may execute a set of instructions that performs one of several actions based on the action identifier (624) by placing the set of instructions in a VoiceXML <catch> element that is processed when the VoiceXML event is triggered. Consider the following VoiceXML <catch> element:
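```
<vxml:catch event="click map google">
  <!-- a sketch: the caught event names, the use of searchResults[0],
       and the URL patterns are illustrative -->
  <vxml:if cond="application.lastresult$.interpretation.search-action == 'click'">
    <vxml:value expr="clickOnElement(searchResults[0])"/>
  <vxml:elseif cond="application.lastresult$.interpretation.search-action == 'map'"/>
    <vxml:value expr="window.location = 'http://maps.yahoo.com/index.php#q='
        + application.lastresult$.utterance"/>
  <vxml:elseif cond="application.lastresult$.interpretation.search-action == 'google'"/>
    <vxml:value expr="window.open('http://www.google.com/search?q='
        + application.lastresult$.utterance)"/>
  </vxml:if>
</vxml:catch>
```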
The multimodal browser (196) executes the exemplary VoiceXML <catch> element above when the ‘application.lastresult$.interpretation.search-action’ event is triggered and contains a value of ‘click,’ ‘map,’ or ‘google.’ If the ‘search-action’ variable contains an action identifier (624) having a value of ‘click,’ then the multimodal browser (196) locates the matched search result (508) represented as a hyperlink in a DOM node and activates the hyperlink using the ‘clickOnElement’ function. If the ‘search-action’ variable contains an action identifier (624) having a value of ‘map,’ then the multimodal browser (196) locates the matched search result (508) in a DOM node and changes the current document to a web page that maps the matched search result (508) using Yahoo!® maps. If the ‘search-action’ variable contains an action identifier (624) having a value of ‘google,’ then the multimodal browser (196) locates the matched search result (508) in a DOM node and opens a new window for obtaining web pages that contain the matched search result (508) using Google™.
The exemplary embodiments for speech-enabled web content searching described above are implemented using a multimodal browser. Speech-enabled web content searching using a multimodal browser advantageously allows speech-enabled searching of web content regardless of whether the web-content is speech-enabled. Such an advantage may be obtained because the speech-enabled functionality that permits web content searching according to embodiments of the present invention is implemented at the browser level through the multimodal browser itself and its supporting components such as a VoiceXML interpreter and ASR engine.
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for speech-enabled web content searching using a multimodal browser. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.