Retrieval and presentation of network service results for mobile device using a multimodal browser

Information

  • Patent Grant
  • Patent Number
    8,781,840
  • Date Filed
    Thursday, January 31, 2013
  • Date Issued
    Tuesday, July 15, 2014
Abstract
A method of obtaining information using a mobile device can include receiving a request including speech data from the mobile device, and querying a network service using query information extracted from the speech data, whereby search results are received from the network service. The search results can be formatted for presentation on a display of the mobile device. The search results further can be sent, along with a voice grammar generated from the search results, to the mobile device. The mobile device then can render the search results.
Description
BACKGROUND

1. Field of the Invention


The present invention relates to pervasive computing.


2. Description of the Related Art


A growing number of Web services are being made available to software developers. In general, a Web, or network, service refers to programmable application logic, which is made available to “consumer” applications via standard Internet protocols. Typically, a Web service is self-describing, self-contained, and modular in nature. Consumer applications access the Web service via protocols including Hypertext Transfer Protocol (HTTP) and Extensible Markup Language (XML)-based standards such as Simple Object Access Protocol (SOAP), Web Services Description Language (WSDL), and Universal Description Discovery and Integration (UDDI).


One common function of a Web service is to provide developers with access to different data sets via search engines. Examples of data sets that can be searched via a Web service and/or search engine can include, but are not limited to, weather information, traffic conditions, on-line auctions, and the like. For the most part, users access Web services from consumer applications which execute on conventional computer systems having a standard-sized display or monitor. The display provides a suitable visual interface through which the user can interact with the application and/or the Web service.


Pervasive computing has garnered significant attention in recent years. Pervasive computing refers to an emerging trend in which computing devices are increasingly ubiquitous, numerous, and mobile. In practical terms, the rise of pervasive computing has meant that users are accessing applications and/or Web services through smaller portable and/or otherwise mobile devices such as portable phones, personal digital assistants, embedded systems, or the like, in lieu of more conventional desktop computer systems. These smaller devices have correspondingly smaller displays, making it difficult for users to interact with applications and Web services through purely visual means. Conventional user interaction techniques for mobile devices, which involve the use of a stylus with an on-screen keyboard or handwriting recognition, do not provide users with a fast and accurate means of communication.


It would be beneficial to provide a technique that allows a user to quickly and intuitively access Web services via a mobile computer system and that overcomes the limitations described above.


SUMMARY OF THE INVENTION

The present invention provides a solution for obtaining and/or retrieving search results over a network. One embodiment of the present invention can include a method of obtaining information using a mobile device, which can include receiving a request including speech data from the mobile device, and querying a network service using query information extracted from the speech data, whereby search results are received from the network service. The search results can be formatted for presentation on a display of the mobile device. The search results further can be sent, along with a voice grammar generated from the search results, to the mobile device. The mobile device then can render the search results.


Another embodiment of the present invention can include a method of obtaining information using a mobile device. The method can include receiving a request including speech data from the mobile device and dynamically building a query using the speech data. The query can be sent to a network service and, in response, search results can be received from the network service. The method also can include dynamically creating a voice grammar from the search results and creating a multi-modal markup language document including the search results and the voice grammar and sending the multi-modal markup language document to the mobile device.


Yet another embodiment of the present invention can include a machine readable storage, having stored thereon a computer program having a plurality of code sections executable by a machine. The code sections can cause the machine to perform the various steps disclosed herein.





BRIEF DESCRIPTION OF THE DRAWINGS

There are shown in the drawings, embodiments which are presently preferred; it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.



FIG. 1 is a block diagram illustrating a system for retrieving information from a network-based service in accordance with one embodiment of the present invention.



FIG. 2 is a flow chart illustrating a method of retrieving information from a network-based service in accordance with another embodiment of the present invention.



FIG. 3 is a pictorial view of a graphical user interface (GUI) illustrating another aspect of the present invention.



FIG. 4 is a pictorial view of a GUI illustrating another aspect of the present invention.



FIG. 5 is a pictorial view of a GUI illustrating yet another aspect of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the description in conjunction with the drawings. As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the invention.


The present invention provides a method, system, and apparatus for obtaining information from a network service. For example, the present invention can be used to obtain information from search engines and/or other Web services or can function as a Web service in and of itself. In accordance with the inventive arrangements disclosed herein, users can speak into a mobile device which hosts a multimodal browser. The speech data collected from the user's spoken utterance can be converted into a query, which can be forwarded to a network service. Results obtained from the query can be processed for presentation upon a display of the mobile device. For example, results can be formatted using an appropriate markup language. A grammar can be dynamically generated from the search results and included with the markup language document that specifies the search results. The resulting markup language document then can be sent to the mobile device.



FIG. 1 is a block diagram illustrating a system 100 for retrieving information from a network-based service in accordance with one embodiment of the present invention. As shown, system 100 can include a mobile device 105, a proxy server 115, and a search engine 120. The mobile device 105, the proxy server 115, and the search engine 120 can communicate via a communications network 125. The communications network 125 can include, but is not limited to, a wide area network, a local area network, the Public Switched Telephone Network (PSTN), the Web, the Internet, and one or more intranets. The communications network 125 further can include one or more wireless networks, whether short or long range. For example, in terms of short range wireless networks, the communications network 125 can include a local wireless network built using Bluetooth or one of the 802.11x wireless communication protocols. In terms of long range wireless networks, the communications network 125 can include a mobile, cellular, and/or satellite-based wireless network.


The mobile device 105 can be a handheld device such as a personal digital assistant (PDA), a mobile phone, or the like. In another embodiment, the mobile device can function as a component that is embedded within a larger system, such as a computer system or a vehicle, for example, an automobile, plane, boat, or the like. In any case, mobile device 105 can include audio input and output hardware, i.e., a microphone and a speaker, along with the necessary audio circuitry for digitizing audio and for playing, or rendering, digitized audio via the speaker.


The mobile device 105 can execute an operating system (not shown) and also a multimodal browser 110. The term “multimodal” refers to the ability of the browser 110 to use multiple modes or channels for interactions with a user and/or other computer system within a same communication session. Different modes of interaction can include, but are not limited to, speech, keyboard data entry, touch screen data entry, and stylus data entry. Depending on the situation and the physical configuration and capabilities of mobile device 105, a combination of different input modes can be used for entering data. For example, when executing within a PDA, the multimodal browser 110 can allow a user to select items by tapping on a touch sensitive display as well as by providing spoken input. Similarly, a user can enter data into a given field using voice input and/or a stylus. The multimodal browser 110 further can visually display and audibly play information to users.


In one embodiment, the multimodal browser 110 can render markup language documents. The particular type of markup language rendered by the multimodal browser 110 can be one that is suited for multimodal applications and/or interactions, such as the XHTML+Voice (X+V) markup language. X+V brings spoken interaction to standard Web content by integrating XHTML and XML-Events technologies with XML vocabularies. X+V has been developed as part of the World Wide Web Consortium (W3C) Speech Interface Framework. The profile includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific Document Object Model (DOM) events, thereby reusing the event model familiar to Web developers. Voice interaction features are integrated with XHTML and cascading style sheets (CSS) and can consequently be used directly within XHTML content. Thus, as used herein, rendering can include, but is not limited to, displaying content specified by a multimodal markup language document, playing audio and/or video content specified by the multimodal markup language document, or playing other media types as may be specified by such a document.
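
By way of illustration only, the fragment below sketches the general shape of an X+V document of the kind the multimodal browser 110 could render. It is not reproduced from the patent: the title, field name, and grammar location are hypothetical, and a real document would carry or reference the grammar supplied by the proxy server as described below.

    <?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:vxml="http://www.w3.org/2001/vxml"
          xmlns:ev="http://www.w3.org/2001/xml-events">
      <head>
        <title>Voice Search</title>
        <!-- VoiceXML form embedded within the XHTML head -->
        <vxml:form id="searchForm">
          <vxml:field name="request">
            <vxml:prompt>Say a search request.</vxml:prompt>
            <!-- hypothetical grammar reference; the grammar governs which
                 spoken requests can be interpreted -->
            <vxml:grammar src="search.grxml" type="application/srgs+xml"/>
          </vxml:field>
        </vxml:form>
      </head>
      <!-- XML-Events wiring: run the voice form when the page loads -->
      <body ev:event="load" ev:handler="#searchForm">
        <form action="search" method="get">
          <p><input type="text" name="q"/></p>
        </form>
      </body>
    </html>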


The present invention, however, is not limited to the use of any one particular type of markup language. It should be appreciated that other markup languages capable of supporting multimodal applications and/or interactions can be used. For example, another multimodal technology that can be used is Speech Application Language Tags (SALT).


Continuing with FIG. 1, the proxy server 115 can be a program executing within a suitable information processing system which can perform various translation functions to be described herein. In one embodiment, the proxy server 115 can be implemented as a Hypertext Transfer Protocol (HTTP) server. Regardless of the particular implementation of the proxy server 115, it can extract information from a request, and particularly from speech data, received from the mobile device 105. The extracted information can be used to construct a request, such as an HTTP request, which can be forwarded to the search engine 120, or other Web-based or network service.


The search engine 120 is a computer program or application which executes in a suitable information processing system. The search engine 120 can respond to queries or requests. Based upon the received request, the search engine 120 can search and retrieve information which conforms to the request. Typically, the search engine 120 performs a keyword or other type of search by comparing parameters specified by a received query with an index that it maintains. The index includes a collection of keywords that have been extracted from available content. The keywords of the index further are associated with the source document(s) or an address of such document(s), whether a text file, a markup language document, a multimedia file, or the like. Accordingly, once a match is found between the query parameters and the index, the corresponding file(s) or address(es) are retrieved. The results, whether a list of documents and addresses or the actual documents, can be returned to the requestor, in this case the proxy server 115.


The proxy server 115 can format received results into a visual presentation that is more suited for display upon a smaller display screen, which is typical of mobile device 105. While any of a variety of different transform techniques can be used, in one embodiment, an Extensible Stylesheet Language (XSL) transform can be used. The proxy server 115 further can dynamically build a voice grammar from the results received from the search engine 120. The voice grammar allows the user to request additional information for each data item in the search results by voice. This grammar can be added to the search results and sent to the mobile device 105 as a multimodal markup language document.



FIG. 2 is a flow chart illustrating a method 200 of retrieving information from a network-based service in accordance with another embodiment of the present invention. In one embodiment, method 200 can be implemented using the system illustrated with reference to FIG. 1. Accordingly, method 200 can begin in step 205, where the mobile device, via the multimodal browser executing therein, issues an initial request to the proxy server. The initial request can request a multimodal markup language document such as an X+V document.


In step 210, the proxy server retrieves the multimodal markup language document identified by the request and forwards the document to the mobile device. This multimodal markup language document can specify or include a voice grammar which allows speech input directed to the mobile device executing the document to be interpreted and/or processed. In step 215, after receiving the multimodal markup language document, the multimodal browser within the mobile device executes or renders the multimodal markup language document. In step 220, the mobile device can receive a speech input from a user. In one embodiment, the speech input can be a spoken search request. For example, a user can say “find pizza restaurants in Boca Raton, Fla.”.


In step 225, the mobile device forwards speech data, i.e. the user request, to the proxy server. In one embodiment, the mobile device, and particularly the multimodal browser within the mobile device, can include a speech recognition engine which can convert the user speech to a textual representation. In that case, the speech data sent to the proxy server can be textual representations of received user speech input(s). In another embodiment, speech data can be embodied as audio data, i.e. a digital audio representation of the user speech. In that case, the proxy server can include a speech recognition engine which converts the user speech into a textual representation.


In step 230, the proxy server can extract query information from the received speech data. The speech data can be processed using semantic interpretation. Semantic interpretation allows the proxy server to capture conceptual relationships between smaller concepts and strings. Semantic interpretation can include, but is not limited to, determining the right concept, or sense, for each component of a complex term or phrase. This process can be referred to as semantic disambiguation. The semantic relations which hold among the smaller concepts are identified in order to build more complex concepts.


The extracted data, referred to as query data, can be translated into, or used to build, a query in step 235. For example, using semantic interpretation, the proxy server can extract the words “pizza”, “restaurant”, “Boca Raton”, and “Florida” from the user's spoken utterance “find pizza restaurant in Boca Raton, Fla.”. Semantic interpretation allows the proxy server to effectively identify “pizza” as a modifier of the term “restaurant”, indicating a particular type or class of restaurant. Further, location information comprising a city and state is identified, which can be used to limit the field of search. The extracted words can function as the parameters within the query that is constructed in step 235. For example, the query that can be constructed from the spoken utterance “find pizza restaurant in Boca Raton, Fla.” can be “query=pizza restaurant&city=Boca Raton&state=FL”.
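
As a rough sketch of how such query information could be captured, the grammar fragment below uses the W3C Speech Recognition Grammar Specification (SRGS) together with semantic interpretation tags; the rule names, the small set of categories and cities, and the slot names are hypothetical and are shown only to illustrate how a spoken request can be mapped onto query parameters.

    <grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0"
             xml:lang="en-US" root="search" tag-format="semantics/1.0">
      <rule id="search" scope="public">
        find <ruleref uri="#category"/> in <ruleref uri="#city"/>
        <!-- copy the recognized pieces into named slots of the result -->
        <tag>out.query = rules.category;
             out.city  = rules.city.name;
             out.state = rules.city.state;</tag>
      </rule>
      <rule id="category">
        <one-of>
          <item>pizza restaurant</item>
          <item>pizza restaurants</item>
        </one-of>
        <!-- normalize both spoken forms to a single query term -->
        <tag>out = "pizza restaurant";</tag>
      </rule>
      <rule id="city">
        <one-of>
          <item>Boca Raton Florida
            <tag>out.name = "Boca Raton"; out.state = "FL";</tag></item>
          <item>Omaha Nebraska
            <tag>out.name = "Omaha"; out.state = "NE";</tag></item>
        </one-of>
      </rule>
    </grammar>

Under these assumptions, the interpretation returned for the utterance above maps directly onto the query string “query=pizza restaurant&city=Boca Raton&state=FL”.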


In step 240, the proxy server can submit the query that was constructed from the speech data to a network-based service. As noted, in one embodiment, the network-based service can be a search engine, or the like. The network-based service, upon receiving the query from the proxy server, can retrieve the search results and provide the search results to the proxy server. In step 245, the proxy server can receive the search results from the network-based service.


In step 250, the proxy server can format the results received from the network-based service for display upon a display screen of the mobile device. As noted, the display screens of mobile devices, whether handheld, standalone, or embedded devices, typically are small and require special consideration when formatting content for visual presentation. In one embodiment, the proxy server can use a technology such as XSLT transforms to format the received data. In any case, in formatting the search results, the proxy server can generate a multimodal markup language document that specifies the search results. This document can be provided to the mobile device.
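
A simplified sketch of such a transform is shown below. It assumes, purely for illustration, that the network-based service returns results as a flat XML list of the form <results><result><name/><phone/></result>...</results>; the actual result schema and the full X+V output are not specified here, so the stylesheet only shows how a result set can be reduced to a compact list suitable for a small display.

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns="http://www.w3.org/1999/xhtml">
      <xsl:output method="xml" indent="yes"/>
      <!-- emit one ordered list for the whole (hypothetical) result set -->
      <xsl:template match="/results">
        <ol>
          <xsl:apply-templates select="result"/>
        </ol>
      </xsl:template>
      <!-- keep only the fields that fit a small display screen -->
      <xsl:template match="result">
        <li>
          <b><xsl:value-of select="name"/></b>
          <br/>
          <xsl:value-of select="phone"/>
        </li>
      </xsl:template>
    </xsl:stylesheet>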


In step 255, the proxy server can generate, dynamically, a voice grammar. The voice grammar is generated from the words and/or text included in the search results received from the network-based service. Dynamically creating a voice grammar from the search results allows a user to query the search results and request further information or detail. In step 260, the dynamically created voice grammar can be included within the formatted search results. More particularly, the dynamically created voice grammar can be included, or referenced by, the multimodal markup language document that was created by the proxy server and which specifies the search results. As noted, this allows the user to issue voice requests for further information regarding any of the search result items specified by the multimodal markup language document.
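
Continuing the golf course example of FIG. 4, the dynamically built grammar could be little more than a list of the result names, each usable as a spoken selection. The sketch below again uses the SRGS XML form; the result names are hypothetical stand-ins for whatever items the network-based service actually returned.

    <grammar xmlns="http://www.w3.org/2001/06/grammar"
             version="1.0" xml:lang="en-US" root="resultItems">
      <!-- one alternative per search result item, generated by the proxy
           server from the words appearing in the search results -->
      <rule id="resultItems" scope="public">
        <one-of>
          <item>golf course one</item>
          <item>golf course two</item>
          <item>golf course three</item>
        </one-of>
      </rule>
    </grammar>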


In step 265, the voice grammar that was included in the multimodal markup language document sent to the mobile device in step 210, responsive to the initial request, also can be included within, or referenced by, the multimodal markup language document that specifies the search results. Including the original voice grammar provides the user with greater flexibility in terms of querying the search results using voice commands.


In step 270, the proxy server sends the multimodal markup language document to the mobile device. In step 275, the multimodal browser executing within the mobile device renders the received multimodal markup language document. Thus, the mobile device can visually display and/or play content specified by the multimodal markup language document. As noted, since a grammar which was built from the search results is included with the multimodal markup language document, the user can request, via voice, additional information pertaining to any of the search result items.



FIG. 3 is a pictorial view of a graphical user interface (GUI) 300 illustrating another aspect of the present invention. GUI 300 illustrates a view of the display of a mobile device executing a suitable multimodal browser as discussed herein. GUI 300 illustrates the rendering of an initial multimodal markup language document which can be obtained and downloaded from the proxy server. This multimodal markup language document is responsible for receiving the initial user request that will be processed and converted into a request to be submitted to the network-based service.



FIG. 4 is a pictorial view of a GUI 400 illustrating another aspect of the present invention. GUI 400 illustrates the rendering of the multimodal markup language document that is returned from the proxy server and which specifies the search results obtained from the network-based service. For example, if the user speech provided as input to the multimodal markup language document of FIG. 3 was “find golf courses in Omaha, Nebr.”, that speech data can be provided to the proxy server. The proxy server can process the speech data and extract query parameters (or query information) such as “golf courses”, “Omaha”, and “NE”. This information can be used to build a query such as “query=golf courses&city=Omaha&state=NE”. This query can be provided to the network-based service.


As noted, results received from the network-based service can be formatted using a suitable multimodal markup language for display upon a display screen of the mobile device. Thus, GUI 400 illustrates the results obtained from such a query after formatting by the proxy server and rendering by the multimodal browser of the mobile device. While any word specified by the multimodal markup language document rendered in GUI 400 can be included in the dynamically generated voice grammar disclosed herein, in another embodiment, allowable words, or those words included within the grammar, can be bolded as shown. It should be appreciated that any other suitable means of visually indicating speakable, or allowable, words, e.g. color or the like, also can be used if so desired.
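
A minimal sketch of how such a cue might appear in the generated markup is shown below; the item text is hypothetical. Each bolded phrase is also an alternative in the dynamically generated grammar, so the visual emphasis and the speakable vocabulary stay in step.

    <ol>
      <!-- bolded words are "speakable": they also appear as <item>
           alternatives in the dynamically generated grammar -->
      <li><b>golf course one</b></li>
      <li><b>golf course two</b></li>
    </ol>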



FIG. 5 is a pictorial view of a GUI 500 illustrating yet another aspect of the present invention. GUI 500 illustrates the case where the user has issued a voice command or query asking for further information regarding golf course 1 from FIG. 4. By speaking one or more words indicating golf course 1, as permitted by the dynamically generated voice grammar, the user causes that speech data to be provided to the proxy server, processed as discussed herein, and submitted to the network-based service to obtain more detailed information. Results from this most recent query can be returned to the proxy server and formatted. The resulting multimodal markup language document, after rendering, is illustrated by GUI 500. As was the case with reference to FIG. 4, any of the words shown in GUI 500 also can be included within a further dynamically generated grammar, thereby allowing the user to access additional detailed information such as a Web link for further details, a Web link for a map of the area surrounding the golf course, or a Web link to the golf courses of the City of Omaha, Nebr.


It should be appreciated that the inventive arrangements disclosed herein can be applied to search and retrieval of any of a variety of different types of information through speech. As such, the various examples discussed herein have been provided for purposes of illustration only and are not intended to limit the scope of the present invention. Further, the various embodiments described herein need not be limited to use with mobile devices. That is, the embodiments described herein can be used with conventional computer systems, whether linked with a communication network via a wired or wireless communication link. Similarly, though the mobile device has been depicted herein as being linked with the communication network through a wireless communication link in FIG. 1, the present invention also contemplates that such a device can be communicatively linked with the proxy server via a wired connection or a combination of both wired and wireless connections.


The present invention can be realized in hardware, software, or a combination of hardware and software. The present invention can be realized in a centralized fashion in one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software can be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. The present invention also can be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.


The terms “computer program”, “software”, “application”, variants and/or combinations thereof, in the present context, mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. For example, computer program can include, but is not limited to, a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.


The terms “a” and “an,” as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). The term “coupled”, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically, i.e. communicatively linked through a communication channel or pathway.


This invention can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.

Claims
  • 1. A method comprising acts of: sending from a device, via at least one communication medium, a request to obtain information;receiving at the device, via the at least one communication medium, a response to the request to obtain information, the response comprising one or more search results and a voice grammar dynamically generated based on the one or more search results;presenting via the device at least some of the one or more search results to a user of the device; andusing the voice grammar dynamically generated based on the one or more search results to process user speech spoken by the user subsequent to the at least some of the one or more search results being presented to the user.
  • 2. The method of claim 1, wherein the request to obtain information is a first request, and the act of using the voice grammar dynamically generated based on the one or more search results to process the user speech comprises determining that the user speech comprises an indication that the user desires to select at least one search result of the one or more search results, and wherein the method further comprises an act of: in response to determining that the user speech comprises an indication that the user desires to select the at least one search result, sending a second request via the at least one communication medium, the second request requesting information relating to the at least one search result.
  • 3. The method of claim 2, wherein the response to the first request is a first response and the voice grammar dynamically generated based on the one or more search results is a first voice grammar, and wherein the method further comprises acts of: receiving at the device, via the at least one communication medium, a second response in response to the second request, the second response comprising one or more pieces of information relating to the at least one search result and a second voice grammar dynamically generated based on the one or more pieces of information;presenting via the device at least some of the one or more pieces of information to the user; andusing the second voice grammar dynamically generated based on the one or more pieces of information to process user speech spoken by the user subsequent to the at least some of the one or more pieces of information being presented to the user.
  • 4. The method of claim 1, wherein the act of presenting the at least some of the one or more search results to the user comprises: providing an indication to the user that the user is allowed to select at least one search result of the one or more search results by speaking one or more words associated with the at least one search result.
  • 5. The method of claim 4, wherein providing an indication to the user comprises providing a visual indication that distinguishes the one or more words associated with the at least one search result from other words presented to the user.
  • 6. The method of claim 1, wherein the request to obtain information is a subsequent request and the voice grammar dynamically generated based on the one or more search results is a subsequent voice grammar, and wherein the method further comprises acts of: sending from the device an initial request via the at least one communication medium;receiving at the device, via the at least one communication medium, an initial response to the initial request, the initial response comprising an initial voice grammar; andusing the initial voice grammar to process user speech spoken by the user.
  • 7. The method of claim 6, further comprising an act of: generating the subsequent request as a result of using the initial voice grammar to process user speech.
  • 8. A system comprising at least one processor configured to: send, via at least one communication medium, a request to obtain information;receive, via the at least one communication medium, a response to the request to obtain information, the response comprising one or more search results and a voice grammar dynamically generated based on the one or more search results;present via the device at least some of the one or more search results to a user of the device; anduse the voice grammar dynamically generated based on the one or more search results to process user speech spoken by the user subsequent to the at least some of the one or more search results being presented to the user.
  • 9. The system of claim 8, wherein the request to obtain information is a first request, and using the voice grammar dynamically generated based on the one or more search results to process the user speech comprises determining that the user speech comprises an indication that the user desires to select at least one search result of the one or more search results, and wherein the at least one processor is further configured to: in response to determining that the user speech comprises an indication that the user desires to select the at least one search result, send a second request via the at least one communication medium, the second request requesting information relating to the at least one search result.
  • 10. The system of claim 9, wherein the response to the first request is a first response and the voice grammar dynamically generated based on the one or more search results is a first voice grammar, and wherein the at least one processor is further configured to: receive, via the at least one communication medium, a second response in response to the second request, the second response comprising one or more pieces of information relating to the at least one search result and a second voice grammar dynamically generated based on the one or more pieces of information;present at least some of the one or more pieces of information to the user; anduse the second voice grammar dynamically generated based on the one or more pieces of information to process user speech spoken by the user subsequent to the at least some of the one or more pieces of information being presented to the user.
  • 11. The system of claim 8, wherein the at least one processor is further configured to present the at least some of the one or more search results to the user at least in part by: providing an indication to the user that the user is allowed to select at least one search result of the one or more search results by speaking one or more words associated with the at least one search result.
  • 12. The system of claim 11, wherein providing an indication to the user comprises providing a visual indication that distinguishes the one or more words associated with the at least one search result from other words presented to the user.
  • 13. The system of claim 8, wherein the request to obtain information is a subsequent request and the voice grammar dynamically generated based on the one or more search results is a subsequent voice grammar, and wherein the at least one processor is further configured to: send an initial request via the at least one communication medium;receive, via the at least one communication medium, an initial response to the initial request, the initial response comprising an initial voice grammar; anduse the initial voice grammar to process user speech spoken by the user.
  • 14. The system of claim 13, wherein the at least one processor is further configured to: generate the subsequent request as a result of using the initial voice grammar to process user speech.
  • 15. At least one computer-readable storage device having encoded thereon instructions that, when executed by at least one processor of a device, perform a method comprising acts of: sending from the device, via at least one communication medium, a request to obtain information;receiving at the device, via the at least one communication medium, a response to the request to obtain information, the response comprising one or more search results and a voice grammar dynamically generated based on the one or more search results;presenting via the device at least some of the one or more search results to a user of the device; andusing the voice grammar dynamically generated based on the one or more search results to process user speech spoken by the user subsequent to the at least some of the one or more search results being presented to the user.
  • 16. The at least one computer-readable storage device of claim 15, wherein the request to obtain information is a first request, and the act of using the voice grammar dynamically generated based on the one or more search results to process the user speech comprises determining that the user speech comprises an indication that the user desires to select at least one search result of the one or more search results, and wherein the method further comprises an act of: in response to determining that the user speech comprises an indication that the user desires to select the at least one search result, sending a second request via the at least one communication medium, the second request requesting information relating to the at least one search result.
  • 17. The at least one computer-readable storage device of claim 16, wherein the response to the first request is a first response and the voice grammar dynamically generated based on the one or more search results is a first voice grammar, and wherein the method further comprises acts of: receiving at the device, via the at least one communication medium, a second response in response to the second request, the second response comprising one or more pieces of information relating to the at least one search result and a second voice grammar dynamically generated based on the one or more pieces of information;presenting via the device at least some of the one or more pieces of information to the user; andusing the second voice grammar dynamically generated based on the one or more pieces of information to process user speech spoken by the user subsequent to the at least some of the one or more pieces of information being presented to the user.
  • 18. The at least one computer-readable storage device of claim 15, wherein the act of presenting the at least some of the one or more search results to the user comprises: providing an indication to the user that the user is allowed to select at least one search result of the one or more search results by speaking one or more words associated with the at least one search result.
  • 19. The at least one computer-readable storage device of claim 18, wherein providing an indication to the user comprises providing a visual indication that distinguishes the one or more words associated with the at least one search result from other words presented to the user.
  • 20. The at least one computer-readable storage device of claim 15, wherein the request to obtain information is a subsequent request and the voice grammar dynamically generated based on the one or more search results is a subsequent voice grammar, and wherein the method further comprises acts of: sending from the device an initial request via the at least one communication medium;receiving at the device, via the at least one communication medium, an initial response to the initial request, the initial response comprising an initial voice grammar; andusing the initial voice grammar to process user speech spoken by the user.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 13/283,448, filed on Oct. 27, 2011, which is a divisional of U.S. patent application Ser. No. 11/422,093, filed on Jun. 5, 2006, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/716,249, which was filed in the U.S. Patent and Trademark Office on Sep. 12, 2005. Each of these applications is incorporated herein by reference in its entirety.

US Referenced Citations (329)
Number Name Date Kind
5097528 Gursahaney et al. Mar 1992 A
5577165 Takebayashi et al. Nov 1996 A
5584052 Gulau et al. Dec 1996 A
5646979 Knuth Jul 1997 A
5689547 Molne Nov 1997 A
5884262 Wise et al. Mar 1999 A
5953392 Rhie et al. Sep 1999 A
5969717 Ikemoto Oct 1999 A
5991615 Coppinger et al. Nov 1999 A
6028601 Machiraju et al. Feb 2000 A
6031467 Hymel et al. Feb 2000 A
6084583 Gerszberg et al. Jul 2000 A
6101472 Giangarra et al. Aug 2000 A
6128651 Cezar Oct 2000 A
6141010 Hoyle Oct 2000 A
6157841 Bolduc et al. Dec 2000 A
6208972 Grant et al. Mar 2001 B1
6212545 Ohtani et al. Apr 2001 B1
6243375 Speicher Jun 2001 B1
6243443 Low et al. Jun 2001 B1
6275806 Pertrushin Aug 2001 B1
6285862 Ruhl et al. Sep 2001 B1
6298218 Lowe et al. Oct 2001 B1
6301560 Masters Oct 2001 B1
6321209 Pasquali Nov 2001 B1
6332127 Bandera et al. Dec 2001 B1
6381465 Chern et al. Apr 2002 B1
6393296 Sabnani et al. May 2002 B1
6397057 Malackowski et al. May 2002 B1
6400806 Uppaluru Jun 2002 B1
6401085 Gershman et al. Jun 2002 B1
6405123 Rennard et al. Jun 2002 B1
6452498 Stewart Sep 2002 B2
6484148 Boyd Nov 2002 B1
6513011 Uwakubo Jan 2003 B1
6529159 Fan et al. Mar 2003 B1
6552682 Fan Apr 2003 B1
6560456 Lohtia et al. May 2003 B1
6594347 Calder et al. Jul 2003 B1
6600736 Ball et al. Jul 2003 B1
6601026 Appelt et al. Jul 2003 B2
6606599 Grant et al. Aug 2003 B2
6606611 Khan et al. Aug 2003 B1
6608556 DeMoerloose et al. Aug 2003 B2
6636733 Helferich Oct 2003 B1
6647269 Hendrey et al. Nov 2003 B2
6658389 Alpdemir Dec 2003 B1
6664922 Fan Dec 2003 B1
6701162 Everett Mar 2004 B1
6769010 Knapp et al. Jul 2004 B1
6772213 Glorikian Aug 2004 B2
6789077 Slaughter et al. Sep 2004 B1
6813501 Kinnunen et al. Nov 2004 B2
6823257 Clapper Nov 2004 B2
6826614 Hanmann et al. Nov 2004 B1
6842767 Partovi et al. Jan 2005 B1
6856960 Dragosh et al. Feb 2005 B1
6862445 Cohen Mar 2005 B1
6885736 Uppaluru Apr 2005 B2
6895084 Saylor et al. May 2005 B1
6912400 Olsson et al. Jun 2005 B1
6920425 Will et al. Jul 2005 B1
6941273 Loghmani et al. Sep 2005 B1
6965864 Thrift et al. Nov 2005 B1
6973429 Smith Dec 2005 B2
6978136 Jenniges et al. Dec 2005 B2
6980834 Gupta et al. Dec 2005 B2
6999930 Roberts et al. Feb 2006 B1
7007074 Radwin Feb 2006 B2
7016845 Vora et al. Mar 2006 B2
7020609 Thrift et al. Mar 2006 B2
7028306 Boloker et al. Apr 2006 B2
7035805 Miller Apr 2006 B1
7103349 Himanen et al. Sep 2006 B2
7113911 Hinde et al. Sep 2006 B2
7116976 Thomas et al. Oct 2006 B2
7116985 Wilson et al. Oct 2006 B2
7136634 Rissanen et al. Nov 2006 B1
7136846 Chang et al. Nov 2006 B2
7137126 Coffman et al. Nov 2006 B1
7162365 Clapper Jan 2007 B2
7171243 Wantanabe et al. Jan 2007 B2
7188067 Grant et al. Mar 2007 B2
7203721 Ben-Efraim et al. Apr 2007 B1
7210098 Sibal et al. Apr 2007 B2
7212971 Jost et al. May 2007 B2
7231025 Labaton Jun 2007 B2
7257575 Johnston et al. Aug 2007 B1
7283850 Granovetter et al. Oct 2007 B2
7328770 Owens et al. Feb 2008 B2
7330890 Partovi et al. Feb 2008 B1
7346374 Witkowski et al. Mar 2008 B2
7359723 Jones Apr 2008 B2
7376434 Thomas et al. May 2008 B2
7376586 Partovi et al. May 2008 B1
7379969 Osborn, Jr. May 2008 B2
7415537 Maes Aug 2008 B1
7437183 Makinen Oct 2008 B2
7457397 Saylor et al. Nov 2008 B1
7477909 Roth Jan 2009 B2
7487085 Ativanichayaphong et al. Feb 2009 B2
7487453 Goebel et al. Feb 2009 B2
7489946 Srinivasan et al. Feb 2009 B2
7493259 Jones et al. Feb 2009 B2
7493260 Harb et al. Feb 2009 B2
7502627 Sacks et al. Mar 2009 B2
7505978 Bodin et al. Mar 2009 B2
7509260 Cross et al. Mar 2009 B2
7509659 McArdle Mar 2009 B2
7515900 Van Camp Apr 2009 B2
7545917 Jones et al. Jun 2009 B2
7551916 Gortz et al. Jun 2009 B2
7564959 Greenaae et al. Jul 2009 B2
7603291 Raiyani et al. Oct 2009 B2
7636426 Korah et al. Dec 2009 B2
7650170 May et al. Jan 2010 B2
7664649 Jost et al. Feb 2010 B2
7689253 Basir Mar 2010 B2
7706780 Adler Apr 2010 B2
7706818 Cho Apr 2010 B2
7787867 Berger et al. Aug 2010 B2
7801728 Ben-David et al. Sep 2010 B2
7808980 Skakkebaek et al. Oct 2010 B2
7809575 Ativanichayaphong et al. Oct 2010 B2
7822608 Cross et al. Oct 2010 B2
7827033 Ativanichayaphong et al. Nov 2010 B2
7864929 Carro Jan 2011 B2
7890128 Thomas et al. Feb 2011 B1
7899173 Ahn et al. Mar 2011 B2
7937125 May et al. May 2011 B2
7965196 Liebermann Jun 2011 B2
RE42738 Williams Sep 2011 E
8041296 Skog et al. Oct 2011 B2
8046220 Agarwal et al. Oct 2011 B2
8065143 Yanagihara Nov 2011 B2
8073590 Zilka Dec 2011 B1
8073700 Jaramillo et al. Dec 2011 B2
8082148 Agapi et al. Dec 2011 B2
8086289 May et al. Dec 2011 B2
8103509 Burns et al. Jan 2012 B2
8112104 Thomas et al. Feb 2012 B1
8121837 Agapi et al. Feb 2012 B2
8200295 May et al. Jun 2012 B2
8203528 Spalink Jun 2012 B2
8214242 Agapi et al. Jul 2012 B2
8229081 Agapi et al. Jul 2012 B2
8233919 Haag et al. Jul 2012 B2
8243888 Cho Aug 2012 B2
8260247 Lazaridis et al. Sep 2012 B2
8265659 Lee Sep 2012 B2
8265862 Zilka Sep 2012 B1
8270980 Forssell Sep 2012 B2
8280419 Thomas et al. Oct 2012 B1
8280434 Garg Oct 2012 B2
8285213 Skog et al. Oct 2012 B2
8285273 Roth Oct 2012 B2
8290540 Kittel et al. Oct 2012 B2
8301168 Zubas et al. Oct 2012 B2
8315875 Burns et al. Nov 2012 B2
8326328 LeBeau et al. Dec 2012 B2
8340966 Kerimovska et al. Dec 2012 B2
8344851 Bisht Jan 2013 B2
8359020 Lebeau et al. Jan 2013 B2
8374872 Kesireddy Feb 2013 B2
8380516 Jaramillo et al. Feb 2013 B2
RE44103 Williams Mar 2013 E
8412284 Khoshaba et al. Apr 2013 B2
8442429 Hawit May 2013 B2
8442447 Veluppillai et al. May 2013 B2
8447285 Bladon et al. May 2013 B1
8447598 Chutorash et al. May 2013 B2
8457612 Daniell Jun 2013 B1
8457963 Charriere Jun 2013 B2
8489690 Abuelsaad et al. Jul 2013 B2
8508379 Vander Veen et al. Aug 2013 B2
8526932 Tofighbakhsh et al. Sep 2013 B2
8538386 May et al. Sep 2013 B2
8538491 Khoshaba et al. Sep 2013 B2
8549501 Eichenberger et al. Oct 2013 B2
8554254 May et al. Oct 2013 B2
8559922 Hardin Oct 2013 B2
8565820 Riemer et al. Oct 2013 B2
8571612 Gold Oct 2013 B2
8577422 Ledet Nov 2013 B1
8577543 Basir et al. Nov 2013 B2
8583093 Bort Nov 2013 B1
8583431 Furman et al. Nov 2013 B2
8594743 Sano Nov 2013 B2
20010051517 Strietzel Dec 2001 A1
20010053252 Creque Dec 2001 A1
20020062393 Borger et al. May 2002 A1
20020065828 Goodspeed May 2002 A1
20020065944 Hickey et al. May 2002 A1
20020077086 Tuomela et al. Jun 2002 A1
20020087408 Burnett Jul 2002 A1
20020092019 Marcus Jul 2002 A1
20020095472 Berkowitz et al. Jul 2002 A1
20020099553 Brittan et al. Jul 2002 A1
20020120554 Vega Aug 2002 A1
20020147593 Lewis et al. Oct 2002 A1
20020184610 Chong et al. Dec 2002 A1
20020194388 Boloker et al. Dec 2002 A1
20030024975 Rajasekharan Feb 2003 A1
20030039341 Burg et al. Feb 2003 A1
20030046316 Gergic et al. Mar 2003 A1
20030046346 Mumick et al. Mar 2003 A1
20030078779 Desai et al. Apr 2003 A1
20030101451 Bentolila et al. May 2003 A1
20030125945 Doyle Jul 2003 A1
20030125958 Alpdemir et al. Jul 2003 A1
20030171926 Suresh et al. Sep 2003 A1
20030179865 Stillman et al. Sep 2003 A1
20030182622 Sibal et al. Sep 2003 A1
20030195739 Washio Oct 2003 A1
20030217161 Balasuriya Nov 2003 A1
20030229900 Reisman Dec 2003 A1
20030235282 Sichelman et al. Dec 2003 A1
20040006478 Alpdemir et al. Jan 2004 A1
20040019487 Kleindienst et al. Jan 2004 A1
20040025115 Seinel et al. Feb 2004 A1
20040031058 Reisman Feb 2004 A1
20040044516 Kennewick et al. Mar 2004 A1
20040049390 Brittan et al. Mar 2004 A1
20040059705 Wittke et al. Mar 2004 A1
20040076279 Taschereau Apr 2004 A1
20040083109 Halonen et al. Apr 2004 A1
20040120472 Popay et al. Jun 2004 A1
20040120476 Harrison et al. Jun 2004 A1
20040138890 Ferrans et al. Jul 2004 A1
20040140989 Papageorge Jul 2004 A1
20040153323 Charney et al. Aug 2004 A1
20040179038 Blattner et al. Sep 2004 A1
20040203766 Jenniges et al. Oct 2004 A1
20040216036 Chu et al. Oct 2004 A1
20040224662 O'Neil et al. Nov 2004 A1
20040236574 Ativanichayaphong et al. Nov 2004 A1
20040260562 Kujirai Dec 2004 A1
20050004840 Wanninger Jan 2005 A1
20050015256 Kargman Jan 2005 A1
20050021744 Haitsuka et al. Jan 2005 A1
20050033582 Gadd et al. Feb 2005 A1
20050054381 Lee et al. Mar 2005 A1
20050075884 Badt, Jr. Apr 2005 A1
20050091059 Lecoeuche Apr 2005 A1
20050131701 Cross et al. Jun 2005 A1
20050138219 Bou-Ghannam et al. Jun 2005 A1
20050138562 Carro Jun 2005 A1
20050154580 Horowitz et al. Jul 2005 A1
20050160461 Baumgartner et al. Jul 2005 A1
20050188412 Dacosta Aug 2005 A1
20050203729 Roth et al. Sep 2005 A1
20050203747 Lecoeuche Sep 2005 A1
20050261908 Cross Nov 2005 A1
20050273769 Eichenberger et al. Dec 2005 A1
20050283367 Cross Dec 2005 A1
20060004627 Baluja Jan 2006 A1
20060047510 Ativanichayaphong et al. Mar 2006 A1
20060064302 Cross Mar 2006 A1
20060069564 Allison et al. Mar 2006 A1
20060074680 Cross et al. Apr 2006 A1
20060075120 Smit Apr 2006 A1
20060111906 Cross May 2006 A1
20060122836 Cross Jun 2006 A1
20060123358 Lee et al. Jun 2006 A1
20060136222 Cross Jun 2006 A1
20060146728 Engelsma et al. Jul 2006 A1
20060150119 Chesnais et al. Jul 2006 A1
20060168095 Sharma et al. Jul 2006 A1
20060168595 McArdle Jul 2006 A1
20060184626 Agapi Aug 2006 A1
20060190264 Jaramillo Aug 2006 A1
20060218039 Johnson Sep 2006 A1
20060229880 White Oct 2006 A1
20060235694 Cross Oct 2006 A1
20060264209 Atkinson et al. Nov 2006 A1
20060287845 Cross et al. Dec 2006 A1
20060287865 Cross et al. Dec 2006 A1
20060287866 Cross et al. Dec 2006 A1
20060288309 Cross et al. Dec 2006 A1
20070032229 Jones Feb 2007 A1
20070061146 Jaramillo et al. Mar 2007 A1
20070099636 Roth May 2007 A1
20070169143 Li Jul 2007 A1
20070174244 Jones Jul 2007 A1
20070174273 Jones et al. Jul 2007 A1
20070174904 Park Jul 2007 A1
20070185768 Vengroff et al. Aug 2007 A1
20070185841 Jones et al. Aug 2007 A1
20070185843 Jones et al. Aug 2007 A1
20070265851 Cross et al. Nov 2007 A1
20070274296 Cross et al. Nov 2007 A1
20070274297 Cross et al. Nov 2007 A1
20070288241 Cross et al. Dec 2007 A1
20070294084 Cross et al. Dec 2007 A1
20080027707 Stefik et al. Jan 2008 A1
20080065386 Cross et al. Mar 2008 A1
20080065387 Cross, Jr. et al. Mar 2008 A1
20080065388 Cross et al. Mar 2008 A1
20080065389 Cross et al. Mar 2008 A1
20080065390 Ativanichayaphong et al. Mar 2008 A1
20080086564 Putman et al. Apr 2008 A1
20080097760 Hong et al. Apr 2008 A1
20080140410 Cross et al. Jun 2008 A1
20080162136 Ativanichayaphong et al. Jul 2008 A1
20080177530 Cross et al. Jul 2008 A1
20080195393 Cross et al. Aug 2008 A1
20080208584 Cross et al. Aug 2008 A1
20080208585 Ativanichayaphong et al. Aug 2008 A1
20080208586 Ativanichayaphong et al. Aug 2008 A1
20080208587 Cross et al. Aug 2008 A1
20080208588 Cross et al. Aug 2008 A1
20080208589 Cross et al. Aug 2008 A1
20080208590 Cross et al. Aug 2008 A1
20080208591 Ativanichayaphong et al. Aug 2008 A1
20080208592 Cross et al. Aug 2008 A1
20080208593 Ativanichayaphong et al. Aug 2008 A1
20080208594 Cross et al. Aug 2008 A1
20080228494 Cross et al. Sep 2008 A1
20080228495 Cross et al. Sep 2008 A1
20080235021 Cross et al. Sep 2008 A1
20080235022 Cross et al. Sep 2008 A1
20080235027 Cross Sep 2008 A1
20080235029 Cross et al. Sep 2008 A1
20080249782 Ativanichayaphong et al. Oct 2008 A1
20080255850 Cross et al. Oct 2008 A1
20080255851 Cross et al. Oct 2008 A1
20090030680 Mamou Jan 2009 A1
20090271199 Agapi et al. Oct 2009 A1
20130005367 Roth et al. Jan 2013 A1
Foreign Referenced Citations (21)
Number Date Country
1385783 Dec 2002 CN
1564123 Jan 2005 CN
0 794 670 Sep 1997 EP
0 854 417 Jul 1998 EP
1 143 679 Oct 2001 EP
1 450 350 Aug 2004 EP
0507148.5 Apr 2005 GB
2000-155529 Jun 2000 JP
2003-140672 May 2003 JP
WO 9948088 Sep 1999 WO
WO 0051106 Aug 2000 WO
WO 0077978 Dec 2000 WO
WO 0191488 Nov 2001 WO
WO 0231814 Apr 2002 WO
WO 0232140 Apr 2002 WO
WO 0241169 May 2002 WO
WO 2004054217 Jun 2004 WO
WO 2004062945 Jul 2004 WO
WO 2005020094 Mar 2005 WO
WO 2006108795 Oct 2006 WO
WO 2007031447 Mar 2007 WO
Non-Patent Literature Citations (29)
Entry
International Search Report mailed Nov. 17, 2006 for Application No. PCT/EP2006/066037.
International Preliminary Report on Patentability and Written Opinion issued Mar. 18, 2008 for Application No. PCT/EP2006/066037.
International Search Report and Written Opinion mailed Mar. 5, 2007 for Application No. PCT/US2006/038411.
International Preliminary Report on Patentability mailed May 15, 2008 for Application No. PCT/US2006/038411.
International Search Report, Jun. 25, 2008; Application No. PCT/EP2008/051358.
Official Action for EP 08717576.6 mailed Mar. 26, 2012.
International Search Report and Written Opinion mailed Jul. 11, 2008 for Application No. PCT/EP2008/052829.
International Preliminary Report on Patentability mailed Sep. 24, 2009 for Application No. PCT/EP2008/052829.
International Search Report, Jun. 18, 2008; Application No. PCT/EP2008/051363.
[No Author Listed], W3C: “Voice Extensible Markup Language (VoiceXML) Version 2.0” Internet Citation, [Online] XP002248286 Retrieved from the Internet: URL: http://www.w3.org/TR/voicexml20 [retrieved on Jul. 18, 2003].
[No Author Listed], W3C: “Voice Extensible Markup Language (VoiceXML) Version 2.1 W3C Candidate Recommendation Jun. 13, 2005” Internet, [Online] Jun. 13, 2005 (2005-06-13), pp. 1-34, XP002484189 Retrieved from the Internet: URL: http://www.w3.org/TR/2005/CR-voicexml21-20050613/ [retrieved on Jun. 6, 2012].
Axelsson et al., “Mobile X+V 1.2,” Voice XML Organization, Sep. 5, 2005, www.voicexml.org/specs/multimodal/x+v/mobile/12/>, retrieved Oct. 31, 2006.
Axelsson et al., “XHTML+Voice Profile 1.2” Internet,[Online] Mar. 16, 2004 (Mar. 6, 2004), pp. 1-53, XP002484188 Retrieved from the Internet: URL: http://www.voicexml.org/specs/multimodal/x+v/12/spec.html [retrieved on Jun. 12, 2008].
Carmichael, “Next, Previous, Stop: Implementing an Audio Metaphor of Tape Recorder Buttons for the THISL Information Retrieval Engine Speech Interface,” 2002. Available at: www.ida.liu.se/˜nlplab/chi-ws-02/papers/carmichael.doc. Last accessed Jan. 13, 2014.
Dalianis et al. “SiteSeeker Voice—A speech controlled search engine,” Wapalizer Paper, pp. 1-2, Feb. 25, 2003.
Dalianis et al. “SiteSeeker Voice—A speech controlled search engine,” (Feb. 25, 2003), http://www.nada.kth.se/hercules/wapalizer/SiteSeekerVoice.html> retrieved on Oct. 30, 2006.
Franz et al. “Searching the Web by Voice,” International Conference on Computational Linguistics, Proceedings of Coling, XX, XX, 2002, pp. 1213-1217.
Google Short Message Service (SMS), [online] [retrieved on Sep. 29, 2005], retrieved from the Internet <URL: http://www.google.com/sms/>.
Guillevic et al., Robust Semantic Confidence Scoring ICSLP 2002: 7th International Conference On Spoken Language Processing. Denver Colorado, Sep. 16-20, 2002 International Conference On Spoken Language Processing (ICSLP), Adelaide: Casual Productions, AI, Sep. 16, 2002, p. 853, XP007011561 ISBN:9788-1-876346-40-9.
Hemphill et al. “Surfing the Web by Voice,” Proceedings ACM Multimedia, Nov. 1995, pp. 215-222.
Hunt et al., “Speech Recognition Grammar Specification Version 1.0,” W3C Recommendation, Mar. 16, 2004, www.w3.org/TR/speech-grammar/, retrieved Oct. 31, 2006.
Lai et al., “Robustness in Speech Based Interfaces: Sharing the Tricks of the Trade,” Proceeding CHI EA '02 CHI '02 Extended Abstracts on Human Factors in Computing Systems. 2002:915.
McCobb, “Multimodal interaction and the mobile Web, Part 1: Multimodal auto-fill,” Nov. 15, 2005, pp. 1-8, http://www.ibm.com/developerworks/web/library/wi-mobweb/.
McCobb, “Multimodal interaction and the mobile Web, Part 3: User authentication,” Jan. 10, 2006, pp. 1-6, http://www.ibm.com/developerworks/wireless/library/wi-mobweb3/.
Nokia 616 Car Kit, [online] [retrieved on Sep. 29, 2005], retrieved from the Internet <URL: http://europe.nokia.com/nokia/0,,65324,00.html>.
Van Tichelen, “Semantic Interpretation for Speech Recognition,” W3C Working Draft, Nov. 8, 2004, www.w3.org/TR/2004/WD-semantic interpretation-20041108/, retrieved Oct. 31, 2006.
White, “Multimodal interaction and the mobile Web, Part 2: Simple searches with Find-It”, (Feb. 6, 2005), http://www-128.ibm.com/developerworks/web/library/wi-mobweb2/> retrieved on Oct. 31, 2006.
White, “Multimodal interaction and the mobile Web, Part 2: Simple searchers with Find-It,” Dec. 6, 2005, pp. 1-5, http://www.ibm.com/developerworks/wireless/library/wi-mobweb2/.
Wyard et al. “Spoken Language Systems—Beyond Prompt and Response,” BT Technology Journal, Springer, Dordrect, NL, vol. 14, No. 1, Jan. 1996.
Related Publications (1)
Number Date Country
20130158994 A1 Jun 2013 US
Provisional Applications (1)
Number Date Country
60716249 Sep 2005 US
Divisions (1)
Number Date Country
Parent 11422093 Jun 2006 US
Child 13283448 US
Continuations (1)
Number Date Country
Parent 13283448 Oct 2011 US
Child 13756073 US