The invention generally relates to automatic speech recognition (ASR), and more specifically, to client-server ASR on mobile devices.
An automatic speech recognition (ASR) system determines a semantic meaning of a speech input. Typically, the input speech is processed into a sequence of digital speech feature frames. Each speech feature frame can be thought of as a multi-dimensional vector that represents various characteristics of the speech signal present during a short time window of the speech. For example, the multi-dimensional vector of each speech frame can be derived from cepstral features (MFCCs) of the short time Fourier transform spectrum of the speech signal, which captures the short time power or component of a given frequency band, as well as the corresponding first- and second-order derivatives (“deltas” and “delta-deltas”). In a continuous recognition system, variable numbers of speech frames are organized as “utterances” representing a period of speech followed by a pause, which in real life loosely corresponds to a spoken sentence or phrase.
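By way of illustration only, such per-frame feature vectors can be computed with standard signal processing tools; the following sketch assumes the librosa library and a hypothetical input file, and stacks MFCCs with their deltas and delta-deltas into one vector per frame.

```python
# Sketch: building per-frame speech feature vectors (MFCCs + deltas + delta-deltas).
# Assumes the librosa library; "utterance.wav" is a hypothetical input file.
import numpy as np
import librosa

signal, rate = librosa.load("utterance.wav", sr=16000)   # mono speech signal at 16 kHz

# 13 cepstral features per short-time analysis window
mfcc = librosa.feature.mfcc(y=signal, sr=rate, n_mfcc=13)

delta = librosa.feature.delta(mfcc)                      # first-order derivatives ("deltas")
delta2 = librosa.feature.delta(mfcc, order=2)            # second-order derivatives ("delta-deltas")

# One 39-dimensional feature vector per frame; frames are grouped into utterances downstream.
frames = np.vstack([mfcc, delta, delta2]).T
print(frames.shape)                                      # (num_frames, 39)
```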
The ASR system compares the input utterances to find statistical acoustic models that best match the vector sequence characteristics and determines corresponding representative text associated with the acoustic models. More formally, given some input observations A, the probability that some string of words W was spoken is represented as P(W|A), and the ASR system attempts to determine the most likely word string:
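$$\hat{W} = \arg\max_{W} P(W \mid A)$$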
Given a system of statistical acoustic models, this formula can be re-expressed as:
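$$\hat{W} = \arg\max_{W} P(A \mid W)\,P(W)$$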
where P(A|W) corresponds to the acoustic models, P(W) reflects the prior probability of the word sequence as provided by a statistical language model, and the term P(A) from Bayes' rule has been dropped since it does not depend on W.
The acoustic models are typically probabilistic state sequence models such as hidden Markov models (HMMs) that model speech sounds using mixtures of probability distribution functions (Gaussians). Acoustic models often represent phonemes in specific contexts, referred to as PELs (Phonetic Elements), e.g. triphones or phonemes with known left and/or right contexts. State sequence models can be scaled up to represent words as connected sequences of acoustically modeled phonemes, and phrases or sentences as connected sequences of words. When the models are organized together as words, phrases, and sentences, additional language-related information is also typically incorporated into the models in the form of a statistical language model.
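As a purely illustrative sketch (with a toy lexicon and hypothetical names, not the models of any particular recognizer), the following shows how a word could be expanded into context-dependent phonetic elements (triphones), each of which would map to an acoustic model in a concatenated word-level state sequence model.

```python
# Sketch: expanding words into context-dependent PELs (triphones).
# Toy lexicon and helper names are hypothetical, for illustration only.
LEXICON = {
    "boston": ["B", "AA", "S", "T", "AH", "N"],
    "weather": ["W", "EH", "DH", "ER"],
}

def to_triphones(word):
    """Return the phoneme sequence of a word as left-context/phone/right-context units."""
    phones = LEXICON[word]
    padded = ["<s>"] + phones + ["</s>"]          # boundary markers
    return [f"{padded[i-1]}-{padded[i]}+{padded[i+1]}" for i in range(1, len(padded) - 1)]

# Each triphone would map to an HMM whose states use Gaussian mixture output distributions;
# concatenating the triphone HMMs yields the word model, and word models chain into sentences.
print(to_triphones("boston"))
# ['<s>-B+AA', 'B-AA+S', 'AA-S+T', 'S-T+AH', 'T-AH+N', 'AH-N+</s>']
```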
The words or phrases associated with the best matching model structures are referred to as recognition candidates or hypotheses. A system may produce a single best recognition candidate—the recognition result—or multiple recognition hypotheses in various forms such as an N-best list, a recognition lattice, or a confusion network. Further details regarding continuous speech recognition are provided in U.S. Pat. No. 5,794,189, entitled “Continuous Speech Recognition,” and U.S. Pat. No. 6,167,377, entitled “Speech Recognition Language Models,” the contents of which are incorporated herein by reference.
Recently, ASR technology has advanced enough that applications can be implemented within the limited footprint of a mobile device. This can involve a somewhat limited stand-alone ASR arrangement on the mobile device, or more extensive capability can be provided in a client-server arrangement in which the local mobile device performs initial processing of speech inputs, and possibly some local ASR recognition processing, while the main ASR processing is performed at a remote server with greater resources, after which the recognition results are returned for use at the mobile device.
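One possible organization of such a hybrid client-server split is sketched below; the class and method names are assumptions for illustration, not part of any particular product, and the local-versus-remote decision here uses a confidence threshold and network availability of the kind discussed below.

```python
# Sketch of a hybrid client/server ASR split; all names here are hypothetical.
class HybridRecognizer:
    def __init__(self, local_asr, remote_asr, min_confidence=0.8):
        self.local_asr = local_asr          # limited on-device recognizer
        self.remote_asr = remote_asr        # full-capability server recognizer
        self.min_confidence = min_confidence

    def recognize(self, audio, network_available):
        # The device always performs initial processing (e.g., feature extraction).
        features = self.local_asr.extract_features(audio)

        # Try a quick local recognition pass first.
        local_result = self.local_asr.recognize(features)
        if local_result.confidence >= self.min_confidence or not network_available:
            return local_result

        # Otherwise defer to the remote server, which has greater resources,
        # and use its result on the device.
        return self.remote_asr.recognize(features)
```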
U.S. Patent Publication 20110054899 describes a hybrid client-server ASR arrangement for a mobile device in which speech recognition may be performed locally by the device and/or remotely by a remote ASR server depending on one or more criteria such as time, policy, confidence score, network availability, and the like. An example screen shot of the initial prompt interface from one such mobile device ASR application, Dragon Dictation™ for iPhone™, is shown in
Embodiments of the present invention are directed to an arrangement for conducting natural language dialogs with a user on a mobile device using automatic speech recognition (ASR) and multiple different dialog applications. A user interface provides for user interaction with the dialog applications in natural language dialogs. An ASR engine processes unknown speech inputs from the user to produce corresponding speech recognition results. A dialog concept module develops dialog concept items from the speech recognition results and stores the dialog concept items and additional dialog information in a dialog concept database. A dialog processor accesses dialog concept database information and coordinates operation of the ASR engine and the dialog applications to conduct a plurality of separate parallel natural language dialogs with the user in the dialog applications.
The user interface may include multiple application selection tabs for user selection of a given active dialog application to interact with the user. The dialog concept items may include an indication of the dialog application in which they originated. In a specific embodiment, there may be a domain expert agent for each dialog application that coordinates with the dialog processor to conduct a natural language dialog with the user in the associated dialog application. The dialog processor may push relevant dialog concept database information to the dialog applications and/or the dialog applications may pull relevant information from the dialog concept database.
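By way of illustration only, a dialog concept item of the kind described above might be represented as a small record that carries the recognized value together with an indication of the dialog application in which it originated; the field and class names below are assumptions.

```python
# Sketch: a dialog concept item tagged with its originating dialog application.
# Field and class names are hypothetical.
from dataclasses import dataclass
import time

@dataclass
class DialogConceptItem:
    concept: str          # e.g. "destination_city"
    value: str            # e.g. "Boston"
    origin_app: str       # dialog application in which the item originated
    timestamp: float = 0.0

item = DialogConceptItem("destination_city", "Boston", origin_app="flights", timestamp=time.time())
```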
Embodiments of the present invention are directed to an arrangement for conducting natural language dialogs with a user on a mobile device using automatic speech recognition (ASR) and multiple different dialog applications. For example,
Unknown speech inputs from the user for the given active dialog application 304 are received via the user interface 301, step 501, and processed by an ASR arrangement to produce corresponding speech recognition results 305, step 502. In the specific embodiment shown in
A dialog concept module 409 develops dialog concept items from the speech recognition results and stores them along with additional dialog information (such as an indication of the dialog application in which they originated) in a dialog concept database 410, step 503. A dialog processor 411 accesses information in the dialog concept database 410 and coordinates operation of the ASR engine 407 and multiple dialog applications 405 and/or 412 selected by the user via the application selection tabs 302 on the user interface 301, step 504, to conduct multiple separate parallel natural language dialogs with the user, step 505. The dialog processor 411 may use a domain expert agent for each dialog application 405 and/or 412 to conduct a natural language dialog with the user in a given associated dialog application. The dialog processor 411 may push relevant information from the dialog concept database 410 to the dialog applications 405 and/or 412, and/or the dialog applications 405 and/or 412 may pull relevant information from the dialog concept database 410.
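A minimal sketch of how such a dialog concept database and dialog processor could coordinate several dialog applications in parallel is shown below; it assumes a concept item record like the one sketched earlier, and all class, method, and application names are hypothetical.

```python
# Sketch: dialog concept database plus a dialog processor coordinating parallel dialogs.
# All names are hypothetical; push and pull mirror the two access styles described above.
class DialogConceptDatabase:
    def __init__(self):
        self.items = []

    def store(self, item):
        self.items.append(item)

    def pull(self, concept=None):
        """Dialog applications may pull relevant items on demand."""
        return [i for i in self.items if concept is None or i.concept == concept]

class DialogProcessor:
    def __init__(self, asr_engine, database, applications):
        self.asr_engine = asr_engine        # produces speech recognition results
        self.database = database
        self.applications = applications    # one domain expert agent per dialog application

    def handle_speech(self, audio, active_app):
        # Steps 501-502: receive the unknown speech input and recognize it.
        result = self.asr_engine.recognize(audio)

        # Step 503: develop concept items from the result, tagged with the originating app.
        for item in active_app.extract_concepts(result):
            item.origin_app = active_app.name
            self.database.store(item)

        # Steps 504-505: push relevant information to every dialog application so that
        # multiple separate natural language dialogs can proceed in parallel.
        for app in self.applications:
            app.receive_concepts(self.database.pull())
        return active_app.respond(result)
```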
In the specific embodiment shown in
Applications that cannot use the specific information collected as the conversation goes on will grey themselves out in the tab-layout 602. For example, once the concept "Boston" has been registered in
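The greying-out behavior might be organized as in the following sketch, where each tab declares which concept types it can consume and disables itself when none of the registered concepts is of use to it; all names are hypothetical.

```python
# Sketch: tabs grey themselves out when they cannot use the concepts collected so far.
# All names are hypothetical.
class AgentTab:
    def __init__(self, name, usable_concepts):
        self.name = name
        self.usable_concepts = set(usable_concepts)
        self.greyed_out = False

    def update(self, registered_concepts):
        # Grey out if none of the registered concepts is usable by this agent.
        self.greyed_out = not (self.usable_concepts & set(registered_concepts))

tabs = [AgentTab("flights", {"destination_city", "date"}),
        AgentTab("calculator", {"number"})]

registered = {"destination_city"}               # e.g., after "Boston" has been registered
for tab in tabs:
    tab.update(registered)
print([(t.name, t.greyed_out) for t in tabs])   # the calculator tab greys itself out
```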
If the user would like to bring up a tab that was not activated from the beginning, he can long press on the tab-layout 602 to see a popup listing all the available agencies. He could then activate a specific domain-expert agency, which would receive all information from previous interactions. Some tabs could compute values and inject them into a concept cloud for use by other domain experts or other applications. For example, if the user requests: "What is the weather tonight?" in
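The value-injection behavior might look like the following sketch, in which a weather domain expert computes a forecast and writes it back into the shared concept cloud for other domain experts or applications to reuse; all names and the placeholder lookup are assumptions.

```python
# Sketch: a domain expert computes a value and injects it into the shared concept cloud.
# All names are hypothetical; fetch_forecast stands in for whatever weather lookup is used.
class WeatherAgent:
    def __init__(self, concept_cloud):
        self.concept_cloud = concept_cloud

    def handle(self, request):
        city = self.concept_cloud.get("destination_city", "current location")
        forecast = self.fetch_forecast(city, when="tonight")
        # Inject the computed value so other domain experts/applications can reuse it.
        self.concept_cloud["weather_tonight"] = forecast
        return f"Tonight in {city}: {forecast}"

    def fetch_forecast(self, city, when):
        return "light rain, 12 C"               # placeholder for a real lookup

concept_cloud = {"destination_city": "Boston"}
print(WeatherAgent(concept_cloud).handle("What is the weather tonight?"))
```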
Embodiments of the invention may be implemented in whole or in part in any conventional computer programming language such as VHDL, SystemC, Verilog, ASM, etc. Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
Embodiments can be implemented in whole or in part as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
Although various exemplary embodiments of the invention have been disclosed, it should be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from the true scope of the invention.
This application is a continuation of U.S. patent application Ser. No. 13/904,269, filed May 29, 2013, titled “Multiple Parallel Dialogs in Smart Phone Applications.” Application Ser. No. 13/904,269, in its entirety, is incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
5615296 | Stanford et al. | Mar 1997 | A |
5651096 | Pallakoff et al. | Jul 1997 | A |
5774860 | Bayya et al. | Jun 1998 | A |
5794189 | Gould | Aug 1998 | A |
5873107 | Borovoy et al. | Feb 1999 | A |
6167377 | Gillick et al. | Dec 2000 | A |
6173266 | Marx et al. | Jan 2001 | B1 |
6201540 | Gallup | Mar 2001 | B1 |
6233559 | Balakrishnan | May 2001 | B1 |
6311159 | Van Tichelen et al. | Oct 2001 | B1 |
6418440 | Kuo et al. | Jul 2002 | B1 |
6970935 | Maes | Nov 2005 | B1 |
7016847 | Tessel et al. | Mar 2006 | B1 |
7069254 | Foulger et al. | Jun 2006 | B2 |
7137126 | Coffman et al. | Nov 2006 | B1 |
7206747 | Morgan et al. | Apr 2007 | B1 |
7224346 | Sheng | May 2007 | B2 |
7451152 | Kraft et al. | Nov 2008 | B2 |
7512904 | Matthews et al. | Mar 2009 | B2 |
7555713 | Yang | Jun 2009 | B2 |
7562082 | Zhou | Jul 2009 | B2 |
7599915 | Hill et al. | Oct 2009 | B2 |
7676517 | Hurst-Hiller et al. | Mar 2010 | B2 |
7774713 | Mital et al. | Aug 2010 | B2 |
7953730 | Bleckner et al. | May 2011 | B1 |
8326622 | Kraenzel et al. | Dec 2012 | B2 |
8719034 | Cross, Jr. et al. | May 2014 | B2 |
8959109 | Scott et al. | Feb 2015 | B2 |
9043709 | Chae et al. | May 2015 | B2 |
20010028368 | Swartz et al. | Oct 2001 | A1 |
20020135614 | Bennett | Sep 2002 | A1 |
20020140741 | Felkey et al. | Oct 2002 | A1 |
20020184023 | Busayapongchai et al. | Dec 2002 | A1 |
20030098891 | Molander | May 2003 | A1 |
20030171928 | Falcon et al. | Sep 2003 | A1 |
20030182131 | Arnold et al. | Sep 2003 | A1 |
20030191627 | Au | Oct 2003 | A1 |
20040166832 | Portman et al. | Aug 2004 | A1 |
20040215649 | Whalen et al. | Oct 2004 | A1 |
20050033582 | Gadd et al. | Feb 2005 | A1 |
20050080625 | Bennett et al. | Apr 2005 | A1 |
20050120306 | Klassen et al. | Jun 2005 | A1 |
20050192804 | Kitagawa et al. | Sep 2005 | A1 |
20060053016 | Falcon et al. | Mar 2006 | A1 |
20070033005 | Cristo et al. | Feb 2007 | A1 |
20070050191 | Weider et al. | Mar 2007 | A1 |
20070061148 | Cross et al. | Mar 2007 | A1 |
20080048908 | Sato | Feb 2008 | A1 |
20080091406 | Baldwin et al. | Apr 2008 | A1 |
20080189110 | Freeman | Aug 2008 | A1 |
20090150156 | Kennewick et al. | Jun 2009 | A1 |
20100146449 | Brown et al. | Jun 2010 | A1 |
20100218141 | Xu et al. | Aug 2010 | A1 |
20100312547 | Van Os et al. | Dec 2010 | A1 |
20110054899 | Phillips | Mar 2011 | A1 |
20110106534 | LeBeau et al. | May 2011 | A1 |
20110193726 | Szwabowski et al. | Aug 2011 | A1 |
20110231798 | Cok | Sep 2011 | A1 |
20110246944 | Byrne et al. | Oct 2011 | A1 |
20120023524 | Suk et al. | Jan 2012 | A1 |
20120078611 | Soltani | Mar 2012 | A1 |
20120131470 | Wessling | May 2012 | A1 |
20120185798 | Louch et al. | Jul 2012 | A1 |
20120209608 | Lee | Aug 2012 | A1 |
20120316871 | Koll et al. | Dec 2012 | A1 |
20130054791 | Oki et al. | Feb 2013 | A1 |
20130159920 | Scott | Jun 2013 | A1 |
20130169524 | Han et al. | Jul 2013 | A1 |
20130226590 | Lee | Aug 2013 | A1 |
20130325460 | Kim et al. | Dec 2013 | A1 |
20130346872 | Scott | Dec 2013 | A1 |
20140095147 | Hebert et al. | Apr 2014 | A1 |
20140108019 | Ehsani et al. | Apr 2014 | A1 |
20140136183 | Hebert et al. | May 2014 | A1 |
20140149920 | Wang et al. | May 2014 | A1 |
20140163959 | Hebert et al. | Jun 2014 | A1 |
20140164953 | Lynch et al. | Jun 2014 | A1 |
20140173460 | Kim | Jun 2014 | A1 |
20140189584 | Weng et al. | Jul 2014 | A1 |
20140195243 | Cha et al. | Jul 2014 | A1 |
20140249821 | Kennewick et al. | Sep 2014 | A1 |
20140257793 | Gandrabur et al. | Sep 2014 | A1 |
20140278435 | Ganong, III et al. | Sep 2014 | A1 |
20140281969 | Kumar | Sep 2014 | A1 |
20140297268 | Govrin et al. | Oct 2014 | A1 |
20140297283 | Hebert et al. | Oct 2014 | A1 |
20140358545 | Robichaud et al. | Dec 2014 | A1 |
20150066479 | Pasupalak et al. | Mar 2015 | A1 |
20150082175 | Onohara et al. | Mar 2015 | A1 |
20150160907 | Zhang et al. | Jun 2015 | A1 |
Entry |
---|
Lin, Bor-shen, Hsin-min Wang, and Lin-shan Lee. "A distributed architecture for cooperative spoken dialogue agents with coherent dialogue state and history." ASRU, Vol. 99, 1999. |
J.E. Kendall et al., "Information Delivery Systems: An Exploration of Web Pull and Push Technologies," Communications of the Association for Information Systems, vol. 1, Art. 14, Apr. 1999. |
May 20, 2015 U.S. Non-Final Office Action—U.S. Appl. No. 13/904,269. |
Nov. 4, 2015 U.S. Final Office Action—U.S. Appl. No. 13/904,269. |
Number | Date | Country |
---|---|---|
20170011744 A1 | Jan 2017 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 13904269 | May 2013 | US |
Child | 15215956 | | US |