The study of task-based natural language dialogs has generally been restricted to settings in which two agents collaborate on a single task. Within the domain of computerized personal assistance, there is a need to provide assistance to people who are engaged in multiple tasks. As an example, suppose someone is driving and wants help from his or her personal assistant in choosing a movie and dinner for that evening. The problem is that these tasks can interact in both positive and negative ways: if one picks a particular movie, that choice might affect the restaurant choice, and vice versa. In addition, a user will typically not initially know which movie or restaurant he or she wants: the user may know only general constraints, which the user reveals incrementally to the system during the dialog. Such systems must additionally allow users to change their minds regarding those constraints (for example, the cuisine, neighborhood, or movie genre) during the course of the dialog.
The work of Lemon et al. (2002; see References section below) focused on multitask dialogs involving the control of multiple devices. Other work has taken a narrower view of tasks and their interaction using a statistically-based approach (Griol and Molina, 2016) in which the term "task" has a less commonsense association: rather than denoting tasks that can effect change in the world, it refers, essentially, to different dialog acts. Similarly, early work on agenda-based dialog management made use of a very loose notion of a task and, even though it addressed the need to modify previous user choices (Rudnicky and Wei, 1999), such systems did not consider revisions that should arise automatically because of consistency concerns, while also conflating attributions of mental state with procedurally-motivated program elements.
Other work on multi-task dialogs has focused on dialog interruptions (Yang et al., 2011). A separate branch of research views the "multi-task dialog problem" as one of extending an existing task dialog system with new tasks in order to increase robustness (Crook et al., 2015). Somewhat related are efforts to extend dialog systems to support conversations with multiple applications (sometimes referred to as cross-domain intentions), each of which has a particular specialization (Ming Sun and Rudnicky, 2015).
There is, therefore, a need to support multiple task dialogs using a computerized personal assistant.
In a multi-intent search dialog, according to an embodiment of the invention, a human user and a computerized personal assistant incrementally exchange information to support achievement of multiple tasks of the human user. These multiple tasks can interact, and choices made by the user can be revised, during the course of the dialog. Those revisions can, in turn, lead to modifications in the ongoing specification of other tasks. The approach is a plan-based one in which a dialog between the two agents is viewed as a collaboration involving the tasks under discussion.
In accordance with an embodiment of the invention, there is provided a computer-implemented method for managing a dialog between a computerized personal assistant and a human user. The computer-implemented method comprises performing dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user in the same collaborative dialog, at least one of the multiple task intentions being initially partially specified. The dialog processing comprises, with a task engine of the computerized personal assistant, iteratively expanding task intentions of an intention base comprising the multiple task intentions until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant. The iteratively expanding task intentions of the intention base comprises, at each iteration, using the task engine of the computerized personal assistant in evaluating a new option to be expressed via an utterance of the computerized personal assistant to the human user, the new option comprising a new constraint that has not been considered before, that is consistent with the intention base, and that reduces future options for the intention base, the collaborative dialog thereby converging on the intention base being executable by the computerized personal assistant.
In further, related embodiments, evaluating the new option may be based on a currently active task intention, any constraints for the currently active task intention that have already been considered in previous iterations, and any changes in the intention base that have resulted from revisions of the intention base in previous iterations. Evaluating the new option may comprise presenting the new constraint, for the currently active task intention, to the human user, and, (i) if the human user accepts the new constraint, updating the currently active task intention with the new constraint and updating the constraint as having been considered, (ii) if the human user rejects the new constraint, updating the constraint as having been considered, (iii) if the human user proposes a new task intention that is related in parameters to an existing task intention of the intention base, sharing parameters between the new task intention and the existing task intention in the intention base, (iv) if the human user proposes a new task intention unrelated to an existing task intention, augmenting the intention base with the new task intention, and (v) if the human user adds a new constraint or changes an existing constraint, revising the intention base to include the new constraint or changed constraint and to change any other constraints in the intention base that are affected by the new constraint or changed constraint. The computer-implemented method may further comprise generating natural language to be uttered to the human user. At least two of the multiple task intentions may interact with each other by one or more of a greater cost or a lesser cost of performing the at least two of the multiple task intentions together.
In other related embodiments, the computer-implemented method may further comprise receiving, from a natural language understanding engine, a natural language interpretation of utterances of the human user to a speech recognition system. The natural language interpretation may comprise at least one of: (i) intent data and mention list data from a statistical natural language system and (ii) logical form natural language data output from a deep natural language system. The natural language interpretation may be used as the basis for at least one of a new constraint and a new task intention for the collaborative dialog between the computerized personal assistant and the human user. The computer-implemented method may further comprise modeling a task intention of the human user in a dynamic intention structure built using a library of task recipes specifying how domain tasks are to be carried out in a hierarchical task model. The dynamic intention structure may comprise, for each task intention: a task intention identifier, a task intention variable, an act, a constraint, and a representation of any subsidiary task dynamic intention structure. The computer-implemented method may further comprise modeling the at least one of the multiple task intentions, which is initially partially specified, using at least one of: (i) an existential quantifier within a scope of the initially partially specified task intention; (ii) an incompletely specified constraint of an intended action of the initially partially specified task intention; and (iii) an action description, which is not yet fully decomposed, of the initially partially specified task intention. Upon receiving an interpreted utterance of the human user that is unrelated to performing collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user, a natural language response to the human user may be generated, to guide the human user to return to the collaborative dialog.
In another embodiment according to the invention, there is provided a computerized collaborative dialog manager system for managing a dialog between a computerized personal assistant and a human user. The computerized collaborative dialog manager system comprises a processor, and a memory with computer code instructions stored thereon, the processor and the memory, with the computer code instructions, being configured to implement a task engine. The task engine is configured to perform dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user in the same collaborative dialog, at least one of the multiple task intentions being initially partially specified. The dialog processing comprises iteratively expanding task intentions of an intention base comprising the multiple task intentions until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant. The task engine comprises an option engine configured to, at each iteration, evaluate a new option to be expressed via an utterance of the computerized personal assistant to the human user. The new option comprises a new constraint that has not been considered before, that is consistent with the intention base, and that reduces future options for the intention base, the collaborative dialog thereby converging on the intention base being executable by the computerized personal assistant.
In further related embodiments, the task engine may be configured to evaluate the new option based on a currently active task intention, any constraints for the currently active task intention that have already been considered in previous iterations, and any changes in the intention base that have resulted from revisions of the intention base in previous iterations. The task engine may be configured to evaluate the new option by a computerized process comprising presenting the new constraint, for the currently active task intention, to the human user, and, (i) if the human user accepts the new constraint, updating the currently active task intention with the new constraint and updating the constraint as having been considered, (ii) if the human user rejects the new constraint, updating the constraint as having been considered, (iii) if the human user proposes a new task intention that is related in parameters to an existing task intention of the intention base, sharing parameters between the new task intention and the existing task intention in the intention base, (iv) if the human user proposes a new task intention unrelated to an existing task intention, augmenting the intention base with the new task intention, and (v) if the human user adds a new constraint or changes an existing constraint, revising the intention base to include the new constraint or changed constraint and to change any other constraints in the intention base that are affected by the new constraint or changed constraint. The computerized collaborative dialog manager system may further comprise a dialog generator configured to generate natural language to be uttered to the human user. The task engine may be configured to manage dialog in which at least two of the multiple task intentions interact with each other by one or more of a greater cost or a lesser cost of performing the at least two of the multiple task intentions together.
In other related embodiments, the computerized collaborative dialog manager system may further comprise an input processor configured to receive, from a natural language understanding engine, a natural language interpretation of utterances of the human user to a speech recognition system. The natural language interpretation may comprise at least one of: (i) intent data and mention list data from a statistical natural language system and (ii) logical form natural language data output from a deep natural language system. The task engine may be configured to use the natural language interpretation, as the basis for at least one of a new constraint and a new task intention for the collaborative dialog between the computerized personal assistant and the human user. The task engine may be configured to model a task intention of the human user in a dynamic intention structure based at least on consulting a library of task recipes specifying how domain tasks are to be carried out in a hierarchical task model. The dynamic intention structure implemented by the task engine may comprise, for each task intention: a task intention identifier, a task intention variable, an act, a constraint, and a representation of any subsidiary task dynamic intention structure. The task engine may be further configured to, upon receiving an interpreted utterance of the human user that is unrelated to performing collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user, generate a natural language response to the human user to guide the human user to return to the collaborative dialog.
In another embodiment according to the invention, there is provided a non-transitory computer-readable medium configured to store instructions for managing a dialog between a computerized personal assistant and a human user. The instructions, when loaded and executed by a processor, cause the processor to manage the dialog by performing dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user in the same collaborative dialog, at least one of the multiple task intentions being initially partially specified. The dialog processing comprises, with a task engine of the computerized personal assistant, iteratively expanding task intentions of an intention base comprising the multiple task intentions until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant. The iteratively expanding task intentions of the intention base comprises, at each iteration, using the task engine of the computerized personal assistant to evaluate a new option to be expressed via an utterance of the computerized personal assistant to the human user, the new option comprising a new constraint that has not been considered before, that is consistent with the intention base, and that reduces future options for the intention base, the collaborative dialog thereby converging on the intention base being executable by the computerized personal assistant.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
A description of example embodiments follows.
Research in task-based dialog management has focused for the most part on dialogs between agents involved in a single task. A major approach within this area of research has focused on the development of plan- or collaborative-based systems where each agent shares beliefs, intentions and task information to enable completion of the task under discussion (Grosz and Sidner, 1990; Grosz and Kraus, 1996; see References section below).
In accordance with an embodiment of the invention, a computerized collaborative dialog manager system focuses instead on multiple tasks, such as planning a dinner and a movie or planning a weekend that might involve wine tasting, a balloon ride, and dinner. An embodiment implements the sorts of dialogs that one would like to support between a virtual personal assistant (VPA) and a human user who is pursuing those tasks. In such dialogs, users typically only incrementally reveal their preferences or constraints regarding an eventual choice and often shift between sub-dialogs for different tasks as the conversation unfolds. Hence, the assistant cannot pursue the solution of tasks in a linear fashion: that is, by first solving one task and then moving on to the next. Such dialogs are referred to herein as “search dialogs” because the two agents are jointly searching the space of possible options, and that space will decrease as new constraints are added, unless old ones are changed.
In accordance with an embodiment of the invention, a search dialog is roughly modeled as follows. A user and a system start with a partially specified intention, say, to reserve a table at some restaurant. As the dialog evolves, each agent exchanges information with the other in the form of constraints, options and selections. The information exchanged reflects the expertise of each agent in the collaborative planning: the user will have personal preferences for and knowledge of certain restaurants, for example, while the system will have extensive information about restaurant locations and availability. This process continues until the user and system arrive at a fully specified and executable version of the task intentions.
A number of challenges arise in such multi-task dialogs. First, the tasks under consideration can interact in both positive and negative ways. An embodiment according to the invention models a positive/negative interaction between two tasks in terms of a lesser/greater cost of doing the two tasks together. A negative interaction between, for example, the tasks of dinner at a restaurant and watching a movie at a theater later might occur through a choice of a restaurant whose location is farther from the theater than another choice. Because the conversation interleaves the individual task sub-dialogs, and because of the characteristic non-linearity of task elaboration discussed above, a user's specification of a task attribute can invalidate a developing plan for the other task, entailing revision of that other task's description.
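For illustration, the cost-based interaction model described above can be sketched as follows; the function, its inputs, and the example costs are hypothetical simplifications, not the embodiment's implementation:

```python
# Toy model of task interaction: two tasks interact positively when
# performing them together costs less than performing them separately
# (e.g., a restaurant near the theater), and negatively when it costs
# more (e.g., a restaurant far from the theater).
def classify_interaction(cost_each_alone, cost_together):
    separate = sum(cost_each_alone)
    if cost_together < separate:
        return "positive"
    if cost_together > separate:
        return "negative"
    return "neutral"
```

For example, if dinner and the movie each cost 30 and 20 minutes of travel separately, but a joint plan costs only 40, the interaction is positive.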
A second type of revision of past decisions can come about because users typically change their minds during such dialogs: initially a user might choose a particular restaurant that is Italian, only to later indicate a preference for Mexican cuisine, entailing revision of some of the consequences of the previous choice (Ortiz and Shen, 2014). (This is in contrast to a typical master-apprentice dialog, in which it is assumed that a good master does not normally make mistakes.) Therefore, the dialog control that manages the moves between the subdialogs involving each task cannot be handled with a stack, as is normally the case when dealing with interruptions: if the intention corresponding to the first task were put on a stack, that intention might itself be revised in the course of the conversation about a second task. Moreover, if there are more than two tasks, there is no reason to believe that, after updating or revising an intention as part of the current conversation, one should return to the task discussed immediately before the interruption: there may be good reasons to go back to an older task that was revised as a consequence of the change.
To illustrate these phenomena and the challenges involved, as well as to motivate the approach taken in accordance with an embodiment of the invention, we turn first to the examples of
In the first 5 utterances of the example of
Utterances 14 (of the example of
In this example of a system in accordance with an embodiment of the invention, it is assumed that the preposition “after” here is interpreted pragmatically as in “as soon as possible after.” Consequently, the dependency between dining and movie tasks is tied to that temporal constraint, and subsequent choices will reflect the cost in terms of travel time (that is, leading to positive or negative interactions). This triggers planning that is explained in utterance 27 involving temporal relaxation (i.e., revision) (Yu and Williams, 2013) of previous constraints to arrive at a useful recommendation, which, in this case, involves a change to watching a movie before dinner and moving dinner to a later time. Upon conclusion, the system interacts with a reservation server to make the reservation.
To support multiple task dialogs using a computerized personal assistant in contexts such as those illustrated in
The task engine 106 includes an iterative expansion engine 120, which performs dialog processing that involves iteratively expanding 117 the multiple task intentions 114 that are included in an intention base 111. This continues until the computerized personal assistant 108 and the human user 110 collaboratively arrive at values of the parameters 118 of the multiple task intentions 114 of the intention base 111 that are executable by the computerized personal assistant 108. The task engine 106 also comprises an option engine 122 that is configured, at each iteration, to evaluate a new option to be expressed via an utterance of the computerized personal assistant 108 to the human user 110.
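The iterative expansion loop can be sketched minimally as follows; the data layout (task intentions as dictionaries of parameters), the acceptance callback, and the function name are illustrative assumptions, and consistency checking and revision are elided:

```python
def expand_intentions(intention_base, candidate_constraints, user_accepts):
    """Iteratively propose constraints that have not been considered
    before, until every parameter of every task intention in the base
    has a value and the base is therefore executable."""
    considered = set()
    for task, param, value in candidate_constraints:
        if (task, param, value) in considered:
            continue
        considered.add((task, param, value))
        if intention_base[task][param] is not None:
            continue  # parameter already fixed in an earlier iteration
        if user_accepts(task, param, value):
            intention_base[task][param] = value
    executable = all(v is not None
                     for t in intention_base.values()
                     for v in t.values())
    return intention_base, executable
```

Each accepted constraint reduces the space of future options; a rejected constraint is still marked as considered so it is not proposed again.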
In the embodiment of
With reference to the embodiment of
In more detail, in this embodiment, a dynamic intention structure (DIS) is a tuple ⟨Id, Vars, Act, Constraints, Sub⟩, where Sub is a set of Id's representing sub-task DIS's. Each sub-DIS can have its own sub-DIS's, until the top-level intention is fully elaborated and executable.
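The five-field structure above can be rendered, for illustration, as a simple record; the field types and the executability check are assumptions of this sketch, not the embodiment's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class DIS:
    """A dynamic intention structure: <Id, Vars, Act, Constraints, Sub>."""
    id: str
    vars: list            # still-open variables, e.g. ["x"]
    act: str              # the intended act, e.g. "findRestaurant(x)"
    constraints: set      # e.g. {"italian(x)", "cheap(x)"}
    sub: list = field(default_factory=list)   # ids of sub-task DIS's

def fully_elaborated(dis, registry):
    """True when this DIS and all of its sub-DIS's (looked up by id in
    `registry`) have no remaining open variables."""
    return not dis.vars and all(
        fully_elaborated(registry[i], registry) for i in dis.sub)
```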
The procedure of the embodiment of
In the procedure of the embodiment of
In the example procedure of
With reference to the embodiment of
Turning to
Turning to
In
In accordance with an embodiment of the invention, a collection of intentions is modeled in an Intention Base (IB). When an agent changes his mind about a constraint, revision occurs. The revision of an IB involves a number of steps (Ortiz and Hunsberger, 2013). What follows is an example focusing on revision of constraints with respect to cases (1) and (2) above regarding incomplete intentions. Let CI be the set of constraints associated with a DIS, I, in an IB. If one wants to revise CI (written CI*p) with some p, one collects all the maximal subsets of CI that do not entail ¬p: CI*p = {S∪{p} | S⊆CI, S⊬¬p, and if S⊂T⊆CI then T⊢¬p}. Suppose ψ∉CI but CI⊢ψ; then ψ is called a "side effect" of the initial IB. With this definition of revision, side effects can be removed automatically. Suppose that CI = {italian(x), restaurant(x), name(x, CDS), name(x, CDS) ⊃ restaurant(x)∧italian(x)∧cheap(x)} and one has the background knowledge italian(p) ⊃ ¬american(p). If one intends to go to restaurant CDS and one later decides to go to an American restaurant, one will no longer intend to go to CDS. Here, it is noted that there is not always a unique revision, but in this example, for the sake of simplicity, it is assumed that there is. In addition, it is noted that one can think of the variable x appearing in non-rules (e.g., italian(x)) as really a Skolem constant. As mentioned herein, the semantics of the boxes is in terms of a translation into formulas involving a leading "intends" modality, with the implicit existentials for variables in the boxes made explicit. During revision, boxes are unpacked into components, such as the constraints shown here, revised and checked for consistency in terms of the modal-logic translation, and then reconstructed into new boxes. The example rules shown are usually "protected," in an implemented embodiment, and not subject to revision.
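The maximal-subset revision just defined can be sketched concretely. The string encoding of literals, the forward-chaining entailment check, and the brute-force subset search are all simplifying assumptions; a real implementation would operate on the modal-logic translation described above, and the rules here are treated as protected background knowledge:

```python
from itertools import combinations

# Hypothetical protected rules: the CDS naming rule from the example,
# plus the background knowledge that Italian and American are exclusive.
RULES = [
    ({"name(x,CDS)"}, {"restaurant(x)", "italian(x)", "cheap(x)"}),
    ({"italian(x)"}, {"not american(x)"}),
    ({"american(x)"}, {"not italian(x)"}),
]

def closure(facts):
    """Forward-chain the rules to a fixpoint over a set of literals."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= facts and not head <= facts:
                facts |= head
                changed = True
    return facts

def neg(p):
    return p[4:] if p.startswith("not ") else "not " + p

def revise(ci, p):
    """CI * p: each maximal S ⊆ CI that does not entail ¬p, plus {p}."""
    candidates = [set(s)
                  for n in range(len(ci), -1, -1)
                  for s in combinations(sorted(ci), n)
                  if neg(p) not in closure(s)]
    maximal = [s for s in candidates
               if not any(s < t for t in candidates)]
    return [s | {p} for s in maximal]
```

Revising {italian(x), restaurant(x), name(x, CDS)} with american(x) drops both italian(x) and the side effect name(x, CDS), since either would entail ¬american(x), leaving only restaurant(x) plus the new constraint.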
An embodiment according to the invention implements a conversational assistant prototype system called the Intelligent Concierge. The system converses with the user about common destinations such as restaurants, movie theaters, and parking options, and helps users refine their needs and desires until they find exactly what they are happy with. A Natural Language Understanding (NLU) pipeline provides input to the Collaborative Dialog Manager (CDM), which operates at the center of the Concierge, taking a user utterance in the form of natural language text produced by a speech recognition system and interpreting it. The NLU's output can be either an intent and mention list from a statistical NL system (Wang et al., 2011) or a logical form output from a deep NLU system (Dahlgren, 2013); the latter is focused on in the following discussion. With the aid of a library of reasoning components and backend knowledge sources, CDM interprets input in the context of the current dialog and the evolving intention, and processes dialogs of the form seen herein: taking the user's request, performing required actions such as making a restaurant reservation, requesting more information such as preferences regarding a particular cuisine, or offering information to the user such as a list of restaurant options. External sources, such as OpenTable, are accessed via backend reasoning processes. The operation of CDM and its support for search dialogs is assisted by tightly integrating the dialog manager with supporting reasoning modules: the latter inform the dialog manager as to what to say next, how to interpret new user input, or how to revise an intention. A temporal relaxation planner can be incorporated for reasoning about domain actions; it produces the output associated with utterance 27, for example. Finally, a Natural Language Generation (NLG) component generates the natural language surface form of the system output if CDM decides on a conversational action.
In accordance with an embodiment of the invention, CDM is built on top of a dialog development framework (Rich and Sidner, 2012) based on Collaborative Discourse Theory (Grosz and Sidner, 1990; Grosz and Kraus, 1996; Lochbaum, 1998; Sidner, 1994). In the framework of an embodiment according to the invention, dialog participants have their own desires, beliefs, and intentions, but these may be inconsistent with each other. Dialog is the process by which the participants communicate and bring them into consistency in relevant ways.
The purpose of a dialog is to form a full plan between the user and the system in order to achieve a joint goal (i.e., an elaborated IB). Utterances of the other participant are processed and internalized to augment the agent's view of their collaboration.
The embodiment of
imp(Ex5 Ex1: (restaurant(x1) & _cardy(x1,_p1) &
  Ex2: (cheap(x2) & _mod(x1,x2)) &
  Ex6 Ex7 Ex3: (City-Hall(x3) &
  Ex8 Ex9 Ex4: (San-Francisco(x4) & in4(x3,x4) & _location(x3,x4)) &
  near(x1,x3) & _location(x1,x3)) &
  find(_inf,_e1) & _obj(_e1,x1) & _subj(_e1,_you)))
and maintained in a layered semantic graph form. Intentions are represented as DIS's.
Unlike most other modern dialog systems, CDM, in an embodiment according to the invention, does not make use of a finite state machine (Pieraccini and Huerta, 2005) or information states (Larsson and Traum, 2000) to model the dialog state. Instead, it builds DIS's using a library of task recipes, each of which specifies how domain tasks should be carried out or composed. Tasks can be physical tasks (e.g., driving) as well as internal ("mental") actions (e.g., scheduling). Each recipe, as, for example, shown in
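The recipe library can be illustrated with a toy decomposition; the recipe names, beyond those appearing in the sample dialog, and the dictionary encoding are hypothetical:

```python
# Hypothetical recipe library mapping each task to the subtasks it
# decomposes into; leaf tasks (physical or "mental") have no subtasks.
RECIPES = {
    "scheduleEvents": ["findRestaurant", "findMovie"],
    "findRestaurant": ["chooseCuisine", "reserveTable"],
    "findMovie": [],
    "chooseCuisine": [],
    "reserveTable": [],
}

def decompose(task, recipes):
    """Expand a task into its hierarchical plan tree using the library."""
    return {task: [decompose(s, recipes) for s in recipes.get(task, [])]}
```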
In accordance with an embodiment of the invention, actions are generated when the system cannot form a full plan from its recipe library and the user needs to be consulted to gain additional information. CDM has a library of basic utterance types, shown, for example, in
To provide an example of the operation of CDM, in accordance with an embodiment of the invention, there follows an example description of the dialog interpretation process which maps NLU output into an utterance type in the dialog context. When the user asks to “Find cheap restaurants near San Francisco City Hall” as the opening request in the sample dialog in
In accordance with an embodiment of the invention, a procedure ranks potential interpretations, first assuming that each user utterance represents the beginning of a new dialog. Then, preferring to carry on with an existing subdialog, CDM identifies any potential interpretations that would fit into the current dialog context, and re-ranks or changes the interpretation if necessary (step 21 in
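The two-pass ranking can be sketched as follows; scoring each candidate with a single number and representing interpretations as dictionaries are assumptions of the sketch, not the embodiment's representation:

```python
def rank_interpretations(candidates, active_tasks):
    """Rank candidate interpretations of a user utterance. Each candidate
    is first scored as if it began a new dialog; candidates that continue
    a task already in the dialog context are then promoted above the rest,
    reflecting the preference for carrying on an existing subdialog."""
    by_score = sorted(candidates, key=lambda c: c["score"], reverse=True)
    fits = [c for c in by_score if c["task"] in active_tasks]
    rest = [c for c in by_score if c["task"] not in active_tasks]
    return fits + rest
```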
Sometimes, the interpreted task is consistent with the dialog context, but not all of the specific task input values (lines 22-23 of
There are times when the user is legitimately changing topics in the middle of a conversation, such as when the user asks to "Find parking near there" (utterance 6) after the system suggested Caffe Delle Stelle. The current dialog context suggests that the interpreted user utterance remain Propose.Should(findParking(y)). However, this does not mean that the two sub-dialogs are unrelated. In fact, linguistic cues such as the anaphoric expression "there" clearly signal the interdependency between the two tasks (lines 16-19 in
A slightly more complicated intention interdependency occurs in the sample dialog when the user asks to “see an action movie” (utterance 26) after going to the restaurant. Initially, CDM interprets the user request as Propose.Should(findMovie(z)). However, while trying to reconcile it with the existing task findRestaurant(x), CDM discovers that the two tasks should not be executed separately (this could result in a negative interaction, i.e., higher cost), but instead be composed into a single supertask scheduleEvents (see
In summary, an embodiment according to the invention provides support for multi-intent search dialogs, in which a user and a VPA incrementally exchange information to support achievement of some set of user tasks. These tasks can interact and choices made by the user can be revised during the course of the dialog. Those revisions can, in turn, lead to modifications in the ongoing specification of other tasks. The approach is a plan-based one in which a dialog between two agents is viewed as a collaboration involving the tasks under discussion. An embodiment according to the invention exhibits robustness in the range of dialogs that can be supported through richness of task models, task elaboration strategies and dynamic revisions, and can closely integrate dialog processing and supporting reasoning.
In an embodiment according to the invention, processes described as being implemented by one processor may be implemented by component processors configured to perform the described processes. Such component processors may be implemented on a single machine, on multiple different machines, in a distributed fashion in a network, or as program module components implemented on any of the foregoing. In addition, systems such as computerized personal assistant 108, computerized collaborative dialog manager system 100, 900, and their components, can likewise be implemented on a single machine, on multiple different machines, in a distributed fashion in a network, or as program module components implemented on any of the foregoing. In addition, such components can be implemented on a variety of different possible devices. For example, computerized personal assistant 108, computerized collaborative dialog manager system 100, 900, and their components, can be implemented on devices such as mobile phones, desktop computers, Internet of Things (IoT) enabled appliances, networks, cloud-based servers, or any other suitable device, or as one or more components distributed amongst one or more such devices. In addition, devices and components of them can, for example, be distributed about a network or other distributed arrangement.
Although embodiments are described herein with reference to “utterances” of a computerized personal assistant, including by generating natural language utterances, it should be understood that a variety of different possible ways of interacting with a human user can be implemented in accordance with an embodiment of the invention. For example, depending on the technical context and desired user experience, one or more computerized components of a system can generate, and communicate using, “utterances” that include one or more of: computer-generated natural language speech utterances, computer-generated text messages, computer-generated graphical displays, computer-generated color indicators, computer-generated tactile messages for the visually impaired, and a variety of other possible computer-generated utterances. Similarly, although embodiments are described herein that include a computerized system performing natural language understanding in order to process a human user's actions or decisions (such as accepting constraints, rejecting constraints, and proposing new task intentions), it should be appreciated that the human user's interaction with the computerized system can be in a variety of different possible forms. For example, one or more computerized components of a system can receive a human user's utterances in the form of: natural language speech, text messages, interactions with a graphical display by gesture or tactile contact with the display, buttons and other devices, or a variety of other possible computer input techniques.
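As a non-limiting illustrative sketch (the class and function names below are hypothetical and do not correspond to any embodiment described herein), the separation of dialog content from delivery modality can be expressed by rendering one logical "utterance" through any number of interchangeable output channels:

```python
# Hypothetical sketch: one logical utterance, multiple delivery modalities.

class Renderer:
    def render(self, content):
        raise NotImplementedError

class TextRenderer(Renderer):
    def render(self, content):
        return f"TEXT: {content}"

class SpeechRenderer(Renderer):
    def render(self, content):
        # A real system would invoke speech synthesis here.
        return f"SPEECH: <synthesized> {content}"

def utter(content, renderers):
    # One logical utterance may be delivered over several channels at once,
    # e.g., spoken aloud while also shown on a display.
    return [r.render(content) for r in renderers]

outputs = utter("Which cuisine would you like?",
                [TextRenderer(), SpeechRenderer()])
```

The same decoupling applies on the input side, where speech, text, and gestural interactions can all be mapped to a common representation of the user's action before natural language understanding or intent processing occurs.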
In one embodiment, the processor routines 92 and data 94 are a computer program product (generally referenced 92), including a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the invention system. The computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication and/or wireless connection. In other embodiments, the invention programs are a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)). Such carrier medium or signals may be employed to provide at least a portion of the software instructions for the present invention routines/program 92. In alternative embodiments, the propagated signal is an analog carrier wave or digital signal carried on the propagated medium. For example, the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or other network. In one embodiment, the propagated signal is a signal that is transmitted over the propagation medium over a period of time, such as the instructions for a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer.
P. A. Crook, A. Marin, V. Agarwal, K. Aggarwal, T. Anastasakos, R. Bikkula, D. Boies, A. Celikyilmaz, S. Chandramohan, Z. Feizollahi, R. Holenstein, J. Jeong, O. Z. Khan, Y. B. Kim, E. Krawczyk, X. Liu, D. Panic, V. Radostev, N. Ramesh, J. P. Robichaud, A. Rochette, L. Stromberg, and R. Sarikaya. 2015. Task completion platform: A self-serve multi-domain goal oriented dialogue platform. In NIPS-SLU15.
Kathleen Dahlgren. 2013. Formal linguistic semantics and dialogue. In Annual Semantic Technology Conference.
David Griol and Jose Manuel Molina. 2016. A proposal to manage multi-task dialogs in conversational interfaces. Advances in Distributed Computing and Artificial Intelligence Journal, 5(2):53-65.
Barbara J. Grosz and Luke Hunsberger. 2006. The dynamics of intention in collaborative activity. Cognitive Systems Research (Special Issue on Cognition, Joint Action and Collective Intentionality), 7(2-3):259-272.
Barbara J. Grosz and Sarit Kraus. 1996. Collaborative plans for complex group action. Artificial Intelligence, 86(1):269-357.
Barbara Grosz and Candace Sidner. 1990. Plans for discourse. In P. Cohen, J. Morgan, and M. Pollack, editors, Intentions in Communication, pages 417-444. Bradford Books/MIT Press, Cambridge, Mass.
Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language, Formal Logic, and Discourse Representation Theory. Kluwer Academic Publishers, Dordrecht, the Netherlands.
S. Larsson and D. R. Traum. 2000. Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering, 6(3-4):323-340.
Oliver Lemon, Alexander Gruenstein, Alexis Battle, and Stanley Peters. 2002. Multi-tasking and collaborative activities in dialogue systems. In Proceedings of the Third SIGdial Workshop on Discourse and Dialogue, pages 113-124.
Karen E. Lochbaum. 1998. A collaborative planning model of intentional structure. Computational Linguistics, 24(4):525-572.
Ming Sun, Yun-Nung Chen, and Alexander I. Rudnicky. 2015. Understanding users' cross-domain intentions in spoken dialog systems. In NIPS.
Charles Ortiz and Luke Hunsberger. 2013. On the revision of dynamic intention structures. In Proceedings of the Eleventh International Symposium on Logical Formalizations of Commonsense Reasoning.
Charles Ortiz and Jiaying Shen. 2014. Dynamic intention structures for dialogue processing. In Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2014).
R. Pieraccini and J. Huerta. 2005. Where do we go from here? Research and commercial spoken dialog systems. In 6th SIGdial Workshop on Discourse and Dialogue.
Charles Rich and Candace L. Sidner. 2012. Using collaborative discourse theory to partially automate dialogue tree authoring. In 12th International Conference on Intelligent Virtual Agents, September.
A. Rudnicky and X. Wei. 1999. An agenda-based dialog management architecture for spoken language systems. In IEEE ASRU, Seattle, Wash.
Candace L. Sidner. 1994. An artificial discourse language for collaborative negotiation. In Proceedings of AAAI, pages 814-819.
Ye-Yi Wang, Li Deng, and Alex Acero. 2011. Semantic Frame Based Spoken Language Understanding. Wiley, January.
Fan Yang, Peter A. Heeman, and Andrew L. Kun. 2011. An investigation of interruptions and resumptions in multi-tasking dialogues. Computational Linguistics, 37(1):75-104.
Peng Yu and Brian Williams. 2013. Continuously relaxing over-constrained conditional temporal problems through generalized conflict learning and resolution. In Proceedings of the Twenty-third International Joint Conference on Artificial Intelligence (IJCAI-2013).
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While example embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the embodiments encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/682,800, filed on Jun. 8, 2018, and U.S. Provisional Application No. 62/725,370, filed on Aug. 31, 2018. The entire teachings of the above applications are incorporated herein by reference.
Number | Date | Country
---|---|---
62682800 | Jun 2018 | US
62725370 | Aug 2018 | US