The present disclosure relates generally to digital assistants, and more particularly, but not necessarily exclusively, to techniques for routing a user input to an action and associated parameters to generate a response to an utterance using a digital assistant and large language models.
Artificial intelligence (AI) has diverse applications, with a notable evolution in the realm of digital assistants or chatbots. Originally, many users sought instant reactions through instant messaging or chat platforms. Organizations, recognizing the potential for engagement, utilized these platforms to interact with entities, such as end users, in real-time conversations.
However, maintaining a live communication channel with entities through human service personnel proved to be costly for organizations. In response to this challenge, digital assistants or chatbots, also known as bots, emerged as a solution to simulate conversations with entities, particularly over the Internet. The bots enabled entities to engage with users through messaging apps they already used or other applications with messaging capabilities.
Initially, traditional chatbots relied on predefined skill or intent models, which required entities to communicate within a fixed set of keywords or commands. Unfortunately, this approach limited an ability of the bot to engage intelligently and contextually in live conversations, hindering its capacity for natural communication. Entities were constrained by having to use specific commands that the bot could understand, often leading to difficulties in conveying intention effectively.
The landscape has since transformed with the integration of Large Language Models (LLMs) into digital assistants or chatbots. LLMs are deep learning algorithms that can perform a variety of natural language processing (NLP) tasks. They use a neural network architecture called a transformer, which can learn from the patterns and structures of natural language and conduct more nuanced and contextually aware conversations for various domains and purposes. This evolution marks a significant shift from rigid keyword-based interactions to a more adaptive and intuitive communication experience compared to traditional chatbots, enhancing the overall capabilities of digital assistants or chatbots in understanding and responding to user queries.
In various embodiments, a computer-implemented method can be used for identifying an action and associated parameters for generating an execution plan for a response to a user using a digital assistant. The method can include receiving an input query from a user in which the input query includes particular data. The method can include identifying, among one or more candidate actions, an action based on the input query. The method can include identifying a set of input argument slots within a schema associated with the action. For each input argument slot of the set of input argument slots, the method can include filling the input argument slot by determining whether one or more parameters corresponding with the input argument slot are derivable from the particular data, and in accordance with the one or more parameters corresponding with the input argument slot being derivable from the particular data, (i) deriving the one or more parameters from the particular data and (ii) filling the input argument slot with a version of the one or more parameters that conforms to the schema. The method can include transmitting an execution plan that includes the action with the set of filled input argument slots to an execution engine configured to execute the action for generating a response to the input query.
In some embodiments, receiving the input query can further include receiving contextual information. The contextual information can include (i) a conversation history associated with the user and (ii) a historical execution plan. Additionally or alternatively, identifying the action can include identifying the action based on the input query, the conversation history, and the historical execution plan.
In some embodiments, identifying the action can further include using a generative artificial intelligence model to select the action, among the candidate actions, to be executed based on the input query, the conversation history, and the historical execution plan.
In some embodiments, the method can further include determining that at least one input argument slot of the set of input argument slots cannot be filled using the one or more parameters. The one or more parameters may be missing at least one parameter. The method can further include extracting, using the generative artificial intelligence model, the at least one parameter from the conversation history. The method can further include filling the at least one input argument slot using the at least one parameter.
In some embodiments, the method can further include determining that the at least one input argument slot of the set of input argument slots cannot be filled using the one or more parameters. The version of the one or more parameters may not conform to the schema. The method can further include adjusting the version of the one or more parameters to conform to the schema. The method can further include filling the at least one input argument slot using the adjusted version of the one or more parameters in the schema.
In some embodiments, the method can further include determining a first subset of the set of input argument slots, where the first subset of input argument slots includes input argument slots that are required to execute the action. The method can further include determining a second subset of the set of input argument slots, where the second subset includes input argument slots that are optional to execute the action.
In some embodiments, the method can further include, in accordance with determining that the first subset includes at least one input argument slot that cannot be filled with the version of the one or more parameters, determining whether contextual information included in the input query includes one or more indications of the version of the one or more parameters. In accordance with determining that the contextual information includes the one or more indications of the version of the one or more parameters, the method can further include using the version of the one or more parameters to fill the at least one input argument slot. The method can further include, in accordance with determining that the contextual information does not include the one or more indications of the version of the one or more parameters, generating an output that is usable for requesting subsequent input from the user to receive the version of the one or more parameters.
In some embodiments, the method can further include, in accordance with determining that the second subset includes at least one input argument slot that cannot be filled with the version of the one or more parameters, determining whether contextual information included in the input query includes one or more indications of the version of the one or more parameters. The method can further include, in accordance with determining that the contextual information includes the one or more indications of the version of the one or more parameters, using the version of the one or more parameters to fill the at least one input argument slot. The method can further include, in accordance with determining that the contextual information does not include the one or more indications of the version of the one or more parameters, populating the set of filled input argument slots with an empty slot for the at least one input argument slot and transmitting the set of filled input argument slots to the execution engine.
In some embodiments, the method can further include executing, using the execution engine, the execution plan using the set of filled input argument slots to generate a response to the input query. The method can further include transmitting the response to the user for facilitating an interaction involving the user.
Some embodiments include a system including one or more processors and one or more computer-readable media storing instructions which, when executed by the one or more processors, cause the system to perform part or all of the operations and/or methods disclosed herein.
Some embodiments include one or more non-transitory computer-readable media storing instructions which, when executed by one or more processors, cause a system to perform part or all of the operations and/or methods disclosed herein.
The techniques described above and below may be implemented in a number of ways and in a number of contexts. Several example implementations and contexts are provided with reference to the following figures, as described below in more detail. However, the following implementations and contexts are but a few of many.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
Artificial intelligence techniques have broad applicability. For example, a digital assistant can be or include an artificial intelligence driven interface that helps users accomplish a variety of tasks using natural language conversations. Conventionally, for each digital assistant, a customer may assemble one or more skills that are focused on specific types of tasks, such as tracking inventory, submitting timecards, and creating expense reports. When an end user engages with the digital assistant, the digital assistant evaluates the end user input for the intent of the user and routes the conversation to and from the appropriate skill based on the user's perceived intent. However, there are some disadvantages of traditional intent-based skills including a limited understanding of natural language, inability to handle unknown inputs, limited ability to hold natural conversations off script, and challenges integrating external knowledge.
The advent of large language models (LLMs), such as GPT-4, has propelled the field of digital assistant design to unprecedented levels of sophistication and overcome these disadvantages and others of traditional intent-based skills. An LLM is a neural network that employs a transformer architecture, specifically crafted for processing and generating sequential data, such as text or words in conversations. LLMs undergo training with extensive textual data, gradually honing an ability to generate text that closely mimics human-written or spoken language. One of the key advantages of an LLM over a traditional Language Model (LM) is the ability to generalize to novel scenarios and domains much more effectively. Given the inherent flexibility of LLMs, it is desirable to utilize an LLM in a digital assistant as an agent framework to respond to user questions.
These agent frameworks are currently not very mature, but ultimately agent frameworks reimagined with LLMs will allow seamless, out-of-the-box human-like conversation, easier asset unlocking, and high-quality routing and orchestration of requests. Current agent frameworks are limited by the complexity of skills, as they need to continue supporting current ways to define actions (Intents), knowledge (Answer Intents), and Dialog (YAML, freemarker). While the digital assistant routing can be enhanced and some of the core routing tenets still hold, the freedom to rethink routing (in a mostly LLM world) and properly incorporate planning, reasoning, and orchestration has been a challenge given conventional digital assistant components and architecture. Nonetheless, to improve upon and provide value in the long term, certain digital assistant components and architecture need to be revised or redesigned to make the agent framework LLM-centric, which includes redefining its reasoning/routing engine, composition units, and inherent conversation capability.
To address these challenges and others, routing and planning components and techniques have been incorporated into the digital assistants, as described herein in detail. For each digital assistant, a user may assemble one or more agents. Agents, which can include, at least in part, one or more Large Language Models (LLMs), are individual bots that provide human-like conversation capabilities for various types of tasks such as tracking inventory, submitting timecards, updating accounts, and creating expense reports. The agents are primarily defined using natural language. Users, such as developers, can create a functional agent by pointing the agent to assets such as Application Programming Interfaces (APIs), knowledge-based assets such as documents, URLs, images, etc., data stores, prior conversations, etc. The assets are imported to the agent, and then, because the agent is LLM-based, the user can customize the agent using natural language again to provide additional API customizations for dialog and routing/reasoning. The operations performed by an agent are realized via execution of one or more actions. An action can be an explicit one that is authored (e.g., an action created for generating a natural language text or audio response in reply to an authored natural language prompt such as the query ‘What is the impact of XYZ on my 401k Contribution limit?’) or an implicit one that is created when an asset is imported (e.g., actions created for the Change Contribution and Get Contribution APIs, available through an API asset, configured to change a user's 401k contribution).
Accurately mapping an utterance from a user to an action can be difficult without proper routing and planning. In many instances, executing an action may require several pieces of contextually relevant information from a user. APIs, for example, often require parameters when called and may use a specific schema with set arguments to obtain a desired output. A digital assistant executing such actions may repeatedly ask a user for missing information based on a failed execution of an action or may be unable to execute complex actions that require a large amount of contextual information. A routing engine can be employed in an LLM-based environment to determine an appropriate action among a set of candidate actions to execute in response to an utterance by a user. Routing and planning may be implemented by the routing engine to identify, obtain, and provide input arguments for executing an action. By implementing routing, an action may be executed without repeatedly prompting a user for information or failing to execute the action. Knowing user preferences and goals while selecting an action, without needing prompting by a user, can also help increase the efficiency of a digital assistant and reduce user burden. A routing engine may be configured to perform slot-filling and gather information needed for executing an action tailored to a user. In some embodiments, the routing engine may retrieve information from a context containing a searchable conversation history between a user and a digital assistant, historical execution plans, and user profile and preferences information.
In various embodiments, a computer-implemented method can be used for identifying an action and associated parameters for generating an execution plan for a response to a user using a digital assistant. The method can include receiving an input query from a user in which the input query includes particular data. The method can include identifying, among one or more candidate actions, an action based on the input query. The method can include identifying a set of input argument slots within a schema associated with the action. For each input argument slot of the set of input argument slots, the method can include filling the input argument slot by determining whether one or more parameters corresponding with the input argument slot are derivable from the particular data, and in accordance with the one or more parameters corresponding with the input argument slot being derivable from the particular data, (i) deriving the one or more parameters from the particular data and (ii) filling the input argument slot with a version of the one or more parameters that conforms to the schema. The method can include transmitting an execution plan that includes the action with the set of filled input argument slots to an execution engine configured to execute the action for generating a response to the input query.
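By way of illustration only, the following Python sketch outlines how such slot filling might proceed; the schema layout, the derive_parameter helper, and the ExecutionPlan structure are illustrative assumptions rather than a definitive implementation of the method described above.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ExecutionPlan:
    action_name: str
    filled_slots: dict   # input argument slots filled per the schema

def derive_parameter(slot_name: str, slot_spec: dict, particular_data: dict) -> Optional[Any]:
    """Hypothetical helper: attempt to derive a parameter for a slot from the input data."""
    value = particular_data.get(slot_name)
    if value is None:
        return None
    # Coerce the value to a version that conforms to the schema type.
    expected = slot_spec.get("type")
    if expected == "integer":
        return int(value)
    if expected == "string":
        return str(value)
    return value

def build_execution_plan(action_name: str, schema: dict, particular_data: dict) -> ExecutionPlan:
    """Fill each input argument slot of the action schema from the input query data."""
    filled = {}
    for slot_name, slot_spec in schema.get("properties", {}).items():
        value = derive_parameter(slot_name, slot_spec, particular_data)
        if value is not None:
            filled[slot_name] = value
        elif slot_name in schema.get("required", []):
            # A required slot could not be filled; the caller may consult context
            # or prompt the user for the missing parameter.
            filled[slot_name] = None
    return ExecutionPlan(action_name=action_name, filled_slots=filled)

# Example: a hypothetical "ChangeContribution" action with a percentage slot.
schema = {"properties": {"percentage": {"type": "integer"}}, "required": ["percentage"]}
plan = build_execution_plan("ChangeContribution", schema, {"percentage": "15"})
print(plan)  # ExecutionPlan(action_name='ChangeContribution', filled_slots={'percentage': 15})
```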
As used herein, when an action is “based on” something, this means the action is based at least in part on at least a part of the something. As used herein, the terms “similarly”, “substantially,” “approximately” and “about” are defined as being largely but not necessarily wholly what is specified (and include wholly what is specified) as understood by one of ordinary skill in the art. In any disclosed embodiment, the term “similarly”, “substantially,” “approximately,” or “about” may be substituted with “within [a percentage] of” what is specified, where the percentage includes 0.1, 1, 5, and 10 percent.
A bot (also referred to as an agent, chatbot, chatterbot, or talkbot), implemented as part of or as a digital assistant, is a computer program that can perform conversations with end users. The bot can generally respond to natural-language messages (e.g., questions or comments) through a messaging application that uses natural-language messages. Enterprises may use one or more bot systems to communicate with end users through a messaging application. The messaging application, which may be referred to as a channel, may be an end user preferred messaging application that the end user has already installed and is familiar with. Thus, the end user does not need to download and install new applications in order to chat with the bot system. The messaging application may include, for example, over-the-top (OTT) messaging channels (such as Facebook Messenger, Facebook WhatsApp, WeChat, Line, Kik, Telegram, Talk, Skype, Slack, or SMS), virtual private assistants (such as Amazon Dot, Echo, or Show, Google Home, Apple HomePod, etc.), mobile, web, and cloud application extensions or plugins that extend native or hybrid/responsive mobile, web, or cloud applications with chat capabilities, or voice based input (such as devices or apps with interfaces that use Siri, Cortana, Google Voice, or other speech input for interaction).
In some examples, a bot system may be associated with a Uniform Resource Identifier (URI). The URI may identify the bot system using a string of characters. The URI may be used as a webhook for one or more messaging application systems. The URI may include, for example, a Uniform Resource Locator (URL) or a Uniform Resource Name (URN). The bot system may be designed to receive a message (e.g., a hypertext transfer protocol (HTTP) post call message) from a messaging application system. The HTTP post call message may be directed to the URI from the messaging application system. In some embodiments, the message may be different from a HTTP post call message. For example, the bot system may receive a message from a Short Message Service (SMS). While discussion herein may refer to communications that the bot system receives as a message, it should be understood that the message may be an HTTP post call message, a SMS message, or any other type of communication between two systems.
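By way of illustration only, the sketch below shows a bot system exposing a URI that a messaging application system could use as a webhook for HTTP post call messages; the use of Flask and the /bot/messages path are illustrative assumptions, not part of the disclosure.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/bot/messages", methods=["POST"])  # hypothetical webhook URI for the bot system
def receive_message():
    # The messaging application system directs an HTTP POST call message to this URI.
    payload = request.get_json(force=True)
    user_text = payload.get("text", "")
    # The bot system would normally hand the utterance to its NLU / routing engine here.
    reply = {"type": "text", "text": f"Received: {user_text}"}
    return jsonify(reply)

if __name__ == "__main__":
    app.run(port=8080)
```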
End users may interact with the bot system through a conversational interaction (sometimes referred to as a conversational user interface (UI)), just as interactions between people. In some cases, the interaction may include the end user saying “Hello” to the bot and the bot responding with a “Hi” and asking the end user how it can help. In some cases, the interaction may also be a transactional interaction with, for example, a banking bot, such as transferring money from one account to another; an informational interaction with, for example, a HR bot, such as checking for vacation balance; or an interaction with, for example, a retail bot, such as discussing returning purchased goods or seeking technical support.
In some embodiments, the bot system may intelligently handle end user interactions without interaction with an administrator or developer of the bot system. For example, an end user may send one or more messages to the bot system in order to achieve a desired goal. A message may include certain content, such as text, emojis, audio, image, video, or other method of conveying a message. In some embodiments, the bot system may convert the content into a standardized form (e.g., a representational state transfer (REST) or API call against enterprise services with the proper parameters) and generate a natural language response. The bot system may also prompt the end user for additional input parameters or request other additional information. In some embodiments, the bot system may also initiate communication with the end user, rather than passively responding to end user utterances. Described herein are various techniques for identifying an explicit invocation of a bot system and determining an input for the bot system being invoked. In certain embodiments, explicit invocation analysis is performed by a master bot based on detecting an invocation name in an utterance. In response to detection of the invocation name, the utterance may be refined or pre-processed for input to a bot that is identified to be associated with the invocation name and/or communication.
DABP 105 can be used to create one or more digital assistant systems (or DAs). For example, as illustrated in
To create one or more digital assistant systems 115, the DABP 105 is equipped with a suite of tools 120, enabling the acquisition of LLMs, agent creation, asset identification, and deployment of digital assistant systems within a service architecture for users via a computing platform such as a cloud computing platform described in detail with respect to
In other instances, the tools 120 can be utilized to pre-train and/or fine-tune the LLMs. The tools 120, or any subset thereof, may be standalone or part of a machine-learning operationalization framework, inclusive of hardware components like processors (e.g., CPU, GPU, TPU, FPGA, or any combination), memory, and storage. This framework operates software or computer program instructions (e.g., TensorFlow, PyTorch, Keras, etc.) to execute arithmetic, logic, input/output commands for training, validating, and deploying machine-learning models in a production environment. In certain instances, the tools 120 implement the training, validating, and deploying of the models using a cloud platform such as Oracle Cloud Infrastructure (OCI). Leveraging a cloud platform can make machine-learning more accessible, flexible, and cost-effective, which can facilitate faster model development and deployment for developers.
The tools 120 further include a prompt-based agent composition unit for creating agents and their associated actions that an end-user can end up invoking. An agent is a container of agent actions and can be part of one or more digital assistants. Each digital assistant may contain one or more agents through a digital assistant relation, which is the intersection entity that links an agent to a digital assistant. The agent and digital assistant are implemented as bot subtypes and may be persisted into an existing BOTS table. This has advantages in terms of reuse of design-time code (e.g., Java code) and UI artefacts.
An agent action is of a specific action type (e.g., knowledge, service or API, LLM, etc.) and contains a description and schema (e.g., JSON schema) which defines the action parameters. The action description and parameters schema are indexed by semantic index and sent to the planner to select the appropriate action(s) to execute. The action parameters are key-value pairs that are input for the action execution. They are derived from the properties in the schema but may also include additional UI/dialog properties that are used for slot filling dialogs. The actions can be part of one or more classes. For example, some actions may be part of an application event subscription class, which defines an agent action that should be executed when an application event is received. The application event can be received in the form of an update application context command message. An application event property mapping class (part of the application event subscription class) specifically maps the application event payload properties to corresponding agent action parameters. An action can optionally be part of an action group. An action group may be used when importing a plugin manifest, or when importing an external API spec such as an Open API spec. An action group is particularly useful when re-importing a plugin or open API spec, so new actions can be added, existing actions can be updated, or actions that are no longer present in the new manifest or Open API spec can be removed. At runtime, an action group may only be used to limit the application context groups that are sent to the LLM as conversation context by looking up the action group name which corresponds to a context group context.
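By way of illustration only, an agent action of the kind described above might be represented as follows; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentAction:
    """Illustrative representation of an agent action (names are assumptions)."""
    name: str
    action_type: str                 # e.g., "knowledge", "service/API", "LLM"
    description: str                 # indexed by the semantic index and shown to the planner
    parameters_schema: dict          # JSON schema defining the action parameters
    action_group: Optional[str] = None

# An implicit action created when an API asset is imported.
change_contribution = AgentAction(
    name="ChangeContribution",
    action_type="service/API",
    description="Change a user's 401k contribution.",
    parameters_schema={
        "type": "object",
        "properties": {
            "percentage": {"type": "integer", "description": "New contribution percentage"},
        },
        "required": ["percentage"],
    },
    action_group="401k API",
)
```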
The agents (e.g., 401k Change Contribution Agent) may be primarily defined as a compilation of agent artifacts using natural language within the prompt-based agent composition unit. Users 110 can create functional agents quickly by providing agent artifact information, parameters, and configurations and by pointing to assets. The assets can be or include resources, such as APIs for interfacing with applications, files and/or documents for retrieving knowledge, data stores for interacting with data, and the like, available to the agents for the execution of actions. The assets are imported, and then the users 110 can use natural language again to provide additional API customizations for dialog and routing/reasoning. Most of what an agent does may involve executing actions. An action can be an explicit action that is authored using natural language (similar to creating agent artifacts—e.g., the ‘What is the impact of XYZ on my 401k Contribution limit?’ action in the below ‘401k Contribution Agent’ figure) or an implicit action that is created when an asset is imported (automatically imported upon pointing to a given asset based on metadata and/or specifications associated with the asset—e.g., actions created for the Change Contribution and Get Contribution APIs in the below ‘401k Contribution Agent’ figure). The design time user can easily create explicit actions. For example, the user can choose the ‘Rich Text’ action type (see Table 1 for a list of exemplary action types) and create the name artifact ‘What is the impact of XYZ on my 401k Contribution limit?’ when the user learns that a new FAQ needs to be added, as it is not currently in the knowledge documents (assets) the agent references (and thus was not implicitly added as an action).
There are various ways in which the agents and assets can be associated or added to a digital assistant 115. In some instances, the agents can be developed by an enterprise and then added to a digital assistant using DABP 105. In other instances, the agents can be developed and created using DABP 105 and then added to a digital assistant created using DABP 105. In yet other instances, DABP 105 provides an online digital store (referred to as an “agent store”) that offers various pre-created agents directed to a wide range of tasks and actions. The agents offered through the agent store may also expose various cloud services. In order to add the agents to a digital assistant being generated using DABP 105, a user 110 of DABP 105 can access assets via tools 120, select specific assets for an agent, initiate a few mock chat conversations with the agent, and indicate that the agent is to be added to the digital assistant created using DABP 105.
Once deployed in a production environment, such as the architecture described with respect to
As part of a conversation, a user 125 may provide one or more user inputs 130 to digital assistant 115A and get responses 135 back from digital assistant 115A via a user interface element such as a chat window. A conversation can include one or more of user inputs 130 and responses 135. Via these conversations, a user 125 can request one or more tasks to be performed by the digital assistant 115A and, in response, the digital assistant 115A is configured to perform the user-requested tasks and respond with appropriate responses to the user 125 using one or more LLMs 140. Conversations shown in the chat window can be organized by thread. For example, in some applications, a conversation related to one page of an application should not be mixed with a conversation related to another page of the application. The application and/or the plugins for the application define the thread boundaries (e.g., a set of (nested) plugins can run within their own thread). Effectively, the chat window will only show the history of messages that belong to the same thread. Setting and changing the thread can be performed via the application and/or the plugins using an update application context command message. Additionally or alternatively, the thread can be changed via an execution plan orchestrator when a user query is matched to a plugin semantic action and the plugin runs in a thread different than the current thread. In this case, the planner changes threads, so that any messages sent in response to the action being executed are shown in the correct new thread. Per agent dialog thread, the following information can be maintained by the digital assistant: the application context, the LLM conversation history, the conversation history with the user, and the agent execution context which holds information about the (stacked) execution plan(s) related to this thread.
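By way of illustration only, the per-thread information described above might be maintained in a structure such as the following; the field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DialogThread:
    """Illustrative per-thread state maintained by the digital assistant (field names assumed)."""
    thread_id: str
    application_context: dict = field(default_factory=dict)    # state of the application UI/plugins
    llm_conversation_history: list = field(default_factory=list)
    user_conversation_history: list = field(default_factory=list)
    execution_plan_stack: list = field(default_factory=list)   # agent execution context: (stacked) plans

threads: dict = {}

def current_thread(thread_id: str) -> DialogThread:
    # Create the thread on first use; otherwise make the existing one current.
    return threads.setdefault(thread_id, DialogThread(thread_id=thread_id))
```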
User inputs 130 are generally in a natural language form and are referred to as utterances, which may also be referred to as prompts, queries, requests, and the like. The user inputs 130 can be in text form, such as when a user types in a sentence, a question, a text fragment, or even a single word and provides it as input to digital assistant 115A. In some embodiments, a user input 130 can be in audio input or speech form, such as when a user says or speaks something that is provided as input to digital assistant 115A. The user inputs 130 are typically in a language spoken by the user 125. For example, the user inputs 130 may be in English, or some other language. When a user input 130 is in speech form, the speech input is converted to text form user input 130 in that particular language and the text utterances are then processed by digital assistant 115A. Various speech-to-text processing techniques may be used to convert a speech or audio input to a text utterance, which is then processed by digital assistant 115A. In some embodiments, the speech-to-text conversion may be done by digital assistant 115A itself. For purposes of this disclosure, it is assumed that the user inputs 130 are text utterances that have been provided directly by a user 125 of digital assistant 115A or are the results of conversion of input speech utterances to text form. This however is not intended to be limiting or restrictive in any manner.
The user inputs 130 can be used by the digital assistant 115A to determine a list of candidate agents 145A-N. The list of candidate agents (e.g., 145A-N) includes agents configured to perform one or more actions that could potentially facilitate a response 135 to the user input 130. The list may be determined by running a search, such as a semantic search, on a context and memory store that has one or more indices comprising metadata for all agents 145 available to the digital assistant 115A. Metadata for the candidate agents 145A-N in the list of candidate agents is then combined with the user input to construct an input prompt for the one or more LLMs 140.
Digital assistant 115A is configured to use one or more LLMs 140 to apply NLP techniques to text and/or speech to understand the input prompt and apply natural language understanding (NLU) including syntactic and semantic analysis of the text and/or speech to determine the meaning of the user inputs 130. Determining the meaning of the utterance may involve identifying the goal of the user, one or more intents of the user, the context surrounding various words or phrases or sentences, one or more entities corresponding to the utterance, and the like. The NLU processing can include parsing the received user inputs 130 to understand the structure and meaning of the utterance, refining and reforming the utterance to develop a better understandable form (e.g., logical form) or structure for the utterance. The NLU processing performed can include various NLP-related processing such as sentence parsing (e.g., tokenizing, lemmatizing, identifying part-of-speech tags for the sentence, identifying named entities in the sentence, generating dependency trees to represent the sentence structure, splitting a sentence into clauses, analyzing individual clauses, resolving anaphoras, performing chunking, and the like). In certain instances, the NLU processing, or any portions thereof, is performed by the LLMs 140 themselves. In other instances, the LLMs 140 use other resources to perform portions of the NLU processing. For example, the syntax and structure of an input utterance sentence may be identified by processing the sentence using a parser, a part-of-speech tagger, a named entity recognition model, a pretrained language model such as BERT, or the like.
Upon understanding the meaning of an utterance, the one or more LLMs 140 generate an execution plan that identifies one or more agents (e.g., agent 145A) from the list of candidate agents to execute and perform one or more actions or operations responsive to the understood meaning or goal of the user. The one or more actions or operations are then executed by the digital assistant 115A on one or more assets (e.g., asset 150A-knowledge, API, SQL operations, etc.) and/or the context and memory store. The execution of the one or more actions or operations generates output data from one or more assets and/or relevant context and memory information from a context and memory store comprising context for a present conversation with the digital assistant 115A. The output data and relevant context and memory information are then combined with the user input 130 to construct an output prompt for one or more LLMs 140. The LLMs 140 synthesize the response 135 to the user input 130 based on the output data and relevant context and memory information, and the user input 130. The response 135 is then sent to the user 125 as an individual response or as part of a conversation with the user 125.
For example, a user input 130 may request a pizza to be ordered by providing an utterance such as “I want to order a pizza.” Upon receiving such an utterance, digital assistant 115A is configured to understand the meaning or goal of the utterance and take appropriate actions. The appropriate actions may involve, for example, providing responses 135 to the user with questions requesting user input on the type of pizza the user desires to order, the size of the pizza, any toppings for the pizza, and the like. The questions requesting user input may be generated by executing an action via an agent (e.g., agent 145A) on a knowledge asset (e.g., a menu for a pizza restaurant) to retrieve information that is pertinent to ordering a pizza (e.g., to order a pizza a user must provide type, size, toppings, etc.). The responses 135 provided by digital assistant 115A may also be in natural language form and typically in the same language as the user input 130. As part of generating these responses 135, digital assistant 115A may perform natural language generation (NLG) using the one or more LLMs 140. For the user ordering a pizza, via the conversation between the user and digital assistant 115A, the digital assistant 115A may guide the user to provide all the requisite information for the pizza order, and then at the end of the conversation cause the pizza to be ordered. The ordering may be performed by executing an action via an agent (e.g., agent 145A) on an API asset (e.g., an API for ordering pizza) to upload or provide the pizza order to the ordering system of the restaurant. Digital assistant 115A may end the conversation by generating a final response 135 providing information to the user 125 indicating that the pizza has been ordered.
While the various examples provided in this disclosure describe and/or illustrate utterances in the English language, this is meant only as an example. In certain embodiments, digital assistants 115 are also capable of handling utterances in languages other than English. Digital assistants 115 may provide subsystems (e.g., components implementing NLU functionality) that are configured for performing processing for different languages. These subsystems may be implemented as pluggable units that can be called using service calls from an NLU core server. This makes the NLU processing flexible and extensible for each language, including allowing different orders of processing. A language pack may be provided for individual languages, where a language pack can register a list of subsystems that can be served from the NLU core server.
While the embodiment in
In instances where the user provides the utterance 202 and/or performs an action while using an application supported by a digital assistant, the application issues update application context commands as the user interacts with the application (e.g., provides an utterance via text or audio, triggers a user interface element, navigates between pages of the application, and the like). Whenever an update application context command message is received by the digital assistant from the application, the application context processor (part of the context manager) is invoked. The application context processor performs the following tasks: (i) manages dialog threads based on the application context message, e.g., if the threadId specified with the message does not exist yet, a new dialog thread is created and made current, and if the threadId already exists, the corresponding dialog thread is made current, (ii) creates or updates the application context object for the current dialog thread, and (iii) if a service call ID such as a REST request ID is included, enriches the application context (as described in greater detail herein). As should be understood, the application context only contains information that reflects the state of the application user interface and plugins (if available); it does not contain other state information (e.g., user or page state information/context).
In some instances, when an update application context command message is received, an application event processor checks whether the update application context command message includes an event definition. The event is uniquely identified by the following properties in the message payload: (i) context: the context path and/or the plugin path (for a top-level workspace plugin the context is set to the plugin name; for nested plugins the plugin path is included, where plugins are separated with a slash, for example Patient/Vitalschart), (ii) eventType: the type of event, which can be one of the built-in events or a custom event, and (iii) semantic object: the semantic object to which the event applies. An event can be mapped to one or more actions, and the message payload properties can be mapped to action parameters. This mapping takes place through an application event subscription. Each property in the message payload can be mapped to an agent action parameter using an application event property mapping.
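By way of illustration only, an application event subscription and its property mapping might be represented as follows; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ApplicationEventSubscription:
    """Illustrative mapping of an application event to an agent action (names assumed)."""
    context: str            # plugin path, e.g. "Patient/Vitalschart"
    event_type: str         # built-in or custom event type
    semantic_object: str
    action_name: str
    property_mapping: dict  # event payload property -> agent action parameter

def map_event_to_action(subscription: ApplicationEventSubscription, payload: dict) -> dict:
    """Translate an event payload into action parameters via the property mapping."""
    return {
        action_param: payload.get(event_prop)
        for event_prop, action_param in subscription.property_mapping.items()
    }

# Example: a hypothetical vitals-related event mapped to a "GetVitals" action.
sub = ApplicationEventSubscription(
    context="Patient/Vitalschart",
    event_type="custom",
    semantic_object="Vitals",
    action_name="GetVitals",
    property_mapping={"patientId": "patient_id"},
)
print(map_event_to_action(sub, {"patientId": 42}))  # {'patient_id': 42}
```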
In some instances, the utterance 202 and/or action performed by the user is provided directly as input to a routing engine 208 (also referred to as a planner). In other instances where the application event processor is implemented, the utterance 202 and/or action performed by the user is provided as input to the routing engine 208 when the application event processor determines an event such as receipt of utterance 202 is mapped to an agent or action associated with the digital assistant. The routing engine 208 is used by the digital assistant to create an execution plan 210 with specified parameters either from the utterance 202, the action performed by the user, the context, or any combination thereof. The execution plan 210 identifies one or more agents and/or one or more actions for the one or more agents to execute in response to the utterance 202 and/or action performed by the user.
A two-step approach can be taken via the routing engine 208 to generate the execution plan 210. First, a search 212 can be performed to identify a list of candidate agents and/or actions. The search 212 comprises running a query on indices 213 (e.g., semantic indices) of a context and memory store 214 based on the utterance 202 and/or action performed by the user. In some instances, the search 212 is a semantic search performed using words from the utterance 202 and/or representative of the action performed by the user. The semantic search uses NLP and optionally machine learning techniques to understand the meaning of the utterance 202 and/or action performed by the user and retrieve relevant information from the context and memory store 214. In contrast to traditional keyword-based searches, which rely on exact matches between the words in the query and the data in the context and memory store 214, a semantic search takes into account the relationships between words, the context of the query and/or action, synonyms, and other linguistic nuances. This allows the digital assistant to provide more accurate and contextually relevant results, making it more effective in understanding the user's intent in the utterance 202 and/or action performed by the user.
In order to run the query, the routing engine 208 calls the context and memory store 214 (e.g., a semantic index of the context and memory store 214) to get the list of candidate agents and/or actions. The following information is passed in the call: (i) the ID of the digital assistant (the ID scopes the set of agent and/or actions the semantic index will search for and thus the agents and/or actions must be part of the digital assistant), and (ii) the last X number of user messages and/or actions (e.g., X can be set to the last 5 turns), which can be configurable through the digital assistant settings. Upon receiving the list of candidate agents and/or actions, the routing engine 208 can identify an associated schema with the actions and perform slot-filling to determine any missing input arguments for the schema.
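By way of illustration only, the call to the context and memory store might resemble the following sketch, in which the keyword-overlap scoring is a toy stand-in for a true semantic (embedding-based) search; the class and method names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class IndexedAction:
    name: str
    description: str   # metadata extracted from agent artifacts and assets

class ContextAndMemoryStore:
    """Toy stand-in for the semantic index; a real store would use embeddings, not keyword overlap."""
    def __init__(self, actions_by_da: dict):
        self.actions_by_da = actions_by_da

    def semantic_search(self, scope: str, query: str, top_k: int = 10):
        candidates = self.actions_by_da.get(scope, [])
        words = set(query.lower().split())
        scored = [(len(words & set(a.description.lower().split())), a) for a in candidates]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [a for score, a in scored[:top_k] if score > 0]

def get_candidate_actions(store, digital_assistant_id: str, recent_messages: list, max_turns: int = 5):
    # Pass (i) the digital assistant ID, which scopes the search, and (ii) the last X user messages.
    return store.semantic_search(scope=digital_assistant_id,
                                 query="\n".join(recent_messages[-max_turns:]))

store = ContextAndMemoryStore({
    "da-1": [IndexedAction("GetContribution", "get the user's current 401k contribution"),
             IndexedAction("ChangeContribution", "change the user's 401k contribution percentage")],
})
print([a.name for a in get_candidate_actions(store, "da-1", ["I want to change my 401k contribution"])])
```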
The context and memory store 214 is implemented using a data framework for connecting external data to LLMs 216 to make it easy for users to plug in custom data sources. The data framework provides rich and efficient retrieval mechanisms over data from various sources such as files, documents, datastores, APIs, and the like. The data can be external (e.g., enterprise assets) and/or internal (e.g., user preferences, memory, digital assistant, and agent metadata, etc.). In some instances, the data comprises metadata extracted from artifacts 217 associated with the digital assistant and its agents 218 (e.g., 218a and 218b). The artifacts 217 for the digital assistant include information on the general capabilities of the digital assistant and specific information concerning the capabilities of each of the agents 218 (e.g., actions) available to the digital assistant (e.g., agent artifacts). Additionally or alternatively, the artifacts 217 can encompass parameters or information associated with the artifacts 217 and that can be used to define the agents 218 in which the parameters or information associated with the artifacts 217 can include a name, a description, one or more actions, one or more assets, one or more customizations, etc. In some instances, the data further includes metadata extracted from assets 219 associated with the digital assistant and its agents 218 (e.g., 218a and 218b). The assets 219 may be resources, such as APIs 220, files and/or documents 222, data stores 223, and the like, available to the agents 218 for the execution of actions (e.g., actions 225a, 225b, and 225c). The data is indexed in the context and memory store 214 as indices 213, which are data structures that provide a fast and efficient way to look up and retrieve specific data records within the data. Consequently, the context and memory store 214 provides a searchable comprehensive record of the capabilities of all agents and associated assets that are available to the digital assistant for responding to the request and/or action.
The response of context and memory store 214 is converted into a list of agent and/or action instances that are not just available to the digital assistant for responding to the request but also potentially capable of facilitating the generation of a response to the utterance 202 and/or action performed by the user. The list of candidate agents and/or actions includes the metadata (e.g., metadata extracted from artifacts 217 and assets 219) from the context and memory store 214 that is associated with each of the candidate agents and/or actions. The list can be limited to a predetermined number of candidate agents and/or actions (e.g., top 10) that satisfy the query or can include all agents and/or actions that satisfy the query. The list of candidate agents and/or actions with associated metadata is appended to the utterance 202 and/or action performed by the user to construct an input prompt 227 for the LLM 216. The search 212 is important to the digital assistant because it filters out agents and/or actions that are unlikely to be capable of facilitating the generation of a response to the utterance 202 and/or action performed by the user. This filter ensures that the number of tokens (e.g., word tokens) generated from the input prompt 227 remains under a maximum token limit or context limit set for the LLM 216. Token limits represent the maximum amount of text that can be inputted into an LLM. This limit is of a technical nature and arises due to computational constraints, such as memory and processing resources, and thus makes certain that the LLMs can take the input prompt as input.
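By way of illustration only, the construction of the input prompt 227 under a token limit might resemble the following sketch; the prompt wording and the whitespace-based token approximation are illustrative assumptions.

```python
def build_input_prompt(utterance: str, candidate_actions: list, max_tokens: int = 4000) -> str:
    """Append candidate agent/action metadata to the utterance while respecting a token budget.

    Token counting here is a whitespace approximation; a real system would use the
    LLM's own tokenizer.
    """
    header = "You are a routing engine. Select and parameterize an action for the user query.\n"
    parts = [header, f"User query: {utterance}\n", "Candidate actions:\n"]
    used = sum(len(p.split()) for p in parts)
    for action in candidate_actions:
        entry = f"- {action['name']}: {action['description']} schema={action['schema']}\n"
        cost = len(entry.split())
        if used + cost > max_tokens:
            break                      # keep the prompt under the model's context limit
        parts.append(entry)
        used += cost
    return "".join(parts)
```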
In some instances, one or more knowledge actions are additionally appended to the list of candidate agents and the utterance 202. The knowledge actions allow for additional knowledge to be acquired that is pertinent to the utterance 202 and/or action performed by the user (this knowledge is typically outside the scope of the knowledge used to train an LLM of the digital assistant). There are two types of knowledge action sources: (i) structured: the knowledge source defines a list of pre-defined questions that the user might ask and exposes them as APIs (e.g., Multum), and (ii) unstructured: the knowledge source allows the user unlimited ways to ask questions and exposes a generic query interface (e.g., medical documents (SOAP notes, discharge summaries, etc.)).
In some instances, conversation context 229 concerning the utterance 202 is additionally appended to the list of candidate agents and the utterance 202. The conversation context 229 can be retrievable from one or more sources including the context and memory store 214, and includes user session information, dialog state, conversation or contextual history, application context, page context, user information, or any combination thereof. For example, the conversation context 229 can include: the current date and time, which are needed to resolve temporal references in the user query like “yesterday” or “next Thursday”; additional context, which contains information such as user profile properties and application context groups with semantic object properties; and the chat history with the digital assistant (and/or other digital assistants or systems internal or external to the computing environment 200).
The second step of the two-step approach is for the LLM 216 to generate an execution plan 210 based on the input prompt 227. The LLM 216 can be invoked by creating an LLM chat message with role system passing in the input prompt 227, converting the candidate agents and/or actions into LLM function definitions, retrieving a proper LLM client based on the LLM configuration options, optionally transforming the input prompt 227, LLM chat message, etc. into a proper format for the LLM client, and sending the LLM chat message to the LLM client for invoking the LLM 216. The LLM client then sends back an LLM success response in CLMI format, or a provider-specific response is converted to an LLM success response in CLMI format using an adapter such as OpenAIAdapter (or an LLM error response is sent back or converted in the same manner in case an unexpected error occurs). An LLM call instance is created and added to the conversation history, which captures all the request and response details including the execution time.
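By way of illustration only, converting candidate actions into LLM function definitions and invoking the planner LLM might resemble the following sketch; the llm_client object and its chat method are illustrative assumptions standing in for a CLMI-style client.

```python
def to_function_definitions(candidate_actions: list) -> list:
    """Convert candidate actions into function definitions in the common function-calling
    format used by many LLM providers (the exact shape shown is illustrative)."""
    return [
        {
            "name": action["name"],
            "description": action["description"],
            "parameters": action["schema"],   # JSON schema of the action's input argument slots
        }
        for action in candidate_actions
    ]

def invoke_planner_llm(llm_client, input_prompt: str, candidate_actions: list):
    """`llm_client` is a hypothetical CLMI-style client; `chat` is an assumed method name."""
    messages = [{"role": "system", "content": input_prompt}]
    functions = to_function_definitions(candidate_actions)
    # The client adapter converts this provider-agnostic request into the provider's native
    # format and converts the provider response back into a CLMI success (or error) response.
    return llm_client.chat(messages=messages, functions=functions)
```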
The LLM 216 has a deep generative model architecture (e.g., a reversible or autoregressive architecture) for generating the execution plan 210. In some instances, the LLM 216 has over 100 billion parameters and generates the execution plan 210 using autoregressive language modeling within a transformer architecture, allowing the LLM 216 to capture complex patterns and dependencies in the input prompt 227. The LLM's 216 ability to generate the execution plan 210 is a result of its training on diverse and extensive textual data, enabling the LLM to understand human language across a wide range of contexts. During training, the LLM 216 learns to predict the next word in a sequence given the context of the preceding words. This process involves adjusting the model's parameters (weights and biases) based on the errors between its predictions and the actual next words in the training data. When the LLM 216 receives an input such as the input prompt 227, the LLM 216 tokenizes the text into smaller units such as words or sub-words. Each token is then represented as a vector in a high-dimensional space. The LLM 216 processes the input sequence token by token, maintaining an internal representation of context. The LLM's 216 attention mechanism allows it to weigh the importance of different tokens in the context of generating the next word. For each token in the vocabulary, the LLM 216 calculates a probability distribution based on its learned parameters. This probability distribution represents the likelihood of each token being the next word given the context. For example, to generate the execution plan 210, the LLM 216 samples a token from the calculated probability distribution. The sampled token becomes the next word in the generated sequence. This process is repeated iteratively, with each newly generated token influencing the context for generating the subsequent token. The LLM 216 can continue generating tokens until a predefined length or stopping condition is reached.
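By way of illustration only, the autoregressive decoding described above can be summarized by the following sketch, in which model is a hypothetical callable returning next-token logits over the vocabulary.

```python
import math
import random

def softmax(logits: list) -> list:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(model, prompt_tokens: list, max_new_tokens: int, stop_token: int) -> list:
    """Illustrative autoregressive decoding loop; `model` is a hypothetical callable that
    returns next-token logits given the context so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)                       # scores for every vocabulary token
        probs = softmax(logits)                      # probability distribution over next tokens
        next_token = random.choices(range(len(probs)), weights=probs, k=1)[0]
        tokens.append(next_token)                    # the sample extends the context
        if next_token == stop_token:                 # predefined stopping condition
            break
    return tokens
```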
In some instances, as illustrated in
In some instances, the utterance 202 by the user may be determined by the LLM 216 to be a non-sequitur (i.e., an utterance that does not logically follow from the previous utterance in a dialogue or conversation). In such an instance, an execution plan orchestrator can be used to handle the switch among different dialog paths. The execution plan orchestrator is configured to track all ongoing conversation paths; create a new entry if a new dialog path is created and pause the current ongoing conversation, if any; remove the entry if the conversation completes; based on the metadata of the new action or user preference, generate a prompt message when starting a non-sequitur or resuming the previous one; manage the dialog for the prompt message and either proceed or restore the current conversation; confirm or cancel when the user responds to the prompt for the non-sequitur; and manage a cancel or exit from a dialog.
The execution plan 210 includes an ordered list of agents and/or actions that can be used and/or executed to sufficiently respond to the request such as the additional query 238. For example, and as illustrated in
The execution plan 210 is then transmitted to an execution engine 250 for implementation. The execution engine 250 includes a number of engines, including a natural language-to-programming language translator 252, a knowledge engine 254, an API engine 256, a prompt engine 258, and the like, for executing the actions of agents and implementing the execution plan 210. For example, the natural language-to-programming language translator 252, such as a Conversation to Oracle Meaning Representation Language (C2OMRL) model, may be used by an agent to translate natural language into an intermediate logical form (e.g., OMRL), convert the intermediate logical form into a system programming language (e.g., SQL), and execute the system programming language (e.g., execute an SQL query) on an asset 219 such as data stores 223 to execute actions and/or obtain data or information. The knowledge engine 254 may be used by an agent to obtain data or information from the context and memory store 214 or an asset 219 such as files/documents 222. The API engine 256 may be used by an agent to call an API 220 and interface with an application such as a retirement fund account management application to execute actions and/or obtain data or information. The prompt engine 258 may be used by an agent to construct a prompt for input into an LLM such as an LLM in the context and memory store 214 or an asset 219 to execute actions and/or obtain data or information.
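By way of illustration only, dispatching the ordered actions of an execution plan to the appropriate engines might resemble the following sketch; the engine names and the plan structure are illustrative assumptions.

```python
def execute_plan(execution_plan: list, engines: dict) -> list:
    """Run each action in order, dispatching to the engine that matches its action type.

    `engines` maps action types to callables, e.g. {"api": api_engine, "knowledge":
    knowledge_engine, "nl2sql": translator, "prompt": prompt_engine}; names are assumptions.
    """
    output_data = []
    for step in execution_plan:                       # ordered list of actions
        engine = engines[step["action_type"]]
        result = engine(step["action_name"], step.get("parameters", {}))
        output_data.append({"action": step["action_name"], "result": result})
    return output_data

# Example with trivial stand-in engines.
engines = {
    "api": lambda name, params: f"called {name} with {params}",
    "knowledge": lambda name, params: f"retrieved documents for {name}",
}
plan = [{"action_type": "api", "action_name": "GetContribution", "parameters": {"user_id": 7}}]
print(execute_plan(plan, engines))
```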
The execution engine 250 implements the execution plan 210 by running each agent and executing each action in order based on the ordered list of agents and/or actions using the appropriate engine(s). To facilitate this implementation, the execution engine 250 is communicatively connected (e.g., via a public and/or private network) with the agents (e.g., 242a, 242b, etc.), the context and memory store 214, and the assets 219. For example, as illustrated in
The result of implementing the execution plan 210 is output data 269 (e.g., results of actions, data, information, etc.), which is transmitted to an output pipeline 270 for generating end-user responses 272. For example, the output data 269 from the assets 219 (knowledge, API, dialog history, etc.) and relevant information from the context and memory store 214 can be transmitted to the output pipeline 270. The output data 269 is appended to the utterance 202 to construct an output prompt 274 for input to the LLM 236. In some instances, context 229 concerning the utterance 202 is additionally appended to the output data 269 and the utterance 202. The context 229 is retrievable from the context and memory store 214 and includes user session information, dialog state, conversation or contextual history, user information, or any combination thereof. The LLM 236 generates responses 272 based on the output prompt 274. In some instances, the LLM 236 is the same or similar model as LLM 216. In other instances, the LLM 236 is different from LLM 216 (e.g., trained on a different set of data, having a different architecture, trained for one or more different tasks, etc.). In either instance, the LLM 236 has a deep generative model architecture (e.g., a reversible or autoregressive architecture) for generating the responses 272 using similar training and generative processes described above with respect to LLM 216. In some instances, the LLM 236 has over 100 billion parameters and generates the responses 272 using autoregressive language modeling within a transformer architecture, allowing the LLM 236 to capture complex patterns and dependencies in the output prompt 274.
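By way of illustration only, constructing the output prompt 274 from the utterance, the output data, and the context might resemble the following sketch; the prompt wording and the llm_client.complete method are illustrative assumptions.

```python
def build_output_prompt(utterance: str, output_data: list, context: dict) -> str:
    """Append execution results and conversation context to the utterance so the LLM
    can synthesize the end-user response (prompt wording is illustrative)."""
    return (
        "Synthesize a natural-language reply to the user using only the data below.\n"
        f"User query: {utterance}\n"
        f"Action results: {output_data}\n"
        f"Context: {context}\n"
    )

def synthesize_response(llm_client, utterance: str, output_data: list, context: dict):
    # `llm_client.complete` is an assumed method on a hypothetical CLMI-style client.
    return llm_client.complete(build_output_prompt(utterance, output_data, context))
```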
In some instances, the end-user responses 272 may be in the format of a Conversation Message Model (CMM) and output as rich multi-modal responses. The CMM defines the various message types that the digital assistant can send to the user (outbound), and the user can send to the digital assistant (inbound). In certain instances, the CMM identifies the following message types:
Messages that are defined in CMM are channel-agnostic and can be created using CMM syntax. The channel-specific connectors transform the CMM message into the format required by the specific channel, allowing a user to run the digital assistant on multiple channels without the need to create separate message formats for each channel.
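By way of illustration only, a channel-agnostic message and one channel-specific connector might resemble the following sketch; the CMMTextMessage fields are illustrative assumptions, and the Slack Block Kit output is a simplified approximation.

```python
from dataclasses import dataclass

@dataclass
class CMMTextMessage:
    """Channel-agnostic text message; field names are illustrative, not the actual CMM syntax."""
    text: str
    actions: list          # e.g., postback buttons such as [Percentage] [Amount]

def to_slack_blocks(message: CMMTextMessage) -> dict:
    """Example channel-specific connector: transform a CMM message into a Slack-style
    block layout (shape shown is a simplified approximation)."""
    blocks = [{"type": "section", "text": {"type": "mrkdwn", "text": message.text}}]
    if message.actions:
        blocks.append({
            "type": "actions",
            "elements": [{"type": "button", "text": {"type": "plain_text", "text": a}}
                         for a in message.actions],
        })
    return {"blocks": blocks}

msg = CMMTextMessage(text="Would you like to change your contribution by percentage or amount?",
                     actions=["Percentage", "Amount"])
print(to_slack_blocks(msg))
```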
Lastly, the output pipeline 270 transmits the responses 272 to the end user such as via a user device or interface. In some instances, the responses 272 are rendered within a dialogue box of a GUI, allowing the user to view and reply using the dialogue box (or alternative means such as a microphone). In other instances, the responses 272 are rendered within a dialogue box of a GUI having one or more GUI elements allowing for an easier response by the user. In this particular instance, a first response 272 to the additional query 238 (What is my current 401k Contribution? Also, can you tell me the contribution limit?) is rendered within the dialogue box of a GUI. Additionally, to follow up on information still required for the initial utterance 202, the LLM 236 generates another response 272 prompting the user for the missing information (Would you like to change your contribution by percentage or amount? [Percentage] [Amount]).
While the embodiment of computing environment 200 in
The input 302 may be provided to a routing engine 304. The routing engine 304 may generate an execution plan based on the input 302 and based on context provided to the routing engine 304. The routing engine 304 may receive the input 302 and may make a call to a semantic context and memory store 306 to retrieve the context. In some embodiments, the semantic context and memory store 306 includes one or more assets 308, which may be similar or identical to the assets 219. The routing engine 304 may provide at least a portion of the input 302 to the semantic context and memory store 306, which can perform a semantic search on the assets 308 and/or other knowledge included in the semantic context and memory store 306. The semantic search may generate a list of candidate actions, from among all actions that can be performed via one or more of the assets 308, that may be used to address the input 302 or any subset thereof. In some embodiments, the candidate actions may be generated only based on contextual information. For example, the input 302 may be compared with metadata of the actions to generate the candidate actions. Table 2 lists particular examples of context information and candidate actions that can be received by the routing engine 304 and an execution plan that can be generated by the routing engine 304.
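A minimal sketch of retrieving candidate actions for the input 302 is shown below; a real semantic search would compare embedding vectors over action metadata, whereas simple token overlap stands in here, and the action names and descriptions are illustrative.

```python
# Candidate-action retrieval sketch: score each action's metadata against the
# user input and keep the top matches. Token overlap stands in for embeddings.

ACTIONS = [
    {"name": "Create Expense", "description": "Create a new expense for a purchase"},
    {"name": "Get Expense Details", "description": "Look up details for an existing expense"},
    {"name": "Change 401k Contribution", "description": "Adjust retirement contribution amount"},
]

def similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def candidate_actions(user_input: str, top_k: int = 2) -> list[dict]:
    scored = sorted(ACTIONS, key=lambda act: similarity(user_input, act["description"]), reverse=True)
    return scored[:top_k]

print(candidate_actions("I want to create an expense for lunch"))
```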
As a particular example, the routing engine 304 receives the context information and candidate actions listed in Table 2 as part of a prompt transmitted to the routing engine 304. The input 302 can be an indication that the user wants to create an expense, or it may be a continuation of the conversation listed with the conversation history in Table 2. Each candidate action has an associated agent, description, JSON schema, and action type. The JSON schema may contain input argument slots with specified types that are required or optional. As an example, the “Get Expense Details” action has an associated JSON schema with one input argument slot for an expense ID that is an integer and is required. Some actions such as “Get Expense Categories” have an associated schema without any input argument slots.
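For illustration, candidate-action records carrying an agent, action type, and JSON schema could be represented as follows; the agent name and exact schema contents are assumptions rather than the actual Table 2 entries.

```python
# Illustrative candidate-action records mirroring the structure described above
# (agent, action type, JSON schema); values are assumed, not the Table 2 contents.

CANDIDATE_ACTIONS = {
    "Get Expense Details": {
        "agent": "Expense Agent",          # hypothetical agent name
        "action_type": "api",
        "schema": {
            "type": "object",
            "properties": {"expense_id": {"type": "integer"}},
            "required": ["expense_id"],    # one required integer input argument slot
        },
    },
    "Get Expense Categories": {
        "agent": "Expense Agent",
        "action_type": "api",
        # Some actions take no input arguments at all.
        "schema": {"type": "object", "properties": {}, "required": []},
    },
}
```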
The routing engine 304 may use the candidate actions to form an input prompt for a generative artificial intelligence model. The generative artificial intelligence model may be or be included in generative artificial intelligence models 310, which may include one or more large language models (LLMs). The routing engine 304 may be communicatively coupled with the generative artificial intelligence models 310 via a common language model interface layer (CLMI layer 312). The CLMI layer 312 may be an adapter layer that can allow the routing engine 304 to call a variety of different generative artificial intelligence models that may be included in the generative artificial intelligence models 310. For example, the routing engine 304 may generate an input prompt and may provide the input prompt to the CLMI layer 312 that can convert the input prompt into a model-specific input prompt for being input into a particular generative artificial intelligence model. The routing engine 304 may receive output from the particular generative artificial intelligence model that can be used to generate an execution plan. The output may be or include the execution plan. In other embodiments, the output may be used as input by the routing engine 304 to allow the routing engine 304 to generate the execution plan. The output may include a list that includes one or more executable actions based on the utterance included in the input 302. In some embodiments, the execution plan may include an ordered list of actions to execute for addressing the input 302.
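A sketch in the spirit of the CLMI layer 312 is shown below: a single generic prompt is adapted into a model-specific request payload. The model names and payload shapes are illustrative assumptions, not any particular provider's API.

```python
# Adapter-layer sketch: one generic prompt in, a model-specific payload out.

def adapt_prompt(prompt: str, model: str) -> dict:
    if model == "chat-style-llm":
        return {"messages": [{"role": "user", "content": prompt}], "temperature": 0.0}
    if model == "completion-style-llm":
        return {"prompt": prompt, "max_tokens": 512}
    raise ValueError(f"no adapter registered for {model!r}")

request = adapt_prompt("Select one action from: Create Expense, Get Expense Details ...",
                       "chat-style-llm")
# `request` would then be sent to the selected generative model.
```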
In some instances, the routing engine 304 may perform slot-filling to supplement any information required by the execution engine 314 to execute the execution plan. In some examples, the output of the routing engine 304 to be sent to the execution engine 314 can be in a JSON schema format. The output may have an associated schema with specified key-value pairs required to pass to the execution engine 314, and the routing engine 304 can determine if any information needed for a selected action is missing. The routing engine 304 may use the conversation history, text from the input 302, the context, or any combination thereof to determine the missing information. For example, an action may require information related to the current date, and the routing engine 304 can retrieve the current date from the input 302 or from available information within a context. The routing engine 304 may tailor an action to a user by identifying user preferences and filling input argument slots within a schema according to the user preferences.
As a particular example, the routing engine 304 may use the contextual information listed in Table 2 and the input 302 to select the “Create Expense” action among the candidate actions listed in Table 2. “Create Expense” is associated with an API call as its action type and is associated with a JSON schema with input arguments including a required integer employee ID, a required float expense amount, a required date formatted as type date, a required string merchant name, and an optional string describing the location of the expense. The routing engine 304 retrieves the employee ID from the user profile data retrieved with the contextual information and can determine the merchant by looking at the conversation history and recognizing the user previously uttered “Burger King was the merchant.” The routing engine 304 may be unable to determine the remaining input argument slots and instead sets their values to null. The routing engine 304 generates an execution plan as listed in Table 2 including the action, agent, and arguments for the schema.
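A minimal slot-filling sketch for this example follows: the employee ID is taken from profile data, the merchant is recovered from the conversation history, and slots that cannot be derived are left as null. The helper function, regular expression, and the example employee ID are hypothetical.

```python
import re

def fill_slots(schema: dict, profile: dict, history: list[str]) -> dict:
    """Fill each slot from profile data or conversation history; use None when underivable."""
    args: dict = {}
    for slot in schema["properties"]:
        if slot == "employee_id":
            args[slot] = profile.get("employee_id")
        elif slot == "merchant":
            # Look back through the conversation for a "<name> was the merchant" mention.
            match = next((re.match(r"(.+) was the merchant", turn)
                          for turn in reversed(history) if "was the merchant" in turn), None)
            args[slot] = match.group(1) if match else None
        else:
            args[slot] = None  # amount, date, location not derivable in this sketch
    return args

schema = {"properties": {"employee_id": {}, "amount": {}, "date": {}, "merchant": {}, "location": {}},
          "required": ["employee_id", "amount", "date", "merchant"]}
plan = {"action": "Create Expense", "agent": "Expense Agent",
        "arguments": fill_slots(schema, {"employee_id": 4242},
                                ["I had lunch yesterday.", "Burger King was the merchant."])}
print(plan)
```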
The routing engine 304 can transmit the execution plan to the execution engine 314 for executing the execution plan. The routing engine 304 may transmit the execution plan along with any information required by the execution engine 314. The execution engine 314 may perform an iterative process for each executable action included in the execution plan. For example, the execution engine 314 may, for each executable action, identify an action type, may invoke one or more states for executing the action type, and may execute the executable action using an asset to obtain an output. The execution engine 314 may be communicatively coupled with an action executor 316 that may be configured to perform at least a portion of the iterative process. For example, the action executor 316 can identify one or more action types for each executable action included in the execution plan. In a particular example, the action executor 316 may identify a first action type 318a for a first executable action of the execution plan. The first action type 318a may be or include a semantic action such as summarizing text or other suitable semantic action. Additionally or alternatively, the action executor 316 may identify a second action type 318b for a second executable action of the execution plan. The second action type 318b may involve invoking an API such as an API for making an adjustment to an account or other suitable API. Additionally or alternatively, the action executor 316 may identify a third action type 318c for a third executable action of the execution plan. The third action type 318c may be or include a knowledge action such as providing an answer to a technical question or other suitable knowledge action. In some embodiments, the third action type 318c may involve making a call to at least one generative artificial intelligence model of the generative artificial intelligence models 310 to retrieve specific knowledge or a specific answer. In other embodiments, the third action type 318c may involve making a call to the semantic context and memory store 306 or other knowledge documents.
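The iterative, type-driven execution described above can be sketched as a dispatch table mapping action types to handlers; the handler functions and their outputs below are hypothetical placeholders for the semantic, API, and knowledge paths.

```python
# Action-executor sketch: dispatch each executable action by its action type.

def run_semantic(action: dict) -> str:
    return f"summary of: {action['arguments'].get('text', '')[:40]}"

def run_api(action: dict) -> dict:
    return {"status": "called", "endpoint": action["arguments"].get("endpoint")}

def run_knowledge(action: dict) -> str:
    return f"answer retrieved for: {action['arguments'].get('question')}"

HANDLERS = {"semantic": run_semantic, "api": run_api, "knowledge": run_knowledge}

def execute(plan: list[dict]) -> list:
    outputs = []
    for action in plan:
        handler = HANDLERS.get(action["action_type"])
        if handler is None:
            outputs.append({"error": f"unknown action type {action['action_type']!r}"})
        else:
            outputs.append(handler(action))
    return outputs

print(execute([{"action_type": "knowledge", "arguments": {"question": "What is the 401k limit?"}}]))
```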
In some instances, the execution engine 314 may not receive all of the information required by the action executor 316 to perform a requested action. The execution engine 314 may instead generate an execution failed status and the execution failed status may be sent to the response engine 320. As a particular example, the execution engine 314 may receive the execution plan listed in Table 2. The arguments for amount, date, and location for the “Create Expense” action in the execution plan are set to null. According to the JSON schema associated with the “Create Expense” action, amount and date are required arguments. A call to an API without the required argument can fail, and the execution engine 314 may indicate the missing arguments in an execution status or attempt to call the API and generate an execution status based on the output of the API call.
The action executor 316 may continue the iterative process based on the action types indicated by the executable actions included in the execution plan. Once the action executor 316 identifies the action types, the action executor 316 may identify and/or invoke one or more states for each executable action based on the action type. A state of an action may indicate whether the action can be or has been executed. For example, the state for a particular executable action may include “preparing,” “ready,” “executing,” “success,” “failure,” or any other suitable states. The action executor 316 can determine, based on the invoked state of the executable action, whether the executable action is ready to be executed, and, if the executable action is not ready to be executed, the action executor 316 can identify missing information or assets required for proceeding with executing the executable action. In response to determining that the executable action is ready to be executed, and in response to determining that no dependencies exist (or existing dependencies are satisfied) for the executable action, the action executor 316 can execute the executable action to generate an output.
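As a sketch of this readiness check under the stated assumptions, an action could remain in a “preparing” state until its required arguments are present and its dependencies are satisfied; the state names reuse the examples above, and the field names are illustrative.

```python
def resolve_state(action: dict, completed: set[str]) -> str:
    """Return 'preparing' until required arguments and dependencies are satisfied, else 'ready'."""
    missing = [slot for slot in action.get("required", []) if action["arguments"].get(slot) is None]
    unmet = [dep for dep in action.get("depends_on", []) if dep not in completed]
    return "preparing" if missing or unmet else "ready"

action = {"name": "Create Expense", "required": ["employee_id", "amount"],
          "arguments": {"employee_id": 4242, "amount": None}, "depends_on": []}
print(resolve_state(action, completed=set()))  # "preparing" -- the amount is still missing
```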
The action executor 316 can execute each executable action, or any subset thereof, included in the execution plan to generate a set of outputs. The set of outputs may include knowledge outputs, semantic outputs, API outputs, and other suitable outputs. The action executor 316 may provide the set of outputs to the response engine 320. The response engine 320 may be configured to generate a second input prompt based on the set of outputs. The second input prompt can be provided to at least one generative artificial intelligence model of the generative artificial intelligence models 310 to generate a response 322 to the input 302. The response engine 320 may make a call to the at least one generative artificial intelligence model to cause the at least one generative artificial intelligence model to generate the response 322, which can be provided to the user in response to the input 302.
In some instances, the response engine 320 may receive an execution failed status from the execution engine 314. The execution failed status may contain information about why the execution engine 314 was unable to complete an action. The response engine 320 may produce a response 322 to the user indicating the agent requires supplemental information to complete an action. The response 322 may request the user to input the required information. In some examples, the response 322 may indicate that an action cannot be completed and may provide a reason indicating why the action cannot be completed. In some examples, the response 322 may be sent to a user before an action is executed and may ask a user to confirm details of an execution plan. As a particular example, the response engine 320 can request the expense amount and date from the user to complete the “Create Expense” action as listed in Table 2. Upon receiving the requested expense amount from the user, the routing engine 304 can update the execution plan.
In some embodiments, the at least one generative artificial intelligence model used to generate the response 322 may be similar or identical to, or otherwise the same model, as the at least one generative artificial intelligence model used to generate output for generating the execution plan.
As illustrated in the first data flow 400a, the routing engine 402 can receive an input query 404, which can contain a set of parameters and contextual information. In some examples, the input query 404 may be a rich text utterance from a user. The routing engine 402 can retrieve a list of candidate actions 410 and a context 412 from a semantic context and memory store 408. The candidate actions 410 may be retrieved from a data store 414. The context 412 can contain contextual information about the user (e.g., profile information, employee ID) and user preferences related to specific actions or goals. The context 412 may further contain a conversation history 416 between the digital assistant and the user and one or more historical execution plans 418. The historical execution plans 418 may contain one or more previously executed actions and the results of action execution. The routing engine 402 may be an LLM that can select an action or may use an LLM to select an appropriate action in response to the input query 404. The routing engine 402 can select an action based on the information determined from the input query 404. In some examples, the input query 404 may not provide enough information for the routing engine 402 to select an action 420. The routing engine 402 may use contextual information retrieved from the input query 404, the context 412, the conversation history 416, the historical execution plans 418, or any combination thereof to select an appropriate action.
The action 420 can have an associated schema requiring one or more argument slots. Additionally or alternatively, the one or more argument slots may include optional argument slots. As an example, the action may be an API call with an associated JSON schema that may have required and/or optional key-value pairs. The routing engine 402 can identify or determine one or more parameters 422 corresponding to one or more missing argument slots. The process of determining and assigning parameters 422 to missing arguments in a schema associated with the action 420 may be referred to as slot filling. In some instances, the parameters 422 may be determined from the input query 404. In some instances, one or more parameters 422 may not be determinable from the input query 404. The routing engine 402 may determine one or more parameters 422 from the input query 404, the context 412, the historical execution plans 418, the conversation history 416, or any combination thereof. In some examples, the input query 404 may refer to a previous part of a conversation between the user and the digital assistant and the routing engine 402 may search the conversation history 416 for information related to the input query 404 and/or relevant parameters 422 for the action 420.
In some examples, the action 420 may be an API-based action and may require a JSON schema with specific arguments to be executed. As an example, the input query 404 may be a request to change a 401k contribution for a user. The routing engine 402 may select calling a 401k Change Contribution API as the action 420 and the 401k Change Contribution API may require a contribution amount as an argument slot. The routing engine 402 may search the available contextual information from the input query 404 or the context 412 to identify the contribution amount. In some instances, the user may have previously mentioned a contribution amount and subsequently asked to change their contribution amount. In this instance, the routing engine 402 may search the conversation history 416 for the previously mentioned contribution amount and input the amount accordingly. As a particular example, the routing engine 402 selects the “Create Expense” action detailed in Table 2 and uses context information listed in Table 2 to determine arguments (e.g., employee ID, expense amount, expense date, merchant, location) for the associated JSON schema.
In some examples, the action 420 may be a rich text action. The routing engine 402 may produce a JSON object with a single query argument and generate a corresponding query containing the contextual information required to respond to the input query 404.
In some instances, the parameters 422 need to be of a specific version and the routing engine 402 can adjust the type of the parameters 422 to conform to the schema. As an example, the 401k Change Contribution API may require the contribution amount to be of type float, but the routing engine 402 may retrieve a contribution amount as a type string. The routing engine 402 may correct the data type of the contribution amount from string to float before inputting the argument. In some examples, the routing engine 402 may be unable to correct the version of the parameters 422 to conform to the schema. The routing engine 402 may search the input query 404, the context 412, the conversation history 416, the historical execution plans 418, or any combination thereof to retrieve the correct versions of the one or more parameters 422. As a particular example, the “Create Expense” action listed in Table 2 requires the “Expense Amount” argument to be a float, but the routing engine 402 may receive an expense amount as a string and correct the type to float before filling the “Expense Amount” argument slot.
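The type-conforming step can be sketched as a small coercion helper; the casters and the fallback to None (an unfilled slot) are illustrative assumptions.

```python
def coerce(value, declared_type: str):
    """Conform a retrieved value to the type declared in the schema, or return None."""
    casters = {"float": float, "integer": int, "string": str}
    try:
        return casters[declared_type](value)
    except (KeyError, TypeError, ValueError):
        return None  # leave the slot unfilled if the value cannot be conformed

print(coerce("125.40", "float"))       # 125.4
print(coerce("Burger King", "float"))  # None -- would trigger a follow-up request to the user
```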
In some examples, the routing engine 402 may be unable to retrieve parameters 422 or may be unable to retrieve the correct versions of the parameters 422. The routing engine 402 may leave the missing argument slots in the schema empty. In some examples, the corresponding parameters 422 may be set to null. If the missing argument slots within the schema are optional, the action 420 may be executed with the argument slots missing. As a particular example, the routing engine 402 may select the “Create Expense” action from the candidate actions listed in Table 2. The routing engine 402 retrieves an employee ID from the user profile information and the merchant from the conversation history listed with the context information in Table 2. The routing engine 402 may be unable to determine the amount, date, and location and may set the corresponding input argument slots within the associated JSON schema to null. Amount and date are required arguments, and subsequent information may be requested from the user if the execution is not successful.
Upon completing slot-filling, the routing engine 402 can transmit an execution plan comprising the action 420 and the parameters 422 corresponding to argument slots in the associated schema to the execution engine 406. The execution engine 406 can execute the transmitted action 420 and produce an output 424. The execution engine 406 may update the context 412 by adding the execution plan and the output 424 to the historical execution plans 418 and can transmit the output 424 to a response engine 426. Upon a successful execution of the action 420, the output 424 may indicate the action 420 was executed successfully. The response engine 426 may generate a response to the user indicating the action 420 was successfully completed.
In some examples, the routing engine 402 may route to an agent 428 before slot-filling is performed for a selected action. The routing engine 402 may retrieve one or more candidate agents from the semantic context and memory store 408 through a semantic search. The routing engine 402 can select an appropriate agent 428 based on the input query 404 and the context 412. The agent 428 can be associated with a pool of actions performable by the agent 428. The routing engine 402 or a secondary LLM-based action routing model may select an action 420 from the pool of actions based on the input query 404, the context 412, the conversation history 416, the historical execution plans 418, or any combination thereof. Upon selection of an action 420, the routing engine 402 or action routing model can determine one or more parameters 422 and perform slot-filling as described above. The action 420 and parameters 422 can then be transmitted to the execution engine 406 as part of an execution plan.
In some instances, the action 420 may not be executed successfully. The output 424 may indicate the action 420 was not executed successfully and a reason for a failed execution. In some instances, the action 420 may not be executed successfully due to one or more missing required parameters 422. The response engine 426 can generate a response to the user indicating more information may be needed for executing the action 420 and requesting the one or more missing parameters 422 from the user. In some instances, the action 420 may not be executed successfully due to one or more parameters 422 being an incorrect version. The response engine 426 can generate a response to the user requesting the correct version of one or more parameters 422. The response engine 426 may update the context 412 by adding the response to the user to the conversation history 416. Upon receiving a subsequent response from the user with a missing parameter or a corrected version of a parameter, the routing engine 402 may fill the corresponding slots for the associated schema for the action 420 and transmit the updated parameters 422 to the execution engine 406.
As illustrated in the second data flow 400b, the routing engine 402 may not perform any slot-filling before transmitting a selected action 430 to the execution engine 406. The routing engine 402 can retrieve a set of candidate actions 410 from the semantic context and memory store 408. In some examples, the input query 404 and/or one or more recent preceding queries can be used to retrieve the candidate actions 410. In some examples, one or more candidate agents can be retrieved from the semantic context and memory store 408 and the candidate actions 410 can be selected from a set of actions associated with the one or more candidate agents. The routing engine 402 may use limited contextual information sufficient for action planning and can determine a selected action 430 based on the input query 404 and the context 412.
In some examples, the candidate actions 410 and the selected action 430 may not have an associated schema and the routing engine 402 may determine that slot-filling is not relevant to a routing decision. In some examples, the execution engine 406 may be configurable to perform slot-filling as described above as needed for API-based actions. In some examples, the execution engine 406 may be configured to collect missing slot values. Upon receiving a selected action 430, the execution engine 406 may determine that one or more input argument slots in an associated schema are missing. Instead of attempting execution of the selected action 430, the execution engine 406 may generate an output 424 indicating which parameters are missing and/or the version of the parameters needed. The response engine 426 can generate a response to the user indicating an action was not executed due to missing information and requesting the missing information from the user. The missing parameters may be gathered in a limited number of requests to the user and failed execution of the selected action 430 due to missing parameters may be prevented.
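A sketch of this pre-execution check follows: rather than attempting the call, the execution engine reports which required slots are still missing so that the response engine can request them from the user in a single turn. The schema and argument values are illustrative.

```python
def missing_required_slots(schema: dict, arguments: dict) -> list[str]:
    """Return the required slots that are still unfilled (None or empty)."""
    return [slot for slot in schema.get("required", []) if arguments.get(slot) in (None, "")]

schema = {"required": ["employee_id", "amount", "date", "merchant"]}
arguments = {"employee_id": 4242, "merchant": "Burger King", "amount": None, "date": None}

missing = missing_required_slots(schema, arguments)
if missing:
    output = {"status": "not_executed", "missing_parameters": missing}
else:
    output = {"status": "executed"}
print(output)  # {'status': 'not_executed', 'missing_parameters': ['amount', 'date']}
```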
At step 505, an input query is received from a user. The input query can include particular data. In some examples, receiving the input query can include receiving contextual information. The contextual information can include a conversation history associated with the user and a historical execution plan. The input query can be a natural language utterance from the user.
At step 510, an action among one or more candidate actions is identified based on the input query. In some examples, the action is identified based on the input query, the conversation history, and the historical execution plan. In some examples, the action is identified using a generative artificial intelligence model to select the action, among the candidate actions, to be executed based on the input query, the conversation history, and the historical execution plan.
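A sketch of this selection step under stated assumptions is shown below: the candidate actions, conversation history, and a prior execution plan are folded into a prompt, and the model's reply names the chosen action. The call_llm function is a placeholder for a generative model call, not a real API.

```python
def build_selection_prompt(query: str, candidates: list[str], history: list[str], prior_plan: dict) -> str:
    return ("Conversation so far:\n" + "\n".join(history)
            + f"\n\nPreviously executed plan: {prior_plan}"
            + f"\n\nUser query: {query}"
            + "\n\nChoose exactly one action from: " + ", ".join(candidates))

def call_llm(prompt: str) -> str:
    # Placeholder for a generative artificial intelligence model call.
    return "Create Expense"

def select_action(query: str, candidates: list[str], history: list[str], prior_plan: dict) -> str:
    choice = call_llm(build_selection_prompt(query, candidates, history, prior_plan)).strip()
    return choice if choice in candidates else candidates[0]  # guard against free-form replies

print(select_action("I need to expense lunch", ["Create Expense", "Get Expense Details"],
                    ["Burger King was the merchant."], {"action": "Get Expense Categories"}))
```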
At step 515, a set of input argument slots within a schema associated with the action is identified. In some examples, a first subset of input argument slots is determined, where the first subset includes input argument slots that are required to execute the action. In some examples, a second subset of input argument slots is determined, where the second subset includes input argument slots that are optional to execute the action.
At step 520, each input argument slot of the set of input argument slots is filled. For each input argument slot, the process 500 includes determining whether one or more parameters corresponding with the input argument slot are derivable from the particular data. In accordance with the one or more parameters corresponding with the input argument slot, the one or more parameters may be derived from the particular data and the input argument slot may be filled with a version of the one or more parameters that conforms to the schema.
In some examples, it is determined that at least one input argument of the set of input argument slots cannot be filled using the one or more parameters, where the one or more parameters may be missing at least one parameter. Using the generative artificial intelligence model, the missing parameter may be extracted from the conversation history and the input argument slot may be filled using the at least one parameter.
In some examples, it is determined that at least one input argument of the set of input argument slots cannot be filled using the one or more parameters, where the version of the one or more parameters does not conform to the schema. The version of the one or more parameters may be adjusted to conform to the schema and the at least one input argument slot may be filled using the adjusted version of the one or more parameters in the schema.
In some examples, the process 500 may determine that the first subset includes at least one input argument slot that cannot be filled with the version of the one or more parameters. The process 500 may determine whether contextual information included in the input query includes one or more indications of the version of the one or more parameters. In accordance with determining that the contextual information includes the one or more indications of the version of the one or more parameters, the version of the one or more parameters is used to fill the at least one input argument slot. In accordance with determining that the contextual information does not include the one or more indications of the version of the one or more parameters, an output that is usable for requesting subsequent input from the user to receive the version of the one or more parameters is generated.
In some examples, the process 500 may determine that the second subset includes at least one input argument slot that cannot be filled with the version of the one or more parameters. The process 500 may determine whether contextual information included in the input query includes one or more indications of the version of the one or more parameters. In accordance with determining that the contextual information includes the one or more indications of the version of the one or more parameters, the version of the one or more parameters is used to fill the at least one input argument slot. In accordance with determining that the contextual information does not include the one or more indications of the version of the one or more parameters, the set of filled input argument slots is populated with an empty slot for the at least one input argument slot. The set of filled input argument slots can be transmitted to the execution engine.
At step 525, an execution plan that includes the action that includes the set of filled input argument slots is transmitted to an execution engine. The execution engine can be configured to execute the action for generating a response to the input query. In some examples, the execution plan is executed using the set of filled input argument slots and the execution engine to generate a response to the input query. The response can be transmitted to the user for facilitating an interaction involving the user.
As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (example services include billing software, monitoring software, logging software, load balancing software, clustering software, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance.
In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc.
In most cases, a cloud computing model will require the participation of a cloud provider. The cloud provider may, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services.
In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines (e.g., that can be spun up on demand)) or the like.
In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first.
In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files.
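As a minimal illustration of defining infrastructure declaratively and deriving a workflow from it, the sketch below lists components with their dependencies and orders them with a topological sort; the component names are hypothetical.

```python
from graphlib import TopologicalSorter

# Declarative topology: each component lists the components it depends on.
topology = {
    "vcn": [],
    "load_balancer": ["vcn"],
    "database": ["vcn"],
    "app_server": ["load_balancer", "database"],
}

# A workflow could create components in dependency order.
workflow = list(TopologicalSorter(topology).static_order())
print(workflow)  # e.g., ['vcn', 'load_balancer', 'database', 'app_server']
```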
In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more inbound/outbound traffic group rules provisioned to define how the inbound and/or outbound traffic of the network will be set up and one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve.
In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned.
The VCN 606 can include a local peering gateway (LPG) 610 that can be communicatively coupled to a secure shell (SSH) VCN 612 via an LPG 610 contained in the SSH VCN 612. The SSH VCN 612 can include an SSH subnet 614, and the SSH VCN 612 can be communicatively coupled to a control plane VCN 616 via the LPG 610 contained in the control plane VCN 616. Also, the SSH VCN 612 can be communicatively coupled to a data plane VCN 618 via an LPG 610. The control plane VCN 616 and the data plane VCN 618 can be contained in a service tenancy 619 that can be owned and/or operated by the IaaS provider.
The control plane VCN 616 can include a control plane demilitarized zone (DMZ) tier 620 that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep breaches contained. Additionally, the DMZ tier 620 can include one or more load balancer (LB) subnet(s) 622, and the control plane VCN 616 can include a control plane app tier 624 that can include app subnet(s) 626 and a control plane data tier 628 that can include database (DB) subnet(s) 630 (e.g., frontend DB subnet(s) and/or backend DB subnet(s)). The LB subnet(s) 622 contained in the control plane DMZ tier 620 can be communicatively coupled to the app subnet(s) 626 contained in the control plane app tier 624 and an Internet gateway 634 that can be contained in the control plane VCN 616, and the app subnet(s) 626 can be communicatively coupled to the DB subnet(s) 630 contained in the control plane data tier 628 and a service gateway 636 and a network address translation (NAT) gateway 638. The control plane VCN 616 can include the service gateway 636 and the NAT gateway 638.
The control plane VCN 616 can include a data plane mirror app tier 640 that can include app subnet(s) 626. The app subnet(s) 626 contained in the data plane mirror app tier 640 can include a virtual network interface controller (VNIC) 642 that can execute a compute instance 644. The compute instance 644 can communicatively couple the app subnet(s) 626 of the data plane mirror app tier 640 to app subnet(s) 626 that can be contained in a data plane app tier 646.
The data plane VCN 618 can include the data plane app tier 646, a data plane DMZ tier 648, and a data plane data tier 650. The data plane DMZ tier 648 can include LB subnet(s) 622 that can be communicatively coupled to the app subnet(s) 626 of the data plane app tier 646 and the Internet gateway 634 of the data plane VCN 618. The app subnet(s) 626 can be communicatively coupled to the service gateway 636 of the data plane VCN 618 and the NAT gateway 638 of the data plane VCN 618. The data plane data tier 650 can also include the DB subnet(s) 630 that can be communicatively coupled to the app subnet(s) 626 of the data plane app tier 646.
The Internet gateway 634 of the control plane VCN 616 and of the data plane VCN 618 can be communicatively coupled to a metadata management service 652 that can be communicatively coupled to public Internet 654. Public Internet 654 can be communicatively coupled to the NAT gateway 638 of the control plane VCN 616 and of the data plane VCN 618. The service gateway 636 of the control plane VCN 616 and of the data plane VCN 618 can be communicatively coupled to cloud services 656.
In some examples, the service gateway 636 of the control plane VCN 616 or of the data plane VCN 618 can make application programming interface (API) calls to cloud services 656 without going through public Internet 654. The API calls to cloud services 656 from the service gateway 636 can be one-way: the service gateway 636 can make API calls to cloud services 656, and cloud services 656 can send requested data to the service gateway 636. But, cloud services 656 may not initiate API calls to the service gateway 636.
In some examples, the secure host tenancy 604 can be directly connected to the service tenancy 619, which may be otherwise isolated. The secure host subnet 608 can communicate with the SSH subnet 614 through an LPG 610 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 608 to the SSH subnet 614 may give the secure host subnet 608 access to other entities within the service tenancy 619.
The control plane VCN 616 may allow users of the service tenancy 619 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 616 may be deployed or otherwise used in the data plane VCN 618. In some examples, the control plane VCN 616 can be isolated from the data plane VCN 618, and the data plane mirror app tier 640 of the control plane VCN 616 can communicate with the data plane app tier 646 of the data plane VCN 618 via VNICs 642 that can be contained in the data plane mirror app tier 640 and the data plane app tier 646.
In some examples, users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet 654 that can communicate the requests to the metadata management service 652. The metadata management service 652 can communicate the request to the control plane VCN 616 through the Internet gateway 634. The request can be received by the LB subnet(s) 622 contained in the control plane DMZ tier 620. The LB subnet(s) 622 may determine that the request is valid, and in response to this determination, the LB subnet(s) 622 can transmit the request to app subnet(s) 626 contained in the control plane app tier 624. If the request is validated and requires a call to public Internet 654, the call to public Internet 654 may be transmitted to the NAT gateway 638 that can make the call to public Internet 654. Metadata that may be desired to be stored by the request can be stored in the DB subnet(s) 630.
In some examples, the data plane mirror app tier 640 can facilitate direct communication between the control plane VCN 616 and the data plane VCN 618. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN 618. Via a VNIC 642, the control plane VCN 616 can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN 618.
In some embodiments, the control plane VCN 616 and the data plane VCN 618 can be contained in the service tenancy 619. In this case, the user, or the customer, of the system may not own or operate either the control plane VCN 616 or the data plane VCN 618. Instead, the IaaS provider may own or operate the control plane VCN 616 and the data plane VCN 618, both of which may be contained in the service tenancy 619. This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users', or other customers', resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet 654, which may not have a desired level of threat prevention, for storage.
In other embodiments, the LB subnet(s) 622 contained in the control plane VCN 616 can be configured to receive a signal from the service gateway 636. In this embodiment, the control plane VCN 616 and the data plane VCN 618 may be configured to be called by a customer of the IaaS provider without calling public Internet 654. Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy 619, which may be isolated from public Internet 654.
The control plane VCN 716 can include a control plane DMZ tier 720 (e.g., the control plane DMZ tier 620 of
The control plane VCN 716 can include a data plane mirror app tier 740 (e.g., the data plane mirror app tier 640 of
The Internet gateway 734 contained in the control plane VCN 716 can be communicatively coupled to a metadata management service 752 (e.g., the metadata management service 652 of
In some examples, the data plane VCN 718 can be contained in the customer tenancy 721. In this case, the IaaS provider may provide the control plane VCN 716 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 744 that is contained in the service tenancy 719. Each compute instance 744 may allow communication between the control plane VCN 716, contained in the service tenancy 719, and the data plane VCN 718 that is contained in the customer tenancy 721. The compute instance 744 may allow resources, that are provisioned in the control plane VCN 716 that is contained in the service tenancy 719, to be deployed or otherwise used in the data plane VCN 718 that is contained in the customer tenancy 721.
In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy 721. In this example, the control plane VCN 716 can include the data plane mirror app tier 740 that can include app subnet(s) 726. The data plane mirror app tier 740 can reside in the data plane VCN 718, but the data plane mirror app tier 740 may not live in the data plane VCN 718. That is, the data plane mirror app tier 740 may have access to the customer tenancy 721, but the data plane mirror app tier 740 may not exist in the data plane VCN 718 or be owned or operated by the customer of the IaaS provider. The data plane mirror app tier 740 may be configured to make calls to the data plane VCN 718 but may not be configured to make calls to any entity contained in the control plane VCN 716. The customer may desire to deploy or otherwise use resources in the data plane VCN 718 that are provisioned in the control plane VCN 716, and the data plane mirror app tier 740 can facilitate the desired deployment, or other usage of resources, of the customer.
In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN 718. In this embodiment, the customer can determine what the data plane VCN 718 can access, and the customer may restrict access to public Internet 754 from the data plane VCN 718. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 718 to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN 718, contained in the customer tenancy 721, can help isolate the data plane VCN 718 from other customers and from public Internet 754.
In some embodiments, cloud services 756 can be called by the service gateway 736 to access services that may not exist on public Internet 754, on the control plane VCN 716, or on the data plane VCN 718. The connection between cloud services 756 and the control plane VCN 716 or the data plane VCN 718 may not be live or continuous. Cloud services 756 may exist on a different network owned or operated by the IaaS provider. Cloud services 756 may be configured to receive calls from the service gateway 736 and may be configured to not receive calls from public Internet 754. Some cloud services 756 may be isolated from other cloud services 756, and the control plane VCN 716 may be isolated from cloud services 756 that may not be in the same region as the control plane VCN 716. For example, the control plane VCN 716 may be located in “Region 1,” and cloud service “Deployment 6,” may be located in Region 1 and in “Region 2.” If a call to Deployment 6 is made by the service gateway 736 contained in the control plane VCN 716 located in Region 1, the call may be transmitted to Deployment 6 in Region 1. In this example, the control plane VCN 716, or Deployment 6 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 6 in Region 2.
The control plane VCN 816 can include a control plane DMZ tier 820 (e.g., the control plane DMZ tier 620 of
The data plane VCN 818 can include a data plane app tier 846 (e.g., the data plane app tier 646 of
The untrusted app subnet(s) 862 can include one or more primary VNICs 864(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 866(1)-(N). Each tenant VM 866(1)-(N) can be communicatively coupled to a respective app subnet 867(1)-(N) that can be contained in respective container egress VCNs 868(1)-(N) that can be contained in respective customer tenancies 870(1)-(N). Respective secondary VNICs 872(1)-(N) can facilitate communication between the untrusted app subnet(s) 862 contained in the data plane VCN 818 and the app subnet contained in the container egress VCNs 868(1)-(N). Each container egress VCN 868(1)-(N) can include a NAT gateway 838 that can be communicatively coupled to public Internet 854 (e.g., public Internet 654 of
The Internet gateway 834 contained in the control plane VCN 816 and contained in the data plane VCN 818 can be communicatively coupled to a metadata management service 852 (e.g., the metadata management system 652 of
In some embodiments, the data plane VCN 818 can be integrated with customer tenancies 870. This integration can be useful or desirable for customers of the IaaS provider in some cases, such as when the customer desires support for executing code. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run code given to the IaaS provider by the customer.
In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier 846. Code to run the function may be executed in the VMs 866(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN 818. Each VM 866(1)-(N) may be connected to one customer tenancy 870. Respective containers 871(1)-(N) contained in the VMs 866(1)-(N) may be configured to run the code. In this case, there can be a dual isolation (e.g., the containers 871(1)-(N) running code, where the containers 871(1)-(N) may be contained in at least the VM 866(1)-(N) that are contained in the untrusted app subnet(s) 862), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer. The containers 871(1)-(N) may be communicatively coupled to the customer tenancy 870 and may be configured to transmit or receive data from the customer tenancy 870. The containers 871(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN 818. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers 871(1)-(N).
In some embodiments, the trusted app subnet(s) 860 may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted app subnet(s) 860 may be communicatively coupled to the DB subnet(s) 830 and be configured to execute CRUD operations in the DB subnet(s) 830. The untrusted app subnet(s) 862 may be communicatively coupled to the DB subnet(s) 830, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s) 830. The containers 871(1)-(N) that can be contained in the VM 866(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s) 830.
In other embodiments, the control plane VCN 816 and the data plane VCN 818 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 816 and the data plane VCN 818. However, communication can occur indirectly through at least one method. An LPG 810 may be established by the IaaS provider that can facilitate communication between the control plane VCN 816 and the data plane VCN 818. In another example, the control plane VCN 816 or the data plane VCN 818 can make a call to cloud services 856 via the service gateway 836. For example, a call to cloud services 856 from the control plane VCN 816 can include a request for a service that can communicate with the data plane VCN 818.
The control plane VCN 916 can include a control plane DMZ tier 920 (e.g., the control plane DMZ tier 620 of
The data plane VCN 918 can include a data plane app tier 946 (e.g., the data plane app tier 646 of
The untrusted app subnet(s) 962 can include primary VNICs 964(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 966(1)-(N) residing within the untrusted app subnet(s) 962. Each tenant VM 966(1)-(N) can run code in a respective container 967(1)-(N), and be communicatively coupled to an app subnet 926 that can be contained in a data plane app tier 946 that can be contained in a container egress VCN 968. Respective secondary VNICs 972(1)-(N) can facilitate communication between the untrusted app subnet(s) 962 contained in the data plane VCN 918 and the app subnet contained in the container egress VCN 968. The container egress VCN can include a NAT gateway 938 that can be communicatively coupled to public Internet 954 (e.g., public Internet 654 of
The Internet gateway 934 contained in the control plane VCN 916 and contained in the data plane VCN 918 can be communicatively coupled to a metadata management service 952 (e.g., the metadata management system 652 of
In some examples, the pattern illustrated by the architecture of block diagram 900 of
In other examples, the customer can use the containers 967(1)-(N) to call cloud services 956. In this example, the customer may run code in the containers 967(1)-(N) that requests a service from cloud services 956. The containers 967(1)-(N) can transmit this request to the secondary VNICs 972(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet 954. Public Internet 954 can transmit the request to LB subnet(s) 922 contained in the control plane VCN 916 via the Internet gateway 934. In response to determining the request is valid, the LB subnet(s) can transmit the request to app subnet(s) 926 that can transmit the request to cloud services 956 via the service gateway 936.
It should be appreciated that IaaS architectures 600, 700, 800, 900 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.
Bus subsystem 1002 provides a mechanism for letting the various components and subsystems of computer system 1000 communicate with each other as intended. Although bus subsystem 1002 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 1002 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
Processing unit 1004, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 1000. One or more processors may be included in processing unit 1004. These processors may include single core or multicore processors. In certain embodiments, processing unit 1004 may be implemented as one or more independent processing units 1032 and/or 1034 with single or multicore processors included in each processing unit. In other embodiments, processing unit 1004 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip.
In various embodiments, processing unit 1004 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s) 1004 and/or in storage subsystem 1018. Through suitable programming, processor(s) 1004 can provide various functionalities described above. Computer system 1000 may additionally include a processing acceleration unit 1006, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
I/O subsystem 1008 may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like.
User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system 1000 to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
Computer system 1000 may comprise a storage subsystem 1018 that provides a tangible non-transitory computer-readable storage medium for storing software and data constructs that provide the functionality of the embodiments described in this disclosure. The software can include programs, code modules, instructions, scripts, etc., that when executed by one or more cores or processors of processing unit 1004 provide the functionality described above. Storage subsystem 1018 may also provide a repository for storing data used in accordance with the present disclosure.
As depicted in the example in the figure, storage subsystem 1018 can include various components, including a system memory 1010 and computer-readable storage media 1022.
System memory 1010 may also store an operating system 1016. Examples of operating system 1016 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems. In certain implementations where computer system 1000 executes one or more virtual machines, the virtual machines along with their guest operating systems (GOSs) may be loaded into system memory 1010 and executed by one or more processors or cores of processing unit 1004.
System memory 1010 can come in different configurations depending upon the type of computer system 1000. For example, system memory 1010 may be volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.). Different types of RAM configurations may be provided, including a static random access memory (SRAM), a dynamic random access memory (DRAM), and others. In some implementations, system memory 1010 may include a basic input/output system (BIOS) containing basic routines that help to transfer information between elements within computer system 1000, such as during start-up.
Computer-readable storage media 1022 may represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing and/or storing computer-readable information for use by computer system 1000, including instructions executable by processing unit 1004 of computer system 1000.
Computer-readable storage media 1022 can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media.
By way of example, computer-readable storage media 1022 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD-ROM, DVD, or Blu-Ray® disk, or other optical media. Computer-readable storage media 1022 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media 1022 may also include solid-state drives (SSDs) based on non-volatile memory such as flash-memory-based SSDs, enterprise flash drives, and solid-state ROM; SSDs based on volatile memory such as solid-state RAM, dynamic RAM, static RAM, and DRAM-based SSDs; magnetoresistive RAM (MRAM) SSDs; and hybrid SSDs that use a combination of DRAM- and flash-memory-based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 1000.
Machine-readable instructions executable by one or more processors or cores of processing unit 1004 may be stored on a non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can include physically tangible memory or storage devices that include volatile memory storage devices and/or non-volatile storage devices. Examples of non-transitory computer-readable storage media include magnetic storage media (e.g., disks or tapes), optical storage media (e.g., DVDs, CDs), various types of RAM, ROM, or flash memory, hard drives, floppy drives, detachable memory drives (e.g., USB drives), or other types of storage devices.
Communications subsystem 1024 provides an interface to other computer systems and networks. Communications subsystem 1024 serves as an interface for receiving data from and transmitting data to other systems from computer system 1000. For example, communications subsystem 1024 may enable computer system 1000 to connect to one or more devices via the Internet. In some embodiments, communications subsystem 1024 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology such as 3G, 4G, or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communications subsystem 1024 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.
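For purposes of illustration only, the following sketch shows the kind of send-and-receive exchange a communications subsystem such as communications subsystem 1024 could support over a wired or wireless network. The sketch is written in Python merely as an example; the remote host, port, and payload are hypothetical and do not form part of the disclosure.

    # Illustrative sketch only: a minimal network exchange of the kind a
    # communications subsystem could carry out. Host, port, and payload are hypothetical.
    import socket

    HOST = "example.com"  # hypothetical remote system
    PORT = 80

    with socket.create_connection((HOST, PORT), timeout=5) as connection:
        # Send a small request and read back part of the response.
        connection.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = connection.recv(4096)
        print(response.decode(errors="replace"))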
In some embodiments, communications subsystem 1024 may also receive input communication in the form of structured and/or unstructured data feeds 1026, event streams 1028, event updates 1030, and the like on behalf of one or more users who may use computer system 1000.
By way of example, communications subsystem 1024 may be configured to receive data feeds 1026 in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources.
Additionally, communications subsystem 1024 may also be configured to receive data in the form of continuous data streams, which may include event streams 1028 of real-time events and/or event updates 1030, which may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
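For purposes of illustration only, the following sketch shows one way continuous or unbounded data, such as event streams 1028 and event updates 1030, might be consumed as events arrive. The sketch is written in Python merely as an example; the simulated event source and the small number of events consumed are hypothetical and serve only to keep the example finite.

    # Illustrative sketch only: consuming a continuous, unbounded event stream.
    # The event source is simulated and hypothetical.
    import itertools
    import random
    import time
    from typing import Dict, Iterator

    def event_stream() -> Iterator[Dict[str, float]]:
        # Simulated unbounded source, e.g., a sensor feed or financial ticker.
        for sequence in itertools.count():
            yield {"sequence": sequence, "value": random.random(), "timestamp": time.time()}

    def consume(stream: Iterator[Dict[str, float]], limit: int = 5) -> None:
        # A real consumer would run indefinitely; this one stops after a few
        # events purely so the sketch terminates.
        for event in itertools.islice(stream, limit):
            print(f"event {int(event['sequence'])}: value={event['value']:.3f}")

    consume(event_stream())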
Communications subsystem 1024 may also be configured to output the structured and/or unstructured data feeds 1026, event streams 1028, event updates 1030, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 1000.
Computer system 1000 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
Due to the ever-changing nature of computers and networks, the description of computer system 1000 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure. Embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly.
Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or services are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.
The present application is a non-provisional application of and claims the benefit and priority under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 63/583,225, filed on Sep. 15, 2023, the disclosure of which is incorporated herein by reference in its entirety for all purposes.