One or more implementations relate to the field of database systems, and more specifically, to customizing or personalizing interactions with artificial intelligence systems capable of automatically responding to conversational user input.
Modern software development has evolved towards web applications and cloud-based applications that provide access to data and services via the Internet or other networks. Businesses also increasingly interface with customers using different electronic communications channels, including online chats, text messaging, email or other forms of remote support. Artificial intelligence (AI) may also be used to provide information to users via online communications with "chatbots" or other automated interactive tools. Using chatbots, automated AI systems conduct text-based chat conversations with users, through which users request and receive information. Chatbots or other AI systems generally provide information to users for predetermined situations and applications and, in practice, may be limited depending on the nature of the training data utilized to develop the chatbot.
Chatbots or other AI systems have been developed using large language models (LLMs) that have access to or knowledge of a larger data set and vocabulary, such that they are more likely to have applicable information for a wide range of potential input prompts. That said, there may still be scenarios where the chatbot or AI system does not have access to all applicable information or is otherwise unable to provide a satisfactory answer. For example, LLMs may lack context or other understanding of information or situations that are not represented within their training data, which can impair the ability of LLMs to provide accurate or contextually relevant responses. Accordingly, it is desirable to provide systems and methods that facilitate more accurate and contextually relevant output responses from a chatbot or other AI system to a particular input prompt that might otherwise be outside the scope of the training data.
The following figures use like reference numbers to refer to like elements. Although the following figures depict various exemplary implementations, alternative implementations are within the spirit and scope of the appended claims. In the drawings:
The subject matter described herein generally relates to computing systems and methods for customizing or personalizing interactions with a chatbot or other external artificial intelligence (AI) system or service to generate automated responses to conversational user inputs in a customizable or personalized manner. As described in greater detail below, an intermediate service utilizes a personalized model or other personalized or custom data, alternatively referred to herein as a personal model, that is associated with the particular user interacting with a large language model-based chatbot, to effectively ground the large language model (LLM) by providing additional knowledge or context associated with the particular user to enhance the response provided by the chatbot and better reflect the user's experience, knowledge, data or other information associated with the user that is not known or otherwise available to the chatbot. For example, the chatbot or other AI system or service may utilize one or more large language models or corresponding training data sets that are intended to be generic and lack various pieces of data or information contained in the user's personal model or other user-specific data, which, in turn, results in the chatbot providing a more superficial response to the user that may be suitable for a general audience or generic purpose but lacks the specificity, depth or comprehensiveness that may be desired by the user.
Rather than retraining the chatbot or other external AI system, the intermediate service utilizes the personal model to ground the LLM and provide additional user context to the chatbot or AI system in concert with a prompt, request or other input from a user to correspondingly adjust, augment or otherwise tailor the resulting response generated by the LLM-based chatbot in a manner that reflects the user context. The personal model and the intermediate service effectively extend the understanding of the chatbot or AI system by capturing and providing information that is pertinent to the user and represents the particular user context (e.g., the user's knowledge, education, experience, behavior and/or the like), thereby allowing the chatbot to provide a response that is more specific or comprehensive and more personalized for the user, rather than providing generic responses devoid of any user context.
In one or more exemplary implementations, the conversational user inputs and responses described herein are unstructured and free form using natural language that is not constrained to any particular syntax or ordering of speakers or utterances thereby. In this regard, an utterance should be understood as a discrete uninterrupted chain of language provided by an individual conversation participant or actor or otherwise associated with a particular source of the content of the utterance, which could be a human user or speaker (e.g., a customer, a sales representative, a customer support representative, a live agent, and/or the like) or an automated actor or speaker (e.g., a chatbot or other automated system). For example, in a chat messaging or text messaging context, each separate and discrete message that originates from a particular actor that is part of the conversation constitutes an utterance associated with the conversation, where each utterance may precede and/or be followed by a subsequent utterance by the same actor or a different actor within the conversation. In this regard, the conversational user input that functions as the input prompt for which an automated response is to be generated may be constructed from one or more utterances by the same actor within a conversation, and is not necessarily limited to an individual message or utterance. Additionally, it should be noted that although the subject matter may be described herein in the context of conversations (e.g., chat logs, text message logs, call transcripts, comment threads, feeds and/or the like) for purposes of explanation, the subject matter described herein is not necessarily limited to conversations and may be implemented in an equivalent manner with respect to any particular type of database record or database object including text fields.
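By way of non-limiting illustration, the notion of a conversation as an ordered sequence of utterances, each attributable to a human or automated actor, may be sketched with a simple data structure such as the following (the Python class and field names are illustrative assumptions only and do not correspond to any element in the figures):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Utterance:
    """A discrete, uninterrupted chain of language from one actor."""
    actor_id: str   # e.g., a customer, a live agent, or an automated chatbot
    text: str       # free-form natural language content

@dataclass
class Conversation:
    """An ordered sequence of utterances from any number of actors."""
    utterances: List[Utterance] = field(default_factory=list)

    def prompt_for(self, actor_id: str) -> str:
        """Construct an input prompt from one or more utterances by the same
        actor; a prompt is not necessarily limited to a single message."""
        return " ".join(u.text for u in self.utterances if u.actor_id == actor_id)
```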
In one or more exemplary implementations, the database system 102 includes one or more application servers 104 that support an application platform 124 capable of providing instances of virtual web applications 140, over the network 110, to any number of client devices 108 that users may interact with to view, access or obtain data or other information from one or more data records 114 maintained in one or more data tables 112 at a database 106 or other repository associated with the database system 102. For example, a database 106 may maintain, on behalf of a user, tenant, organization or other resource owner, data records 114 entered or created by that resource owner (or users associated therewith), files, documents, objects or other records uploaded by the resource owner (or users associated therewith), and/or files, documents, objects or other records automatically generated by one or more computing processes (e.g., by the server 104 based on user input or other records or files stored in the database 106). In this regard, in one or more implementations, the database system 102 is realized as an on-demand multi-tenant database system that is capable of dynamically creating and supporting virtual web applications 140 based upon data from a common database 106 that is shared between multiple tenants, which may alternatively be referred to herein as a multi-tenant database. Data and services generated by the virtual web applications 140 may be provided via the network 110 to any number of client devices 108, as desired, where instances of the virtual web application 140 may be suitably generated at run-time (or on-demand) using a common application platform 124 that securely provides access to the data in the database 106 for each of the various tenants subscribing to the multi-tenant system. In one or more exemplary implementations, the virtual web application 140 is realized as a customer relationship management (CRM) application.
The application server 104 generally represents the one or more server computing devices, server computing systems or other combination of processing logic, circuitry, hardware, and/or other components configured to support remote access to data records 114 maintained in the data tables 112 at the database 106 via the network 110. Although not illustrated in
In exemplary implementations, the application server 104 generally includes at least one processing system 120, which may be implemented using any suitable processing system and/or device, such as, for example, one or more processors, central processing units (CPUs), controllers, microprocessors, microcontrollers, processing cores, application-specific integrated circuits (ASICs) and/or other hardware computing resources configured to support the operation of the processing system described herein. Additionally, although not illustrated in
The client device 108 generally represents an electronic device coupled to the network 110 that may be utilized by a user to access an instance of the virtual web application 140 using an application 109 executing on or at the client device 108. In practice, the client device 108 can be realized as any sort of personal computer, mobile telephone, tablet or other network-enabled electronic device coupled to the network 110 that executes or otherwise supports a web browser or other client application 109 that allows a user to access one or more GUI displays provided by the virtual web application 140. In exemplary implementations, the client device 108 includes a display device, such as a monitor, screen, or another conventional electronic display, capable of graphically presenting data and/or information along with a user input device, such as a touchscreen, a touch panel, a mouse, a joystick, a directional pad, a motion sensor, or the like, capable of receiving input from the user of the client device 108. Some implementations may support text-to-speech, speech-to-text, or other speech recognition systems, in which case the client device 108 may include a microphone or other audio input device that functions as the user input device, with a speaker or other audio output device capable of functioning as an output device. The illustrated client device 108 executes or otherwise supports a client application 109 that communicates with the application platform 124 provided by the processing system 120 at the application server 104 to access an instance of the virtual web application 140 using a networking protocol. In some implementations, the client application 109 is realized as a web browser or similar local client application executed by the client device 108 that contacts the application platform 124 at the application server 104 using a networking protocol, such as the hypertext transport protocol secure (HTTPS). In this manner, in one or more implementations, the client application 109 may be utilized to access or otherwise initiate an instance of a virtual web application 140 hosted by the database system 102, where the virtual web application 140 provides one or more web page GUI displays within the client application 109 that include GUI elements for interfacing and/or interacting with records 114 maintained at the database 106.
In exemplary embodiments, the database 106 stores or otherwise maintains data for integration with or invocation by a virtual web application 140 in objects organized in object tables 112. In this regard, the database 106 may include any number of different object tables 112 configured to store or otherwise maintain alphanumeric values or other descriptive information that define a particular instance of a respective type of object associated with a respective object table 112. For example, the virtual application 140 may support a number of different types of objects that may be incorporated into or otherwise depicted or manipulated by the virtual application, with each different type of object having a corresponding object table 112 that includes columns or fields corresponding to the different parameters or criteria that define a particular instance of that object. For example, a virtual CRM application 140 may utilize standard objects such as "account" objects, "opportunity" objects, "contact" objects, or the like having respective object tables 112 maintaining data records 114 for the respective object type, along with custom object types that may be specific to a particular tenant, individual user or other resource owner. In this regard, the data records 114 maintain values for various fields associated with that respective object type along with metadata or other information pertaining to the particular object type defining the structure (e.g., the formatting, functions and other constructs) of each respective object and the various fields associated therewith.
In some implementations, the database 106 stores or otherwise maintains application objects (e.g., an application object type) where the application object table 112 includes columns or fields corresponding to the different parameters or criteria that define a particular virtual web application 140 capable of being generated or otherwise provided by the application platform 124 on a client device 108. In this regard, the database 106 may also store or maintain graphical user interface (GUI) objects that may be associated with or referenced by a particular application object and include columns or fields that define the layout, sequencing, and other characteristics of GUI displays to be presented by the application platform 124 on a client device 108 in conjunction with that application 140.
In exemplary implementations, the database 106 stores or otherwise maintains additional database objects for association and/or integration with a virtual web application 140, which may include custom objects and/or standard objects. For example, an administrator user associated with a particular resource owner may utilize an instance of a virtual web application 140 to create or otherwise define a new custom field to be added to or associated with a standard object, or define a new custom object type that includes one or more new custom fields associated therewith. In this regard, the database 106 may also store or otherwise maintain metadata that defines or describes the fields, process flows, workflows, formulas, business logic, structure and other database components or constructs that may be associated with a particular application database object. In various implementations, the database 106 may also store or otherwise maintain validation rules providing validation criteria for one or more fields (or columns) of a particular database object type, such as minimum and/or maximum values for a particular field, a range of allowable values for the particular field, a set of allowable values for a particular field, or the like, along with workflow rules or logical criteria associated with respective database object types that define actions, triggers, or other logical criteria or operations that may be performed or otherwise applied to entries in the various database object tables 112 (e.g., in response to creation, changes, or updates to a record in an object table 112).
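For purposes of illustration only, such validation criteria might be applied along the lines of the following sketch, where the rule schema is a hypothetical stand-in for the validation metadata actually maintained in the database 106:

```python
# Hypothetical validation-rule schema: each rule constrains one field of a
# database object type (minimum/maximum values, or a set of allowed values).
RULES = {
    "opportunity": {
        "amount":   {"min": 0},
        "stage":    {"allowed": {"prospecting", "negotiation", "closed"}},
        "discount": {"min": 0, "max": 40},
    },
}

def validate(object_type: str, record: dict) -> list:
    """Return a list of human-readable violations for a candidate record."""
    errors = []
    for field_name, rule in RULES.get(object_type, {}).items():
        value = record.get(field_name)
        if value is None:
            continue
        if "min" in rule and value < rule["min"]:
            errors.append(f"{field_name} below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{field_name} above maximum {rule['max']}")
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{field_name} not in allowed set")
    return errors
```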
Still referring to
In exemplary implementations, the chatbot service 142 receives or otherwise obtains a conversational input from a user of the client device 108 (e.g., via client application 109 and network 110) and parses the conversational input using the conversational vocabulary associated with the chatbot service 142 to identify or otherwise discern an intent of the user or another action that the user would like to perform and automatically respond in a corresponding manner, including by updating the chat window or other GUI display associated with the conversation with the chatbot service 142 to include a graphical representation of a conversational response generated by the chatbot service 142 responsive to the conversational user input prompt received from the user. In this manner, a user of a client device 108 interacts or otherwise communicates with the chatbot service 142 via an associated GUI display within the client application 109 (e.g., a chat window) to transmit or otherwise provide conversational user input in the context of a conversation with the chatbot service 142. Depending on the implementation, the conversational input may be received by the user selecting or otherwise activating a GUI element presented within the chat window, or the user may input (e.g., via typing, swiping, touch, voice, or any other suitable method) a conversational string of words in a free-form or unconstrained manner, which is captured by a user input device of the client device 108 and provided over the network 110 to the application platform 124 and/or the chatbot service 142 via the client application 109. The chatbot service 142 then parses or otherwise analyzes the conversational input using natural language processing (NLP) to identify the intent or other action desired by the user based on the content, syntax, structure and/or other linguistic characteristics of the conversational input.
In one or more implementations, when the chatbot service 142 determines it is unable to ascertain the intent of a received conversational user input or is otherwise unable to respond to the received conversational user input based on the vocabulary and/or other data that is accessible to or otherwise associated with the chatbot service 142, the chatbot service 142 analyzes the received conversational user input to determine whether or not to forward the received conversational user input as an input prompt to an LLM-based chatbot service 152 for generating a corresponding LLM-based automated conversational response to the received conversational user input. In this regard, the LLM-based chatbot service 152 may be realized as an application programming interface (API), software agent, or the like that is capable of receiving a textual input prompt and providing a corresponding natural language textual response to the received input prompt using an LLM and corresponding artificial intelligence or machine learning techniques such that the natural language textual response represents a logical and coherent response to the textual input prompt. In practice, the LLM chatbot 152 may utilize NLP or other linguistic analytic techniques to analyze a received conversational input prompt and automatically generate a conversational response to the received conversational input prompt using neural networks or other AI techniques based on generative pre-trained transformers (GPTs) or other LLMs, the details of which are not germane to this disclosure.
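A minimal sketch of this fallback behavior follows, where match_intent stands in for the NLP-based intent recognition over the local conversational vocabulary and forward_to_llm stands in for the hand-off to the LLM-based chatbot service 152 (both names are hypothetical):

```python
from typing import Optional

# Hypothetical local conversational vocabulary mapping recognizable
# intents to predetermined response handlers.
LOCAL_INTENTS = {
    "reset password": lambda text: "To reset your password, open Settings > Security.",
}

def match_intent(user_input: str) -> Optional[str]:
    """Toy stand-in for NLP intent recognition over the local vocabulary."""
    text = user_input.lower()
    return next((key for key in LOCAL_INTENTS if key in text), None)

def forward_to_llm(prompt: str) -> str:
    """Placeholder for forwarding the input prompt to the LLM-based service."""
    raise NotImplementedError("invoke the external LLM chatbot API here")

def respond(user_input: str) -> str:
    intent = match_intent(user_input)
    if intent is not None:
        return LOCAL_INTENTS[intent](user_input)   # answerable locally
    return forward_to_llm(user_input)              # intent not ascertainable
```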
In one or more exemplary implementations, the LLM-based chatbot service 152 is hosted or otherwise implemented at an external computing system 150 on the network 110. The external computing system 150 generally includes at least one server communicatively coupled to the network 110 to support access to the LLM-based chatbot service 152. In this regard, in some implementations, the external computing system 150 is physically and logically distinct from the database system 102 and/or the application platform 124. For example, the external computing system 150 may be owned, controlled, or otherwise operated by a third party different from the parties that own, control and/or operate the database system 102 and/or the application platform 124. That said, in other implementations, the external computing system 150 may be affiliated with the same party that owns, controls and/or operates the database system 102 and/or the application platform 124.
In exemplary embodiments, the virtual web application 140 provided by the application platform 124 includes or otherwise supports chat messaging, text messaging, instant messaging or a similar feature where users communicate or otherwise interact with one another or another system (e.g., external system 150) in the context of a conversation using the web application 140. In practice, the application server 104 and/or the web application 140 at the application platform 124 may store or otherwise maintain conversation data for a conversation in a database. For example, the conversation data may include a transcript for each conversation existing within instances of the web application 140 at the application platform 124 that maintains the sequence of utterances associated with the conversation and the respective speaker or source of each respective utterance of the conversation. The conversation data for a given conversation may also include user identifiers or other information identifying the participants associated with the conversation and other metadata associated with the conversation, such as, for example, whether or not the conversation is a group conversation, whether or not the group conversation is public or private, and the like. For example, some implementations of the web application 140 may support public channels, private channels, one-to-one direct messages and group messages.
In exemplary implementations, a user of the client device 108 may interact with the web application 140 provided by the application platform 124 to initiate or otherwise invoke one or more services, such as chatbot service 142, to initiate a conversation or other user interaction with the LLM chatbot 152 and/or third party system 150 external to the application platform 124. In this regard, the chatbot service 142 may include, incorporate, or otherwise be realized as an application programming interface (API), software agent, or the like that is capable of interacting with the LLM chatbot 152 and/or third party system 150. For example, a GUI display associated with the web application 140 provided by the application platform 124 may include a GUI element that is manipulable by a user to input or otherwise provide indicia of the LLM chatbot 152 and/or third party system 150 that the user would like to engage or interact with within the context of a conversation depicted within the GUI display.
As described in greater detail below, in exemplary implementations, the application platform 124 and/or the chatbot service 142 includes or otherwise incorporates a contextual personalization service configurable to develop and maintain one or more models or other digital representations associated with a particular individual user that are personalized or customized to reflect that particular individual user to be maintained at the database system 102, for example, based on one or more data records 114 in the database 106 that are associated with the particular user. Thereafter, the chatbot service 142 may utilize a personal model associated with the particular user to ground an input prompt provided to the LLM chatbot 152 by adding context to or otherwise customizing interactions between that particular individual user at a client device 108 and the third party system 150. In this regard, the chatbot service 142 utilizes identifiers associated with the user of the client device 108 participating in the conversation with the chatbot service 142 at the application platform 124 to identify the particular model(s), digital representation(s) or other data associated with that particular individual user in the database 106 to be utilized to provide additional context or information to the third party system 150 in connection with the conversational user interactions with the LLM chatbot 152 in accordance with the permissions associated with that user at the database system 102. For example, based on the various permissions associated with the various data records 114 (e.g., the objects, files, documents or other pieces of data or information) associated with the user at the database system 102, the chatbot service 142 at the database system 102 may identify what model(s), digital representation(s) or other data associated with the user can be shared with the LLM chatbot 152 and/or third party system 150.
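For illustration, the permission check that gates which of the user's data may be shared with the external system could be as simple as the following sketch (the record shape and permission flag are assumptions, not the actual permissions model of the database system 102):

```python
def shareable_context(records: list) -> list:
    """Return only the records whose permissions allow sharing with an
    external LLM chatbot or third-party system."""
    return [r for r in records if r.get("share_with_external_ai", False)]

records = [
    {"id": 1, "text": "public product notes", "share_with_external_ai": True},
    {"id": 2, "text": "confidential pricing", "share_with_external_ai": False},
]
print(shareable_context(records))   # only record 1 survives the filter
```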
Still referring to
The LLM chatbot 152 automatically parses or analyzes the contextual data provided as part of the augmented conversational user input prompt in concert with generating a conversational response to the semantic and/or syntactic content of the initial conversational user input using the pretrained GPTs, LLMs, or other algorithms or configurations associated with the LLM chatbot 152 and/or third party system 150. In this regard, rather than the LLM chatbot 152 and/or third party system 150 retraining or regenerating the GPTs, LLMs, neural networks and/or the like associated with the LLM chatbot 152 and/or third party system 150, the LLM chatbot 152 and/or third party system 150 is configurable to apply the existing GPTs, LLMs, or other algorithms or configurations to the personalized contextual data before and/or after applying the existing GPTs, LLMs, or other algorithms or configurations to the initial conversational user input, such that the autogenerated conversational response to the semantic and/or syntactic content of the initial conversational user input reflects the additional knowledge or context specific to the individual end user that is gleaned from the added contextual data. As a result, the autogenerated conversational response to the received conversational user input is customized or personalized to reflect the individual user providing the initial conversational user input based on that user's individual personalization model(s) or other contextual data provided by the contextual personalization service at the database system 102 based on that individual user's data maintained in the database 106. In this regard, the customized autogenerated conversational response differs from the conversational response that would otherwise be generated by the LLM chatbot 152 and/or third party system 150 applying the existing GPTs, LLMs, or other algorithms or configurations to the initial conversational user input without the personalized contextual data. For example, the customized autogenerated conversational response may be more comprehensive or reflect knowledge gleaned from the user's contextual data that would not otherwise be available using the pretrained GPTs, LLMs, or other algorithms or configurations associated with the LLM chatbot 152 and/or third party system 150.
The chatbot service 142 at the database system 102 receives the customized autogenerated conversational response and transmits or otherwise provides a corresponding conversational response to the virtual application 140 and/or the application platform 124 to be rendered, displayed or otherwise generated within the context of the conversation with the LLM chatbot 152 and/or third party system 150 within the GUI display associated with the web application 140 provided by the application platform 124. In some implementations, the chatbot service 142 at the database system 102 may retransmit the customized autogenerated conversational response from the LLM chatbot 152 without modification. That said, in other implementations, the chatbot service 142 may utilize one or more models, digital representations or other data or information associated with the user to further modify or augment the customized autogenerated conversational response before providing an augmented customized conversational response to the user. For example, based on the individual's unique models and/or data maintained in the database 106, the chatbot service 142 at the database system 102 may modify the semantic content and/or the syntactic structure of the customized autogenerated conversational response to better suit the individual user, for example, by eliminating textual content that the individual user is likely to consider extraneous or superfluous given the individual user's experience or knowledge derivable from the individual user's data maintained in the database 106, reformatting or rewording the customized autogenerated conversational response to better reflect the individual user's education level, vocabulary, diction, conversational preferences, etc.
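One plausible realization of this second personalization pass is sketched below, where llm_complete is a hypothetical callable wrapping any text-completion service and the profile keys are assumed for illustration:

```python
def personalize_response(response: str, profile: dict, llm_complete) -> str:
    """Second-pass rewrite of the chatbot's response so that its wording,
    depth and vocabulary match the individual user; 'llm_complete' is a
    hypothetical callable and the profile fields are illustrative."""
    instruction = (
        f"Rewrite the answer below for a reader with {profile['experience']} "
        f"experience who prefers {profile['style']} explanations. Remove "
        f"content such a reader would already know.\n\nAnswer:\n{response}"
    )
    return llm_complete(instruction)
```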
By virtue of the contextual personalization provided by the chatbot service 142 at the database system 102, the conversational response received at the application platform 124 better suits the needs or desires of the individual user providing the initial conversational input by accounting for the user's individual background knowledge, experience and/or other preferences relative to a generic conversational response that would otherwise be generated by the LLM chatbot 152 and/or third party system 150 responsive to the initial conversational input absent any contextual data or other personalization.
Referring to
In addition to providing one or more user interfaces that allow the user to identify the user's personal data that he or she would like to be incorporated into any personal model(s) created and/or maintained by the contextual personalization service for the user, in practice, the contextual personalization service may also provide one or more user interfaces (or user interface elements) that allow the user to identify personal data that he or she would like to be excluded from any personal model(s). For example, the contextual personalization service may provide a GUI display that includes GUI elements manipulable by the user to identify particular types of database objects or records 114 that the user would like to be included in any personal model, and similarly, identify other types of database objects or records 114 that the user would like to exclude from modeling.
After identifying or obtaining a set of user data to be utilized for modeling, the contextual personalization service at the database system 102 tokenizes the individual pieces of user data and then generates a corresponding personal model for the user that numerically or mathematically represents the set of user data based on the tokenized user data. For example, the textual content of a particular file, document, record, transcript, database object or other piece of data associated with the individual user at the database system 102 may be input to an encoder model or other word embedding algorithm to generate a corresponding vector or numerical representation of the textual content. In this regard, in some implementations, the textual content of the particular piece of user data may be lemmatized, normalized, and/or divided into smaller segments prior to tokenization or embedding to improve the relationship between the numerical representation of the respective piece of user data and the semantic and/or syntactic content of that respective piece of user data. The resulting personal model for the individual user may be realized as a bag-of-words model or another suitable model including one or more matrices that captures the different numerical or vector representations of the different pieces of user data associated with the user.
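For example, a bag-of-words style personal model may be constructed with a term frequency-inverse document frequency (TF-IDF) vectorizer, as in the following minimal sketch; the actual encoder model or word embedding algorithm is not limited to TF-IDF, and the document contents are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Textual content of the user's selected data records 114 (illustrative).
user_docs = [
    "Acme account renewal notes: customer prefers quarterly billing.",
    "Call transcript: customer asked about API rate limits.",
    "Opportunity record: 40-seat expansion planned for Q3.",
]

# Tokenize/normalize each piece of user data and embed it as a row
# vector; the resulting matrix serves as the numerical personal model.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
personal_model = vectorizer.fit_transform(user_docs)  # shape: (num_docs, vocab)
```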
After generating a personal model for a particular user, the chatbot customization process 200 continues by receiving or otherwise obtaining a conversational user input associated with that particular user, selecting or otherwise identifying a subset of user data most relevant to the received conversational user input using the personal model, and providing an augmented conversational user input to the chatbot using the identified subset of user data (tasks 208, 210, 212). For example, in response to receiving a conversational user input at the application platform 124, the contextual personalization service at the database system 102 performs the same techniques utilized to generate the numerical representations of that user's data (e.g., tokenization, lemmatization, normalization and/or encoding) to generate a corresponding numerical representation or vector word embedding of the received conversational user input. After generating a numerical representation of the received conversational user input, the contextual personalization service at the database system 102 utilizes the user's personal model to identify a subset of the user's data that is most relevant to the received conversational user input. For example, the contextual personalization service at the database system 102 may utilize cosine similarity, Euclidean distance, or other mathematical techniques to identify which vectors or matrices of the user's personal bag-of-words model are closest to the numerical vector representation of the received conversational user input.
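Continuing the sketch above, the same transformation embeds the received conversational user input, and cosine similarity ranks the user's records by relevance:

```python
from sklearn.metrics.pairwise import cosine_similarity

user_input = "Draft an email about the customer's planned seat expansion."

# Embed the conversational input with the SAME transformation used to
# build the personal model, then rank user records by cosine similarity.
query_vec = vectorizer.transform([user_input])
scores = cosine_similarity(query_vec, personal_model).ravel()
ranked_docs = [user_docs[i] for i in scores.argsort()[::-1]]  # closest first
```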
After identifying the subset of user data closest to the received conversational user input using the model, the contextual personalization service at the database system 102 selects or otherwise obtains the textual content of the closest pieces of user data for use in grounding or otherwise providing additional personalized context associated with the conversational user input to the LLM chatbot 152 and/or third party system 150. In some implementations, the contextual personalization service at the database system 102 may select a fixed number of pieces of user data (e.g., the ten closest pieces of user data) for augmenting the conversational user input, while in other implementations, the contextual personalization service at the database system 102 may select or otherwise obtain textual content from the closest pieces of user data until the cumulative amount of words or characters obtained by the contextual personalization service at the database system 102 reaches a maximum threshold number of words or characters supported for input to the LLM chatbot 152 and/or third party system 150. For example, in implementations where the LLM chatbot 152 and/or third party system 150 supports a maximum number of characters to be input to a chatbot, the contextual personalization service at the database system 102 may select or otherwise obtain additional supplemental textual content from the closest pieces of user data until the total number of characters between the conversational user input and the retrieved textual content from the user data is equal to the maximum number of characters supported by the LLM chatbot 152.
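The character-budget selection described above may be sketched as follows, where MAX_PROMPT_CHARS is an assumed constant standing in for the maximum input size supported by the LLM chatbot 152:

```python
MAX_PROMPT_CHARS = 4000  # assumed input limit of the LLM chatbot 152

def select_context(user_input: str, ranked_docs: list) -> list:
    """Accumulate the closest pieces of user data until the combined size
    of the conversational input and retrieved text reaches the limit."""
    budget = MAX_PROMPT_CHARS - len(user_input)
    selected = []
    for doc in ranked_docs:          # closest pieces of user data first
        if len(doc) > budget:
            break
        selected.append(doc)
        budget -= len(doc)
    return selected
```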
After obtaining the desired amount of supplemental textual content from the user data for augmenting the conversational user input, the contextual personalization service at the database system 102 automatically generates an augmented conversational user input prompt by combining, summarizing or otherwise amalgamating the user's supplemental textual content with the textual content of the conversational user input in a manner that preserves the semantic and/or syntactic nature of the conversational user input while conveying the user's supplemental textual content as related information, knowledge or context associated with the conversational user input. In this regard, the augmented conversational user input prompt generated by the contextual personalization service at the database system 102 may be structured or formatted such that the chatbot at the LLM chatbot 152 and/or third party system 150 ingests or interprets the user's textual content as relevant to the conversational user input while generating a conversational response to the conversational user input. As a result, the chatbot at the LLM chatbot 152 and/or third party system 150 automatically generates a personalized conversational response that is responsive to the conversational user input but accounts for or otherwise reflects the user's supplemental textual content.
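One plausible way to structure the augmented prompt so that the LLM ingests the user's supplemental text as grounding context is sketched below; the template wording is illustrative only:

```python
def build_augmented_prompt(user_input: str, context_docs: list) -> str:
    """Preserve the semantics of the original input while presenting the
    user's supplemental text as related background knowledge."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Use the following background information about the user when "
        "answering; treat it as relevant context:\n"
        f"{context}\n\n"
        f"User request: {user_input}"
    )
```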
The chatbot customization process 200 continues by transmitting or otherwise providing the personalized conversational response to the received conversational user input for presentation to the user on behalf of the chatbot (task 214). In this regard, the chatbot service 142 at the database system 102 receives the autogenerated personalized conversational response provided by the LLM chatbot 152 and/or third party system 150 and provides a corresponding conversational response to the application platform 124 and/or virtual application 140 for rendering or displaying the conversational response to the received conversational user input within the context of the conversation depicted on or within the GUI display associated with the virtual application 140 and/or chatbot service 142 at the client device 108. In some implementations, a contextual personalization component of the chatbot service 142 at the database system 102 may modify, alter or otherwise augment the personalized conversational response to generate an augmented personalized conversational response to be provided within the context of the virtual application 140 provided by the application platform 124 that reflects the individual's user data or other models or digital representations of the user maintained in the database 106 at the database system 102. For example, depending on the particular user's education level, experience, job title, employer, industry, and/or the like, a contextual personalization service at the database system 102 may augment, tailor or otherwise fine-tune the autogenerated personalized conversational response provided by the LLM chatbot 152 to utilize vocabulary or syntax that is specific to the particular individual user's characteristics or preferences, which are not known or otherwise available to the LLM chatbot 152 and/or third party system 150. Similarly, the contextual personalization service at the database system 102 may remove textual content from the autogenerated personalized conversational response that is extraneous, superfluous or otherwise irrelevant to the particular individual submitting the conversational user input based on that individual user's data maintained in the database 106 at the database system 102. In this manner, the resulting augmented personalized conversational response that is generated by the chatbot service 142 at the application platform 124 reflects the individual user's available data along with the user's background knowledge, experience, and other personal preferences or idiosyncrasies that are not known or available to the LLM chatbot 152 and/or third party system 150, thereby improving the user experience and usefulness of the conversational response to the conversational user input.
After generating the personal model for the particular user, the user subsequently interacts with a GUI display of the virtual application 140 and/or the chatbot service 142 to input or otherwise provide 310 a conversational user input to the application platform 124 over the network 110. The virtual application 140 provides 312 the conversational user input to the chatbot service 142 to invoke or otherwise initiate the contextual personalization service provided by the chatbot service 142. As described above, in response to receiving 312 the conversational user input, the chatbot service 142 performs the same techniques utilized to generate the personal model (e.g., tokenization, lemmatization, normalization and/or encoding) to generate a corresponding numerical representation or vector word embedding of the received conversational user input. After generating a numerical representation of the received conversational user input, the chatbot service 142 searches, queries or otherwise utilizes 314 the user's personal model in the database 106 to identify a subset of the user's data records 114 that are most relevant to the received conversational user input (e.g., using cosine similarity, Euclidean distance, or other mathematical techniques) and then selects, retrieves or otherwise obtains the textual content of the closest data record(s) 114 for supplementing the received conversational user input.
After obtaining the desired amount of supplemental textual content from the user's data records 114 for augmenting the conversational user input, the chatbot service 142 automatically generates an augmented conversational user input prompt incorporating the textual content of the received conversational user input with the textual content derived from the user's data records 114 and then transmits or otherwise provides 316 the augmented conversational user input prompt to the LLM chatbot 152 at the third party system 150 over the network 110. As described above, the augmented conversational user input prompt may be structured or formatted such that the supplemental textual content from the user's data records 114 is ingested or interpreted by the LLM chatbot 152 as grounding information or other contextual information that is specific to the particular user and relevant to the associated textual content of the received conversational user input. In response, the LLM chatbot 152 automatically generates a personalized conversational response to the textual content of the conversational user input that also accounts for or otherwise reflects the user's supplemental textual content that was input to the LLM chatbot 152 as grounding information or otherwise for grounding purposes. The LLM chatbot 152 then automatically transmits or otherwise provides 318 the autogenerated conversational response back to the chatbot service 142 responsive to the augmented conversational user input prompt.
As described above, in one or more implementations, after receiving the autogenerated conversational response from the LLM chatbot 152, the contextual personalization component of the chatbot service 142 may apply an additional layer of personalization by further augmenting or modifying the autogenerated personalized conversational response from the LLM chatbot 152 in a manner that reflects the particular user's data records 114 and/or personal model(s) maintained in the database 106. For example, the chatbot service 142 may subsequently query 320 the database 106 for information indicative of the particular user's education level, experience, job title, employer, industry, and/or the like to further augment, tailor or otherwise fine-tune the autogenerated personalized conversational response provided by the LLM chatbot 152 to utilize vocabulary or syntax that is specific to the particular individual user's characteristics or preferences, which are not known or otherwise available to the LLM chatbot 152 and/or third party system 150 (e.g., by removing textual content that is extraneous, superfluous or otherwise irrelevant, etc.). Thereafter, the chatbot service 142 provides 322 the resulting augmented personalized conversational response to the virtual application 140 at the application platform 124 for transmitting or otherwise providing 324 the personalized conversational response to the received conversational user input back to the client application 109 for presentation to the user on behalf of the chatbot service 142 within a GUI display at the client application 109. For example, in exemplary implementations, the virtual application 140 may dynamically update a graphical representation of a conversation depicted within a chat window or other GUI associated with the virtual application 140 to include a graphical representation of the textual content of the personalized conversational response as an utterance on behalf of the chatbot service 142 that is responsive to or otherwise follows the one or more utterances associated with the user that contain the conversational user input that formed the basis of the augmented conversational user input prompt that the personalized conversational response is responsive to. In this manner, the user of the client device 108 may perceive the received conversational response as having emanated from the chatbot service 142 and/or the LLM chatbot 152.
Referring now to
Still referring to
In exemplary implementations, after identifying an intended action to be automatically performed by the personalization agent service 400, the agent management component 410 invokes or otherwise interacts with the contextual personalization service 402 to determine a plan or sequence for performing the intended action using the user data 406 associated with that particular user using the LLM chatbot service 152. In this regard, after determining the intended action to be performed, the agent management component 410 queries or otherwise interacts with the contextual personalization service 402 to retrieve or otherwise obtain user data 406 to be utilized to ground, supplement or otherwise augment an input prompt to be provided to the LLM chatbot service 152 to obtain a new plan for performing the intended action. For example, in a similar manner as described above, the agent management component 410 may interact with the contextual personalization service 402 to utilize the user's personal model 460 maintained in the database 106 to identify a subset of the user's data records 114 that are most relevant to the received conversational user input and obtain the textual content from the data record(s) 114 for supplementing the received conversational user input. Additionally, the agent management component 410 may interact with the contextual personalization service 402 to retrieve or otherwise obtain textual content characterizing a prior plan 464 that was previously utilized by the personalization agent service 400 to perform a same or similar action (e.g., based on cosine similarity, Euclidean distance, or other similarity between the intended action and a prior action associated with a prior plan 464).
After obtaining a relevant subset of user data 406 pertaining to the intended action, the agent management component 410 automatically generates an input prompt for the LLM chatbot service 152 requesting the LLM chatbot service 152 formulate or otherwise provide a plan for performing the intended action using the supplemental textual content from the user's data records 114 and/or prior plans 464 for the user. In this manner, the agent management component 410 grounds an input prompt asking how to perform the intended action with information specific to the particular user to tailor the resulting response provided by the LLM chatbot service 152 in a user-specific manner. The agent management component 410 transmits or otherwise provides the personalized augmented input prompt for how to perform the intended action to the LLM chatbot service 152, which, in turn, automatically generates a conversational response comprising a new autogenerated plan for performing the intended action based on the personalized augmented input prompt using the supplemental textual content obtained from the user data 406 maintained in the database 106. In this regard, the autogenerated plan provided by the LLM chatbot service 152 may include a sequence of steps (or sub-actions) to be performed using the LLM chatbot service 152 and/or other auxiliary services 404 to obtain a result that corresponds to performance of the intended action. For example, when the intended action corresponds to sending an email to schedule a meeting with a particular contact, the autogenerated plan provided by the LLM chatbot service 152 may include a sequence of steps identifying the order or manner in which the personalization agent service 400 should retrieve or otherwise analyze calendar data for the particular contact or other prospective meeting attendees to identify availability, identify a particular date, time and/or location for the meeting based on the calendar data, retrieve the email addresses or other information associated with the particular contact or other prospective meeting attendees from the corresponding contact data records 114 in the database 106, and then automatically generate the textual content for the email to be sent to the retrieved email addresses using the LLM chatbot service 152.
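A sketch of such a grounded planning request and the expected shape of the returned plan follows; the JSON step format and the llm_complete callable are assumptions for illustration and do not define the actual prompt or plan structure:

```python
import json

def request_plan(action: str, grounding: list, llm_complete) -> list:
    """Ask the LLM for an ordered plan of steps/sub-actions, grounded in
    user-specific context; 'llm_complete' is a hypothetical callable and
    the JSON step schema is assumed for illustration."""
    prompt = (
        'Return a JSON list of steps, each {"service": ..., "args": ...}, '
        f"to accomplish: {action}\n"
        "Relevant user context and prior plans:\n"
        + "\n".join(f"- {item}" for item in grounding)
    )
    return json.loads(llm_complete(prompt))

# For the meeting-scheduling example, the returned plan might resemble:
# [{"service": "calendar",  "args": {"attendees": ["contact@example.com"]}},
#  {"service": "scheduler", "args": {}},
#  {"service": "email",     "args": {"to": ["contact@example.com"]}}]
```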
After receiving a new autogenerated plan for performing the intended action from the LLM chatbot 152, the agent management component 410 provides the plan to a plan validation component 412 of the personalization agent service 400, which generally represents the software component of the personalization agent service 400 that interacts with the contextual personalization service 402 to verify or otherwise confirm that the plan aligns or otherwise conforms with the particular user based on the user data 406 maintained in the database 106. In this regard, in exemplary implementations, the contextual personalization service 402 stores or otherwise maintains user profile data 462 associated with a respective user having a corresponding personal model 460, where the user profile data 462 includes user-specific information indicative of the user's personal preferences or settings. For example, in the context of a CRM virtual application 140, the user profile data 462 may include information identifying different sales objectives or other CRM-related objectives for the user for different timeframes or contexts. In this regard, a user may define different objectives for different time periods, such as, for example, maximizing new sales or some other CRM-related metric within an upcoming quarter or other shorter term time period, while maximizing revenue growth or some other CRM-related metric year over year or some other longer term time period. In this regard, the plan validation component 412 may validate that the new autogenerated plan provided by the agent management component 410 aligns with the individual user's objectives. Additionally, the user profile data 462 may include other user preference information, including, but not limited to, preferred vendors or third parties, blacklisted vendors or third parties, corporate governance parameters or factors, social equity parameters or factors, and/or the like, which, in turn, may be utilized by the plan validation component 412 to validate that the new autogenerated plan provided by the agent management component 410 aligns with the individual user's personal preferences.
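For illustration, validation against the user profile data 462 might resemble the following sketch, with blacklists and preferred alternatives represented as assumed dictionary fields:

```python
def validate_plan(plan: list, profile: dict) -> list:
    """Return the profile violations for a candidate plan; an empty list
    means the plan aligns with the user's preferences. The profile keys
    are illustrative assumptions."""
    blacklisted = set(profile.get("blacklisted_services", []))
    preferred = profile.get("preferred_alternatives", {})  # service -> substitute
    violations = []
    for step in plan:
        service = step["service"]
        if service in blacklisted:
            violations.append(f"step uses blacklisted service: {service}")
        elif service in preferred:
            violations.append(f"user prefers {preferred[service]} over {service}")
    return violations
```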
Still referring to
After arriving at a validated plan for achieving the intended action, the plan validation component 412 provides the validated new autogenerated plan to an execution agent component 414, which generally represents the software component of the personalization agent service 400 that sequentially executes the steps or sub-actions of the plan in the defined order to arrive at a result corresponding to performance of the intended action in the manner dictated by the autogenerated plan. In exemplary implementations, for each constituent step or sub-action of the plan, the execution agent component 414 may interact with the contextual personalization service 402 to verify or otherwise confirm that the individual constituent action to be performed by the execution agent component 414 is consistent with or otherwise aligns with the user data 406 maintained in the database 106 prior to performing the respective step. For example, the user profile data 462 may include security data or preferences, privacy data or preferences, and other information that may be utilized by the contextual personalization service 402 to analyze, assess or otherwise determine a risk associated with performance of the respective step by the execution agent component 414. In this regard, when the contextual personalization service 402 identifies that a risk metric associated with a particular step or sub-action of the plan is greater than a notification threshold or otherwise fails to satisfy the applicable risk or permissions logic, the contextual personalization service 402 may provide a corresponding indication to the execution agent component 414 to pause execution of the respective step until receiving authorization from the user (e.g., human-in-the-loop). In such scenarios, the execution agent component 414 may interact with the virtual application 140 and/or chatbot service 142 to automatically generate a conversational response or other user notification that includes information characterizing or otherwise pertaining to the particular risk associated with the respective step or sub-action for user approval in connection with a button or similar GUI element for receiving user authorization to proceed with execution of the respective step or sub-action.
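A minimal sketch of this human-in-the-loop gate follows, where risk_of and ask_user are hypothetical callables supplied by the contextual personalization service 402 and the GUI layer, respectively, and the threshold value is an assumption:

```python
NOTIFY_THRESHOLD = 0.7  # assumed notification threshold for the risk metric

def gate_step(step: dict, risk_of, ask_user) -> bool:
    """Human-in-the-loop gate: execute a step automatically only when its
    risk metric clears the threshold; otherwise pause and surface the risk
    to the user for explicit authorization."""
    risk = risk_of(step)            # scored from security/privacy profile data
    if risk <= NOTIFY_THRESHOLD:
        return True                 # low risk: proceed without interruption
    return ask_user(f"Step '{step['service']}' carries risk {risk:.2f}. Proceed?")
```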
When the execution agent component 414 determines the particular step or sub-action of the plan is authorized or otherwise permitted based on the user data 406, the execution agent component 414 automatically interacts with the LLM chatbot 152 or another auxiliary service 404 to facilitate performance of the respective step or sub-action. In this regard, the auxiliary service 404 generally represents an API or other service associated with the application platform 124 at the database system 102 or another external computing system 150 that is capable of providing a response to the execution agent component 414 that includes textual content or other data or information responsive to a particular request provided by the execution agent component 414.
For example, for a step or sub-action associated with scheduling a meeting, the execution agent component 414 may utilize an API associated with an external or third party calendar service 404 on the network 110 to obtain data indicative of the events and respective timing of the events for a particular meeting attendee. Thereafter, in a subsequent step or sub-action, the execution agent component 414 may provide the obtained calendar data to an API associated with a scheduling algorithm or service 404 configurable to identify a particular date and/or time for scheduling the meeting based on the calendar data. To invite participants to the meeting, the execution agent component 414 may utilize an API associated with the application platform 124 to obtain email addresses or other contact information from the appropriate contact data records 114 in the database 106. Thereafter, the execution agent component 414 may provide an input prompt to the LLM chatbot 152 that includes the meeting scheduling information to obtain a corresponding conversational response from the LLM chatbot 152 including autogenerated textual content for a body of an email to be sent to the meeting invitees. In this regard, in some implementations, the execution agent component 414 may retrieve and utilize supplemental data from the user's personal model 460 to facilitate generating the textual content in a manner that reflects the individual user's knowledge, experience, grammar, usage and other personal preferences or idiosyncrasies, such that the autogenerated textual content emulates the individual user. The execution agent component 414 may then utilize an API associated with the application platform 124 to automatically generate an email and corresponding email data record 114 in the database 106 having a body populated with the autogenerated textual content from the LLM chatbot 152 conveying the scheduling information and a to field (or destination address field) that is populated with the email addresses previously obtained from the appropriate contact data records 114 in the database 106. In this manner, the execution agent component 414 sequentially executes the steps or sub-actions of the plan previously generated by the agent management component 410 using the LLM chatbot 152 to arrive at a result (e.g., an email data record 114) that corresponds to performance of the intended action (e.g., sending an email to schedule a meeting).
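Putting the pieces together, the sequential execution of a validated plan may be sketched as a dispatch loop over registered auxiliary-service clients; the services mapping and step shape are illustrative assumptions rather than the actual APIs:

```python
def execute_plan(plan: list, services: dict, gate) -> list:
    """Sequentially execute each step of a validated plan by dispatching it
    to the registered auxiliary-service client; 'services' maps service
    names to callables (e.g., calendar, scheduler, email API clients) and
    'gate' is a human-in-the-loop check such as gate_step above."""
    results = []
    for step in plan:                           # the defined order matters
        if not gate(step):
            break                               # user declined a risky step
        handler = services[step["service"]]
        results.append(handler(**step.get("args", {})))
    return results
```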
Still referring to
Referring to
After identifying the intended action to be performed, the agent management component 410 of the personalization agent service 400 utilizes the user's personal model 460, prior execution plans 464 and potentially other user data 406 maintained in the database 106 to generate a grounded personalized input prompt requesting an execution plan for performing the intended action to be provided to the LLM chatbot 152 or another suitable AI system, such as a GPT-based chatbot or the like. In this regard, the subject matter described herein is not limited to any particular type of chatbot or AI system or AI techniques to be implemented by the system or service invoked to generate the execution plan, where the system or service invoked may be implemented at the database system 102 or an external computing system 150 on the network 110. After generating a personalized input prompt for performing the intended action that is grounded with information from the user's personal model 460 or prior execution plans 464, the agent management component 410 of the personalization agent service 400 transmits or otherwise provides the grounded personalized input prompt to the LLM chatbot 152 to receive a corresponding conversational response from the LLM chatbot 152 that includes textual content indicative of a sequence of steps or sub-actions to be executed to achieve the intended action by the user.
The personalization agent service 400 receives or otherwise obtains the execution plan for performing the intended action corresponding to the user's current objective from an AI system using the generated prompt and then verifies or otherwise confirms that the execution plan aligns with the individual user on whose behalf the action is being performed (tasks 506, 508). In this regard, after receiving the execution plan from the LLM chatbot 152 or other external AI system 150, a plan validation component 412 of the personalization agent service 400 validates the execution plan against the individual user's profile data 462 and/or other user data 406 maintained in the database 106. For example, the plan validation component 412 of the personalization agent service 400 may verify or otherwise confirm that a step of the execution plan does not involve use of an auxiliary service 404 that is blacklisted by the user or associated with a third party system 150 blacklisted by the user. Additionally or alternatively, the plan validation component 412 of the personalization agent service 400 may verify or otherwise confirm that a step of the execution plan does not involve use of an auxiliary service 404 or third party system 150 where another auxiliary service 404 or third party system 150 that is more preferred by the individual user is capable of analogously performing the same step of the execution plan. In this manner, the plan validation component 412 may verify and validate an execution plan that utilizes auxiliary services 404 or third party systems 150 that align with the individual user's corporate governance preferences, social equity preferences, vendor preferences, third party preferences, and other parameters or factors indicative of the individual user's values or beliefs. When the execution plan fails to be validated, the personalization agent service 400 repeats the loop defined by tasks 504, 506 and 508 to iteratively and dynamically adjust the input prompt to include additional grounding user data 406 until arriving at an execution plan that aligns with the individual user's profile or preferences defined at the database system 102.
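The validate-and-regenerate loop described above might be sketched as follows. The profile fields (blacklisted_services, preferred_services) and the step schema are illustrative assumptions; the key point is that validation failures are fed back as additional grounding for the next prompt.

```python
# Illustrative sketch only: profile keys and step fields are assumptions.
def validate_plan(steps: list[dict], profile: dict) -> list[str]:
    """Return a list of alignment problems; an empty list means the plan is valid."""
    problems = []
    for step in steps:
        service = step["service"]
        if service in profile.get("blacklisted_services", set()):
            problems.append(
                f"step '{step['action']}' uses blacklisted service {service}")
        preferred = profile.get("preferred_services", {}).get(step["action"])
        if preferred and preferred != service:
            problems.append(
                f"step '{step['action']}' uses {service} "
                f"but the user prefers {preferred}")
    return problems


def plan_until_aligned(generate_plan, profile, max_attempts=3):
    """Loop corresponding to tasks 504-508: regenerate with extra grounding."""
    extra_grounding = []
    for _ in range(max_attempts):
        steps = generate_plan(extra_grounding)
        problems = validate_plan(steps, profile)
        if not problems:
            return steps
        extra_grounding.extend(problems)  # feed violations back into the prompt
    raise RuntimeError("could not produce an aligned plan: " + "; ".join(problems))
```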
After validating the execution plan, the personalization agent service 400 continues by sequentially executing the constituent steps or sub-actions of the execution plan in the defined order to achieve a result corresponding to performance of the intended action by the user and then automatically provides a corresponding response to the user indicative of performance of the intended action (tasks 510, 512). As described above, the execution agent component 414 utilizes the APIs, auxiliary services 404 or other systems appropriate for each respective step to execute the steps of the plan in sequence.
In exemplary implementations, the personalized agent execution process 600 begins by generating an input prompt for a respective step of the execution plan and providing the input prompt to the LLM chatbot 152 or other suitable AI system to receive a corresponding executable response for invoking the particular service capable of performing the respective execution step (tasks 602, 604).
After receiving an executable response for the execution step, the personalized agent execution process 600 verifies or otherwise confirms that the executable response is aligned with the individual user's profile or settings before invoking the particular service for performing execution of the respective execution step using the executable response (tasks 606, 608). In this regard, when the executable response received from the LLM chatbot 152 aligns with the individual user's profile data 462 and/or personal model 460, the execution agent component 414 executes or otherwise performs the executable response from the LLM chatbot 152 to invoke an auxiliary service 404 to perform the respective step of the execution plan. As described above, in some implementations, the execution agent component 414 calculates or otherwise determines a risk metric associated with performance of the respective execution step based on the executable response using the user data 406 to verify or otherwise confirm that the respective execution step does not violate any risk thresholds or other permissions or settings associated with the user.
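The pre-execution risk check might be sketched as follows. The risk-weight table and the risk_threshold profile setting are illustrative assumptions; the point is simply that a step is only invoked when its computed risk stays within the user's configured tolerance.

```python
# Illustrative sketch only: RISK_WEIGHTS and the profile's risk_threshold
# setting are assumptions introduced for this example.
RISK_WEIGHTS = {"read": 0.1, "write": 0.4, "send_external": 0.8, "payment": 1.0}


def risk_metric(executable: dict) -> float:
    """Score an executable response by the riskiest operation it performs."""
    return max(RISK_WEIGHTS.get(op, 0.5) for op in executable["operations"])


def execute_if_permitted(executable: dict, profile: dict, invoke) -> bool:
    """Invoke the auxiliary service only when the step stays within the
    user's risk threshold (tasks 606, 608); otherwise defer to the caller."""
    if risk_metric(executable) > profile.get("risk_threshold", 0.5):
        return False  # caller falls through to alternative search / notification
    invoke(executable)
    return True
```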
After performing a respective step of the execution plan, the personalized agent execution process 600 determines whether or not the execution plan has been completed, and if not, repeats the loop defined by tasks 602, 604, 606, 608 and 610 to sequentially execute each step of the execution plan until reaching the end of the execution plan sequence. In this regard, when the executable response from the LLM chatbot or other AI system for a respective step does not align with the individual user's profile or settings or otherwise violates applicable risk thresholds, permissions or settings, the personalized agent execution process 600 continues by identifying or otherwise determining whether an alternative for performing the respective execution step that is also aligned with the user's profile is available or otherwise exists. In this regard, when a particular response from the LLM chatbot 152 is not aligned with the user's profile or otherwise violates a risk threshold or setting associated with the user, the execution agent component 414 may utilize the user's personal model 460, profile data 462 and/or other user data 406 to identify an alternative for performing the respective step. For example, the execution agent component 414 may interact with the contextual personalization service 402 to obtain a relevant subset of user data 406 to be utilized for augmenting or otherwise modifying the grounding data provided with the input prompt to the LLM chatbot 152 in a manner that is likely to cause the LLM chatbot 152 to generate an executable response that is aligned with the individual user's profile or settings.
When an alternative is available, the personalized agent execution process 600 invokes the alternative service for performing execution of the respective execution step using the alternative executable response (task 614), before repeating the loop defined by tasks 602, 604, 606, 608, 610 and 614 to continue progressing through the execution plan. On the other hand, when an alternative is unable to be automatically or autonomously identified, the personalized agent execution process 600 automatically generates or otherwise provides notification to the user indicative of the misaligned step of the execution plan (task 616). In this regard, the execution agent component 414 may interact with the response generator 416 to automatically generate a conversational response, a push notification, or another user notification indicative of the misalignment associated with a respective execution step for the execution plan. For example, the user notification may include information identifying the potential risk(s) associated with performing the respective execution step or otherwise identify the particular corporate governance preferences, social equity preferences, vendor preferences, third party preferences, and other parameters or factors that are implicated by the respective execution step. Accordingly, the user may be provided with the opportunity to authorize the personalization agent service 400 to proceed with the respective execution step, and thereby enable the user to control the behavior or manner in which the personalization agent service 400 achieves the user's intended action. In this regard, when the user authorizes performance of a misaligned step, the loop defined by tasks 602, 604, 606, 608, 610 and 614 may continue until completion of the execution plan.
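The fallback path for a misaligned step might be sketched as follows, assuming hypothetical find_alternative, notify_user and invoke callables: try an aligned alternative first (task 614), otherwise notify the user and proceed only upon authorization (task 616).

```python
# Illustrative sketch only: find_alternative, notify_user and invoke are
# hypothetical callables supplied by the surrounding agent.
def handle_misaligned_step(step, profile, find_alternative, notify_user, invoke):
    alternative = find_alternative(step, profile)  # e.g., re-prompt with more grounding
    if alternative is not None:
        invoke(alternative)          # task 614: proceed with the aligned alternative
        return True
    # Task 616: surface the implicated preferences or risks and ask for approval.
    approved = notify_user(
        f"Step '{step['action']}' conflicts with your preferences "
        f"({', '.join(step.get('violations', []))}). Proceed anyway?"
    )
    if approved:
        invoke(step)                 # user authorization keeps the plan moving
    return approved
```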
By virtue of the personalization agent service 400 described herein, intended actions may be automatically performed on behalf of an individual user using execution plans, auxiliary services 404 and autogenerated content that align with the user's personal model 460, profile data 462, applicable risk thresholds and other preferences maintained at the database system 102, such that the resulting automated behavior emulates the individual user while remaining subject to the user's control and authorization.
One or more parts of the above implementations may include software. Software is a general term whose meaning can range from part of the code and/or metadata of a single computer program to the entirety of multiple programs. A computer program (also referred to as a program) comprises code and optionally data. Code (sometimes referred to as computer program code or program code) comprises software instructions (also referred to as instructions). Instructions may be executed by hardware to perform operations. Executing software includes executing code, which includes executing instructions. The execution of a program to perform a task involves executing some or all of the instructions in that program.
An electronic device (also referred to as a device, computing device, computer, etc.) includes hardware and software. For example, an electronic device may include a set of one or more processors coupled to one or more machine-readable storage media (e.g., non-volatile memory such as magnetic disks, optical disks, read only memory (ROM), Flash memory, phase change memory, solid state drives (SSDs)) to store code and optionally data. For instance, an electronic device may include non-volatile memory (with slower read/write times) and volatile memory (e.g., dynamic random-access memory (DRAM), static random-access memory (SRAM)). Non-volatile memory persists code/data even when the electronic device is turned off or when power is otherwise removed, and the electronic device copies that part of the code that is to be executed by the set of processors of that electronic device from the non-volatile memory into the volatile memory of that electronic device during operation because volatile memory typically has faster read/write times. As another example, an electronic device may include a non-volatile memory (e.g., phase change memory) that persists code/data when the electronic device has power removed, and that has sufficiently fast read/write times such that, rather than copying the part of the code to be executed into volatile memory, the code/data may be provided directly to the set of processors (e.g., loaded into a cache of the set of processors). In other words, this non-volatile memory operates as both long term storage and main memory, and thus the electronic device may have no or only a small amount of volatile memory for main memory.
In addition to storing code and/or data on machine-readable storage media, typical electronic devices can transmit and/or receive code and/or data over one or more machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other forms of propagated signals—such as carrier waves, and/or infrared signals). For instance, typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagated signals) with other electronic devices. Thus, an electronic device may store and transmit (internally and/or with other electronic devices over a network) code and/or data with one or more machine-readable media (also referred to as computer-readable media).
Software instructions (also referred to as instructions) are capable of causing (also referred to as operable to cause and configurable to cause) a set of processors to perform operations when the instructions are executed by the set of processors. The phrase “capable of causing” (and synonyms mentioned above) includes various scenarios (or combinations thereof), such as instructions that are always executed versus instructions that may be executed. For example, instructions may be executed: 1) only in certain situations when the larger program is executed (e.g., a condition is fulfilled in the larger program; an event occurs such as a software or hardware interrupt, user input (e.g., a keystroke, a mouse-click, a voice command); a message is published, etc.); or 2) when the instructions are called by another program or part thereof (whether or not executed in the same or a different process, thread, lightweight thread, etc.). These scenarios may or may not require that a larger program, of which the instructions are a part, be currently configured to use those instructions (e.g., may or may not require that a user enables a feature, the feature or instructions be unlocked or enabled, the larger program is configured using data and the program's inherent functionality, etc.). As shown by these exemplary scenarios, “capable of causing” (and synonyms mentioned above) does not require “causing” but the mere capability to cause. While the term “instructions” may be used to refer to the instructions that when executed cause the performance of the operations described herein, the term may or may not also refer to other instructions that a program may include. Thus, instructions, code, program, and software are capable of causing operations when executed, whether the operations are always performed or sometimes performed (e.g., in the scenarios described previously). The phrase “the instructions when executed” refers to at least the instructions that when executed cause the performance of the operations described herein but may or may not refer to the execution of the other instructions.
Electronic devices are designed for and/or used for a variety of purposes, and different terms may reflect those purposes (e.g., user devices, network devices). Some user devices are designed to mainly be operated as servers (sometimes referred to as server devices), while others are designed to mainly be operated as clients (sometimes referred to as client devices, client computing devices, client computers, or end user devices; examples of which include desktops, workstations, laptops, personal digital assistants, smartphones, wearables, augmented reality (AR) devices, virtual reality (VR) devices, mixed reality (MR) devices, etc.). The software executed to operate a user device (typically a server device) as a server may be referred to as server software or server code, while the software executed to operate a user device (typically a client device) as a client may be referred to as client software or client code. A server provides one or more services to one or more clients.
The term “user” refers to an entity (e.g., an individual person) that uses an electronic device. Software and/or services may use credentials to distinguish different accounts associated with the same and/or different users. Users can have one or more roles, such as administrator, programmer/developer, and end user roles. As an administrator, a user typically uses electronic devices to administer them for other users, and thus an administrator often works directly and/or indirectly with server devices and client devices.
During operation, an instance of the software 728 (illustrated as instance 706 and referred to as a software instance; and in the more specific case of an application, as an application instance) is executed. In electronic devices that use compute virtualization, the set of one or more processor(s) 722 typically execute software to instantiate a virtualization layer 708 and one or more software container(s) 704A-704R (e.g., with operating system-level virtualization, the virtualization layer 708 may represent a container engine (such as Docker Engine by Docker, Inc. or rkt in Container Linux by Red Hat, Inc.) running on top of (or integrated into) an operating system, and it allows for the creation of multiple software containers 704A-704R (representing separate user space instances and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; with full virtualization, the virtualization layer 708 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and the software containers 704A-704R each represent a tightly isolated form of a software container called a virtual machine that is run by the hypervisor and may include a guest operating system; with para-virtualization, an operating system and/or application running with a virtual machine may be aware of the presence of virtualization for optimization purposes). Again, in electronic devices where compute virtualization is used, during operation, an instance of the software 728 is executed within the software container 704A on the virtualization layer 708. In electronic devices where compute virtualization is not used, the instance 706 on top of a host operating system is executed on the “bare metal” electronic device 700. The instantiation of the instance 706, as well as the virtualization layer 708 and software containers 704A-704R if implemented, are collectively referred to as software instance(s) 702.
Alternative implementations of an electronic device may have numerous variations from that described above. For example, customized hardware and/or accelerators might also be used in an electronic device.
The system 740 is coupled to user devices 780A-780S over a network 782. The service(s) 742 may be on-demand services that are made available to one or more of the users 784A-784S working for one or more entities other than the entity which owns and/or operates the on-demand services (those users sometimes referred to as outside users) so that those entities need not be concerned with building and/or maintaining a system, but instead may make use of the service(s) 742 when needed (e.g., when needed by the users 784A-784S). The service(s) 742 may communicate with each other and/or with one or more of the user devices 780A-780S via one or more APIs (e.g., a REST API). In some implementations, the user devices 780A-780S are operated by users 784A-784S, and each may be operated as a client device and/or a server device. In some implementations, one or more of the user devices 780A-780S are separate ones of the electronic device 700 or include one or more features of the electronic device 700.
In some implementations, the system 740 is a multi-tenant system (also known as a multi-tenant architecture). The term multi-tenant system refers to a system in which various elements of hardware and/or software of the system may be shared by one or more tenants. A multi-tenant system may be operated by a first entity (sometimes referred to as a multi-tenant system provider, operator, or vendor; or simply a provider, operator, or vendor) that provides one or more services to the tenants (in which case the tenants are customers of the operator and sometimes referred to as operator customers). A tenant includes a group of users who share a common access with specific privileges. The tenants may be different entities (e.g., different companies, different departments/divisions of a company, and/or other types of entities), and some or all of these entities may be vendors that sell or otherwise provide products and/or services to their customers (sometimes referred to as tenant customers). A multi-tenant system may allow each tenant to input tenant specific data for user management, tenant-specific functionality, configuration, customizations, non-functional properties, associated applications, etc. A tenant may have one or more roles relative to a system and/or service. For example, in the context of a customer relationship management (CRM) system or service, a tenant may be a vendor using the CRM system or service to manage information the tenant has regarding one or more customers of the vendor. As another example, in the context of Data as a Service (DAAS), one set of tenants may be vendors providing data and another set of tenants may be customers of different ones or all of the vendors' data. As another example, in the context of Platform as a Service (PAAS), one set of tenants may be third-party application developers providing applications/services and another set of tenants may be customers of different ones or all of the third-party application developers.
Multi-tenancy can be implemented in different ways. In some implementations, a multi-tenant architecture may include a single software instance (e.g., a single database instance) which is shared by multiple tenants; other implementations may include a single software instance (e.g., database instance) per tenant; yet other implementations may include a mixed model; e.g., a single software instance (e.g., an application instance) per tenant and another software instance (e.g., database instance) shared by multiple tenants. In one implementation, the system 740 is a multi-tenant cloud computing architecture supporting multiple services, such as one or more of the following types of services: Customer relationship management (CRM); Configure, price, quote (CPQ); Business process modeling (BPM); Customer support; Marketing; External data connectivity; Productivity; Database-as-a-Service; Data-as-a-Service (DAAS or DaaS); Platform-as-a-service (PAAS or PaaS); Infrastructure-as-a-Service (IAAS or IaaS) (e.g., virtual machines, servers, and/or storage); Analytics; Community; Internet-of-Things (IoT); Industry-specific; Artificial intelligence (AI); Application marketplace (“app store”); Data modeling; Authorization; Authentication; Security; and Identity and access management (IAM). For example, system 740 may include an application platform 744 that enables PAAS for creating, managing, and executing one or more applications developed by the provider of the application platform 744, users accessing the system 740 via one or more of user devices 780A-780S, or third-party application developers accessing the system 740 via one or more of user devices 780A-780S.
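The shared-instance, per-tenant-instance and mixed models described above might be contrasted with the following minimal sketch, in which a router resolves each tenant to a database instance. The connection strings and resolve() policy are illustrative assumptions.

```python
# Illustrative sketch only: DSN strings and the routing policy are assumptions.
class TenantRouter:
    def __init__(self, shared_dsn: str, per_tenant_dsn: dict[str, str]):
        self.shared_dsn = shared_dsn          # single instance shared by tenants
        self.per_tenant_dsn = per_tenant_dsn  # dedicated instance per tenant

    def resolve(self, tenant_id: str) -> str:
        """Mixed model: use a dedicated instance when provisioned, else the shared one."""
        return self.per_tenant_dsn.get(tenant_id, self.shared_dsn)


router = TenantRouter(
    shared_dsn="postgres://db-shared/crm",
    per_tenant_dsn={"acme": "postgres://db-acme/crm"},
)
assert router.resolve("acme") == "postgres://db-acme/crm"      # per-tenant model
assert router.resolve("globex") == "postgres://db-shared/crm"  # shared model
```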
In some implementations, one or more of the service(s) 742 may use one or more multi-tenant databases 746, as well as system data storage 750 for system data 752 accessible to system 740. In certain implementations, the system 740 includes a set of one or more servers that are running on server electronic devices and that are configured to handle requests for any authorized user associated with any tenant (there is no server affinity for a user and/or tenant to a specific server). The user devices 780A-780S communicate with the server(s) of system 740 to request and update tenant-level data and system-level data hosted by system 740, and in response the system 740 (e.g., one or more servers in system 740) automatically may generate one or more Structured Query Language (SQL) statements (e.g., one or more SQL queries) that are designed to access the desired information from the multi-tenant database(s) 746 and/or system data storage 750.
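A server with no tenant affinity might generate such tenant-scoped SQL along the following lines. The table and column names are illustrative, and real systems would also enforce authorization server-side; the sketch shows only the parameterized, tenant-restricted query construction.

```python
# Illustrative sketch only: schema names are assumptions introduced here.
def tenant_scoped_query(table: str, columns: list[str], tenant_id: str):
    """Build a parameterized SELECT restricted to one tenant's rows."""
    if not all(name.isidentifier() for name in [table, *columns]):
        raise ValueError("unexpected identifier")  # guard against SQL injection
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE tenant_id = %s"
    return sql, (tenant_id,)


sql, params = tenant_scoped_query("contacts", ["name", "email"], "tenant-42")
# -> "SELECT name, email FROM contacts WHERE tenant_id = %s", ("tenant-42",)
```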
In some implementations, the service(s) 742 are implemented using virtual applications dynamically created at run time responsive to queries from the user devices 780A-780S and in accordance with metadata, including: 1) metadata that describes constructs (e.g., forms, reports, workflows, user access privileges, business logic) that are common to multiple tenants; and/or 2) metadata that is tenant specific and describes tenant specific constructs (e.g., tables, reports, dashboards, interfaces, etc.) and is stored in a multi-tenant database. To that end, the program code 760 may be a runtime engine that materializes application data from the metadata; that is, there is a clear separation of the compiled runtime engine (also known as the system kernel), tenant data, and the metadata, which makes it possible to independently update the system kernel and tenant-specific applications and schemas, with virtually no risk of one affecting the others. Further, in one implementation, the application platform 744 includes an application setup mechanism that supports application developers' creation and management of applications, which may be saved as metadata by save routines. Invocations to such applications, including the server-side services and/or client-side services, may be coded using Procedural Language/Structured Object Query Language (PL/SOQL) that provides a programming language style interface. Invocations to applications may be detected by one or more system processes, which manage retrieving application metadata for the tenant making the invocation and executing the metadata as an application in a software container (e.g., a virtual machine).
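The metadata-driven model described above might be sketched as follows: a generic runtime engine materializes a tenant's application by merging common constructs with tenant-specific constructs at run time, so the engine, tenant data and metadata can evolve independently. The metadata schema shown is an illustrative assumption.

```python
# Illustrative sketch only: the metadata keys (forms, reports, tables,
# dashboards) are assumptions introduced for this example.
COMMON_METADATA = {"forms": ["login"], "reports": ["usage"]}
TENANT_METADATA = {
    "acme": {"tables": ["invoices"], "dashboards": ["sales"]},
}


def materialize_app(tenant_id: str) -> dict:
    """Merge common constructs with tenant-specific constructs at run time."""
    app = dict(COMMON_METADATA)
    app.update(TENANT_METADATA.get(tenant_id, {}))
    return app


print(materialize_app("acme"))
# {'forms': ['login'], 'reports': ['usage'], 'tables': ['invoices'], 'dashboards': ['sales']}
```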
Network 782 may be any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration. The network may comply with one or more network protocols, including an Institute of Electrical and Electronics Engineers (IEEE) protocol, a Third Generation Partnership Project (3GPP) protocol, a fourth generation wireless protocol (4G) (e.g., the Long Term Evolution (LTE) standard, LTE Advanced, LTE Advanced Pro), a fifth generation wireless protocol (5G), and/or similar wired and/or wireless protocols, and may include one or more intermediary devices for routing data between the system 740 and the user devices 780A-780S.
Each user device 780A-780S (such as a desktop personal computer, workstation, laptop, Personal Digital Assistant (PDA), smartphone, smartwatch, wearable device, augmented reality (AR) device, virtual reality (VR) device, etc.) typically includes one or more user interface devices, such as a keyboard, a mouse, a trackball, a touch pad, a touch screen, a pen or the like, video or touch free user interfaces, for interacting with a graphical user interface (GUI) provided on a display (e.g., a monitor screen, a liquid crystal display (LCD), a head-up display, a head-mounted display, etc.) in conjunction with pages, forms, applications and other information provided by system 740. For example, the user interface device can be used to access data and applications hosted by system 740, and to perform searches on stored data, and otherwise allow one or more of users 784A-784S to interact with various GUI pages that may be presented to the one or more of users 784A-784S. User devices 780A-780S might communicate with system 740 using TCP/IP (Transmission Control Protocol/Internet Protocol) and, at a higher network level, use other networking protocols to communicate, such as Hypertext Transfer Protocol (HTTP) or HTTP Secure (HTTPS), File Transfer Protocol (FTP), Andrew File System (AFS), Wireless Application Protocol (WAP), Network File System (NFS), an application program interface (API) based upon protocols such as Simple Object Access Protocol (SOAP), Representational State Transfer (REST), etc. In an example where HTTP is used, one or more user devices 780A-780S might include an HTTP client, commonly referred to as a “browser,” for sending and receiving HTTP messages to and from server(s) of system 740, thus allowing users 784A-784S of the user devices 780A-780S to access, process and view information, pages and applications available to them from system 740 over network 782.
In the above description, numerous specific details such as resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding. The invention may be practiced without such specific details, however. In other instances, control structures, logic implementations, opcodes, means to specify operands, and full software instruction sequences have not been shown in detail since those of ordinary skill in the art, with the included descriptions, will be able to implement what is described without undue experimentation.
References in the specification to “one implementation,” “an implementation,” “an example implementation,” etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, and/or characteristic is described in connection with an implementation, one skilled in the art would know how to effect such feature, structure, and/or characteristic in connection with other implementations whether or not explicitly described.
For example, the figure(s) illustrating flow diagrams sometimes refer to the figure(s) illustrating block diagrams, and vice versa. Whether or not explicitly described, the alternative implementations discussed with reference to the figure(s) illustrating block diagrams also apply to the implementations discussed with reference to the figure(s) illustrating flow diagrams, and vice versa. At the same time, the scope of this description includes implementations, other than those discussed with reference to the block diagrams, for performing the flow diagrams, and vice versa.
Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations and/or structures that add additional features to some implementations. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain implementations.
The detailed description and claims may use the term “coupled,” along with its derivatives. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
While the flow diagrams in the figures show a particular order of operations performed by certain implementations, such order is exemplary and not limiting (e.g., alternative implementations may perform the operations in a different order, combine certain operations, perform certain operations in parallel, overlap performance of certain operations such that they are partially in parallel, etc.).
While the above description includes several example implementations, the invention is not limited to the implementations described and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus illustrative instead of limiting. Accordingly, details of the exemplary implementations described above should not be read into the claims absent a clear intention to the contrary.
This application claims the benefit of U.S. Provisional Application No. 63/506,298, filed Jun. 5, 2023, which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. ______ (Attorney Docket No. 102.0492US1), filed concurrently herewith.