Digital content providers increasingly distribute digital surveys and collect digital data from network users and other audiences. As digital content proliferates and the need for collecting information regarding network users and other audiences increases, digital content providers send and collect an increased amount of data. This increase in collected and received digital data complicates administering feedback from digital surveys and responding to additional information collected from network users and other audiences.
Collecting and receiving digital information (e.g., digital surveys and additional information such as digital journeys) generally requires digital content providers to undergo a complicated and tedious process. For instance, digital content providers are generally required to configure a step-by-step response to various events identified from data received within their platform. In particular, digital content providers can configure a specific response for specific incoming information (e.g., if response X is received, then perform Y). However, despite the ability of digital content providers to design workflows to have specific responses to specific situations, digital content providers face a variety of technological problems in the realm of collecting and receiving digital information.
For example, in configuring how to collect and receive digital information, digital content providers are typically required to undergo a tedious and computationally demanding process. Specifically, digital content providers are typically required to customize responses and actions to take for each type of survey or digital journey information or data. In doing so, digital content providers consume a high number of computational resources to tailor appropriate responses and actions to collecting certain types of data (e.g., survey responses or digital journey data) and further face challenges with differentiating between different types of data and determining which system or entity should handle the different types of data.
Due to the sheer volume of digital survey data and additional digital information collected, digital content providers also struggle with efficiently analyzing incoming data. When digital content providers capture data, the large amount of information consumes significant resources to process data to identify the type, topic, and sentiment, for example, of feedback within the data. Furthermore, due to the significant number of resources required to process the journey data, digital content providers struggle with efficiently assigning action steps associated with the feedback to the correct segment of an entity or organization.
Moreover, some digital content providers, even after expending a large number of computational resources to process the journey or feedback data, incorrectly assign action steps associated with the feedback. For instance, due to survey responses and additional information containing vague or indirect information, conventional systems struggle to accurately process the journey data. In many instances, conventional systems are unable to accurately gauge the sentiment and context surrounding feedback data and, as such, inaccurately identify actions to take and/or route the actions to be taken to the wrong segment of an entity or organization.
Furthermore, digital content providers typically provide a response to a user of a respondent device in response to receiving a survey response or additional information from a respondent of a respondent device. Digital content providers do so to confirm for the user that their survey response or additional information was received. However, digital content providers struggle with operational flexibility. For instance, digital content providers struggle to generate tailored and accurate responses in a computationally timely manner in response to data received from a respondent. In particular, due to digital content providers suffering from the above-mentioned efficiency and accuracy issues, operational flexibility issues are further exacerbated.
Accordingly, these and other disadvantages decrease the utility of conventional digital survey systems and display mediums.
This disclosure describes solutions to some or all of the foregoing problems with systems and methods that collect survey responses and additional information from network users and other audiences. For instance, the systems and methods implement intelligent action loops that incorporate machine learning for routing feedback to segment(s) of an entity and responding to a user's survey response or additional information in the moment. For example, the systems and methods define a prompt for a large language model that references dynamic experience data content. Further, the systems and methods receive an instance of the experience data from a respondent of a respondent device and send the experience data instance along with the defined prompt to the large language model. Moreover, the systems and methods receive a model output from the large language model, where the model output was generated based on the large language model analyzing the experience data instance according to the prompt. In doing so, the disclosed systems and methods further determine additional downstream actions to perform with respect to the experience data instance based on the model output.
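The action loop described above can be illustrated with a minimal sketch. The template placeholders, the stubbed model call, and the urgency-based routing rule below are hypothetical assumptions for illustration, not the disclosed system's actual API:

```python
def define_prompt(template: str, experience_data: dict) -> str:
    """Fill a prompt template that references dynamic experience data content."""
    return template.format(**experience_data)

def process_experience_instance(instance: dict, template: str, call_model) -> dict:
    """Send an experience data instance with the defined prompt; act on the model output."""
    prompt = define_prompt(template, instance)
    model_output = call_model(prompt)  # stand-in for a large language model call
    # Determine a downstream action based on the model output.
    if model_output.get("urgency") == "high":
        return {"action": "route_to_admin", "output": model_output}
    return {"action": "send_acknowledgement", "output": model_output}
```

In this sketch, the same loop handles any experience data type because the prompt, rather than hand-configured per-type logic, tells the model how to analyze the instance.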
The following description sets forth additional features and advantages of one or more embodiments of the disclosed systems and methods. In some cases, such features and advantages will be obvious to a skilled artisan from the description or may be learned by the practice of the disclosed embodiments.
The detailed description refers to the drawings briefly described below.
This disclosure describes one or more embodiments of an XM input processing system that defines a prompt for a large language model and receives a model output from the large language model analyzed according to the defined prompt. By defining the prompt for the large language model that references dynamic experience data content, in one or more embodiments, the XM input processing system efficiently and accurately analyzes experience data instances according to the defined prompt. For instance, the XM input processing system generates a model output by parsing an experience data instance according to the prompt and determines an action to perform. Specifically, in some embodiments the action to perform can include sending the model output to a determined administrator client device.
For instance, in some embodiments, the XM input processing system defines additional prompts to provide to the large language model. In some such instances, the XM input processing system defines an additional prompt that references the model output. Moreover, based on the model output and the additional defined prompt, the XM input processing system receives an additional output (e.g., a tailored email or message) from the large language model. Specifically, in some embodiments, the XM input processing system can send the additional output to a respondent device that sent the experience data instance (e.g., a survey response).
Additionally, in some embodiments, the XM input processing system defines additional prompts for additional actions to occur. For instance, in some embodiments, the XM input processing system defines an additional prompt that references the model output to further receive an additional model output from the large language model. In some embodiments, the additional model output includes a list of recommendations for an administrator, a digital gift card, a coupon, or a suggested response (e.g., reviewed by an administrator prior to sending to a respondent of a respondent device).
The XM input processing system further defines prompts to analyze data according to various factors and categories. For instance, in some embodiments, the XM input processing system defines a prompt to determine responsive engagement factors and experience data content categories. In particular, in some embodiments, the responsive engagement factors include sentiment, intensity, and urgency level. Further, in some embodiments, the XM input processing system in defining the prompt indicates to the large language model to identify specific content categories such as request, suggestion, or bug. Moreover, in some embodiments, the XM input processing system also indicates a task to the large language model that includes summarizing or translating the experience data instance.
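A prompt of the kind described above can be sketched as a simple template. The exact wording sent to the model is an assumption; only the factor and category names mirror the description:

```python
# Illustrative factor and category lists drawn from the description above.
ENGAGEMENT_FACTORS = ["sentiment", "intensity", "urgency level"]
CONTENT_CATEGORIES = ["request", "suggestion", "bug"]

def build_analysis_prompt(experience_data_instance: str, task: str = "summarize") -> str:
    """Define a prompt indicating engagement factors, content categories, and a task."""
    return (
        f"Analyze the following feedback and report its {', '.join(ENGAGEMENT_FACTORS)}. "
        f"Classify it as one of: {', '.join(CONTENT_CATEGORIES)}. "
        f"Then {task} the feedback.\n\nFeedback: {experience_data_instance}"
    )
```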
In one or more embodiments, the XM input processing system identifies contextual data (e.g., contextual information). For instance, in some embodiments, the XM input processing system defines a prompt to reference dynamic experience data content plus content apart from the experience data content. Specifically, the XM input processing system can define a prompt to reference previously received experience data content associated with a specific user. In doing so, the XM input processing system adds context for the large language model to generate a more accurate model output. In particular, the XM input processing system receives the model output from the large language model generated based on contextual data, the defined prompt, and the experience data instance.
In some embodiments, the XM input processing system implements a unique combination of components to address technological issues of conventional digital content provider systems. Specifically, in some embodiments, the XM input processing system, by defining a prompt for a large language model that references dynamic experience data content and sending the experience data instance along with the prompt, overcomes technological problems faced by conventional digital content providers. For instance, in some embodiments, the XM input processing system automates many of the previously tedious and manual tasks to overcome technological problems (e.g., accuracy, efficiency, and operational flexibility problems) such as accounting for variations in data received, understanding the data, and performing downstream tasks in response to the received data.
For instance, as discussed above, conventional digital content providers face issues of complicated and tedious processes of configuring a step-by-step response for each data type. Unlike conventional digital content providers, in some embodiments the XM input processing system allows for an efficient and non-complicated process of defining a prompt. Specifically, in some embodiments, defining a prompt delegates the tedious and complicated process of accounting for and understanding various data types to a large language model. Further, in some embodiments, the XM input processing system implements the defined prompt that dynamically references experience data content to receive various outputs from the large language model. Further, in some embodiments, based on the various outputs, the XM input processing system also performs various downstream actions. As mentioned previously, the downstream actions include the XM input processing system routing model output to a relevant segment of an entity or organization or providing additional defined prompts to the large language model (e.g., to receive recommendations based on the model output).
Further, similar to above, in some embodiments the XM input processing system overcomes computational inefficiencies of tedious personalization for each type of data by identifying different types of feedback within data and assigning corresponding action steps. In particular, in some embodiments the XM input processing system does so by sending the experience data instance with the defined prompt to the large language model. For instance, the XM input processing system receives the first output from the large language model which was generated based on analyzing the experience data instance according to the prompt. As such, in some embodiments, the XM input processing system receives the first output with the type of feedback identified by the large language model and based on the identified type of feedback determines a corresponding action step.
Further, in some embodiments, the XM input processing system also overcomes computational accuracy issues of identifying the content of feedback within data. For instance, in some embodiments, the XM input processing system does so by defining a prompt and providing the defined prompt along with the experience data instance to the large language model. In doing so, the XM input processing system overcomes accuracy issues by parsing data according to a defined prompt to understand vague or indirect information. Further, in some embodiments, the XM input processing system receives the model output from the large language model analyzed according to the prompt without having to rely upon manual and tedious processes of analyzing data. As such, the XM input processing system accurately determines specific segments of an entity to route the model output to and/or performs accurate actions in response to the received data.
As discussed above, in some embodiments, by improving upon efficiency and accuracy, the XM input processing system also improves upon operational flexibility. For instance, the XM input processing system defines a prompt according to various responsive engagement factors and experience data content categories to send to the large language model along with experience data instances. As such, in some embodiments, the XM input processing system flexibly operates with a wide range of data types in an efficient and accurate manner. Moreover, in some embodiments, the XM input processing system also flexibly executes actions in response to receiving model outputs from the large language model (e.g., the XM input processing system sends a tailored response to a respondent in the moment).
Turning now to the figures,
As illustrated in
Although
In general, the administrator device 108 and the respondent devices 112 communicate with server device(s) 106, including the XM input processing system 102 within the survey system 104, over a network 110. As described below, the server device(s) 106 enable various functions, features, processes, methods, and systems described herein using, for example, the XM input processing system 102. Additionally, or alternatively, the server device(s) 106 coordinate with the administrator device 108, and/or the respondent devices 112 to perform or provide the various functions, features, processes, methods, and systems described in more detail below. Although
Within the arrangement shown in
Additionally, the server device(s) 106 can include one or more computing devices, including those explained below with reference to
As an overview of the environment 100, the server device(s) 106 provide the administrator device 108 access to the XM input processing system 102 through the network 110. In one or more embodiments, by accessing the survey system 104, the server device(s) 106 provide one or more digital documents (e.g., webpages) to the administrator device 108 to allow the administrator device 108 via the administrator application 109 to compose a digital survey. The digital documents include tools and options that facilitate composing a digital survey for distribution to the respondent devices 112. Further, in one or more embodiments, by accessing the XM input processing system 102, the server device(s) 106 further access a large language model 105. By accessing the XM input processing system 102 and components such as the large language model 105, the server device(s) 106 receive various model outputs to route to the administrator device 108 or to respondent devices 112.
Moreover, as shown in
Referring back now to
Upon receiving a digital survey, the respondent device 112a, for example, presents the textual queries of the digital survey to the survey respondent 120a. The survey respondent 120a may respond to textual queries within the digital survey by providing user input via the respondent device application 114a (e.g., by selecting an answer using a touch screen or a mouse, or by inputting text data using a keyboard). After the survey respondent 120a replies to a textual query (within a digital survey) using the respondent device application 114a, the respondent device application 114a instructs the respondent device 112a to send a data packet representing a response to the server device(s) 106. Upon receipt of the data packet, the survey system 104 directs the storage and analysis of the data packet to the XM input processing system 102. For instance, the survey system 104 passes the data packet to the XM input processing system 102 to send to the large language model 105 along with a defined prompt. In response, the XM input processing system 102 receives model outputs from the large language model 105 and sends the model outputs to the administrator device 108 or to the respondent devices 112 via the server device(s) 106.
Turning now to
For ease of reference, the following paragraphs describe the XM input processing system 102 as performing one or more of the acts described below rather than the server device(s) 106. As suggested above, the XM input processing system 102 comprises computer-executable instructions that cause the server device(s) 106 to perform one or more of the acts described below. Rather than repeatedly describe the relationship between the instructions within the XM input processing system 102, on the one hand, and the server device(s) 106, on the other hand, this disclosure will describe the XM input processing system 102 as performing the acts as a shorthand for that relationship. Additionally, while the paragraphs below often describe the acts of the XM input processing system 102 in relation to a single digital survey question, or a single digital journey, certain embodiments of the acts described below involve multiple digital survey questions, multiple digital journeys, and/or multiple responses.
As discussed above, the XM input processing system 102 defines a prompt to reference dynamic experience data content. In one or more embodiments, the XM input processing system 102 receives experience data content from one or more respondent devices. For instance, experience data content includes a survey response(s) or digital journey data. Additional details regarding the survey response and the digital journey data are described below in
As shown in
Further, as shown in
As shown in
As just mentioned, and as shown in
As also shown in
As further shown in
Moreover, as shown,
As mentioned previously, in one or more embodiments, the XM input processing system 102 sends the experience data instance 202 in the moment to the large language model 206 and receives the model output 210 in the moment. For example, in the moment includes milliseconds to a couple of seconds (1 millisecond to 10 seconds). For instance, upon the XM input processing system 102 receiving the experience data instance 202, the XM input processing system 102 passes the data packet 204 to the large language model 206 within the aforementioned time range and receives the model output 210 from the large language model. Accordingly, the whole process of the XM input processing system 102 receiving the model output 210 occurs within the range of 1 millisecond to 10 seconds. Furthermore, in some embodiments, the XM input processing system 102 also sends a message based on a model output from the large language model 206 to a respondent device within a couple of moments (e.g., a time range of a couple of milliseconds to a couple of seconds).
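The in-the-moment timing constraint described above can be sketched as a wrapper that measures the model round trip against the stated window. The function name and the check-after-return design are illustrative assumptions:

```python
import time

def call_model_in_the_moment(call_model, data_packet, max_seconds: float = 10.0):
    """Call the model and confirm the round trip stays within the in-the-moment window
    (1 millisecond to 10 seconds in the description above)."""
    start = time.monotonic()
    output = call_model(data_packet)
    elapsed = time.monotonic() - start
    if elapsed > max_seconds:
        raise TimeoutError(f"round trip took {elapsed:.2f}s, exceeding {max_seconds}s")
    return output
```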
Turning now to
As mentioned above,
In order to receive a survey response, the survey system 104 sends out a digital survey to a respondent device. The term “digital survey” refers to a digital communication that collects information concerning one or more respondents by capturing information from (or posing questions to) such respondents. Accordingly, a digital survey may include one or more digital survey questions. The term “digital survey question” in turn refers to a prompt within a digital communication that invokes a response from a respondent. A digital survey question may include a textual query. The term “textual query” refers to human-readable characters that form a question. For example, a textual query includes interrogative sentences (e.g., “How are you?”) and imperative sentences (e.g., “Please identify the clothing brand you prefer”) written in text. Textual queries may come in various formats, including but not limited to, multiple choice, open-ended, ranking, scoring, summation, demographic, dichotomous, differential, cumulative, dropdown, matrix, net promoter score (“NPS”), single textbox, heat map, or any other type of formatting prompt that invokes a response from a respondent.
In one or more embodiments, the XM input processing system 102 receives the experience data instance from the respondent device as a survey response. For instance, the survey response includes the XM input processing system 102 receiving feedback from a respondent device. Moreover, in some embodiments, the survey response includes a textual response.
In one or more embodiments, the XM input processing system 102 receives the experience data instance from the respondent device as a digital journey. For example, a digital journey includes a sequence of events performed by a respondent of the respondent device. For instance, the digital journey includes the respondent of the respondent device navigating within a specific application. Specifically, the digital journey can include starting at a home page of an application, selecting an element on the home page to transition to a second page of an application, and then selecting the account profile element within the application. Moreover, the XM input processing system 102 can define the digital journey to span a predetermined amount of time and/or steps. Furthermore, the digital journey can include data relating to a sequence of events such as purchase price of an item, browser type, gender, or age group.
In one or more embodiments, the XM input processing system 102 receives the experience data instance as an agent-client interaction. For example, agent-client interaction data includes conversations extracted from a phone call (e.g., digital or analog), conversations between an agent and client on a mobile/web application or a messaging application (e.g., email), or an interaction between an agent and client on a social media application. For instance, the agent-client interaction includes in a call center environment an agent finishes a call with a client and queries a call summary from the XM input processing system 102. In some such embodiments, the XM input processing system 102 receives from the large language model a suggestion to take specific automated actions based on the call summary.
Moreover, as mentioned,
In one or more embodiments, the sentiment of an experience data instance includes the underlying emotion or attitude conveyed. For instance, the sentiment of the experience data instance includes an emotional tone such as positive, negative, or neutral. Further, the disclosed system receives the determined sentiment of the experience data instance to make further decisions such as sending the experience data instance to a specific administrator client device.
In one or more embodiments, the intensity of an experience data instance includes the strength or degree of the sentiment expressed. For instance, the intensity indicates the level of positive, negative, or neutral from the sentiment of the experience data instance. Further, for a statement such as "I liked using the product" versus "I absolutely loved using the product," both statements include positive sentiment; however, the latter statement includes a higher intensity level. Specifically, in some embodiments the XM input processing system 102 indicates to the large language model to rank the intensity of the experience data instance on a scale of 1-10.
In one or more embodiments, the urgency of an experience data instance includes a level of priority of the experience data instance. For instance, the urgency includes categories such as low priority, mid priority, and high priority. Furthermore, the XM input processing system 102 indicates to the large language model to determine the urgency of an experience data instance. Although the above outlines responsive engagement factors 302 (e.g., enrichments) that include, but are not limited to, the sentiment 304, the intensity 306, and/or the urgency level 308, in some embodiments, the responsive engagement factors 302 further include (but are not limited to) actionability, emotion, emotional intensity, effort, loyalty, profanity, measurements (e.g., weight, distance, etc.), currency, conversation outcomes, participant outcomes, empathy scores, reasoning, perspective, contextual information, modality (e.g., level of certainty), audience awareness (e.g., what group is the message tailored for), emphasis, attitude, intent, cultural references, clarity, and mood.
Further,
Moreover,
Further,
Additionally,
As previously discussed,
In one or more embodiments, the XM input processing system 102 sends the contextual data 407 to the large language model 404 by dynamically referencing contextual data sources. Specifically, the XM input processing system 102, as part of defining the prompt references as part of the dynamic content contextual data sources associated with user accounts, such as referencing data streams that continuously capture previous respondent actions or interactions.
Additionally, in some embodiments, the contextual data 407 includes the XM input processing system 102 identifying previous digital journey data captured from the same respondent device. Furthermore, in some embodiments, the contextual data 407 includes integration of applications that capture a respondent's activities. For instance, by integrating applications that capture a respondent's activities, the prompt defining process can include dynamic references to specifically integrated applications to draw data that references a specific respondent. In other words, the XM input processing system 102 feeds information from sources in addition to the experience data instance to the large language model 404.
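Feeding contextual data sources to the model alongside the experience data instance can be sketched as appending dynamically resolved context to the defined prompt. The callable-source design and formatting below are illustrative assumptions:

```python
def attach_contextual_data(prompt: str, context_sources) -> str:
    """Append dynamically referenced contextual data (e.g., prior respondent
    journeys or integrated application data) to a defined prompt."""
    # Each source is a callable that resolves to the latest contextual items.
    lines = [f"- {item}" for source in context_sources for item in source()]
    if not lines:
        return prompt
    return prompt + "\nContext from prior interactions:\n" + "\n".join(lines)
```

Resolving the sources at send time, rather than baking their contents into the prompt, is what makes the reference dynamic: each new experience data instance is analyzed against the respondent's current history.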
As also discussed previously, the XM input processing system 102 receives from the large language model 404 a first output 408 (e.g., a model output as discussed in
In one or more embodiments, the additional prompt 410 includes the XM input processing system 102 defining a prompt to indicate the generation of a translation, an email or message via a messaging application, a digital gift card, a digital coupon, a code snippet, a function call, a list of recommendations, and/or a bug report. For instance, in defining the additional prompt 410 to indicate the generation of an email, the XM input processing system 102 indicates to use property values from the first output 408 to create a personalized email response. Specifically, the additional prompt 410 can include highlighting the specific problem identified from the experience data instance, the relevant administrative client devices that will address the problem, and the format/structure of the email along with information for providing additional feedback. Similarly, in some embodiments, defining the additional prompt 410 to indicate the generation of a message includes a similar process as just described, however, the format can be structured according to a specific messaging application.
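Defining an additional prompt that references property values from the first output can be sketched as below. The property names (`topic`, `sentiment`, `owning_team`) are hypothetical placeholders, not fields the system is stated to produce:

```python
def build_email_prompt(first_output: dict) -> str:
    """Define an additional prompt that uses property values from the first
    model output to request a personalized email response."""
    return (
        f"Draft a personalized email acknowledging the issue '{first_output['topic']}' "
        f"(sentiment: {first_output['sentiment']}). Mention that the "
        f"{first_output['owning_team']} team will follow up, and invite further feedback."
    )
```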
In some instances, the additional prompt 410 includes an indication to generate a translation of text. Specifically, the additional prompt 410 indicates to translate the first output 408 to a second language. Further, in some instances, the additional prompt 410 includes an indication to generate a digital gift card. For example, the XM input processing system 102 in defining the additional prompt 410 uses property values from the first output to draft a tailored apology addressing the identified issue and an embedded link for a digital gift card. In particular, to include the embedded link for the digital gift card, in one or more embodiments, the XM input processing system 102 integrates an application that creates digital gift cards. In doing so, the additional prompt 410 dynamically references the digital gift card application.
Further, in some embodiments, the additional prompt 410 includes an indication to generate a code snippet. Specifically, the additional prompt 410 includes the indication to generate a code snippet based on the first output 408 property values. For instance, the code snippet executes specific tasks such as fixing a product feature, changing a user interface, or updating account information based on an issue identified within the first output 408. Furthermore, in some instances, the XM input processing system 102 receives the code snippet to send it to an administrator client device.
In one or more embodiments, the additional prompt 410 includes an indication to generate a function call that conforms with an application programming interface (API) specification. In some such embodiments, the indication to generate the function call allows the XM input processing system 102 to take different actions based on the function call. Accordingly, the function call further enhances the flexibility of the XM input processing system 102 to execute actions previously undetermined.
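Checking that a model-generated function call conforms with an API specification can be sketched as a simple validator. The specification entries and call shape below are invented for illustration; no particular API is implied by the source:

```python
# Hypothetical API specification mapping function names to required parameters.
API_SPEC = {
    "send_gift_card": {"required": ["recipient_email", "amount"]},
    "create_bug_ticket": {"required": ["summary", "severity"]},
}

def validate_function_call(call: dict) -> bool:
    """Return True if a model-generated function call names a known function
    and supplies every required argument from the API specification."""
    spec = API_SPEC.get(call.get("name"))
    if spec is None:
        return False
    return all(param in call.get("arguments", {}) for param in spec["required"])
```

Validating before execution is what lets the system safely take previously undetermined actions: the model proposes the call, but only spec-conforming calls are executed.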
Moreover, in some embodiments, the additional prompt 410 includes an indication to generate a list of recommendations. Specifically, the additional prompt 410 includes generating a list of recommendations based on the property values of the first output 408. For instance, the list of recommendations includes action points for an administrator client device to perform or other possibilities such as sending a digital gift card, sending an apology email, fixing a bug, or any combination of the aforementioned.
As shown in
To illustrate, in one or more embodiments, the additional prompt 410 includes various conditions. For instance, if the XM input processing system 102 receives the first output 408 with a negative sentiment regarding a certain product feature (e.g., as identified by the large language model 404), the additional prompt 410 can include an indication to the large language model 404 to generate a ticket that identifies specific bug features in the survey response 402, where the ticket is formatted specifically for a bug reporting application.
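The conditional selection of an additional prompt described above can be sketched as below. The field names and the fallback prompt are illustrative assumptions:

```python
def select_additional_prompt(first_output: dict) -> str:
    """Choose an additional prompt based on conditions evaluated over the first
    model output (e.g., negative sentiment about a product feature)."""
    if first_output.get("sentiment") == "negative" and first_output.get("category") == "bug":
        return ("Generate a ticket identifying the specific bug features in the survey "
                "response, formatted for the bug reporting application.")
    return "Generate a brief thank-you message acknowledging the feedback."
```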
In some instance, the XM input processing system 102 receives the survey response 402 that includes a posted review on the internet (e.g., Google reviews, Yelp, etc.). In this instance, as part of sending the survey response 402 along with the defined prompt 400, the XM input processing system 102 can also send the contextual data 407 that includes previous responses to internet reviews. By feeding this type of contextual data to the large language model 404, the large language model 404 can generate a response with a similar tone, brand, and attitude that matches previous responses to internet reviews.
As discussed previously,
As shown,
Further,
For example,
As shown,
As further illustrated, the subsequent step includes a step 604 which shows a defined prompt as part of the workflow editor 600. Accordingly, as previously discussed, the step 604 includes the XM input processing system 102 sending a defined prompt along with the received survey response to the large language model. Further, as shown,
In one or more embodiments, rather than the XM input processing system 102 receiving a defined prompt or an additional defined prompt (e.g., steps 604 or 606), the XM input processing system 102 provides pre-defined options to the administrator client device. For instance, the pre-defined options include summarizing text, composing a reply to customer feedback, categorizing text, and/or translating text. Accordingly, selecting one or more of the pre-defined options allows the administrator client device to utilize pre-defined prompts and expedites the process for analyzing incoming data.
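Mapping the pre-defined options described above to prompt text can be sketched as a lookup table. The prompt wordings are hypothetical; only the four option names come from the description:

```python
# Pre-defined options mirroring the description; prompt texts are illustrative.
PREDEFINED_PROMPTS = {
    "summarize": "Summarize the following text in two sentences.",
    "reply": "Compose a courteous reply to this customer feedback.",
    "categorize": "Assign this text to one of: request, suggestion, bug.",
    "translate": "Translate the following text into English.",
}

def resolve_workflow_step(option: str) -> str:
    """Map an administrator's pre-defined option selection to its prompt text."""
    try:
        return PREDEFINED_PROMPTS[option]
    except KeyError:
        raise ValueError(f"unknown pre-defined option: {option}")
```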
Further,
For instance, the step 608 can include the XM input processing system 102 sending a notification to a relevant administrator client device to approve sending an email to a respondent within a predetermined time frame (e.g., 1 minute to 30 minutes) and, if the administrator client device fails to approve within the time frame, forgoing sending the email. Alternatively, in some embodiments, the step 608 includes sending a notification to the administrator client device to approve sending an email to a respondent within a predetermined time frame, and if the administrator client device fails to approve within the time frame, automatically sending the email.
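One possible shape for this timed-approval logic is a polling loop over the approval decision; the callbacks and return values below are hypothetical sketches, not the disclosed implementation:

```python
import time

def await_approval(request_approval, send_email, timeout_seconds,
                   auto_send_on_timeout=False):
    """Notify an administrator device, then either forgo or auto-send
    the email when the time frame elapses without a decision."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        decision = request_approval()  # poll the administrator client device
        if decision == "approved":
            send_email()
            return "sent"
        if decision == "denied":
            return "denied"
        time.sleep(1)  # wait before polling again
    if auto_send_on_timeout:
        send_email()
        return "sent_after_timeout"
    return "not_sent"
```

Switching `auto_send_on_timeout` selects between the two timeout behaviors described above.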
Moreover, in one or more embodiments, the step 608 includes executing code based on the model output to create a bug report. For instance, if the survey response included a bug report for a specific feature, the XM input processing system 102 can receive an additional model output that has a code snippet relevant to the survey response. Moreover, the step 608 can include the XM input processing system 102 running code to generate a bug report for the issue and including the code snippet in the bug report for the engineering team to review. Similar approval/denial mechanisms can be configured for other actions such as creating a bug report, sending a digital gift card, drafting an apology email, sending a message via a messaging application, or responding to an internet review.
Additionally, as shown in
Moreover, although
Furthermore, in one or more embodiments, the XM input processing system 102 provides an option to add a step in the workflow editor 600 for sanitizing a model output from a large language model. For instance, the XM input processing system 102 provides an option to add a step for security precautions such as checking a model output for a SQL injection or other type of malicious data. In doing so, the XM input processing system 102 prevents bad actors from injecting malicious code via the large language model to the respondent device.
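A sanitization step of this kind might be sketched as a pattern check over the model output before it reaches the respondent device; the patterns below are illustrative assumptions, and a production system would rely on a vetted sanitization library rather than ad hoc rules:

```python
import re

# Hypothetical patterns; a production system would use a vetted
# sanitization library rather than these illustrative checks.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\b(drop|delete|insert|update)\s+table\b"),
    re.compile(r"(?i)union\s+select"),
    re.compile(r"(?i)<script\b"),
]

def sanitize_model_output(output: str) -> str:
    """Reject model output that resembles SQL injection or script
    injection before it is forwarded to a respondent device."""
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(output):
            raise ValueError("model output failed sanitization check")
    return output
```

Output that fails the check would be blocked rather than delivered.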
Moreover, in one or more embodiments, the XM input processing system 102 provides via the workflow editor 600 an option to run tests for the various steps. For instance, the workflow editor 600 allows a test run to demonstrate the types of model outputs generated by the large language model and to further demonstrate executing code snippets or sending an email.
For example, as shown in
Further, the XM input processing system 102 defines the prompt 700 to analyze various responsive engagement factors. Specifically, the prompt 700 reads “analyze the sentiment, intensity, urgency for someone to respond, and themes.” Moreover, the XM input processing system 102 defines the prompt 700 to categorize the survey response. Specifically, the prompt 700 reads “categorize the response in one of the following categories: product feature request, product improvement suggestion, product bug, product configuration issue, documentation gap, user enablement issue, and other.”
Moreover, the XM input processing system 102 defines the prompt 700 to suggest a segment of administrator client devices to review the survey response. Specifically, the prompt 700 reads “also suggest which Qualtrics team should review this feedback: user experience, product and engineering, product support and resolution, customer success, website documentation and enablement/training team, professional services team, or sales leadership.”
In addition, the XM input processing system 102 indicates to the large language model to summarize the survey response. Specifically, the prompt 700 reads “give a summary of the feedback.” In one or more embodiments, the XM input processing system 102 can provide a variety of instructions such as “translate the feedback to Spanish.” Furthermore, the XM input processing system 102 indicates to the large language model to provide a model output in a specific format. Specifically, the prompt 700 reads “give back the response in JSON format with the following properties: first name, team, sentiment, intensity, urgency for someone to respond, themes, response summary, original response, and category. Only respond with the JSON object, don't include any explanations in your responses.”
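Requesting a JSON-only response makes the model output machine-checkable. A minimal parse-and-validate sketch, using the property names listed in the prompt 700 (the function name is a hypothetical illustration):

```python
import json

# Property names taken from the prompt 700 in the text above.
EXPECTED_KEYS = {
    "first name", "team", "sentiment", "intensity",
    "urgency for someone to respond", "themes",
    "response summary", "original response", "category",
}

def parse_model_output(raw: str) -> dict:
    """Parse the JSON object the prompt requested and verify that every
    requested property is present."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model output missing properties: {sorted(missing)}")
    return data
```

A validation failure could trigger re-prompting the large language model rather than forwarding a malformed output downstream.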
Moreover, as previously discussed,
As mentioned above,
For example, as shown in
For example,
As indicated by the example survey response in
For example,
Additionally, based on the survey response shown in
Moreover, the additional model output recites additional text to demonstrate a tailored and personalized email. For example, the additional model output recites “thank you for your suggestions regarding improvements to the survey platform. We particularly appreciate your input on adding an ‘unfold all’ option, setting display logic for ranges or criteria, and carrying over page breaks when importing questions. We will make sure to review these suggestions with our development team.” Accordingly,
Turning now to
As shown in
In particular, the act 902 includes defining a prompt for a large language model that references dynamic experience data content, the act 904 includes receiving an experience data instance from a respondent device, the act 906 includes sending the experience data instance with the prompt to the large language model, the act 908 includes receiving a first output from the large language model, the first output being generated based on the large language model analyzing the experience data instance according to the prompt, and the act 910 includes, based on the first output, determining at least one action to perform with respect to the experience data instance.
For example, in one or more embodiments, the series of acts 900 includes defining an additional prompt of the large language model that references the first output. Further, in one or more embodiments, the series of acts 900 includes, based on the first output and the additional prompt, receiving a second output from the large language model to send to the respondent device. Moreover, in some embodiments, the series of acts 900 includes wherein the experience data instance comprises one of a survey response or digital journey data. Additionally, in some embodiments, the series of acts 900 includes indicating a question type associated with the survey response.
For example, in one or more embodiments, the series of acts 900 includes indicating instructions to the large language model to determine responsive engagement factors that comprise at least one of a sentiment of the experience data instance, an intensity of the experience data instance, or an urgency level of the experience data instance. Further, in one or more embodiments, the series of acts 900 includes wherein the dynamic experience data content comprises experience data instances from a plurality of respondent devices. Moreover, in some embodiments, the series of acts 900 includes defining experience data content categories for the large language model. Additionally, in some embodiments, the series of acts 900 includes receiving from the large language model a determined category of the experience data instance based on the defined experience data content categories.
For example, in one or more embodiments, the series of acts 900 includes, based on the first output, determining an administrator client device to which to send the first output. Further, in one or more embodiments, the series of acts 900 includes determining, from the first output, responsive engagement factors and experience data content categories. Moreover, in some embodiments, the series of acts 900 includes selecting the administrator client device from a set of administrator devices based on the responsive engagement factors and the experience data content categories. Additionally, in some embodiments, the series of acts 900 includes identifying contextual data relating to the respondent device apart from the experience data instance. Moreover, in some embodiments, the series of acts 900 includes providing the contextual data to the large language model with the experience data instance and the prompt. Further, in some embodiments, the series of acts 900 includes receiving the first output from the large language model, the first output generated based on the contextual data, the prompt, and the experience data instance.
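The act of selecting an administrator client device might be sketched as a lookup keyed by content category with an urgency-based escalation; the registry layout and field names below are hypothetical illustrations, not the disclosed implementation:

```python
def select_administrator_device(first_output: dict, registry: dict) -> str:
    """Select an administrator client device based on the category and
    responsive engagement factors reported in the first model output."""
    category = first_output.get("category", "other")
    candidates = registry.get(category, registry["other"])
    # escalate high-urgency items to the first (e.g., on-call) device
    if first_output.get("urgency", "low") == "high":
        return candidates[0]
    return candidates[-1]
```

In this sketch, the registry maps each experience data content category to an ordered list of administrator client devices.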
In one or more embodiments, the processor 1002 includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions, the processor 1002 may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory 1004, or the storage device 1006 and decode and execute them. In one or more embodiments, the processor 1002 may include one or more internal caches for data, instructions, or addresses. As an example, and not by way of limitation, the processor 1002 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (“TLBs”). Instructions in the instruction caches may be copies of instructions in the memory 1004 or the storage device 1006.
The memory 1004 may be used for storing data, metadata, and programs for execution by the processor(s). The memory 1004 may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage. The memory 1004 may be internal or distributed memory.
The storage device 1006 includes storage for storing data or instructions. As an example, and not by way of limitation, storage device 1006 can comprise a non-transitory storage medium described above. The storage device 1006 may include a hard disk drive (“HDD”), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (“USB”) drive or a combination of two or more of these. The storage device 1006 may include removable or non-removable (or fixed) media, where appropriate. The storage device 1006 may be internal or external to the computing device 1000. In one or more embodiments, the storage device 1006 is non-volatile, solid-state memory. In other embodiments, the storage device 1006 includes read-only memory (“ROM”). Where appropriate, this ROM may be mask programmed ROM, programmable ROM (“PROM”), erasable PROM (“EPROM”), electrically erasable PROM (“EEPROM”), electrically alterable ROM (“EAROM”), or flash memory or a combination of two or more of these.
The I/O interface 1008 allows a user to provide input to, receive output from, and otherwise transfer data to and receive data from the computing device 1000. The I/O interface 1008 may include a mouse, a keypad or a keyboard, a touch screen, a camera, an optical scanner, network interface, modem, other known I/O devices or a combination of such I/O interfaces. The I/O interface 1008 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, the I/O interface 1008 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
The communication interface 1010 can include hardware, software, or both. In any event, the communication interface 1010 can provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device 1000 and one or more other computing devices or networks. As an example and not by way of limitation, the communication interface 1010 may include a network interface controller (“NIC”) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (“WNIC”) or wireless adapter for communicating with a wireless network, such as a WI-FI network.
Additionally, or alternatively, the communication interface 1010 may facilitate communications with an ad hoc network, a personal area network (“PAN”), a local area network (“LAN”), a wide area network (“WAN”), a metropolitan area network (“MAN”), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the communication interface 1010 may facilitate communications with a wireless PAN (“WPAN”) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (“GSM”) network), or other suitable wireless network or a combination thereof.
Additionally, the communication interface 1010 may facilitate communications using various communication protocols. Examples of communication protocols that may be used include, but are not limited to, data transmission media, communications devices, Transmission Control Protocol (“TCP”), Internet Protocol (“IP”), File Transfer Protocol (“FTP”), Telnet, Hypertext Transfer Protocol (“HTTP”), Hypertext Transfer Protocol Secure (“HTTPS”), Session Initiation Protocol (“SIP”), Simple Object Access Protocol (“SOAP”), Extensible Mark-up Language (“XML”) and variations thereof, Simple Mail Transfer Protocol (“SMTP”), Real-Time Transport Protocol (“RTP”), User Datagram Protocol (“UDP”), Global System for Mobile Communications (“GSM”) technologies, Code Division Multiple Access (“CDMA”) technologies, Time Division Multiple Access (“TDMA”) technologies, Short Message Service (“SMS”), Multimedia Message Service (“MMS”), radio frequency (“RF”) signaling technologies, Long Term Evolution (“LTE”) technologies, wireless communication technologies, in-band and out-of-band signaling technologies, and other suitable communications networks and technologies.
The communication infrastructure 1012 may include hardware, software, or both that couples components of the computing device 1000 to each other. As an example and not by way of limitation, the communication infrastructure 1012 may include an Accelerated Graphics Port (“AGP”) or other graphics bus, an Enhanced Industry Standard Architecture (“EISA”) bus, a front-side bus (“FSB”), a HYPERTRANSPORT (“HT”) interconnect, an Industry Standard Architecture (“ISA”) bus, an INFINIBAND interconnect, a low-pin-count (“LPC”) bus, a memory bus, a Micro Channel Architecture (“MCA”) bus, a Peripheral Component Interconnect (“PCI”) bus, a PCI-Express (“PCIe”) bus, a serial advanced technology attachment (“SATA”) bus, a Video Electronics Standards Association local (“VLB”) bus, or another suitable bus or a combination thereof.
This disclosure contemplates any suitable network 1106. As an example and not by way of limitation, one or more portions of network 1106 may include an ad hoc network, an intranet, an extranet, a virtual private network (“VPN”), a local area network (“LAN”), a wireless LAN (“WLAN”), a wide area network (“WAN”), a wireless WAN (“WWAN”), a metropolitan area network (“MAN”), a portion of the Internet, a portion of the Public Switched Telephone Network (“PSTN”), a cellular telephone network, or a combination of two or more of these. Network 1106 may include one or more networks 1106.
Links may connect the client device and server device 1102 to communication network 1106 or to each other. This disclosure contemplates any suitable links. In particular embodiments, one or more links include one or more wireline (such as for example Digital Subscriber Line (“DSL”) or Data Over Cable Service Interface Specification (“DOCSIS”)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (“WiMAX”)), or optical (such as for example Synchronous Optical Network (“SONET”) or Synchronous Digital Hierarchy (“SDH”)) links. In particular embodiments, one or more links each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link, or a combination of two or more such links. Links need not necessarily be the same throughout network environment 1100. One or more first links may differ in one or more respects from one or more second links.
In particular embodiments, client system 1108 may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate functionalities implemented or supported by client system 1108. As an example, and not by way of limitation, a client system 1108 may include any of the computing devices discussed above in relation to
In particular embodiments, client system 1108 may include a web browser, such as MICROSOFT INTERNET EXPLORER, GOOGLE CHROME, or MOZILLA FIREFOX, and may have one or more add-ons, plug-ins, or other extensions, such as TOOLBAR or YAHOO TOOLBAR. A user at client system 1108 may enter a Uniform Resource Locator (“URL”) or other address directing the web browser to a particular server (such as server, or a server associated with a third-party system), and the web browser may generate a Hyper Text Transfer Protocol (“HTTP”) request and communicate the HTTP request to server. The server may accept the HTTP request and communicate to client system 1108 one or more Hyper Text Markup Language (“HTML”) files responsive to the HTTP request. Client system 1108 may render a webpage based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable webpage files. As an example, and not by way of limitation, webpages may render from HTML files, Extensible Hyper Text Markup Language (“XHTML”) files, or Extensible Markup Language (“XML”) files, according to particular needs. Such pages may also execute scripts such as, for example and without limitation, those written in JAVASCRIPT, JAVA, MICROSOFT SILVERLIGHT, combinations of markup language and scripts such as AJAX (Asynchronous JAVASCRIPT and XML), and the like. Herein, reference to a webpage encompasses one or more corresponding webpage files (which a browser may use to render the webpage) and vice versa, where appropriate.
In particular embodiments, server device 1102 may include a variety of servers, sub-systems, programs, modules, logs, and data stores. In particular embodiments, server device 1102 may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store. Server device 1102 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof.
In particular embodiments, server device 1102 may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. Additionally, a user profile may include financial and billing information of users (e.g., survey respondents 120, customers, etc.).
The foregoing specification is described with reference to specific exemplary embodiments thereof. Various embodiments and aspects of the disclosure are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments.
Additional or alternative embodiments may be embodied in other specific forms without departing from their spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.