The following relates generally to content generation, and more specifically to feedback-guided content generation using machine learning. In some cases, content is generated based on meaningful information learned from data processing. Data processing refers to a collection and manipulation of data to produce the meaningful information. Machine learning is an information processing field in which algorithms or models such as artificial neural networks are trained to make predictive outputs in response to input data without being specifically programmed to do so.
In some cases, conventional content generation systems do not offer a mechanism for modifying content based on feedback from a content provider or a user. There is therefore a need in the art for a content generation system that offers an efficient method for responding to feedback.
Embodiments of the present disclosure provide a content generation system that obtains feedback for a content distribution campaign and generates content for a modified content distribution campaign based on the feedback using a machine learning model.
In some cases, by generating the content for the modified content distribution campaign based on the feedback using the machine learning model, the content generation system dynamically and continuously regenerates the content distribution campaign to be more reflective of content provider objectives and/or user preferences.
A method, apparatus, non-transitory computer readable medium, and system for feedback-guided content generation using machine learning are described. One or more aspects of the method, apparatus, non-transitory computer readable medium, and system include identifying a content distribution campaign; obtaining feedback for the content distribution campaign; and generating content for a modified content distribution campaign based on the feedback using a machine learning model.
A system and apparatus for feedback-guided content generation using machine learning are described. One or more aspects of the system and apparatus include at least one processor; at least one memory storing instructions executable by the at least one processor; a user experience platform designed to obtain feedback for a content distribution campaign; and a machine learning model including machine learning model parameters stored in the at least one memory and trained to generate content for a modified content distribution campaign based on the feedback.
In some cases, a content distribution campaign is informed by a wide variety of factors, including ever-changing market trends, user preferences, and signals from social, economic, and political landscapes. An ability to quickly and intelligently understand, plan for, and react to such factors greatly assists a content provider in achieving an objective of the content distribution campaign.
At the same time, users are increasingly embracing digital channels to engage with content providers and are demanding that content providers personalize their interactions. Therefore, both users and content providers benefit when an intent, stage, and context of users are understood and a digital experience is tailored for the users. In some cases, a confluence of personalization at scale along with a myriad of macro influences presents an opportunity for a content generation system for digital user experience management that operates at a granularity of an individual user's journey and sequence of experiences, all while helping a content provider to achieve its goals.
However, producing an effective content distribution campaign by synthesizing external and internal data into actionable opportunities, creating superior campaign components (e.g., content, journeys, objectives, etc.), and optimizing a content distribution strategy over time is not easily achievable for a content provider team. Added challenges, such as a demand from content providers for new, fresh, and personalized user experiences and siloed teams balancing various overlapping efforts, further complicate creating an effective campaign.
In some cases, conventional content generation systems do not offer a mechanism for modifying content based on feedback from a content provider or a user, resulting in less-optimal content generation and/or distribution.
For example, in some cases, a superior campaign draws upon vast and disparate external and internal data that individual strategists and analysts are not able to effectively comprehend or synthesize within an allotted time. In some cases, a significant part of an analyst's time is spent retrieving answers to basic key performance indicator (KPI) questions, leaving little bandwidth for deep analysis, while in some cases, strategists such as campaign owners and managers rely on an ad hoc analysis of internal and external sources from analysts to come to a point solution campaign.
Furthermore, in some cases, a process of conceiving, executing, and evaluating a campaign is laborious and time-consuming and is constrained both by a number of available team members and an ability to rapidly and effectively respond to quickly moving user preferences. For example, in some cases, a conventional process for creating a campaign brief includes one or more of determining a campaign objective, a channel for content distribution, a program for content distribution, a target audience, and content to be distributed. Additionally, in some cases, an end-to-end content distribution effort is scattered across different roles, making an ability to quickly and dynamically adjust campaign components based on ever-changing trends a challenge.
For example, in some cases, strategists rely on operations teams, creative teams, and other team members to execute a point solution campaign. During such a process, in some cases, a performance-based adjustment to the campaign is time-consuming, as the adjustment demands waiting for a full cycle to re-engage team members that are now occupied with different tasks. Additionally, in some cases, a content distribution effort is hampered by a lack of healthy knowledge-sharing practices across teams, resulting in silos, inefficiencies, and bottlenecks.
Still further, in some cases, content distribution workflows are heavy, manual, and dependent upon a constant supply of human ingenuity and accuracy. For example, in some cases, operations team members that are focused on building user journeys perform numerous iterations according to an intuition of what aspects of a prospective user journey might be effective. Additionally, in some cases, creative team members have a limited capacity to create variations of content for campaigns, particularly based on historical performance and content affinity variations for clients and consumers.
According to some aspects, a content generation system includes a user experience platform and a machine learning model. In some cases, the user experience platform is designed to obtain feedback for a content distribution campaign. In some cases, the machine learning model includes machine learning model parameters stored in the at least one memory and is trained to generate content for a modified content distribution campaign based on the feedback.
In some cases, by generating the content for the modified content distribution campaign based on the feedback using the machine learning model, the content generation system dynamically and continuously regenerates the content distribution campaign to be more reflective of content provider objectives and/or user preferences.
In some cases, a feedback element of a user interface provided by the content generation system accepts one or more of a text content provider input and an image content provider input. In some cases, the feedback element is summoned from any user interface or user interface element provided by the content generation system. In some cases, the content generation system presents the feedback element unsummoned when there is a relevant suggestion or action to be added. In some cases, the content generation system adapts its context to the specific solution or workflow from which the feedback element was summoned. In some cases, the content generation system retains past context and conversations and considers the content provider's role, title, and position within an organization when interfacing with the content provider.
In some cases, the content generation system provides one or more of a text output and a beyond-text output, such as an image, a chart, a representation of a user journey, a representation of a campaign program, etc.
In some cases, the machine learning model is a transformer-based sequence-to-sequence model that is trained on a public corpus of information and is able to understand semantics of a natural language input. In some cases, the machine learning model is further contextualized with a content provider's specific content distribution content, such as a campaign brief, a campaign program, a user journey, etc. In some cases, the machine learning model is optimized and adjusted via one or more of implicit feedback and explicit feedback.
In some cases, therefore, when the content generation system receives feedback from a content provider, the content generation system provides a prompt including the feedback and the context for the natural language input to the machine learning model, and the trained machine learning model is able to understand the prompt to generate content that is responsive to the feedback and relevant to the context. Therefore, in some cases, the content generation system is able to modify the content distribution campaign with the generated content such that the modified content distribution campaign better aligns with objectives of the content provider.
In some cases, the user experience platform receives user feedback. In some cases, the user experience platform identifies context for the user feedback. In some cases, the user experience platform generates a prompt including the user feedback and the context for the user feedback. In some cases, the machine learning model generates content based on the prompt. Therefore, in some cases, the content generation system is able to modify the content distribution campaign with the generated content such that the modified content distribution campaign better aligns with expectations of the user.
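Purely as a non-limiting illustration, the following sketch shows one way a user experience platform might assemble such a prompt from the feedback and the context before providing the prompt to a machine learning model; the function name, field names, and example values are hypothetical and are not prescribed by the present disclosure.

```python
# Hypothetical sketch: assembling a prompt from feedback and context for a machine learning model.
def build_prompt(feedback: str, context: dict) -> str:
    """Combine feedback and contextual information into a single natural language prompt."""
    context_lines = [f"- {key}: {value}" for key, value in context.items()]
    return (
        "You are assisting with a content distribution campaign.\n"
        "Context:\n" + "\n".join(context_lines) + "\n"
        "Feedback:\n" + feedback + "\n"
        "Generate content for a modified content distribution campaign that is "
        "responsive to the feedback and relevant to the context."
    )

# Example usage with illustrative values
prompt = build_prompt(
    feedback="What's the likelihood of these audiences to book this year?",
    context={
        "campaign objective": "target solo travelers",
        "candidate segments": "Dual Income No Kids, Retired, Business Travelers",
    },
)
# generated_content = machine_learning_model.generate(prompt)  # model call shown for illustration only
```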
A content generation system according to an aspect of the present disclosure is used in a content distribution campaign context. In an example, a content provider is using the content generation system to work on a content distribution campaign project, where the campaign will target users who travel by themselves. The content provider is currently viewing a user interface provided by the content generation system for reviewing additional potential user segments for the content distribution campaign to target.
The user experience platform obtains context for the content provider's present and past interactions within the user experience platform, such as profile information for the content provider (e.g., a content provider identifier, content provider preferences, etc.) and a history of the content distribution campaign project (for example, identifying a data trend of an increase in travelers who travel by themselves, identifying an objective of the content distribution campaign with respect to a user segment of solo travelers, generating a campaign brief for a content distribution campaign targeting the user segment of solo travelers according to the objective of the campaign, and identifying three additional user segments, “Dual Income No Kids”, “Retired”, and “Business Travelers” who are potential targets of the content distribution campaign).
In an example, the content provider summons a feedback element on the user interface by a feedback element summoning input (such as a selection of a summoning element, a keyboard input, a voice command, etc.) provided to the user interface by a content provider device. The content provider provides a text input “What's the likelihood of these audiences to book this year?” to the feedback element.
The user experience platform generates a prompt based on the context and the content provider input. The user experience platform provides the prompt to the machine learning model. The machine learning model generates content according to the prompt.
In the example, the content includes text that is responsive to the content provider input (e.g., statements of a 76%, 68%, and 62% respective likelihood that users in the “Dual Income No Kids”, “Retired”, and “Business Travelers” user segments will book a trip this year, a statement “The likelihood to book travel this year ranges from 62% to 76% for these audiences likely to travel solo. This predictive score is based on the last two years of booking behavior.”, and instructions for the user experience platform to display the statements). The user experience platform displays the statements according to the instructions.
In the example, a content provider instructs the content generation system to modify the content distribution campaign by selecting the Dual Income No Kids and Retired user segments and providing a content provider input to a “Generate new plan” button. The user experience platform generates an additional prompt based on the context (including the current campaign brief), the selected user segments, and the content provider input, and the machine learning model generates an additional campaign brief based on the additional prompt. Finally, the user experience platform displays a representation of the modified content distribution campaign according to the additional campaign brief.
Further example applications of the present disclosure in a user experience project creation context are provided with reference to
As used herein, in some cases, a “content provider” refers to a person or entity that interacts with the content generation system and/or content generation apparatus. As used herein, in some cases, a “content provider preference” refers to any information provided by the content provider to the content generation system. In some cases, a content provider preference includes one or more of a preferred content, a preferred communication channel, a preferred campaign objective, a preferred user segment, a preferred time period for content distribution, and any other information that is used in developing a content distribution campaign for a content provider.
As used herein, a “user segment” or “audience” refers to a group of users corresponding to a group of user profiles and identified by a group of user identifiers. As used herein, a “user profile” refers to data corresponding to a user. Examples of data corresponding to a user include a name, contact information, demographic data, user device information, a purchase history, a correspondence history, and any other data relating to the user. As used herein, a “user identifier” refers to a unique identifier (such as a name, an email address, an identification number, etc.) for a user. In some cases, a user profile includes a user identifier. In some cases, the user segment includes one or more users corresponding to user profiles that include a common attribute or quality.
As used herein, “feedback” refers to any input received from a content provider or a user with respect to information relating to a content distribution campaign. In some cases, feedback is direct (e.g., a content provider input to a user interface element that asks a content provider to provide feedback; a content provider adjustment to a parameter associated with the content distribution campaign, such as a change in preferences, objectives, user segments, program, journey, content element, etc.; or a user input to a user interface associated with content received by the user and provided by the content generation system for the content distribution campaign, such as a text review, a “like”, a “dislike”, etc.) or indirect (e.g., a text input provided by the content provider, where the machine learning model identifies that the text input is feedback based on a context associated with the text input).
As used herein, a “context” refers to contextual information for a user experience platform. As used herein, “profile information for a content provider” or a “content provider profile” refers to any available data for a content provider, such as an identifier, a title, a role within an organization, content provider preferences, etc. As used herein, an “interaction history of the content provider” refers to data relating to any interaction of the content provider or a user with the content generation system, such as inputs provided to the content generation system, user interfaces viewed by the content provider, content provided by the content generation system for the content provider, time spent interacting with the content generation system, an active user interface element, a currently selected user interface element, information represented in a user interface element, content received by the user from the content generation system, information relating to the content received by the user, etc.
In some cases, a context includes one or more of: the profile information for a content provider; the interaction history of the content provider; information corresponding to an active user interface (such as data corresponding to information being represented by the user interface, link selection within the user interface, a mouse or other input position in the user interface, an amount of time spent in the user interface, etc.); an analytics context; an audience segmentation context; a campaign generation context; structured information representing a user journey, a campaign brief, or a campaign program; information in multiple modalities, such as a text modality or an image modality; data corresponding to a data trend or anomaly; an insight; an opportunity; content provider preferences (such as a campaign objective, a campaign goal, a preferred user segment, a preferred communication channel, preferred content, etc.); a content provider playbook; a key performance indicator; previous content provider feedback; and any other available information that is or has been presented to or retrieved by the user experience platform.
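By way of a non-limiting illustration, a context of this kind might be represented as a simple structure such as the hypothetical example below; the keys and values are illustrative only and do not define a required schema.

```python
# Hypothetical representation of a context; keys mirror examples listed above.
context = {
    "content_provider_profile": {"identifier": "cp-001", "role": "campaign manager"},
    "interaction_history": ["viewed segment review interface", "generated campaign brief"],
    "active_user_interface": {"selected_element": "audience panel", "time_spent_seconds": 120},
    "content_provider_preferences": {"campaign_objective": "increase bookings", "preferred_channel": "email"},
    "key_performance_indicators": {"click_through_rate": 0.042},
    "previous_feedback": ["Focus on solo travelers"],
}
```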
As used herein, in some cases, a “user experience platform” includes a set of creative, analytics, social, advertising, media optimization, targeting, Web experience management, journey orchestration and content management tools. In some cases, a user experience platform comprises one or more ANNs trained to generate content. In some cases, a user experience platform provides the user interface. In some cases, the user experience platform communicates with a database. In some cases, the user experience platform comprises the database.
As used herein, a “prompt” refers to an input to a machine learning model. In some cases, the prompt includes the context. In some cases, the prompt includes an instruction to generate content based on the context. In some cases, the prompt includes text provided by a content provider. In some cases, the prompt includes an instruction to generate the content according to one or more of the context, the text provided by the content provider, and the instruction. In some cases, one or more of the text provided by the content provider and the instruction includes natural language. As used herein, “natural language” refers to any language that has emerged through natural use.
In some cases, the user experience platform generates the prompt. In some cases, the user experience platform generates the prompt in response to a content provider input to an element of a user interface. In some cases, the user experience platform generates the prompt in response to a change in the context. In some cases, the content provider provides the prompt to the machine learning model (for example, via a user interface configured to communicate with the machine learning model).
In some cases, the prompt includes one or more embeddings. As used herein, an “embedding” refers to a mathematical representation of an object (such as text, an image, a chart, audio, etc.) in a lower-dimensional space, such that information about the object is more easily captured and analyzed by a machine learning model. For example, in some cases, an embedding is a numerical representation of the object in a continuous vector space in which objects that have similar semantic information correspond to vectors that are numerically similar to and thus “closer” to each other, providing for an ability of a machine learning model to effectively compare the objects corresponding to the embeddings with each other.
In some cases, an embedding is produced in a “modality” (such as a text modality, a chart modality, an image modality, an audio modality, etc.) that corresponds to a modality of the corresponding object. In some cases, embeddings in different modalities include different dimensions and characteristics, which makes a direct comparison of embeddings from different modalities difficult. In some cases, an embedding for an object is generated or translated into a multimodal embedding space, such that objects from multiple modalities are effectively compared with each other.
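As a minimal, non-limiting sketch of the comparison described above (assuming embeddings are represented as plain numeric vectors), cosine similarity is one way to measure how “close” two embeddings are in a shared embedding space.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for two semantically similar objects from different modalities
text_embedding = np.array([0.8, 0.1, 0.3])
image_embedding = np.array([0.7, 0.2, 0.4])
similarity = cosine_similarity(text_embedding, image_embedding)  # values near 1.0 indicate similar semantics
```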
As used herein, in some cases, “content” includes any form of media, including goods, services, physically tangible media, and the like, and digital content, including media such as text, audio, images, video, or a combination thereof. In some cases, content includes text, such as descriptions, summaries, insights, opportunities, structured queries, campaign briefs, user journey outlines, copy, subject headlines, hashtags, text for inclusion in an image, etc. In some cases, content includes instructions or code (such as a macro) that is executed by the user experience platform to retrieve and/or generate content, such as an image, a chart, a slide presentation, a video, audio, etc. In some cases, content includes instructions or code that is executed by the user experience platform to display content or a representation of the content via the user interface. As used herein, a “content element” refers to a discrete item of content or a portion of the content.
As used herein, in some cases, an “insight” includes a natural language description of a data trend or anomaly. In some cases, the insight includes a natural language analysis of the data trend (such as an identification of a predicted cause or contributing factor for the data trend and a prediction of an effect of the data trend). In some cases, the insight includes a text instruction provided in an appropriate format (such as an instruction, code, a macro, etc.) for a different component (such as the user experience platform) to take some action (such as retrieve, generate, and/or display content). In some cases, an insight is synthesized and curated from external and internal sources in various forms, such as data stories, charts, visuals, etc.
In some cases, the data sources are inclusive of at least one of a foundational enterprise/content distribution focus and a unique content provider-specific foundation.
Examples of data having a foundational enterprise/content distribution focus include publicly available competitor information and announcements, market research reports, brand awareness and perception data, company and industry data, demographic data, seasonal data, macroeconomic data, microeconomic data, and data relating to world events.
Examples of data having a content provider-specific foundation include user and segmentation data, content affinity data based on historical responses to content distribution campaigns, user journey preferences (such as frequency, channels, and content preferences) based on historical performance of content distribution campaigns, shared partner data or purchased data, historical content distribution campaign details and performance data, brand guidelines and historical content experiences, previous experiments and results, and user research, such as market research and churn analysis.
As used herein, in some cases, an “opportunity” includes a natural language description of an action for the content provider to take based on the insight. In some cases, the opportunity includes a natural language suggestion to the content provider to instruct the content generation system to take a further action (such as identifying a group of content providers, generating a content distribution campaign, etc.) based on the insight. In some cases, the opportunity includes a text instruction provided in an appropriate format (such as a programming language) for a different component (such as the user experience platform) to take an action (such as identifying the group of content providers, generating the content distribution campaign, etc.). In some cases, an opportunity is curated with consideration to one or more of a content provider's goals, playbook, active or previous campaigns, and historical performance.
As used herein, a “structured query” refers to a statement provided in a structured query language (SQL). In some cases, SQL is a programming language for storing and processing information in a database. In some cases, SQL integrates with various programming languages. In some cases, the structured query is embedded within a programming language. In some cases, the structured query includes common keywords. In some cases, a database stores information in tabular form according to SQL, with rows and columns representing different data attributes and the various relationships between the data values. In some cases, the structured query is used to store, update, remove, search, or retrieve information from the database organized according to one or more SQL tables. In some cases, a structured query includes a translation by a machine learning model of information included in or associated with a prompt into SQL.
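Purely as a non-limiting illustration of such a translation, a machine learning model might map a natural language request similar to the earlier example into a structured query along the following lines; the table and column names are hypothetical.

```python
# Hypothetical translation of a natural language request into a structured query (SQL).
natural_language_request = "What's the likelihood of these audiences to book this year?"
structured_query = """
SELECT segment_name, AVG(predicted_booking_likelihood) AS likelihood
FROM user_profiles
WHERE segment_name IN ('Dual Income No Kids', 'Retired', 'Business Travelers')
GROUP BY segment_name;
"""
```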
As used herein, a “content distribution campaign”, “communication campaign”, or “campaign” refers to a coordinated distribution of content through one or more content channels in order to achieve one or more goals, such as a number of product purchases, a number of content views, a number of sign-ups, etc.
In some cases, a content distribution campaign is planned according to a campaign brief. As used herein, a “campaign brief” refers to text including a description of one or more components or elements of a content distribution campaign, such as an identification of one or more of content to be distributed, an identification of a user segment for receiving the distributed content, a channel for distributing the content through, and a period (either stage-based or calendar-based) for distributing the content. As used herein, “stage-based” refers to periods determined according to an order of occurrence. As used herein, “calendar-based” refers to periods determined according to calendar dates.
In some cases, a campaign brief defines aspects of a campaign, including one or more of a target audience, a key performance indicator (KPI), an objective for the campaign, a timeframe for providing content according to the campaign, personnel assignments, campaign budget information, and content associated with the campaign. In some cases, the campaign brief is a roadmap for a content provider to execute on and a source of truth for the campaign.
In some cases, the campaign brief identifies a set of periods, such as a plurality of stages, and a program for each period of the set of periods. As used herein, in some cases, a “program” refers to a plotted timeline of content distribution according to the set of periods. In some cases, the communication channel is associated with the program for at least one of the set of periods.
In some cases, the campaign brief includes a description of a user journey. As used herein, in some cases, a “user journey” refers to a process through which a user becomes aware of and interacts with the content provider or a client of the content provider. In some cases, the user journey includes a set of planned touchpoints in which content is provided to the user. In some cases, a touchpoint of the set of planned touchpoints is planned according to one or more of a period of time, an interaction of the user with a content channel (such as a visit to a physical location or a digital content channel such as a social media feed), or an occurrence of a previous touchpoint. In some cases, the campaign brief includes a summary of the content distribution campaign. In some cases, the summary comprises a natural language statement. In some cases, the campaign brief includes a title or a label for the content distribution campaign.
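By way of a non-limiting illustration, the components of a campaign brief described above might be captured in a simple structure such as the hypothetical one below; the fields shown are examples and are not prescribed by the present disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Touchpoint:
    """A planned point at which content is provided to a user."""
    channel: str
    trigger: str  # e.g., a period of time, a channel interaction, or a previous touchpoint

@dataclass
class CampaignBrief:
    """Hypothetical campaign brief; fields mirror components described above."""
    title: str
    objective: str
    target_audience: str
    kpis: List[str] = field(default_factory=list)
    timeframe: str = ""
    user_journey: List[Touchpoint] = field(default_factory=list)
    summary: str = ""

brief = CampaignBrief(
    title="Solo Traveler Campaign",
    objective="Increase bookings among solo travelers",
    target_audience="Solo travelers",
    kpis=["bookings", "click-through rate"],
    timeframe="Q3",
    user_journey=[Touchpoint(channel="email", trigger="campaign start")],
    summary="Targets solo travelers with personalized travel offers.",
)
```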
In some cases, a content distribution campaign package includes one or more of a campaign brief, a user segment, a generated content experience, a user journey, a simulation of a user journey, and a prediction of a result of a user journey. In some cases, the campaign brief is generated based on one or more of the content provider's goals, a playbook, active or previous campaigns, and a historical performance. In some cases, a user segment is generated based on one or more of a campaign focus, a campaign goal, a historical affinity, and a historical performance.
In some cases, a content experience is generated based on a composition of individual content pieces, such as images, copy, video, audio, subject headlines, hashtags, etc. In some cases, a user journey is generated, simulated, and/or predicted based on one or more of a historical user journey, results of a historical user journey, and industry standards derived from external data.
In some cases, the content distribution campaign package is dynamically optimized and adjusted based on generated insights of emerging external or internal trends. In some cases, the content distribution campaign package is dynamically optimized and adjusted based on feedback loops from the content distribution campaign, where the feedback is provided by one or more of the content provider and a user targeted by the content distribution campaign.
As used herein, a “communication channel” or a “content distribution channel” refers to a physical channel (such as a mailing service, a physical location such as a store, a hotel, an amusement park, etc., and the like) or a digital channel (such as a website, a software application, an Internet-based application, an email service, a messaging service such as SMS, instant messaging, etc., a television service, a telephone service, etc.) through which content or digital content is provided. As used herein, “customized content” refers to content that is customized according to data associated with a content provider or a user.
As used herein, in some cases, a “project” refers to a process of using the content generation system to produce an output or outcome.
According to some aspects, the content generation system assists with an ideation, definition, expansion, and refinement of an audience for the content distribution campaign. For example, in some cases, the machine learning model qualifies and quantifies the audience using summary statistics and described traits of the audience along with projected performance of the audience towards the content distribution objective of the content provider.
According to some aspects, the content generation system employs at least one of the user experience platform and the machine learning model to generate a complete content distribution campaign, including a program, messaging, content, and journey, or a combination thereof. For example, in some cases, the content generation system optimizes the content distribution campaign for a target audience to meet the content distribution objective of the content provider.
According to some aspects, the content generation system infuses capabilities of the machine learning model with capabilities of the user experience cloud to provide a multimodal conversational interface capable of brainstorming, ideation, and reasoning, that retains and adapts to context. In some cases, the conversational interface is implemented as a copilot for user experience management.
According to some aspects, the content generation system is directed by feedback (such as additional inputs and/or dimensions) to dynamically and continuously regenerate generated outputs. According to some aspects, journeys, journey simulation, and performance predictions are based on historical journey data of a content provider combined with external journey data leveraged by the machine learning model.
Accordingly, in some cases, the content generation system provides a content provider with efficiency, efficacy, scale, agility, velocity, ideation, collaboration, and/or execution, thereby allowing the content provider to do more with less.
A system and an apparatus for feedback-guided content generation are described with reference to
In some aspects, the user experience platform is further designed to update the content distribution campaign with the modified content distribution campaign based on a content provider input to accept the modified content distribution campaign. In some aspects, the user experience platform is further designed to monitor a performance of the content distribution campaign to obtain a performance metric, wherein the feedback is based on the performance metric.
Referring to
Content generation apparatus 115 receives the content provider feedback and the user feedback and uses a machine learning model (such as the machine learning model described with reference to
Content provider device 110 is an example of, or includes aspects of, the corresponding element described with reference to
According to some aspects, a content provider device user interface enables content provider 105 to interact with content provider device 110. In some embodiments, the content provider device user interface includes an audio device, such as an external speaker system, an external display device such as a display screen, or an input device (e.g., a remote-control device interfaced with the user interface directly or through an I/O controller module). In some cases, the content provider device user interface is a graphical user interface.
Content generation apparatus 115 is an example of, or includes aspects of, the corresponding element described with reference to
In some cases, content generation apparatus 115 is implemented on a server. A server provides one or more functions to content providers linked by way of one or more of various networks, such as cloud 120. In some cases, the server includes a single microprocessor board, which includes a microprocessor responsible for controlling all aspects of the server. In some cases, the server uses the microprocessor and protocols to exchange data with other devices or content providers on one or more of the networks via one or more of hypertext transfer protocol (HTTP), simple mail transfer protocol (SMTP), file transfer protocol (FTP), and simple network management protocol (SNMP). In some cases, the server is configured to send and receive hypertext markup language (HTML) formatted files (e.g., for displaying web pages). In various embodiments, the server comprises a general-purpose computing device, a personal computer, a laptop computer, a mainframe computer, a supercomputer, or any other suitable processing apparatus.
Further detail regarding the architecture of content generation apparatus 115 is provided with reference to
Cloud 120 is a computer network configured to provide on-demand availability of computer system resources, such as data storage and computing power. In some examples, cloud 120 provides resources without active management by a content provider. The term “cloud” is sometimes used to describe data centers available to many content providers over the Internet.
Some large cloud networks have functions distributed over multiple locations from central servers. A server is designated an edge server if it has a direct or close connection to a content provider. In some cases, cloud 120 is limited to a single organization. In other examples, cloud 120 is available to many organizations.
In one example, cloud 120 includes a multi-layer communications network comprising multiple edge routers and core routers. In another example, cloud 120 is based on a local collection of switches in a single physical location. According to some aspects, cloud 120 provides communications between content provider device 110, content generation apparatus 115, database 125, and user device 135.
Database 125 is an example of, or includes aspects of, the corresponding element described with reference to
User device 135 is an example of, or includes aspects of, the corresponding element described with reference to
According to some aspects, a user device user interface enables user 130 to interact with user device 135. In some embodiments, the user device user interface includes an audio device, such as an external speaker system, an external display device such as a display screen, or an input device (e.g., a remote-control device interfaced with the user interface directly or through an I/O controller module). In some cases, the user device user interface is a graphical user interface.
Processor unit 205 includes one or more processors. A processor is an intelligent hardware device, such as a general-purpose processing component, a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof.
In some cases, processor unit 205 is configured to operate a memory array using a memory controller. In other cases, a memory controller is integrated into processor unit 205. In some cases, processor unit 205 is configured to execute computer-readable instructions stored in memory unit 210 to perform various functions. In some aspects, processor unit 205 includes special purpose components for modem processing, baseband processing, digital signal processing, or transmission processing.
Memory unit 210 includes one or more memory devices. Examples of a memory device include random access memory (RAM), read-only memory (ROM), or a hard disk. Examples of memory devices include solid state memory and a hard disk drive. In some examples, memory is used to store computer-readable, computer-executable software including instructions that, when executed, cause at least one processor of processor unit 205 to perform various functions described herein.
In some cases, memory unit 210 includes a basic input/output system (BIOS) that controls basic hardware or software operations, such as an interaction with peripheral components or devices. In some cases, memory unit 210 includes a memory controller that operates memory cells of memory unit 210. For example, in some cases, the memory controller includes a row decoder, column decoder, or both. In some cases, memory cells within memory unit 210 store information in the form of a logical state.
User experience platform 215 is an example of, or includes aspects of, the corresponding element described with reference to
According to some aspects, user experience platform 215 is omitted from content generation apparatus 200 and is implemented in at least one apparatus separate from content generation apparatus 200 (for example, at least one apparatus comprised in a cloud, such as the cloud described with reference to
For example, in some cases, content generation apparatus 200 is implemented as an edge server in a content generation system (such as the content generation system described with reference to
According to some aspects, user experience platform 215 includes a set of creative, analytics, social, advertising, media optimization, targeting, Web experience management, journey orchestration and content management tools. In some cases, user experience platform 215 includes one or more of a graphic design component providing image generation and/or editing capabilities, a video editing component, a web development component, and a photography component.
In some cases, user experience platform 215 comprises one or more of an enterprise content management component; a digital asset management component; an enterprise content distribution component that manages direct content distribution campaigns, leads, resources, user data, and analytics, and allows content providers to design and orchestrate targeted and personalized campaigns via channels such as direct mail, e-mail, SMS, and MMS; a data management component for data modeling and predictive analytics; and a web analytics system that provides web metrics and dimensions and allows a content provider to define tags implemented in a webpage for web tracking to provide customized dimensions, metrics, segmentations, content provider reports, and dashboards.
In some cases, user experience platform 215 has comprehensive end-to-end capabilities with content distribution-specific technology across conceptualization, execution, and insights to merge with machine learning model 220 and generative machine learning experiences. In some cases, user experience platform 215 builds a cohesive user view, supporting but not limited to analytics, digital advertising, email, user data management, social media, call centers, and commerce. In some cases, user experience platform 215 consolidates, identifies, and builds full profiles from datasets that provide differentiating data for generating content that benefits from personalization.
According to some aspects, user experience platform 215 comprises one or more artificial neural networks (ANNs), and one or more components of user experience platform 215 are implemented via the one or more ANNs. In some cases, user experience platform 215 comprises one or more generative machine learning models configured to generate content.
An ANN is a hardware component or a software component that includes a number of connected nodes (i.e., artificial neurons) that loosely correspond to the neurons in a human brain. Each connection, or edge, transmits a signal from one node to another (like the physical synapses in a brain). When a node receives a signal, it processes the signal and then transmits the processed signal to other connected nodes.
In some cases, the signals between nodes comprise real numbers, and the output of each node is computed by a function of the sum of its inputs. In some examples, nodes determine their output using other mathematical algorithms, such as selecting the max from the inputs as the output, or any other suitable algorithm for activating the node. Each node and edge is associated with one or more node weights that determine how the signal is processed and transmitted.
In ANNs, a hidden (or intermediate) layer includes hidden nodes and is located between an input layer and an output layer. Hidden layers perform nonlinear transformations of inputs entered into the network. Each hidden layer is trained to produce a defined output that contributes to a joint output of the output layer of the ANN. Hidden representations are machine-readable data representations of an input that are learned from hidden layers of the ANN and are produced by the output layer. As the ANN's understanding of the input improves during training, the hidden representation is progressively differentiated from earlier iterations.
During a training process of an ANN, the node weights are adjusted to improve the accuracy of the result (i.e., by minimizing a loss which corresponds in some way to the difference between the current result and the target result). The weight of an edge increases or decreases the strength of the signal transmitted between nodes. In some cases, nodes have a threshold below which a signal is not transmitted at all. In some examples, the nodes are aggregated into layers. Different layers perform different transformations on their inputs. The initial layer is known as the input layer and the last layer is known as the output layer. In some cases, signals traverse certain layers multiple times.
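As a minimal, non-limiting sketch of the node computation described above (a weighted sum of inputs passed through an activation function), consider the following; the values are illustrative only.

```python
import numpy as np

def node_output(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Compute a node's output as an activation applied to the weighted sum of its inputs."""
    weighted_sum = float(np.dot(inputs, weights)) + bias
    return max(0.0, weighted_sum)  # ReLU-style activation; other activation functions are possible

signal = node_output(np.array([0.5, -1.2, 0.3]), np.array([0.8, 0.1, -0.4]), bias=0.05)
```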
According to some aspects, user experience platform 215 identifies a content distribution campaign. In some examples, user experience platform 215 obtains feedback for the content distribution campaign. In some examples, user experience platform 215 identifies a set of content elements of the content distribution campaign, where the feedback includes individual feedback for each of the set of content elements. In some examples, user experience platform 215 receives content provider input indicating to accept the modified content distribution campaign. In some examples, user experience platform 215 updates the content distribution campaign with the modified content distribution campaign based on the content provider input. In some examples, user experience platform 215 receives user input from a user of the content distribution campaign, where the feedback includes the user input. In some examples, user experience platform 215 monitors a performance of the content distribution campaign to obtain a performance metric, where the feedback is based on the performance metric. In some examples, user experience platform 215 receives a modification input from a content provider.
In some aspects, the user experience platform 215 is further designed to update the content distribution campaign with the modified content distribution campaign based on a content provider input to accept the modified content distribution campaign. In some aspects, the user experience platform 215 is further designed to monitor a performance of the content distribution campaign to obtain a performance metric, where the feedback is based on the performance metric.
Machine learning model 220 is an example of, or includes aspects of, the corresponding element described with reference to
According to some aspects, machine learning model 220 includes machine learning parameters stored in memory unit 210. Machine learning parameters are variables that determine the behavior and characteristics of a machine learning model. In some cases, machine learning parameters are learned or estimated from training data and are used to make predictions or perform tasks based on learned patterns and relationships in the data.
In some cases, machine learning parameters are adjusted during a training process to minimize a loss function or to maximize a performance metric. The goal of the training process is to find optimal values for the parameters that allow the machine learning model to make accurate predictions or perform well on a given task.
For example, during the training process, an algorithm adjusts machine learning parameters to minimize an error or loss between predicted outputs and actual targets according to optimization techniques like gradient descent, stochastic gradient descent, or other optimization algorithms. Once the machine learning parameters are learned from the training data, the machine learning parameters are used to make predictions on new, unseen data.
In some cases, parameters of an ANN include weights and biases associated with each neuron in the ANN that control a strength of connections between neurons and influence the ability of the ANN to capture complex patterns in data.
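As a minimal, non-limiting sketch of the parameter adjustment described above (a single gradient descent step on a squared-error loss for one linear node), consider the following; the learning rate and data are illustrative only.

```python
import numpy as np

x = np.array([0.5, -1.2, 0.3])        # input
target = 1.0                          # desired output
weights = np.array([0.8, 0.1, -0.4])  # current parameters
bias = 0.05
learning_rate = 0.1

prediction = float(np.dot(x, weights)) + bias
error = prediction - target
loss = 0.5 * error ** 2               # squared-error loss

# Gradients of the loss with respect to the parameters
grad_weights = error * x
grad_bias = error

# Update moves the parameters in the direction that reduces the loss
weights = weights - learning_rate * grad_weights
bias = bias - learning_rate * grad_bias
```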
According to some aspects, machine learning model 220 comprises a large language model. A large language model is a machine learning model that is designed and/or trained to learn statistical patterns and structures of human language. Large language models are capable of a wide range of language-related tasks such as text completion, question answering, translation, summarization, and creative writing, in response to a prompt. In some cases, the term “large” refers to a size and complexity of the large language model, usually measured in terms of a number of parameters of the large language model, where more parameters allow a large language model to understand more intricate language patterns and generate more nuanced and coherent text.
In some cases, the large language model comprises a sequence-to-sequence (seq2seq) model. A seq2seq model comprises one or more ANNs configured to transform a given sequence of elements, such as a sequence of words in a sentence, into another sequence using sequence transformation.
In some cases, machine learning model 220 comprises one or more transformers (such as the transformer described with reference to
In some cases, a transformer comprises an encoder-decoder structure. In some cases, the encoder of the transformer processes an input sequence and encodes the input sequence into a set of high-dimensional representations. In some cases, the decoder of the transformer generates an output sequence based on the encoded representations and previously generated tokens. In some cases, the encoder and the decoder are composed of multiple layers of self-attention mechanisms and feed-forward ANNs.
In some cases, the self-attention mechanism allows the transformer to focus on different parts of an input sequence while computing representations for the input sequence. In some cases, the self-attention mechanism captures relationships between words of a sequence by assigning attention weights to each word based on a relevance to other words in the sequence, thereby enabling the transformer to model dependencies regardless of a distance between words.
An attention mechanism is a key component in some ANN architectures, particularly ANNs employed in natural language processing (NLP) and sequence-to-sequence tasks, that allows an ANN to focus on different parts of an input sequence when making predictions or generating output.
NLP refers to techniques for using computers to interpret or generate natural language. In some cases, NLP tasks involve assigning annotation data such as grammatical information to words or phrases within a natural language expression. Different classes of machine-learning algorithms have been applied to NLP tasks. Some algorithms, such as decision trees, utilize hard if-then rules. Other systems use neural networks or statistical models which make soft, probabilistic decisions based on attaching real-valued weights to input features. In some cases, these models express the relative probability of multiple answers.
Some sequence models (such as recurrent neural networks) process an input sequence sequentially, maintaining an internal hidden state that captures information from previous steps. However, in some cases, this sequential processing leads to difficulties in capturing long-range dependencies or attending to specific parts of the input sequence.
The attention mechanism addresses these difficulties by enabling an ANN to selectively focus on different parts of an input sequence, assigning varying degrees of importance or attention to each part. The attention mechanism achieves the selective focus by considering a relevance of each input element with respect to a current state of the ANN.
In some cases, an ANN employing an attention mechanism receives an input sequence and maintains its current state, which represents an understanding or context. For each element in the input sequence, the attention mechanism computes an attention score that indicates the importance or relevance of that element given the current state. The attention scores are transformed into attention weights through a normalization process, such as applying a softmax function. The attention weights represent the contribution of each input element to the overall attention. The attention weights are used to compute a weighted sum of the input elements, resulting in a context vector. The context vector represents the attended information or the part of the input sequence that the ANN considers most relevant for the current step. The context vector is combined with the current state of the ANN, providing additional information and influencing subsequent predictions or decisions of the ANN.
In some cases, by incorporating an attention mechanism, an ANN dynamically allocates attention to different parts of the input sequence, allowing the ANN to focus on relevant information and capture dependencies across longer distances.
In some cases, calculating attention involves three basic steps. First, a similarity between a query vector Q and a key vector K obtained from the input is computed to generate attention weights. In some cases, similarity functions used for this process include dot product, splice, detector, and the like. Next, a softmax function is used to normalize the attention weights. Finally, the attention weights are weighed together with their corresponding values V. In the context of an attention network, the key K and value V are typically vectors or matrices that are used to represent the input data. The key K is used to determine which parts of the input the attention mechanism should focus on, while the value V is used to represent the actual data being processed.
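A minimal, non-limiting sketch of this computation follows, using the common scaled dot-product formulation; the division of the similarity scores by the square root of the key dimension reflects common practice and is assumed here rather than required by the present disclosure.

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Compute attention output as softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # similarity between queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax normalization of attention weights
    return weights @ V                                        # weighted sum of the values

# Hypothetical 3-token sequence with 4-dimensional representations
Q = K = V = np.random.rand(3, 4)
context_vectors = scaled_dot_product_attention(Q, K, V)
```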
According to some aspects, machine learning model 220 generates content for a modified content distribution campaign based on the feedback. In some examples, machine learning model 220 generates an additional content element based on the individual feedback for a corresponding content element of the set of content elements, where the modified content distribution campaign includes the additional content element. In some examples, machine learning model 220 identifies a modification intent based on the modification input, where the modified content distribution campaign is based on the modification intent.
User interface 225 is an example of, or includes aspects of, the corresponding element described with reference to
According to some aspects, multimodal encoder 230 is implemented as software stored in memory unit 210 and executable by processor unit 205, as firmware, as one or more hardware circuits, or as a combination thereof. According to some aspects, multimodal encoder 230 comprises one or more ANNs designed and/or trained to generate an embedding for an input. According to some aspects, multimodal encoder 230 comprises multimodal encoder parameters stored in memory unit 210.
According to some aspects, multimodal encoder 230 comprises a text encoder comprising one or more ANNs (such as a recurrent neural network or a transformer) that are designed and/or trained to generate a text embedding in a text embedding space or a multimodal embedding space based on a text input.
A recurrent neural network (RNN) is a class of ANN in which connections between nodes form a directed graph along an ordered (i.e., a temporal) sequence. This enables an RNN to model temporally dynamic behavior such as predicting what element should come next in a sequence. Thus, an RNN is suitable for tasks that involve ordered sequences such as text recognition (where words are ordered in a sentence). In some cases, an RNN includes a finite impulse recurrent network (characterized by nodes forming a directed acyclic graph) or an infinite impulse recurrent network (characterized by nodes forming a directed cyclic graph).
According to some aspects, multimodal encoder 230 comprises an image encoder comprising one or more ANNs (such as a convolutional neural network or a transformer) configured to generate an image embedding in an image embedding space or the multimodal embedding space based on an image input.
A convolutional neural network (CNN) is a class of ANN that is commonly used in computer vision or image classification systems. In some cases, a CNN enables processing of digital images with minimal pre-processing. In some cases, a CNN is characterized by the use of convolutional (or cross-correlational) hidden layers. These layers apply a convolution operation to the input before signaling the result to the next layer. In some cases, each convolutional node processes data for a limited field of input (i.e., the receptive field). In some cases, during a forward pass of the CNN, filters at each layer are convolved across the input volume, computing the dot product between the filter and the input. In some cases, during a training process, the filters are modified so that they activate when they detect a particular feature within the input.
According to some aspects, multimodal encoder 230 comprises a multimodal encoder comprising one or more ANNs (such as a CLIP model) configured to generate an embedding in a multimodal embedding space based on an input, such as text or an image.
Contrastive Language-Image Pre-Training (CLIP) is an ANN architecture that is trained to efficiently learn visual concepts from natural language supervision. In some cases, CLIP is instructed in natural language to perform a variety of classification benchmarks without directly optimizing for the benchmarks' performance, in a manner building on “zero-shot” or zero-data learning. In some cases, CLIP learns from unfiltered, highly varied, and highly noisy data, such as text paired with images found across the Internet, in a similar but more efficient manner to zero-shot learning, thus reducing the need for expensive and large labeled datasets.
In some cases, a CLIP model is applicable to nearly arbitrary visual classification tasks so that the model predicts the likelihood of a text description being paired with a particular image, removing the need for content providers to design their own classifiers and the need for task-specific training data. For example, in some cases, a CLIP model is applied to a new task by inputting names of the task's visual concepts to the model's text encoder. The model then outputs a linear classifier of CLIP's visual representations.
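For example, the following non-limiting sketch illustrates zero-shot classification with a publicly available CLIP checkpoint via the Hugging Face transformers library; the checkpoint name, the image file path, and the candidate labels are illustrative assumptions.

```python
# Illustrative zero-shot classification with a CLIP model; the checkpoint
# name and image path are placeholders chosen for illustration.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image file
candidate_labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)
# Likelihood of each text description being paired with the image.
probs = outputs.logits_per_image.softmax(dim=-1)
```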
According to some aspects, multimodal encoder 230 encodes a content provider input to a feedback element to obtain a feedback embedding in a multimodal embedding space. According to some aspects, multimodal encoder 230 encodes a prompt to obtain a prompt embedding in a multimodal embedding space.
According to some aspects, training component 235 is implemented as software stored in memory unit 210 and executable by processor unit 205, as firmware, as one or more hardware circuits, or as a combination thereof. According to some aspects, training component 235 is omitted from content generation apparatus 200 and is implemented in at least one apparatus separate from content generation apparatus 200 (for example, at least one apparatus comprised in a cloud, such as the cloud described with reference to
In some cases, encoder 305 includes multi-head self-attention sublayer 310 and feed-forward network sublayer 315. In some cases, decoder 320 includes first multi-head self-attention sublayer 325, second multi-head self-attention sublayer 330, and feed-forward network sublayer 335.
According to some aspects, a machine learning model (such as the machine learning model described with reference to
For example, in some cases, encoder 305 parses input 340 into tokens and vectorizes the parsed tokens to obtain input embedding 345, and adds input positional encoding 350 (e.g., positional encoding vectors for input 340 of a same dimension as input embedding 345) to input embedding 345. In some cases, input positional encoding 350 includes information about relative positions of words or tokens in input 340.
In some cases, encoder 305 comprises one or more encoding layers (e.g., six encoding layers) that generate contextualized token representations, where each representation corresponds to a token that combines information from other input tokens via a self-attention mechanism. In some cases, each encoding layer of encoder 305 comprises a multi-head self-attention sublayer (e.g., multi-head self-attention sublayer 310). In some cases, the multi-head self-attention sublayer implements a multi-head self-attention mechanism that receives different linearly projected versions of queries, keys, and values to produce outputs in parallel. In some cases, each encoding layer of encoder 305 also includes a fully connected feed-forward network sublayer (e.g., feed-forward network sublayer 315) comprising two linear transformations surrounding a Rectified Linear Unit (ReLU) activation:
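FFN(x) = max(0, xW1 + b1)W2 + b2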
In some cases, each layer employs different weight parameters (W1, W2) and different bias parameters (b1, b2) to apply a same linear transformation to each word or token in input 340.
In some cases, each sublayer of encoder 305 is followed by a normalization layer that normalizes a sum computed between a sublayer input x and an output sublayer(x) generated by the sublayer:
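layernorm(x + sublayer(x))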
In some cases, encoder 305 is bidirectional because encoder 305 attends to each word or token in input 340 regardless of a position of the word or token in input 340.
In some cases, decoder 320 comprises one or more decoding layers (e.g., six decoding layers). In some cases, each decoding layer comprises three sublayers including a first multi-head self-attention sublayer (e.g., first multi-head self-attention sublayer 325), a second multi-head self-attention sublayer (e.g., second multi-head self-attention sublayer 330), and a feed-forward network sublayer (e.g., feed-forward network sublayer 335). In some cases, each sublayer of decoder 320 is followed by a normalization layer that normalizes a sum computed between a sublayer input x and an output sublayer(x) generated by the sublayer.
In some cases, decoder 320 generates previous output embedding 360 of previous output 355 and adds previous output positional encoding 365 (e.g., position information for words or tokens in previous output 355) to previous output embedding 360. In some cases, each first multi-head self-attention sublayer receives the combination of previous output embedding 360 and previous output positional encoding 365 and applies a multi-head self-attention mechanism to the combination. In some cases, for each word in an input sequence, each first multi-head self-attention sublayer of decoder 320 attends only to words preceding the word in the sequence, and so transformer 300's prediction for a word at a particular position only depends on known outputs for a word that came before the word in the sequence. For example, in some cases, each first multi-head self-attention sublayer implements multiple single-attention functions in parallel by introducing a mask over values produced by the scaled multiplication of matrices Q and K by suppressing matrix values that would otherwise correspond to disallowed connections.
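For example, the following non-limiting sketch illustrates such a mask, in which score entries corresponding to disallowed (future) positions are suppressed before the softmax; the function name, the scaling, and the example shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_self_attention_scores(Q, K, d_k):
    """Causal (masked) attention weights: position i may only attend to
    positions <= i, so a prediction depends only on known previous outputs."""
    scores = (Q @ K.transpose(-2, -1)) / d_k ** 0.5
    seq_len = scores.size(-1)
    # Suppress matrix values that would correspond to disallowed connections.
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1)

# Example usage: the upper-triangular (future) positions receive zero weight.
Q = torch.randn(1, 6, 32)  # (batch, sequence, d_k)
K = torch.randn(1, 6, 32)
weights = masked_self_attention_scores(Q, K, d_k=32)
```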
In some cases, each second multi-head self-attention sublayer implements a multi-head self-attention mechanism similar to the multi-head self-attention mechanism implemented in each multi-head self-attention sublayer of encoder 305 by receiving a query Q from a previous sublayer of decoder 320 and a key K and a value V from the output of encoder 305, allowing decoder 320 to attend to each word in the input 340.
In some cases, each feed-forward network sublayer implements a fully connected feed-forward network similar to feed-forward network sublayer 315. In some cases, the feed-forward network sublayers are followed by a linear transformation and a softmax to generate a prediction of output 370 (e.g., a prediction of a next word or token in a sequence of words or tokens). Accordingly, in some cases, transformer 300 generates content as described herein based on a predicted sequence of words or tokens.
Content generation system 400 is an example of, or includes aspects of, the corresponding element described with reference to
Referring to
A method for feedback-guided content generation is described with reference to
Some examples of the method further include identifying a plurality of content elements of the content distribution campaign, wherein the feedback comprises individual feedback for each of the plurality of content elements. Some examples of the method further include generating an additional content element based on the individual feedback for a corresponding content element of the plurality of content elements, wherein the modified content distribution campaign includes the additional content element.
Some examples of the method further include receiving content provider input indicating to accept the modified content distribution campaign. Some examples further include updating the content distribution campaign with the modified content distribution campaign based on the content provider input. Some examples of the method further include receiving user input from a user of the content distribution campaign, wherein the feedback includes the user input.
Some examples of the method further include monitoring a performance of the content distribution campaign to obtain a performance metric, wherein the feedback is based on the performance metric. Some examples of the method further include receiving a modification input from a content provider. Some examples further include identifying a modification intent based on the modification input, wherein the modified content distribution campaign is based on the modification intent.
Referring to
At operation 505, a content provider provides feedback for a content distribution campaign. In some cases, the operations of this step refer to, or are performed by, a content provider as described with reference to
At operation 510, the system generates a prompt based on the feedback. In some cases, the operations of this step refer to, or are performed by, a content generation apparatus as described with reference to
At operation 515, the system generates content based on the prompt. In some cases, the operations of this step refer to, or are performed by, a content generation apparatus as described with reference to
At operation 520, the system modifies the content distribution campaign based on the content. In some cases, the operations of this step refer to, or are performed by, a content generation apparatus as described with reference to
Referring to
In some cases, by generating the content for the modified content distribution campaign based on the feedback using the machine learning model, the content generation system dynamically and continuously regenerates the content distribution campaign to be more reflective of content provider objectives and/or user preferences.
At operation 605, the system identifies a content distribution campaign. In some cases, the operations of this step refer to, or are performed by, a user experience platform as described with reference to
At operation 610, the system obtains feedback for the content distribution campaign. In some cases, the operations of this step refer to, or are performed by, a user experience platform as described with reference to
In some cases, the user experience platform receives feedback from a content provider or a user with respect to information relating to the content distribution campaign. In some cases, the feedback is direct or expressed with a clear intent, such as one or more of: a content provider input to a user interface element that asks a content provider to provide feedback; a content provider adjustment to a parameter associated with the content distribution campaign (such as a change in preferences, objectives, user segments, program, journey, content element, etc.); and a user input to a user interface associated with content received by the user and provided by the content generation system for the content distribution campaign (such as a text review, a “like”, a “dislike”, etc.).
In some cases, the feedback is indirect or diffuse, such as a text input provided by the content provider, where the machine learning model identifies that the text input is feedback based on a context associated with the text input. In some cases, examples of indirect feedback provided to a feedback element of the user interface include instructions to add additional data sources (e.g., “add data from the recent Census”), instructions to identify specific time periods of data (e.g., “consider just the last quarter”), instructions to change a campaign goal or objective (e.g., “we need results in six weeks”, or “optimize revenue but keep churn under 10%”), etc. An example of feedback provided based on a feedback element is described with reference to
In some cases, the user experience platform identifies a set of content elements of the content distribution campaign, where the feedback comprises individual feedback for one or more of the set of content elements. For example, in some cases, the set of content elements comprises content elements that are optionally distributed to one or more users according to the content distribution campaign, and the feedback comprises a content provider input to modify each of the set of content elements. An example of a user interface for feedback for a set of content elements is described with reference to
In some cases, the user experience platform receives user input from a user of the content distribution campaign, wherein the feedback includes the user input. For example, in some cases, the user experience platform monitors a content distribution channel in which a user has received content to obtain a user input for the content (such as a review, a reaction, etc.), and the user experience platform includes the user input as feedback.
In some cases, the user experience platform monitors a performance of the content distribution campaign to obtain a performance metric, wherein the feedback is based on the performance metric. For example, in some cases, the user experience platform displays a representation of the performance metric via the user interface, and the content provider provides feedback based on the performance metric. An example of a user interface for monitoring a content distribution campaign is described with reference to
According to some aspects, the user experience platform obtains a context for the feedback. In some cases, the user experience platform obtains at least a portion of the context from a database (such as a database described with reference to
In some aspects, the context includes information in multiple modalities including a text modality and an image modality. In some aspects, the context comprises a user journey, an analytics context, an audience segmentation context, a campaign generation context, or any combination thereof. In some aspects, the context includes structured information representing a user journey, a campaign brief, or a campaign program.
In some cases, the user experience platform obtains the context in response to a change in a previous context (such as a display of a new user interface element, a change in a content provider interaction with a user interface such as the user interface described with reference to
In some cases, the user experience platform displays a feedback element via the user interface based on the interaction history. In some cases, the content generation apparatus displays the feedback element unsummoned. For example, in some cases, the content generation system monitors data corresponding to the content provider context, and displays the feedback element when new data is detected (such as a change in information displayed by a user interface) or when a predetermined feedback element trigger occurs (such as a change in a mouse position to an element of the user interface, an amount of time spent viewing one user interface, a selection of a user interface element, etc.). In some cases, the content generation apparatus displays the feedback element after the feedback element is summoned by a feedback element summoning content provider input (such as a selection of a summoning element displayed by the user interface, a keyboard shortcut input, a touchscreen swipe, an audio command provided by a microphone, etc.).
In some cases, the user experience platform receives a content provider input (e.g., a modification input) via the feedback element. In some cases, the content provider input to the feedback element is a text input (for example, a request to generate content). In some cases, the content provider input to the feedback element is an image input. In some cases, a multimodal encoder (such as the multimodal encoder described with reference to
At operation 615, the system generates content for a modified content distribution campaign based on the feedback using a machine learning model. In some cases, the operations of this step refer to, or are performed by, a machine learning model as described with reference to
According to some aspects, the user experience platform generates a prompt based on one or more of the context and the feedback. In some cases, the user experience platform generates the prompt by including the one or more of the context and the feedback in the prompt. In some cases, the user experience platform includes the feedback or the feedback embedding in the prompt. In some cases, the multimodal encoder encodes the prompt to obtain a prompt embedding in the multimodal embedding space.
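For example, the following non-limiting sketch illustrates one possible way to assemble a prompt from a context and feedback; the template, field names, and example values are hypothetical and are not a required prompt format.

```python
def build_prompt(context: dict, feedback: str) -> str:
    """Hypothetical prompt template combining campaign context and feedback."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are assisting with a content distribution campaign.\n"
        f"Campaign context:\n{context_lines}\n"
        f"Feedback to address:\n{feedback}\n"
        "Generate revised campaign content that reflects the feedback."
    )

# Example usage with placeholder context and feedback values.
prompt = build_prompt(
    {"objective": "increase engagement", "audience": "returning users"},
    "Optimize revenue but keep churn under 10%.",
)
```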
According to some aspects, the machine learning model receives the prompt or the prompt embedding as input. In some cases, the machine learning model includes a large language model trained to generate the content based on the prompt or the prompt embedding. In some cases, the machine learning model includes a transformer trained to generate the content based on the prompt or the prompt embedding. In some cases, the content is generated based on the information in the multiple modalities. For example, in some cases, the machine learning model generates the content based on the prompt embedding in the multimodal embedding space. According to some aspects, the user experience platform updates the machine learning model by providing the prompt to the machine learning model.
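For example, the following non-limiting sketch illustrates generating content from a prompt with a causal language model via the Hugging Face transformers library; the checkpoint name, the prompt text, and the decoding settings are illustrative assumptions rather than requirements of the present disclosure.

```python
# Illustrative content generation from a prompt; the checkpoint name is a
# placeholder, not a required model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("Campaign context: increase engagement among returning users.\n"
          "Feedback: optimize revenue but keep churn under 10%.\n"
          "Generate revised campaign content:")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True)
content = tokenizer.decode(output_ids[0], skip_special_tokens=True)
```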
According to some aspects, by generating the content based on a prompt including feedback and a context corresponding to a modification input, the machine learning model is able to identify a modification intent based on the modification input.
According to some aspects, the user experience platform includes the content in the content distribution campaign to obtain a modified content distribution campaign. According to some aspects, the user experience platform displays a representation of the modified content distribution campaign to the content provider via the user interface. An example of a representation of a modified content distribution campaign is described with reference to
According to some aspects, the machine learning model generates an additional content element based on the individual feedback for a corresponding content element of the set of content elements, where the modified content distribution campaign includes the additional content element.
According to some aspects, the user experience platform receives a content provider input indicating to accept the modified content distribution campaign. In some cases, the content provider input is provided to a user interface element corresponding to the representation of the modified content distribution campaign. In some cases, the user experience platform updates the content distribution campaign with the modified content distribution campaign based on the content provider input.
According to some aspects, the content generation system provides a content element of the modified content distribution campaign to a user identified by the modified content distribution campaign via a content distribution channel corresponding to the modified content distribution campaign.
An example of feedback provided based on a feedback element is described with reference to
Referring to
In some cases, the content provider provides feedback by selecting user interface elements corresponding to two of the user segments and providing an input to “Generate new plan” element 820 to instruct the content generation system to modify the content distribution campaign based on the selected user segments. User interface 900 displays a modified first user interface element 905 including first additional content element 910, second additional content element 915, and third additional content element 920, where first additional content element 910 and second additional content element 915 are included in a modified content distribution campaign package generated based on the content provider feedback.
An example of a user interface for feedback for a set of content elements is described with reference to
Referring to
As shown in
Referring to
At operation 1305, the system obtains training data. In some cases, the operations of this step refer to, or are performed by, a training component as described with reference to
At operation 1310, the system trains a machine learning model to generate content using the training data. In some cases, the operations of this step refer to, or are performed by, a training component as described with reference to
For example, in some cases, the training component provides one or more training prompts to a machine learning model (such as the machine learning model described with reference to
The term “loss function” refers to a function that impacts how a machine learning model is trained in a supervised learning model. For example, during each training iteration, the output of the machine learning model is compared to the known annotation information in the training data. The loss function provides a value (a “loss”) for how close the predicted annotation data is to the actual annotation data. After computing the loss, the parameters of the model are updated accordingly and a new set of predictions are made during the next iteration.
Supervised learning is one of three basic machine learning paradigms, alongside unsupervised learning and reinforcement learning. Supervised learning is a machine learning technique based on learning a function that maps an input to an output based on example input-output pairs. Supervised learning generates a function for predicting labeled data based on labeled training data consisting of a set of training examples. In some cases, each example is a pair consisting of an input object (typically a vector) and a desired output value (i.e., a single value, or an output vector). In some cases, a supervised learning algorithm analyzes the training data and produces the inferred function, which is used for mapping new examples. In some cases, the learning results in a function that correctly determines the class labels for unseen instances. In other words, the learning algorithm generalizes from the training data to unseen examples.
In some cases, the training component trains the machine learning model by updating the machine learning model parameters according to the loss function.
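For example, the following non-limiting sketch illustrates a minimal supervised training loop in which a loss computed between predictions and known labels is used to update model parameters; the model, data, loss function, and optimizer are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn

# Illustrative supervised training loop: predictions are compared to known
# labels via a loss function, and parameters are updated accordingly.
model = nn.Linear(16, 4)                      # placeholder model
loss_fn = nn.CrossEntropyLoss()               # example loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs = torch.randn(32, 16)                  # placeholder training inputs
labels = torch.randint(0, 4, (32,))           # placeholder annotations

for _ in range(10):                           # training iterations
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, labels)       # how close predictions are to labels
    loss.backward()
    optimizer.step()                          # update parameters per the loss
```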
The description and drawings described herein represent example configurations and do not represent all the implementations within the scope of the claims. For example, the operations and steps can be rearranged, combined, or otherwise modified. Also, in some cases, structures and devices are represented in the form of block diagrams to represent the relationship between components and avoid obscuring the described concepts. In some cases, similar components or features have the same name but have different reference numbers corresponding to different figures.
Some modifications to the disclosure are readily apparent to those skilled in the art, and the principles defined herein can be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
In some embodiments, the described methods are implemented or performed by devices that include a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. In some embodiments, a general-purpose processor is a microprocessor, a conventional processor, controller, microcontroller, or state machine. In some embodiments, a processor is implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). Thus, in some embodiments, the functions described herein are implemented in hardware or software and are executed by a processor, firmware, or any combination thereof. In some embodiments, if implemented in software executed by a processor, the functions are stored in the form of instructions or code on a computer-readable medium.
Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of code or data. In some embodiments, a non-transitory storage medium is any available medium that is accessible by a computer. For example, in some cases, non-transitory computer-readable media comprise random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk (CD) or other optical disk storage, magnetic disk storage, or any other non-transitory medium for carrying or storing data or code.
Also, in some embodiments, connecting components are properly termed computer-readable media. For example, if code or data is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, or microwave signals, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology are included in the definition of medium. Combinations of media are also included within the scope of computer-readable media.
In this disclosure and the following claims, the word “or” indicates an inclusive list such that, for example, the list of X, Y, or Z means X or Y or Z or XY or XZ or YZ or XYZ. Also the phrase “based on” is not used to represent a closed set of conditions. For example, a step that is described as “based on condition A” can be based on both condition A and condition B. In other words, the phrase “based on” shall be construed to mean “based at least in part on.” Also, the words “a” or “an” indicate “at least one.”
This application claims the benefit, under 35 U.S.C. § 119, of the filing date of U.S. Provisional Application No. 63/491,499, filed on Mar. 21, 2023, in the United States Patent and Trademark Office. The disclosure of U.S. Provisional Application No. 63/491,499 is incorporated by reference herein in its entirety.
Number | Date | Country
---|---|---
63491499 | Mar 2023 | US