The present specification generally relates to text processing and analysis, and more specifically, to analyzing informal texts made in association with posted content according to various embodiments of the disclosure.
With the prevalence of online content sharing platforms, users have been able to seamlessly publicize or otherwise share user-generated content on the Internet and/or other media. The online content sharing platforms provide the back-end technologies and computer data storage that enable users to post user-generated content to the platforms such that the user-generated content can be accessed by other users on the Internet. The user-generated content may include text (e.g., an article, a blog, etc.) and/or multi-media content (e.g., images, a video, etc.). When the user-generated content includes video content, an online content sharing platform may enable streaming of the content in real-time, such that other users may view the content as the content is being generated.
In order to facilitate engagement with the content and/or the content creator, many online content sharing platforms also enable viewers to provide feedback to the content. A common mechanism for enabling viewers to provide feedback is a “like” button and/or a “dislike” button. The “like” and “dislike” button mechanism enables the viewers to provide feedback very quickly, and allows an easy tally of positive and negative reactions to the content for the content creator. However, feedback that is received via the “like” and “dislike” button mechanism is limited to a single dimension. The “like” and “dislike” button mechanism also prevents the viewers from providing more elaborate opinions (e.g., including both positive and negative aspects in an opinion, etc.).
Some online content sharing platforms provide a comment mechanism that enables viewers to provide text-based comments in association with a content. The comment mechanism provides a text input box on a user interface (e.g., on the same user interface that presents the content, etc.), and viewers may insert free-form texts as comments via the comment mechanism for the content. Since the viewers can provide free-form texts as feedback to the content, the feedback can be multi-dimensional (or in an unlimited number of dimensions). One drawback of the comment mechanism is that because the feedback can include free-form texts (also referred to as “unstructured texts”), it is challenging (and time consuming) to review and/or understand the feedback in a cumulative manner. The problem is exacerbated when the volume of the comments is large (e.g., when the content generator and/or the content itself is popular, etc.). Thus, there is a need for developing a tool that automatically parses and analyzes feedback, and provides a meaningful summarization of the feedback to users.
Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same.
The present disclosure describes methods and systems for parsing and analyzing unstructured texts, and providing an interactive graphical representation that represents the unstructured texts in a summarized format. As discussed herein, content generators who share contents with others over a medium (e.g., the Internet, radio broadcast, television broadcast, broadcasted within a virtual reality environment such as a metaverse, etc.) often desire to interact with viewers of their content. One way to engage with the viewers is to enable the viewers to provide feedback in association with a content. For example, an online content sharing platform may provide, on a user interface (e.g., the user interface that presents the content or another user interface, etc.), one or more feedback mechanisms that enable the viewers to provide feedback to the content. The feedback mechanisms may include a “like” and/or “dislike” selector mechanism, a free-form text input mechanism, and others.
While the “like” and/or “dislike” selector mechanism is simple to use and easy for the content generator to understand quickly, it is limited to a single dimension and does not allow for more elaborate or meaningful feedback for the content. The comment mechanism solves this problem by enabling viewers to provide free-form texts (also referred to as “unstructured texts”) as feedback to the content. However, while the feedback provided via the comment mechanism can be multi-dimensional, it is challenging for the content generator to understand and analyze the feedback.
Consider an example of a fitness trainer who frequently shares fitness content (e.g., fitness videos) on an online content sharing platform. As a popular fitness trainer, her content may be viewed by millions of viewers. The feedback for each piece of content provided by the trainer may include hundreds or thousands of comments. The fitness trainer may wish to understand the preferences of her audience (e.g., what her audience likes or dislikes, what types of exercises they prefer, what types of exercises they find challenging, etc.), such that the fitness trainer can generate future content that better caters to the interests and preferences of her audience. However, manually reading through the hundreds or thousands of comments may take too much time and effort, which takes away from the time she has to generate new content.
Consider another example of a speaker giving a speech in a live environment. The speech may include different topics, and may be conducted in a fluid manner, where the speaker may determine to spend more time on one topic over another topic while the speech is being given. While a feedback mechanism may be used to provide immediate, real-time feedback to the speaker, it is a challenge for the speaker to understand the feedback in real-time while giving the speech, especially when numerous new comments are continuously being added as the speech is being given.
As such, according to various embodiments of the disclosure, a system (also referred to as a “feedback analysis system”) may automatically parse and analyze unstructured texts (e.g., the feedback in the form of free-form text inputs), and provide an interactive graphical representation of the unstructured texts. When content (e.g., text content, audio content, video content, etc.) is presented via a medium (e.g., over the Internet via an online content sharing platform, over a radio broadcast, over a television broadcast, broadcasted within a virtual reality environment, etc.), an online platform may provide an interface that enables viewers of the content to provide feedback to the content. Via the interface, viewers may provide various feedback to the content, such as a binary indication (e.g., a “like” or “dislike” indication), free-form texts, and/or other inputs.
The system may detect the feedback to the content uploaded to the online platform, and may obtain the feedback from a server associated with the online platform. In some embodiments, when the feedback includes unstructured texts, the system may perform a series of natural language processing operations on the unstructured texts to derive meaning from the comments. For example, using one or more machine learning-based natural language processing (NLP) models (e.g., the Bidirectional Encoder Representation from Transformers (BERT) model, spaCy, etc.) and other text analysis tools (e.g., term frequency-inverse document frequency (TF-IDF), etc.), the system may analyze each comment based on the words in the comment (e.g., frequencies of the words in the comment, the positions of the words in the comment, relationships of each word with other surrounding words in the comment, etc.).
The system may then derive different information about the comments based on the analysis. In some embodiments, the system may derive information on a global level and a local level. On a global level, the system may derive information that applies to the entire collection of comments in association with a content. For example, the system may determine, based on a frequency analysis (e.g., using the TF-IDF analysis), that certain keywords are more relevant to the overall feedback associated with the content than other words. Those keywords may appear more frequently than other words in the feedback, and may be unique to the content (e.g., the keywords do not appear as frequently in other documents or other types of comments, etc.).
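As one non-limiting illustration (not part of the claimed embodiments), a TF-IDF-style frequency analysis of this kind may be sketched in standard-library Python, assuming a simple whitespace tokenizer in place of a production TF-IDF library and treating a set of unrelated background documents as the comparison corpus:

```python
import math
from collections import Counter

def tfidf_keywords(comments, background, top_n=3):
    """Rank words in the comments by a TF-IDF-style score, using the
    background documents to estimate how common a word is in general."""
    docs = [d.lower().split() for d in background + comments]
    target = [w for c in comments for w in c.lower().split()]
    tf = Counter(target)
    n_docs = len(docs)
    scores = {}
    for word, count in tf.items():
        # Document frequency: in how many documents does the word appear?
        df = sum(1 for d in docs if word in d)
        idf = math.log(n_docs / df)  # words rare in the corpus score higher
        scores[word] = (count / len(target)) * idf
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]
```

A word that dominates the comments but is rare in the background corpus (e.g., an exercise name in fitness-video comments) would surface as a keyword, while generic filler words score near zero.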
In some embodiments, the system may cluster the words in the comments into different clusters, based on the relatedness of different words (e.g., how closely the words appear to each other in the comments). The system may identify one or more clusters having a density (e.g., an amount of connections, an amount of words, etc.) above a pre-determined threshold. The system may then designate at least some of the words within the identified clusters as keywords (or popular topics) associated with the content.
On a local level, the system may derive various information associated with individual comments. For example, the system may derive a sentiment from each comment. A sentiment is an attitude or a judgment toward an object. As such, the sentiment that the system derived from a comment may indicate an attitude or a judgment that a viewer (e.g., the viewer who provides the comment) has toward the content. In some embodiments, since a viewer may provide multiple comments for the same content on the online platform, the system may combine the comments that are associated with the same viewer, and analyze the comments collectively to derive the sentiment, such that the derived sentiment represents the overall attitude or judgment of the single viewer toward the content. In some embodiments, the sentiment can be binary in nature (e.g., positive or negative, etc.). In some embodiments, however, the sentiment can be a value on a spectrum (e.g., a value within a range, such as 0-100, where 0 indicates most negative and 100 indicates most positive).
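The mapping of a viewer's combined comments onto a sentiment spectrum can be illustrated with a deliberately simplified lexicon-based scorer (a stand-in for the machine learning-based NLP models named above; the word lists here are hypothetical examples, not part of the disclosure):

```python
# Hypothetical sentiment lexicons for illustration only.
POSITIVE = {"love", "great", "amazing", "fun", "helpful"}
NEGATIVE = {"hate", "boring", "awful", "confusing", "slow"}

def sentiment_score(comments):
    """Combine all comments from one viewer and map them onto a
    0-100 spectrum: 0 = most negative, 50 = neutral, 100 = most positive."""
    words = [w for c in comments for w in c.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 50  # no sentiment-bearing words: treat as neutral
    return round(100 * pos / (pos + neg))
```

Note that the comments are pooled per viewer before scoring, so the returned value reflects the viewer's overall attitude rather than any single comment; a trained NLP model would replace the lexicon lookup.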
In some embodiments, the derived sentiments can be multi-faceted. For example, the system may derive a sentiment toward the subject matter that is being presented in the content and another sentiment toward the presentation of the subject matter in the content. In some embodiments, the system may also associate specific sentiments toward different portions of the content, such as different segments of the content.
After deriving the information associated with the feedback, the system may generate and present, on a user interface of a device, an interactive graphical representation of the feedback. In some embodiments, the interactive graphical representation represents a summary of the feedback. Thus, instead of reading and parsing through thousands, or hundreds of thousands, of comments, a user may gain an accurate understanding of the feedback (and also the content) by merely viewing and interacting with the interactive graphical representation generated by the system.
In some embodiments, the interactive graphical representation may include a cluster of icons, where the centroid of the cluster represents a corresponding content associated with the feedback. The icons surrounding the centroid (and linked to the centroid in some embodiments) may represent different information derived from the feedback by the system. For example, the icons surrounding the centroid may represent sentiments of the viewers, where each icon may represent a sentiment associated with a distinct viewer. Each of the icons may be selectable. When an icon representing a particular sentiment of a viewer is selected, the system may present, on the user interface, the one or more comments posted by the viewer, from which the sentiment was derived.
In some embodiments, the icons surrounding the centroid may represent the keywords extracted from the feedback using the techniques described herein. For example, each icon surrounding the centroid may represent a distinct keyword extracted from the feedback. In some embodiments, the system may generate the cluster such that each of the icons may have one or more attributes (e.g., a size, a color, a distance from the centroid, etc.) that represent different characteristics of the keyword. For example, each icon may have a first attribute (e.g., a size attribute, a distance attribute, etc.) that represents a relatedness of the corresponding keyword to the overall feedback and/or the content. An icon having a larger size (or closer to the centroid) may indicate that the corresponding keyword has a higher relatedness (or correlation) to the feedback and/or the content, whereas an icon having a smaller size (or farther away from the centroid) may indicate that the corresponding keyword has a lower relatedness (or correlation) to the feedback and/or the content.
Each icon may also have a second attribute (e.g., a color attribute) that represents the sentiment of the comments that include the corresponding keyword. For example, a particular sentiment (e.g., a positive sentiment) may be represented by a first color and the opposite sentiment (e.g., a negative sentiment) may be represented by a second color. Based on the overall sentiment associated with the comments that include the corresponding keyword, a particular color (or a combination of colors) may be associated with the icon. In some embodiments, the icon may be divided into two portions, where one portion is associated with the first color and the other portion is associated with the second color. When the overall sentiment is neutral (e.g., when half of the comments that include the corresponding keyword are positive and the other half of the comments that include the keyword are negative), the portions are of equal size. However, when more comments that include the keyword are positive (or negative), the portion that represents the positive sentiment (or the negative sentiment) may be larger. In some embodiments, the system may mix the two colors according to a ratio that represents the overall sentiment (e.g., the ratio between the number of comments that have positive sentiment and the number of comments that have negative sentiment, etc.). The system may then present the icon in the mixed color.
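The color-mixing variant can be sketched as a linear blend of two RGB colors weighted by the positive-to-total comment ratio (the specific colors are illustrative placeholders, not part of the disclosure):

```python
def mix_icon_color(pos_count, neg_count,
                   pos_rgb=(0, 200, 0), neg_rgb=(200, 0, 0)):
    """Blend the positive and negative sentiment colors according to
    the ratio of positive comments among all sentiment-bearing comments."""
    total = pos_count + neg_count
    ratio = pos_count / total if total else 0.5  # no comments: neutral blend
    return tuple(round(p * ratio + n * (1 - ratio))
                 for p, n in zip(pos_rgb, neg_rgb))
```

An evenly split keyword yields a color halfway between the two endpoints, while a keyword mentioned only in positive comments renders in the pure positive color.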
In some embodiments, the icons that represent the keywords are also selectable. For example, upon receiving a selection of an icon in the interactive graphical representation, the system may present comments that include the corresponding keyword, and the sentiment analysis of the individual comments, on the user interface.
As such, by viewing and interacting with the graphical representation, a user can quickly understand the feedback and/or the content. In the instances where the content generator provides the content in real-time, by viewing and interacting with the graphical representation while presenting the content, the content generator can quickly digest the feedback and react to the feedback (e.g., determine what subject matter to be included in the content, modify the content, etc.).
In some embodiments, the system may also analyze feedback associated with multiple contents. It is common that a content generator may generate and share multiple contents (e.g., a series of related content, etc.) over a period of time. For example, a fitness trainer may generate and share a series of fitness training videos over a period of time. In another example, a fashion designer may generate and present a series of different designs over a period of time. The system may analyze the feedback associated with the series of content that were generated and shared over the period of time, and derive information for the series of content.
In some embodiments, the system may derive trend information based on analyzing the feedback data associated with the multiple contents. For example, the system may determine changes in the ratio between positive and negative feedback (e.g., comments associated with the positive sentiment and comments associated with the negative sentiment) across the different contents, and present the changes as a trend on the user interface.
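Computing such a trend reduces to calculating, per content in posting order, the share of positive comments; an illustrative sketch (assuming sentiment labels have already been derived per comment as described above):

```python
def sentiment_trend(feedback_by_content):
    """Given per-content lists of sentiment labels in posting order,
    return the positive-comment share for each content as a trend series."""
    trend = []
    for sentiments in feedback_by_content:
        if not sentiments:
            trend.append(None)  # no feedback yet for this content
            continue
        pos = sum(1 for s in sentiments if s == "positive")
        trend.append(round(pos / len(sentiments), 2))
    return trend
```

A rising series would indicate that successive contents are being received more positively, which is the change the user interface would plot.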
In some embodiments, the system may identify certain keywords that are associated with a first content having more positive feedback and that do not appear (or appear less frequently) in a second content having less positive feedback. The system may also identify certain keywords that are associated with a third content having more negative feedback and that do not appear (or appear less frequently) in a fourth content having less negative feedback. Those keywords may indicate the reasons (or indicators) why some contents have more positive (or negative) comments than others. As such, the system may present those keywords on the user interface to the user as signals to positive and negative feedback.
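At its simplest, this keyword comparison is a set difference over the per-content keyword lists extracted earlier; the following sketch is illustrative only and ignores the "appear less frequently" refinement in favor of strict absence:

```python
def signal_keywords(well_received_keywords, poorly_received_keywords):
    """Surface keywords unique to well-received content as candidate
    positive signals, and keywords unique to poorly received content
    as candidate negative signals."""
    positive_signals = set(well_received_keywords) - set(poorly_received_keywords)
    negative_signals = set(poorly_received_keywords) - set(well_received_keywords)
    return positive_signals, negative_signals
```

The returned sets are what the system would present on the user interface as signals to positive and negative feedback.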
In some embodiments, when feedback associated with multiple contents is being analyzed together, the system may generate two clusters of icons on the user interface, where each cluster of icons represents words included in the feedback associated with a distinct content. In some embodiments, when a word is included in comments associated with the two different contents, the icon that represents the word is linked to both centroids representing the two contents. As such, when the two clusters are presented on the user interface side by side, a user can easily tell the common words used in the comments associated with the two contents.
In some embodiments, the icon that represents a word shared by the comments of the two contents may show attributes that represent the sentiments from the comments of the two contents. In some embodiments, the system may divide the icon into two portions representing respective sentiments of feedback associated with the two contents. For example, the system may determine a first sentiment associated with feedback that includes the keyword and associated with a first content, and may fill a first portion of the icon with a first color representing the first sentiment. The system may also determine a second sentiment associated with feedback that includes the keyword and associated with a second content, and may fill a second portion of the icon with a second color representing the second sentiment. Thus, with a glance of the icon, a user can determine a shift of sentiment between the two contents in association with a particular keyword quickly.
In some embodiments, based on analyzing historic feedback related to various contents, such as by training a machine learning model using historic feedback data, the system may (e.g., using the trained machine learning model) predict sentiments of future feedback associated with a content based on an initial set of feedback. For example, when a content is initially published (or shared) for a first period of time (e.g., an hour, a day, or a week after the content is shared, etc.), the system may analyze the feedback posted during the first period of time, and provide the analytical data to the machine learning model. The analytical data may include a timing of the feedback, the words included in the feedback, and the sentiment derived from the feedback, etc. Based on the analytical data, the machine learning model may be configured and trained to predict a sentiment of other viewers who have yet to provide feedback associated with the content (or who have yet to even view the content). Those viewers may view the content and provide feedback during a second period of time after the first period of time. As such, the system may provide information about a predicted sentiment of viewers even before the feedback of the viewers is posted. Consider a speaker providing a speech during a live event. A server may be receiving up-to-date feedback from viewers. Since the event is a live event, the initial amount of feedback may be limited while the speech is being given. However, using the machine learning model, the system may predict additional feedback (e.g., sentiments of the additional feedback) that may be received in the future, and provide the prediction to the speaker. The predicted sentiment may allow the speaker to modify the speech on the fly (e.g., in order to improve the sentiment of the viewers).
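The shape of such a forecast can be illustrated with a deliberately simple stand-in for the trained model: a least-squares line fitted through the sentiment scores observed in the first period and extrapolated one step ahead, clamped to the 0-100 spectrum described above. This is a sketch of the prediction step only, not the disclosed machine learning model:

```python
def predict_sentiment(early_scores):
    """Extrapolate the next-period sentiment (0-100) from scores
    observed so far, via a least-squares trend line."""
    n = len(early_scores)
    if n == 0:
        return 50  # nothing observed: assume neutral
    if n == 1:
        return early_scores[0]
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(early_scores) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, early_scores))
             / sum((x - mean_x) ** 2 for x in xs))
    forecast = mean_y + slope * (n - mean_x)
    return max(0, min(100, round(forecast)))  # clamp to the spectrum
```

A trained model would additionally condition on the words and timing of the early feedback; the trend-line stand-in captures only the temporal component.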
The user device 110, in one embodiment, may be utilized by a user 140 to interact with the content hosting server 120, the service provider server 130, and/or other user devices similar to the user device 110 over the network 160. For example, the user 140 may use the user device 110 to post one or more contents on the content hosting server 120 via an interface generated by the interface server 124, may view contents posted by other users on the interface, and may submit a request to the service provider server 130 for analyzing feedback associated with content posted by the user 140 on the content hosting server 120.
The user device 110, in various embodiments, may be implemented using any appropriate combination of hardware and/or software configured for wired and/or wireless communication over the network 160. In various implementations, the user device 110 may include at least one of a wireless cellular phone, wearable computing device, PC, laptop, etc. The user device 110, in one embodiment, includes a user interface (UI) application 112 (e.g., a web browser), which may be utilized by the user 140 to interact with the content hosting server 120 and/or the service provider server 130 over the network 160.
In various implementations, the user 140 is able to input data and information into an input component (e.g., a keyboard) of the user device 110 to generate content, to post content on the content hosting server 120, and transmit various instructions to the content hosting server 120 and/or the service provider server 130.
Each of the devices 180 and 190 may be similar to the user device 110. Specifically, the users of the devices 180 and 190 may use the respective device to post contents on the content hosting server 120, view contents that are posted on the content hosting server 120, submit feedback to content that is hosted by the content hosting server, and/or transmit a request for analyzing feedback data to the service provider server 130.
The service provider server 130, in one embodiment, may be maintained by an online service provider, which may provide services (e.g., data analytics services, etc.) for users. The service provider server 130 may also include an interface server 134 that is configured to serve content (e.g., web content) to users and interact with users. For example, the interface server 134 may include a web server configured to serve web content in response to HTTP requests. In another example, the interface server 134 may include an application server configured to interact with a corresponding application (e.g., a service provider mobile application) installed on the user device 110 via one or more protocols (e.g., REST API, SOAP, etc.). As such, the interface server 134 may include pre-generated electronic content ready to be served to users. For example, the interface server 134 may store a feedback analysis request page and may be configured to receive requests from users for analyzing feedback. The interface server 134 may also include other electronic pages associated with the different services (e.g., a user interface for presenting feedback analysis data, etc.) offered by the service provider server 130. As a result, a user (e.g., the user 140 or other users of devices 180 and 190, etc.) may transmit requests and view/interact with feedback analysis data via one or more user interfaces provided by the interface server 134. For example, the interface server 134 may present a user interface that enables a user (e.g., the user 140, the users of the devices 180 and 190) to submit a request for analyzing feedback data. Via the interface provided by the interface server 134, the user (e.g., the user 140) may provide a network address associated with a content hosting server (e.g., the content hosting server 120) that stores the feedback data. In some embodiments, the user may provide a network address (e.g., a Uniform Resource Locator (URL), etc.) 
associated with the content hosting server 120 or the specific webpage that presents the feedback data in the request.
In a particular example, the user 140 may have generated or otherwise obtained content (e.g., a speech, a video, a blog, etc.), and may have uploaded the content to the content hosting server 120 via an interface provided by the interface server 124 for sharing. The interface may include one or more mechanisms, as described herein, that enable users to provide feedback to the content. For example, the interface may include a “like” and “dislike” selector mechanism that enables users to select a binary option (e.g., like or dislike) for the content. The interface may also include a comment mechanism that enables users to provide unstructured texts (e.g., free-form texts) as feedback to the content. The user 140 may desire to understand the sentiments and opinions of her viewers. For example, the user 140 may determine how to generate the next content based on the sentiments and opinions of viewers toward the content that has been shared via the content hosting server 120. The user 140 may continue to generate similar content (having similar topics or subject matters) when the sentiments and opinions are positive, and may determine to generate content of different topics or different subject matters when the sentiments and opinions are not positive. Thus, the user may transmit a request to the service provider server 130, via an interface generated by the interface server 134, for analyzing the feedback data associated with the content shared via the content hosting server 120. The user may provide the network address of the interface that presents the feedback data to the service provider server 130.
The service provider server 130 may include a feedback analysis module 132 for accessing and analyzing feedback data based on a request from a user (e.g., the user 140). In some embodiments, the feedback analysis module 132 may implement the functionality of the feedback analysis system as disclosed herein. In some embodiments, the feedback analysis module 132 may access feedback data stored on the content hosting server 120. For example, through one or more application programming interface (API) calls, the feedback analysis module 132 may transmit a request to the content hosting server 120 to obtain data presented on an interface generated by the interface server 124. In another example, the feedback analysis module 132 may access the interface generated by the interface server 124 using a user interface application (e.g., a web browser), and may scrape data from the interface. The interface may be used by the content hosting server 120 to present a user-generated content (e.g., a content generated by the user 140 or other users) and/or feedback related to the user-generated content. In some embodiments, based on a request and a network address from a user (e.g., the user 140) received via the interface provided by the interface server 134, the feedback analysis module 132 may access the feedback data based on the network address.
The interface 200 may also include various feedback mechanisms that enable users to provide feedback to the content 204. In this example, the interface 200 includes a “like” selector (e.g., a “like” button) 206, where a user may select the “like” selector 206 if the user has a positive opinion about the content 204. In some embodiments, the content hosting platform 120 may tally the total number of users who have selected the “like” selector 206 for the content 204, and may publish the number of “likes” on the interface 200.
The interface 200 in this example also includes an area 210 that implements a comment mechanism configured to receive feedback in the form of unstructured texts from users. As shown, the area 210 presents existing comments 212, 214, 216, and 218 that have been submitted by various viewers of the content 204. The area 210 also includes a text input box 220 that enables a user accessing the interface 200 to provide a new comment as feedback to the content 204. As discussed herein, the comment mechanism for obtaining feedback can be advantageous over the “like” and/or “dislike” selector mechanism as the comment mechanism allows viewers to provide feedback that can be more than a single dimension. Furthermore, through inputting free-form texts, viewers can describe and elaborate on their opinions in more detail using different descriptive words.
Using the example illustrated above, the user 140, who has posted the content 204 to the content hosting server 120, may transmit a request to the service provider server 130 for analyzing feedback to the content 204. The request may include a network address (e.g., the URL 202). Based on the URL, the feedback analysis module 132 may access the data associated with the interface 200, which may include the content 204, the tally of “likes” via the “like” selector mechanism 206, and the comments 212, 214, 216, and 218 submitted by various viewers of the content 204.
In some embodiments, the feedback analysis module 132 may use one or more machine learning-based natural language processing (NLP) models (e.g., the Bidirectional Encoder Representation from Transformers (BERT) model, spaCy, etc.) and other text analysis tools (e.g., term frequency-inverse document frequency (TF-IDF), etc.) to analyze the comments 212, 214, 216, and 218. For example, based on analyzing the words and the relationship of each word in a comment with respect to other words in the comment (e.g., using a machine learning-based NLP model), the feedback analysis module 132 may derive a sentiment for the comment. Through training the NLP model, the NLP model may recognize certain words that are generally associated with a positive sentiment and words that are generally associated with a negative sentiment. The NLP model may also be trained (based on training data, such as historical comments that are labeled with sentiment labels, etc.) to analyze each word based on its position within the comment and the surrounding words in either or both directions in the comment. By analyzing the words in context in this manner, a more accurate sentiment may be determined than by simply analyzing the words themselves.
Using the models and tools described herein, the feedback analysis module 132 may derive different information about the comments (e.g., the comments 212, 214, 216, and 218) based on the analysis. In some embodiments, the feedback analysis module 132 may derive information on a global level and a local level. On a global level, the feedback analysis module 132 may derive information that applies to the entire collection of comments (e.g., the comments 212, 214, 216, and 218, collectively). For example, the feedback analysis module 132 may determine, based on a frequency analysis (e.g., using the TF-IDF analysis), that certain keywords that appear in the comments are more relevant to the overall feedback to the content 204 than other words. Those keywords may appear more frequently than other words in the feedback (e.g., in the comments 212, 214, 216, and 218), and may be unique to the content (e.g., the keywords do not appear as frequently in other documents or comments to other contents, etc.).
In some embodiments, the feedback analysis module 132 may cluster the words in the comments 212, 214, 216, and 218 into different clusters, based on the relatedness of different words (e.g., how closely the words appear to each other in the comments). The feedback analysis module 132 may identify one or more clusters having a density (e.g., an amount of connections, an amount of words, etc.) above a pre-determined threshold. The feedback analysis module 132 may then designate at least some of the words within the identified clusters as keywords (or popular topics) associated with the content. The feedback analysis module 132 may then present these keywords in an interactive manner to allow a quick understanding of the content 204 and/or the feedback to the content 204 without viewing the content 204 or reading through the feedback. The interactive presentations of feedback will be discussed in more detail below by reference to
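The clustering step can be sketched minimally by linking words that co-occur in the same comment and keeping those whose number of connections meets a threshold. This is illustrative only; a production system might use a dedicated graph or topic-modeling library, and the density measure here (distinct neighbor count) is one assumption among the examples given above:

```python
from collections import defaultdict
from itertools import combinations

def keyword_clusters(comments: list[list[str]], min_degree: int = 2) -> set[str]:
    """Link words appearing in the same comment, then keep words whose
    number of distinct neighbors (a simple density measure) meets the
    pre-determined threshold."""
    neighbors = defaultdict(set)
    for tokens in comments:
        for a, b in combinations(set(tokens), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    return {word for word, ns in neighbors.items() if len(ns) >= min_degree}
```

Words that repeatedly appear alongside many other words survive the threshold and become candidate keywords, while isolated words drop out.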
On a local level, the feedback analysis module 132 may derive various information associated with each individual comment. For example, the feedback analysis module 132 may derive a sentiment from each comment. The sentiment derived from a comment may indicate an attitude or a judgment that a viewer (e.g., the viewer who provides the comment) has toward the content (e.g., the content 204). In some embodiments, since a viewer may provide multiple comments for the same content, the feedback analysis module 132 may combine the comments that are associated with the same viewer, and analyze the comments collectively to derive the sentiment, such that the derived sentiment represents the overall attitude or judgment of the single viewer toward the content. In some embodiments, the sentiment can be binary in nature (e.g., positive or negative, etc.). In some embodiments, however, the sentiment can be a value on a spectrum (e.g., a value within a range, such as 0-100, where 0 indicates most negative and 100 indicates most positive). As such, based on training a machine learning-based NLP model, the trained NLP model may provide an output based on data associated with the words included in one or more comments. The output may be a value (e.g., a value within the predetermined range) that indicates a sentiment of the one or more comments.
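The mapping of raw model outputs onto the 0-100 spectrum, and the combination of a single viewer's comments into one value, might be sketched as follows. The [-1, 1] raw output range and the simple average are assumptions made for illustration:

```python
def to_spectrum(raw: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Map a raw model output in [lo, hi] onto the 0-100 sentiment
    spectrum, where 0 is most negative and 100 is most positive."""
    raw = max(lo, min(hi, raw))  # clamp outliers into the expected range
    return (raw - lo) / (hi - lo) * 100.0

def viewer_sentiment(raw_scores: list[float]) -> float:
    """Average one viewer's per-comment scores so the result represents
    the viewer's overall attitude toward the content."""
    return sum(to_spectrum(r) for r in raw_scores) / len(raw_scores)
```

A viewer who posts one strongly positive and one strongly negative comment would land near the middle of the spectrum, reflecting a mixed overall attitude.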
In some embodiments, the derived sentiments can be multi-faceted. For example, the feedback analysis module 132 may derive a sentiment toward the subject matter that is being presented in the content and another sentiment toward the presentation of the subject matter in the content. In some embodiments, the feedback analysis module 132 may also associate specific sentiments toward different portions of the content, such as different segments of the content (e.g., different temporal range within an audio or a video, different paragraphs within an article, etc.).
After deriving the information associated with the feedback, the feedback analysis module 132 may generate and present, on a user interface (e.g., an interface generated by the interface server 134, such as a webpage), an interactive graphical representation of the feedback. In some embodiments, the interactive graphical representation represents a summary of the feedback. Thus, instead of reading and parsing through thousands, or hundreds of thousands, of comments, a user (e.g., the user 140) may gain an accurate understanding of the feedback (and also the content) by merely viewing and interacting with the interactive graphical representation.
Each of the icons 312, 314, 316, 318, 320, 322, and 324 is connected (e.g., linked) to the centroid 302. In some embodiments, the feedback analysis module 132 may generate the interactive graphical representation 300 such that each of the icons 312, 314, 316, 318, 320, 322, and 324 may have one or more attributes (e.g., a size, a color, a distance from the centroid, etc.) that represent different characteristics of the corresponding keyword. In this example, each of the icons 312, 314, 316, 318, 320, 322, and 324 has a size attribute that represents a relatedness of the corresponding keyword to the overall feedback and/or the content 204. An icon having a larger size, in this example, may indicate that the corresponding keyword has a higher relatedness (or correlation) to the feedback and/or the content, whereas an icon having a smaller size may indicate that the corresponding keyword has a lower relatedness (or correlation) to the feedback and/or the content. As such, based on viewing the interactive graphical representation 300, the user 140 (or any user viewing the interactive graphical representation 300) may immediately understand that the content 204 is about a new Lamborghini, and most viewers think the new car is fast and may beat other Japanese cars and/or European cars.
In some embodiments, each of the icons 312, 314, 316, 318, 320, 322, and 324 may also have a color attribute that represents the sentiments of the comments that include the corresponding keyword. For example, a particular sentiment (e.g., a positive sentiment) may be represented by a first color and the opposite sentiment (e.g., a negative sentiment) may be represented by a second color. Based on the overall sentiment associated with the comments that include the corresponding keyword, a particular color (or a combination of colors) may be associated with the icon. For example, the feedback analysis module 132 may fill the icon 312 representing the word “Lambo” with the first color when the comment(s) that include the word “Lambo” are positive, and may fill the icon 312 with the second color when the comment(s) that include the word “Lambo” are negative.
In some embodiments, the icon may be divided into two portions, where one portion is associated with the first color and the other portion is associated with the second color. When the overall sentiment is neutral (e.g., when half of the comments that include the corresponding keyword are positive and the other half of the comments that include the keyword are negative), the portions are of equal size. However, when more of the comments that include the keyword are positive (or negative), the portion that represents the positive sentiment (or the negative sentiment) may be larger. In some embodiments, the system may mix the two colors according to a ratio that represents the overall sentiment (e.g., the ratio between the number of comments that have positive sentiment and the number of comments that have negative sentiment, etc.). The system may then present the icon in the mixed color. Thus, without reading through the comments that include the keywords, one can easily tell how the keywords are associated with different sentiments by viewing the interactive graphical representation 300.
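The color-mixing behavior described above might be sketched as follows. The specific RGB values chosen for the positive and negative colors are arbitrary placeholders:

```python
def sentiment_color(pos: int, neg: int,
                    pos_rgb=(0, 200, 0), neg_rgb=(200, 0, 0)) -> tuple:
    """Mix the positive and negative sentiment colors according to the
    ratio of positive to negative comments for a keyword."""
    total = pos + neg
    if total == 0:
        # No comments: fall back to an even blend of the two colors.
        return tuple((p + n) // 2 for p, n in zip(pos_rgb, neg_rgb))
    weight = pos / total
    return tuple(round(p * weight + n * (1 - weight))
                 for p, n in zip(pos_rgb, neg_rgb))
```

An all-positive keyword renders in the first color, an all-negative keyword in the second, and an evenly split keyword in a halfway blend, so the sentiment mix is visible at a glance.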
In some embodiments, each of the icons 312, 314, 316, 318, 320, 322, and 324 is also selectable. For example, upon receiving a selection of an icon in the interactive graphical representation, the feedback analysis module 132 may present comments (from the comments 212, 214, 216, and 218) that include the corresponding keywords, and the sentiment analysis of the individual comments on the user interface.
As such, by viewing and interacting with the interactive graphical representation 300, a user can quickly understand the feedback and/or the content 204. In the instances where the content is viewed in real-time (e.g., a live event, a live speech, etc.), by viewing and interacting with the interactive graphical representation 300 while presenting the content, the content generator can quickly digest the feedback and react to the feedback (e.g., determine what subject matter to be included in the content, modify the content, etc.).
In some embodiments, the feedback analysis module 132 may configure and train a machine learning model to predict sentiments of future comments (comments that have not yet been submitted) based on existing comments for a content. As such, the sentiment data (e.g., color attributes) of the icons may represent not only sentiments of existing comments, but also predicted sentiments of future comments.
In some embodiments, the feedback analysis module 132 may also analyze feedback associated with multiple contents. It is common that a content generator (e.g., the user 140) may generate and share multiple contents (e.g., a series of related content, etc.) over a period of time. For example, a fitness trainer may generate and share a series of fitness training videos over a period of time. In another example, a fashion designer may generate and present a series of different designs over a period of time. The feedback analysis module 132 may analyze the feedback associated with the series of content that were generated and shared over the period of time, and derive information for the series of content.
In some embodiments, the feedback analysis module 132 may derive trend information based on analyzing the feedback data associated with the multiple contents. For example, the feedback analysis module 132 may determine changes in the ratio between positive and negative feedback (e.g., comments associated with the positive sentiment and comments associated with the negative sentiment) across the different contents, and present the changes as a trend on the user interface.
In some embodiments, the feedback analysis module 132 may present an integrated graphical representation to represent the feedback data associated with multiple contents.
In some embodiments, the icon that represents a word shared between the two contents (e.g., the icons 314 and 316) may show attributes that represent the sentiments from the comments of the two contents. In some embodiments, the feedback analysis module 132 may divide the icon into two portions representing respective sentiments of feedback associated with the two contents. For example, the feedback analysis module 132 may determine a first sentiment associated with feedback that includes the keyword and associated with a first content (e.g., the content 204), and may fill a first portion of the icon with a first color representing the first sentiment. The system may also determine a second sentiment associated with feedback that includes the keyword and associated with a second content (e.g., the new content), and may fill a second portion of the icon with a second color representing the second sentiment. Thus, with a glance at the icon, a user can quickly determine a shift of sentiment between the two contents in association with a particular keyword.
In some embodiments, instead of, or in addition to, representing keywords associated with the contents, the feedback analysis module 132 may include icons in an interactive graphical representation that represent sentiments of the viewers. For example, the interactive graphical representation may be generated such that each icon may represent a sentiment associated with a distinct viewer of the content.
In some embodiments, each of the icons 512, 514, 516, 518, 522, 524, and 526 is selectable. Upon receiving a selection of an icon, the feedback analysis module 132 may present, on the user interface, the comments submitted by the corresponding user.
The process 600 then analyzes (at step 610) the unstructured texts to derive analytical data and presents (at step 615), on a user device, a graphical representation of the analytical data. For example, the feedback analysis module 132 may use one or more machine learning-based NLP models and/or word analysis tools to analyze the comments obtained from the content hosting server 120 in association with the content 204. In some embodiments, the feedback analysis module 132 may derive a sentiment for each of the comments. In some embodiments, the feedback analysis module 132 may derive keywords that are relevant to the comments collectively. The feedback analysis module 132 may then present an interactive graphical representation (e.g., the interactive graphical representations 300, 400, and/or 500) on a device (e.g., the user device 110).
The process 600 receives (at step 620) a user interaction with an element in the graphical representation and presents (at step 625) information associated with a corresponding unstructured text based on the user interaction. For example, each icon in the interactive graphical representations 300, 400, and/or 500 is selectable. When a selection of an icon is received, the feedback analysis module 132 may present a corresponding comment on the user interface.
In another aspect of the disclosure, it has been contemplated that as demands for automated responses/dialogues increase, there is a need for providing a chatbot capable of engaging in advanced and intelligent conversations with users. Conventional chatbots fall into one of two categories. Chatbots under the first category are configured to perform mostly rule-based dialogues with users, with little to no creativity outside of the rules. These chatbots are used mainly by companies that operate commercial websites for assisting their users in navigating the websites or answering simple questions about transactions conducted through the websites. Due to their tight coupling with a particular website, these chatbots can typically provide predefined answers to specific questions regarding the operations of the website when the questions are within a predefined scope, but lack the general knowledge and creativity to provide ad-hoc responses to questions outside of the predefined scope. Chatbots under the second category are configured and trained with generic knowledge (e.g., content retrieved from the Internet, etc.), and can provide intelligent answers to general questions, but lack specific knowledge to answer questions related to individual websites.
As such, a chat system for providing a domain-specific and sentiment-aware chatbot is provided. The chat system may initially configure and train a chatbot (which can be implemented as a generative artificial intelligence/machine learning module) based on generic knowledge retrieved from the Internet. The chat system may enable a user to specify one or more data sources, such as one or more website domains, one or more articles, one or more databases, one or more files, etc. After digesting the content of the data sources, the chat system may generate one or more contexts for the chatbot, such that the chatbot may use the one or more contexts to answer user-generated questions specifically related to the data sources. Since the chatbot was trained using generic knowledge retrieved from the Internet, the chatbot is not limited to answering questions related to the website, and is capable of answering generic questions that are outside the domain of the website. In some embodiments, the chat system may be implemented as the chat module 136 as shown in
Upon receiving the domain input, the content retrieval module 704 may retrieve content related to the specified domain over the Internet. For example, if the specified domain comprises a network address associated with a website, the content retrieval module 704 may access the website associated with the network address. In some embodiments, the content retrieval module 704 may crawl through the website (e.g., by traversing the different webpages linked within the website, and parsing the content, such as the source code and presentable content, in the different webpages). Since some of the webpages may include programming code that is configured to display various presentations in response to various triggers (e.g., various user interactions with the webpages), the content retrieval module 704 may also comprise a simulator for simulating a browser application that accesses the website. The simulator may interact with the website similar to how an actual human user would interact with the website (e.g., scrolling through the pages, selecting different selectable items within the website, etc.), such that the various presentations can be accessed by the content retrieval module 704.
If the specified domain comprises a knowledge domain or an organization, the content retrieval module 704 may identify (e.g., through the use of one or more Internet search engines, etc.) one or more websites corresponding to the specified domain, and may perform the same process as described above to extract the content from the one or more websites. The content may then be passed to the embedding module 706 and the encoding module 708.
In some embodiments, the embedding module 706 may be part of an artificial intelligence module (e.g., part of the AI module 712, which may be implemented as a generative machine learning model such as a large language model (LLM), etc.), and configured to generate embeddings based on the content. For example, the embedding module 706 may parse the content and may derive meanings based on different portions of the content. The embedding module 706 may then embed the meanings derived from the different portions of the content. In some embodiments, the embeddings may be implemented in the form of vectors having multiple dimensions. The embedding module 706 may then store the embeddings (e.g., vectors) generated based on the content in the vector database 714. The embeddings stored in the vector database 714 may then be used by the AI module 712 for generating intelligent responses to user queries within the specified domain.
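Retrieval from the vector database is typically based on a similarity measure between the query embedding and the stored embeddings. The sketch below uses cosine similarity over a tiny in-memory dictionary that stands in for the vector database 714; names and the two-dimensional vectors are illustrative only:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec: list[float], vector_db: dict) -> str:
    """Return the stored passage whose embedding is closest to the query."""
    return max(vector_db, key=lambda key: cosine(query_vec, vector_db[key]))
```

In practice the embeddings would have hundreds or thousands of dimensions and the database would use an approximate nearest-neighbor index, but the retrieval principle is the same.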
The embedding module 706 can be effective in deriving accurate and complete context based on content that is in a natural language format, such as articles, papers, blogs that use complete sentences and paragraphs, since meanings can be derived solely from the sentences and/or paragraphs. However, in certain scenarios where the content includes discrete and/or disjointed words, the embedding module 706 may inadvertently omit such information. One such example is content within an electronic commerce website that shows availability and pricing for various items. Since the website may only provide discrete indicators such as “available,” “out of stock,” “price,” etc., the embedding module 706 may fail to derive sufficient meaning based on those words, and may not be able to generate embeddings that accurately describe the words. As a result, if only the embeddings generated by the embedding module 706 are provided to the AI module 712 as context, the AI module 712 would likely not be able to accurately respond to the query “which size of XYZ t-shirt in aqua color is sold out?” (e.g., providing an inaccurate response or a default response when the AI module 712 does not have the answer to the query, etc.).
As such, in addition to using the embedding module 706 to generate embeddings for the content, the chat module 700 of some embodiments may also use the encoding module 708 for encoding the content obtained by the content retrieval module 704. In some embodiments, the encoding module 708 may use a bi-directional encoding algorithm (e.g., Google®'s Bidirectional Encoder Representations from Transformers (BERT) model, ElasticSearch, etc.) to encode the content. As such, the encoding module 708 may parse the content in at least two directions (e.g., forward, backward, etc.), and may tokenize each word in the content based on the parsing. The token representing each word may include the word itself, a meaning of the word derived based on a relationship of the word with its neighboring words that come before and/or after the word, and other information. The tokens representing the words in the content may then be stored in the token database 716 for use by the AI module 712 in responding to user queries.
After specifying the domain, the user 140 may then provide, via the user interface provided by the interface module 702, a user query to the chat module 700. The query may be related to the domain previously specified by the user. For example, if the domain corresponds to an electronic commerce website, example queries may include “which size of XYZ t-shirt in aqua color is sold out?”, “Does xyz.com sell the Signature polo shirt in green?”, “what colors of the Signature t-shirt are available?” and other queries.
In some embodiments, the chat module 700 may use the AI module 712 to generate a response for the user query. As discussed above, the AI module 712 may be a generative machine learning model such as a large language model (LLM) (e.g., GPT models developed by OpenAI, etc.). In some embodiments, the AI module 712 may use the contexts generated by the embedding module 706 and the encoding module 708 to generate the response for the user query. Since the embedding module 706 and the encoding module 708 generated different contexts based on the same content, the context merging module 710 may be configured to merge the contexts (e.g., the embeddings stored in the vector database 714, the tokens stored in the token database 716, etc.) before providing the merged context to the AI module 712.
In some embodiments, each of the embeddings stored in the vector database 714 may include a string array, and each of tokens stored in the token database 716 may also include a string array. As such, in order to merge the contexts, the context merging module 710 may combine at least some of the string arrays from the vector database 714 with at least some of the string arrays from the token database 716. In some embodiments, the context merging module 710 may determine a merge ratio for merging the contexts generated by the embedding module 706 and the encoding module 708. The merge ratio may indicate how much weight is given to each of the contexts. For example, if a 50/50 merge ratio is determined, the context generated by the embedding module 706 and the context generated by the encoding module 708 may be given equal weights in generating the merged context. In some embodiments, when a 50/50 merge ratio is determined, the context merging module 710 may select equal numbers of string arrays from each of the vector database 714 and the token database 716 to generate a compiled set of string arrays.
If a 90/10 merge ratio is determined, the context generated by the embedding module 706 may be given a much larger weight (e.g., a weight of 90) than the context generated by the encoding module 708 (e.g., a weight of 10). For example, the context merging module 710 may select a larger number of string arrays from the vector database 714, and may select a smaller number of string arrays from the token database 716 (where the ratio of the selected string arrays is 90:10), and generate a compiled set of string arrays based on the selected string arrays.
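The ratio-based selection can be sketched as follows, treating each context as a list of string arrays and assuming a fixed total budget of entries in the merged context. The function and parameter names, and the idea of the lists being pre-sorted by relevance, are assumptions for illustration:

```python
def merge_contexts(embeddings: list[str], tokens: list[str],
                   ratio: float, budget: int) -> list[str]:
    """Build a compiled set of string arrays: `ratio` is the fraction of
    the budget taken from the embedding context, the remainder from the
    token context. Both inputs are assumed sorted by relevance."""
    n_embed = round(budget * ratio)
    n_token = budget - n_embed
    return embeddings[:n_embed] + tokens[:n_token]
```

With a 50/50 ratio the two sources contribute equal numbers of string arrays; with a 90/10 ratio the embedding context dominates the compiled set.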
In some embodiments, the context merging module 710 may determine the merge ratio based on the user query. For example, the context merging module 710 may analyze the user query, and may determine a merge ratio (e.g., an optimal ratio) that would enable the AI module 712 to provide an optimal (e.g., a most complete, a most accurate, etc.) response for the user query. In some embodiments, the context merging module 710 may use a machine learning model that is configured and trained to predict optimal merge ratios based on different user queries to determine the merge ratio for the user query. The context merging module 710 may then generate the merged context by merging the contexts generated by the embedding module 706 and the encoding module 708 according to the merge ratio. The merged context may be provided to the AI module for generating a response to the user query.
In some embodiments, the user query may be transformed into a prompt before feeding to the AI module 712 along with the merged context as inputs. The AI module 712 may generate a response based on the prompt and the merged context. For example, for the user query of “what colors of the Signature t-shirt are available?”, the AI module 712 may generate a response “the Signature t-shirt is available in red, orange, and blue.” In another example, for the user query of “which size of XYZ t-shirt in aqua color is sold out?”, the AI module 712 may generate a response “medium size for XYZ t-shirt in aqua color is sold out.” It is noted that the AI module 712 would likely not be able to generate such a response for this user query based solely on the context generated by the embedding module 706, but is able to generate the response based on the merged context by merging contexts generated by the embedding module 706 and the encoding module 708. The interface module 702 may provide the response to the user device 110 via the user interface.
In some embodiments, in addition to merging the contexts of the content generated by the embedding module 706 and the encoding module 708, the context merging module 710 may also insert sentiment context into the merged context. The sentiment context may represent sentiments of one or more people toward an item associated with the user query. For example, when the user query is related to a particular t-shirt sold at a particular website, the context merging module 710 may obtain sentiments associated with the particular t-shirt. In some embodiments, the sentiment context may be obtained using the techniques described above by reference to the feedback analysis system.
In some embodiments, the feedback used by the feedback analysis system for generating the sentiment context for an item is obtained via a third-party platform (e.g., a content sharing platform, a social media platform, a social networking platform, etc.). In some embodiments however, the feedback may be obtained through the chat module 700. For example, in addition to responding to user queries, the chat module 700 may configure the AI module 712 to generate sentiment queries to the users. In a non-limiting example, after responding to the user query related to a specific item from a website, the AI module 712 may be configured to ask the user about her opinion about the item, using queries such as "how do you like XYZ t-shirt?", "do you think XYZ t-shirt has good quality?", "how would you compare XYZ t-shirt with t-shirt from ABC brand?", etc. The AI module 712 may obtain answers from the users via the user interface, and provide the answers to the feedback analysis system as disclosed herein. The feedback analysis system may accumulate the feedback from users and generate the sentiment context that can be used as an additional input for the AI module 712 in generating responses to subsequent user queries related to the same item.
In some embodiments, the sentiment context may enable the AI module 712 to modify the response to the user query. For example, based on the sentiment context, the AI module 712 may alter a manner in which the response is generated (e.g., altering a tone of the response, etc.). In another example, based on the sentiment context, the AI module 712 may insert additional information into the response, such as a recommendation/suggestion when the item associated with the user query is not available (e.g., out of stock) based on what others find as good alternatives, providing a summary of user reviews of the item based on feedback from other users, a suggested configuration (e.g., a size configuration for clothing, an engine/motor configuration for a vehicle, etc.), and other additional information that is relevant and useful to the user.
Such a chat module 700 can be useful to any users who wish to obtain specific information associated with a domain without manually browsing through numerous websites/webpages. It is especially useful to certain groups of disabled people who may have difficulties in manually browsing and viewing websites/webpages (e.g., people with eyesight problems, etc.). Thus, in some embodiments, the chat module 700 may also include a module to transform the response to audio output to be provided through the user device 110, such that the user 140 may obtain information associated with a domain via conducting a verbal conversation with the chat module 700.
In some embodiments, the interface module 802 may be a part of (or communicatively coupled with) the interface server 134 to interact with user devices, such as the user device 110. The interface module 802 may provide a user interface through the interface server 134 and the UI application 112 for display on the user device 110. A user (e.g., the user 140) may interact with the chat module 800 via the user interface. For example, the user 140 may submit, to the chat module 800 via the user interface, queries (e.g., questions in a natural language format, etc.) related to one or more specific domains (e.g., one or more specific websites, one or more subject matters, etc.). In some embodiments, the user interface provided by the interface module 802 may enable (e.g., via a text input field displayed on the user interface) the user 140 to specify and/or upload one or more data sources, such as one or more web domains (e.g., website addresses, network addresses, etc.), one or more databases, one or more articles, one or more files, etc. In some embodiments, the interface module 802 may enable the user 140 to upload files (e.g., document files, multimedia files, etc.) to the chat module 800. The websites, documents, databases, or files that are specified by the user in association with a query may be collectively referred to as data sources 832.
If the user 140 specifies a location of the data source (e.g., a website address, a network address, etc.), the content retrieval module 804 may retrieve the content based on the specified location. Otherwise, the content retrieval module 804 may obtain the content directly from the user device 110 via the interface module 802. In some embodiments, the chat module 800 may generate additional context for the query based on the content specified by the user 140. For example, the chat module 800 may generate an enriched prompt that includes the context extracted from the content for one or more large language models (LLMs), such that the LLMs can consider the additional context for providing a response to the query. However, many LLMs have a size limit for providing the additional context in a prompt (e.g., 80 KB, etc.), while the content specified by the user may well exceed such a limit. As such, the chat module 800 may have to take a more selective approach in generating the context for the query, such that the context generated for the query is within the limit imposed by the one or more LLMs used in the chat module 800 for generating a response and the context is most relevant for generating the response for the query.
In some embodiments, in order to select the most relevant context for the query based on the data sources 832, the chat module 800 may use multiple ranking algorithms to generate rankings for the content in the data sources 832, and use a fusion algorithm to fuse the multiple rankings. For example, the embedding module 806 may generate embeddings for the query, and may generate embeddings for each of the data sources 832 specified by the user 140 in association with the query. The chat module 800 may then use different ranking algorithms to rank the content in the data sources 832 based on a comparison between the embeddings associated with the data sources 832 and the embeddings associated with the query. In some embodiments, the chat module 800 may use a semantic search module 808 for generating a first ranking of the content in the data sources 832. The semantic search module 808 may use a semantic search technique to rank the content in the data sources 832. Specifically, the semantic search module 808 may first derive meanings for the query and the data sources 832 based on words included in the query and the data sources 832. Instead of directly comparing words between the query and the content in the data sources 832, the semantic search module 808 may compare the meaning derived from the query against the meaning derived from each content in the data sources 832. The semantic search module 808 may then generate a first ranking for the content, where the ranking indicates how relevant the contents are to producing a response for the query.
In some embodiments, the chat module 800 may also use another ranking algorithm to generate a second ranking for the content in the data sources 832. For example, the chat module 800 may use the BM25 module 810 to generate the second ranking. Unlike the semantic search module 808, the BM25 module 810 may directly compare word attributes (e.g., a frequency of each word, etc.) of the query against word attributes of each content in the data sources 832. Using such a term-based search and/or ranking algorithm generally produces good results except in a few circumstances, such as when the query includes multiple (and possibly conflicting or competing) subject matters. In such circumstances, semantic-based ranking may produce better ranking results. By using both a term-based ranking algorithm and a semantic-based ranking algorithm, the chat module 800 may ensure that the ranking of the content is comprehensive and accurate.
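The term-based comparison can be sketched with the standard BM25 weighting formula; the disclosure names BM25 but not an implementation, so the tokenized-document representation and the default k1/b parameters below are conventional assumptions:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document against the query using the BM25
    term-frequency / inverse-document-frequency weighting.

    `docs` is assumed to map a document id to its list of tokens.
    """
    N = len(docs)
    avg_len = sum(len(d) for d in docs.values()) / N
    # document frequency of each query term
    df = {t: sum(1 for d in docs.values() if t in d) for t in query_terms}
    scores = {}
    for doc_id, terms in docs.items():
        tf = Counter(terms)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # term appears in no document
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(terms) / avg_len)
            score += idf * tf[t] * (k1 + 1) / denom
        scores[doc_id] = score
    return scores
```

Because the score is driven entirely by term frequencies, a document that never mentions a query term scores zero for that term, which is exactly the failure mode the semantic ranking compensates for.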
In some embodiments, the chat module 800 may use a rank fusion module 812 to fuse the two rankings generated by the semantic search module 808 and the BM25 module 810. In some embodiments, each of the semantic search module 808 and the BM25 module 810 may generate a respective score for each content in the data sources 832 using their corresponding ranking algorithms. As such, each content that appears in the first ranking generated by the semantic search module 808 and each content that appears in the second ranking generated by the BM25 module 810 may correspond to a score. In some embodiments, the rank fusion module 812 may fuse the two rankings based on the scores associated with the content. In some embodiments, the rank fusion module 812 may obtain a top number of content (e.g., a top 10 content, a top 20 content, etc.) from each ranking, and may fuse the ranking of the top content. If it is determined that a content appears in both of the top rankings, the rank fusion module 812 may increase the score of the content. Based on the score of the content, the rank fusion module 812 may generate a combined ranking for the top content. Content that scores high (e.g., above a threshold) in the combined ranking may be selected for enriching the context of the prompt based on the query.
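The fusion step described above can be sketched as follows; the top-N cutoff, the additive score boost for content appearing in both rankings, and the dict-of-scores representation are illustrative assumptions:

```python
def fuse_rankings(ranking_a, ranking_b, top_n=10, boost=1.0):
    """Fuse two score-based rankings: take the top-N entries of each,
    boost the score of content that appears in both, and return a
    combined ranking sorted by the fused score.

    Each ranking is a dict of {content_id: score}, higher is better.
    """
    def top(ranking):
        ids = sorted(ranking, key=ranking.get, reverse=True)[:top_n]
        return {i: ranking[i] for i in ids}

    a, b = top(ranking_a), top(ranking_b)
    fused = {}
    for cid in set(a) | set(b):
        score = a.get(cid, 0.0) + b.get(cid, 0.0)
        if cid in a and cid in b:
            score += boost  # reward content both rankers agree on
        fused[cid] = score
    return sorted(fused, key=fused.get, reverse=True)
```

Content near the top of the returned list (e.g., above a score threshold applied by the caller) would then be selected for enriching the prompt context.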
In some embodiments, similar to the context merging module 710 in the chat module 700, the rank fusion module 812 may also determine a merge ratio for merging the contents identified by the semantic search module 808 and the BM25 module 810 (e.g., in the respective rankings). The merge ratio may indicate how much weight is given to each of the contents. For example, if a 50/50 merge ratio is determined, the contents identified by the semantic search module 808 and the contents identified by the BM25 module 810 may be given equal weights in generating the combined ranking. If a 90/10 merge ratio is determined, the contents identified by the semantic search module 808 may be given a much larger weight (e.g., a weight of 90) than the contents identified by the BM25 module 810 (e.g., a weight of 10).
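The merge-ratio weighting can be sketched as a simple convex combination of the two rankers' scores; expressing the 90/10 ratio as a single fraction (0.9) and the dict-of-scores inputs are assumptions made for illustration:

```python
def weighted_merge(scores_semantic, scores_bm25, merge_ratio=0.5):
    """Combine the two rankers' scores under a merge ratio.

    merge_ratio=0.9 corresponds to a 90/10 split favoring the
    semantic ranking; merge_ratio=0.5 weights both equally.
    """
    combined = {}
    for cid in set(scores_semantic) | set(scores_bm25):
        combined[cid] = (merge_ratio * scores_semantic.get(cid, 0.0)
                         + (1 - merge_ratio) * scores_bm25.get(cid, 0.0))
    # return content ids in combined-ranking order, best first
    return sorted(combined, key=combined.get, reverse=True)
```

Under a 90/10 ratio, content ranked highly only by BM25 would need a very large term-based score to outrank content favored by the semantic ranking, which matches the weighting behavior described above.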
In some embodiments, the rank fusion module 812 may determine the merge ratio based on the user query. For example, the rank fusion module 812 may analyze the user query, and may determine a merge ratio (e.g., an optimal ratio) that would enable one or more AI models (e.g., the LLM 822 and/or the LLM 824) to provide an optimal (e.g., a most complete, a most accurate, etc.) response for the user query. In some embodiments, the rank fusion module 812 may use a machine learning model that is configured and trained to predict optimal merge ratios based on different user queries to determine the merge ratio for the user query.
Thus, the context enriching module 814 may use the selected content (e.g., top content from the combined ranking) to generate context for the query. In some embodiments, the context enriching module 814 may combine the query and the additional contents (e.g., the top ranked contents, etc.) to generate an enriched prompt, which may be provided to one or more AI models, such as LLM 822, LLM 824, etc. for generating outputs for the query.
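The prompt-enrichment step can be sketched as assembling the query and the selected contents into a single prompt string; the template wording and labels below are illustrative and not taken from the disclosure:

```python
def build_enriched_prompt(query, top_contents):
    """Combine the user query with the selected top-ranked contents
    into one enriched prompt for submission to an LLM."""
    context = "\n\n".join(f"[Context {i + 1}]\n{c}"
                          for i, c in enumerate(top_contents))
    return (f"Use the following context to answer the question.\n\n"
            f"{context}\n\nQuestion: {query}")
```

The resulting string would then be submitted to each selected model (e.g., LLM 822 and LLM 824) in place of the bare query.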
In some embodiments, the chat module 800 may enable the user 140, via the interface module 802, to identify one or more AI models (e.g., different versions of ChatGPT, Gemini, Llama, etc.) to be used for the query. Alternatively, the chat module 800 may select two or more different LLMs for use in generating responses for the query. By using responses generated by multiple different LLMs, the chat module 800 may generate a final response for the query that is more precise and accurate than using any response from a single LLM.
As such, the chat module 800 may obtain outputs from the LLM 822 and the LLM 824. In some embodiments, the summarization module 816 may generate a final response based on the outputs from the LLM 822 and the LLM 824. For example, in order to generate a precise and accurate response for the query, the summarization module 816 may extract only portions of the outputs that are common to both outputs (and ignore the remaining portions of the outputs) in generating the final response. In some embodiments, the summarization module 816 may also identify words and phrases that are "fluff" (e.g., fillers that carry no substantive bearing on answering the query) and remove those words and phrases from the final response. The chat module 800 may then provide the response to the user device 110 (e.g., for display via the UI application 112).
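The common-portion extraction can be sketched as keeping only the sentences that appear in both model outputs; the sentence-splitting heuristic and the case-insensitive overlap criterion are simplifying assumptions, and a production summarization module would likely use a softer similarity measure:

```python
def common_portions(output_a, output_b):
    """Keep only the sentences (normalized for case and spacing) that
    appear in both model outputs, preserving the order of the first
    output, and drop the portions unique to either output."""
    def sentences(text):
        return [s.strip() for s in text.split(".") if s.strip()]

    normalized_b = {s.lower() for s in sentences(output_b)}
    shared = [s for s in sentences(output_a) if s.lower() in normalized_b]
    return ". ".join(shared) + ("." if shared else "")
```

Under this sketch, any claim made by only one of the two models is excluded from the final response, which is the filtering behavior the paragraph above describes.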
The computer system 900 includes a bus 912 or other communication mechanism for communicating information data, signals, and information between various components of the computer system 900. The components include an input/output (I/O) component 904 that processes a user (i.e., sender, recipient, service provider) action, such as selecting keys from a keypad/keyboard, selecting one or more buttons or links, etc., and sends a corresponding signal to the bus 912. The I/O component 904 may also include an output component, such as a display 902 and a cursor control 908 (such as a keyboard, keypad, mouse, etc.). The display 902 may be configured to present a login page for logging into a user account, a checkout page for purchasing an item from a merchant, or a chat interface for facilitating an online chat session. An optional audio input/output component 906 may also be included to allow a user to use voice for inputting information by converting audio signals. The audio I/O component 906 may allow the user to hear audio. A transceiver or network interface 920 transmits and receives signals between the computer system 900 and other devices, such as another user device, a merchant server, or a service provider server via network 922. In one embodiment, the transmission is wireless, although other transmission mediums and methods may also be suitable. A processor 914, which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on the computer system 900 or transmission to other devices via a communication link 924. The processor 914 may also control transmission of information, such as cookies or IP addresses, to other devices.
The components of the computer system 900 also include a system memory component 910 (e.g., RAM), a static storage component 916 (e.g., ROM), and/or a disk drive 918 (e.g., a solid state drive, a hard drive). The computer system 900 performs specific operations by the processor 914 and other components by executing one or more sequences of instructions contained in the system memory component 910. For example, the processor 914 can perform the feedback analysis functionalities described herein according to the process 600.
Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to the processor 914 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. In various implementations, non-volatile media includes optical or magnetic disks, volatile media includes dynamic memory, such as the system memory component 910, and transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise the bus 912. In one embodiment, the logic is encoded in non-transitory computer readable medium. In one example, transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications.
Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read.
In various embodiments of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by the computer system 900. In various other embodiments of the present disclosure, a plurality of computer systems 900 coupled by the communication link 924 to the network (e.g., such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another.
Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa.
Software in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
The various features and steps described herein may be implemented as systems comprising one or more memories storing various information described herein and one or more processors coupled to the one or more memories and a network, wherein the one or more processors are operable to perform steps as described herein, as non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors, are adapted to cause the one or more processors to perform a method comprising steps described herein, and methods performed by one or more devices, such as a hardware processor, user device, server, and other devices described herein.
The present application claims priority to U.S. Provisional Patent Application Ser. No. 63/514,438, filed Jul. 19, 2023, and U.S. Provisional Patent Application Ser. No. 63/652,384, filed May 28, 2024, which are incorporated by reference herein in their entirety.
| Number | Date | Country |
|---|---|---|
| 63652384 | May 2024 | US |
| 63514438 | Jul 2023 | US |