DYNAMIC WORD CORRELATED TOPIC MACHINE LEARNING MODEL

Information

  • Patent Application
  • 20220147716
  • Publication Number
    20220147716
  • Date Filed
    November 15, 2021
  • Date Published
    May 12, 2022
  • CPC
    • G06F40/295
    • G06F16/355
    • G06F40/40
  • International Classifications
    • G06F40/295
    • G06F40/40
    • G06F16/35
Abstract
A system implements a dynamic word correlated topic model (DWCTM) to model an evolution of topic popularity, word embedding, and topic correlation within a set of documents, or other dataset, that spans a period of time. For example, the DWCTM receives the set of documents and a quantity of topics for modeling. The DWCTM processes the set, computing, for each topic, various distributions to capture a popularity, word embedding, and correlation with other topics across the period of time. In other examples, a dataset of user listening sessions comprised of media content items is received for modeling by the DWCTM. Media content metadata (e.g., artist or genre) of the media content items, similar to words of a document, can be modeled by the DWCTM.
Description
BACKGROUND

Machine learning relies on the development of algorithms and statistical models to execute tasks using pattern recognition and inference. In some cases, machine learning may provide opportunities to reduce the processing and memory resources expended in executing a particular task, such as by developing efficient machine-learned models that reduce expenditure of computer resources. For example, depending on a particular machine learning application, server machines having central processing unit (CPU), graphics processing unit (GPU), tensor processing unit (TPU), or application-specific integrated circuit (ASIC) components may be used to execute algorithms to perform some task or to achieve some operational result. Reducing unnecessary usage of the machine components may extend the time before equipment needs to be replaced or refreshed, resulting in less downtime. For example, machine learning algorithms may be tailored in attempts to realize an efficient use of server resources for predictive maintenance of the associated servers. Accordingly, machine learning may be used to improve the useful life of server infrastructure by reducing inefficient use of processor and memory resources.


Machine learning has been used in many different types of applications. As an example, machine learning has been used to develop topic models for describing the content of textual documents within a large document collection by discovering latent topics underlying the documents. For example, the words comprising a document come from a mixture of topics, where a topic is defined as a probability distribution over the words. Dynamic topic modeling is an established tool for capturing the temporal dynamics of the topics of a corpus. However, dynamic topic models can only consider a small set of frequent words because of their computational complexity and the insufficient data available for less frequent words. Moreover, conventional topic models do not consider word correlation. The assumption of word independence given the topic does not allow information sharing across words, which in practice limits the applicability of topic models to corpora with large vocabularies and short documents.


SUMMARY

In general terms, this disclosure is directed to dynamic topic models. In one possible configuration and by non-limiting example, a dynamic word correlated topic model (DWCTM) (also referred to as MIST) identifies underlying topics of a set of documents or user listening sessions that span a period of time, and models, for each topic, a topic popularity, a word embedding, and a correlation with other topics across the period of time to capture the evolution thereof. In some examples, the output of the DWCTM can be provided for post-processing such that the output can be utilized or further applied in a wide array of scenarios. Various aspects are described in this disclosure, which include, but are not limited to, the following aspects.


One aspect is a method for dynamic topic modeling with word correlation, the method comprising: receiving, as input to a DWCTM, a set of documents each comprised of a plurality of words and having associated timestamps, wherein the timestamps of the set of documents span a period of time; identifying, as input to the DWCTM, a quantity of topics for modeling; providing the set of documents as input to the DWCTM for modeling according to the quantity of topics identified; for each topic, modeling via the DWCTM: a document-topic distribution across the period of time to yield a popularity of each of the topics across the period of time; a topic-word distribution across the period of time that captures a correlation among the plurality of words to yield a word embedding; and a series of covariance matrices to yield a correlation of each topic with other topics across the period of time; and providing, as output of the DWCTM: the popularity of each of the topics across the period of time; the word embedding across the period of time; and the correlation of each topic with other topics across the period of time.


Another aspect is a system for dynamic topic modeling with word correlation, the system comprising: a DWCTM; and a server communicatively coupled to the DWCTM, the server comprising at least one processing device and a memory coupled to the at least one processing device and storing instructions, that when executed by the at least one processing device, cause the at least one processing device to: receive a set of documents each comprised of a plurality of words and having associated timestamps, wherein the timestamps for the set of documents span a period of time; identify, as input to the DWCTM, a quantity of topics for modeling; provide the set of documents as input to the DWCTM for modeling according to the quantity of topics identified; for each topic, model via the DWCTM: a document-topic distribution across the period of time to yield a popularity of each of the topics across the period of time; a topic-word distribution at given time points across the period of time that captures a correlation among the plurality of words to yield a word embedding; and a series of covariance matrices to yield a correlation of each topic with other topics across the period of time; and provide, as output of the DWCTM: the popularity of each of the topics across the period of time; the word embedding across the period of time; and the correlation of each topic with other topics across the period of time.


A further aspect is a system for dynamic topic modeling with word correlation related to user consumption of media content items over time, the system comprising: a DWCTM; and a server communicatively coupled to the DWCTM, the server comprising at least one processing device and a memory coupled to the at least one processing device and storing instructions, that when executed by the at least one processing device, cause the at least one processing device to: receive a set of user listening sessions each comprised of a plurality of media content items and having associated timestamps, wherein the plurality of media content items include one or more types of media content metadata and timestamps for the set of user listening sessions that span a period of time; identify, as input to the DWCTM, a quantity of topics for modeling; provide the set of user listening sessions as input to the DWCTM for modeling based on the quantity of topics identified; for each topic, model via the DWCTM: a media content item-topic distribution across the period of time to yield a popularity of each of the topics across the period of time; a topic-media content metadata distribution across the period of time that captures a correlation among the set of user listening sessions comprising a media content metadata embedding; and a series of covariance matrices to yield a correlation of each topic with other topics across the period of time; and provide, as output of the DWCTM: the popularity of each of the topics across the period of time; the media content metadata embedding across the period of time; and the correlation of each topic with other topics across the period of time.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an example environment that includes a modeling system that supports various types of dynamic topic models.



FIG. 2 illustrates example inputs and outputs to a dynamic correlated topic model (DCTM).



FIG. 3 is an example method of processing performed by the DCTM.



FIG. 4 illustrates a list of topics output by the DCTM.



FIG. 5 is an example graph depicting an evolution of topic popularity.



FIG. 6 is an example graph depicting an evolution of topic representation.



FIG. 7 is an example graph depicting an evolution of topic correlations.



FIG. 8 is a conceptual diagram illustrating example phases of the DCTM.



FIG. 9 illustrates an example training phase of the DCTM.



FIG. 10 illustrates an example predicting phase of the DCTM.



FIG. 11 illustrates an example application system.



FIG. 12 illustrates an example application system for recommending media content items for playback facilitated by output of the DCTM or the dynamic word correlated topic model (DWCTM).



FIG. 13 is a conceptual diagram illustrating a DCTM or DWCTM that models user consumption of media content items over time for facilitating media content item recommendations.



FIG. 14 illustrates an example media content item recommendation facilitated by output of the DCTM or DWCTM.



FIG. 15 illustrates example inputs and outputs to the DWCTM.



FIG. 16 is an example method of processing performed by the DWCTM.



FIG. 17 illustrates an example training phase of the DWCTM.



FIG. 18 is a diagram of a comparison analysis on the effect of the meta-encoder.



FIG. 19 illustrates a comparison of DWCTM with several static and dynamic topic models.



FIG. 20 illustrates a performance analysis empirically evaluating the DWCTM with synthetic datasets.



FIG. 21 illustrates the mean of topics over time as computed by the DWCTM 1501.



FIG. 22 is a visualization of word counts of a sample dataset used with DWCTM 1501.



FIG. 23 is a visualization of a word probability for a topic associated with the dataset of FIG. 22.





DETAILED DESCRIPTION

Various embodiments will be described in detail with reference to the drawings, wherein like reference numerals represent like parts and assemblies throughout the several views. Reference to various embodiments does not limit the scope of the claims attached hereto. Additionally, any examples set forth in this specification are not intended to be limiting and merely set forth some of the many possible embodiments for the appended claims.



FIG. 1 illustrates an example environment 100 that includes a modeling system 106 having a dynamic topic model 107 to identify topics and model an evolution of various parameters. As discussed below, the modeling system 106 can implement and support different types of dynamic topic models 107. According to one aspect, modeling system 106 implements a dynamic correlated topic model (DCTM) 201 (see for example FIGS. 2, 3, and 9) to identify topics and model an evolution of topic popularity, topic representations, and correlations in co-occurrences of topics within a collection of data that spans a period of time. According to another aspect, modeling system 106 implements a dynamic word correlated topic model (DWCTM) 1501 (also referred to as MIST) (see for example FIGS. 15-17) to identify topics and model an evolution of topic popularity, word embedding, and correlations in co-occurrences of topics within a collection of data that spans a period of time.


The modeling system 106 is hosted by a service 102. In some examples, the service 102 can also host another system that utilizes outputs 116 of a dynamic topic model 107 as inputs to derive additional information, referred to hereinafter as application system 110. As one non-limiting example, the service 102 is a media streaming service that includes a media delivery system that utilizes the outputs 116 of a dynamic topic model 107 for generating, monitoring, and/or providing media content items to users for consumption, as discussed in greater detail with references to FIGS. 11-14 below. One or more servers or processors 104 are operable to execute one or more components of the service 102, including the modeling system 106 and the application system 110. Additionally or alternatively, one or more application systems 110 can be hosted externally from the service 102.


The service 102 can also include one or more databases 112 for storing at least a portion of inputs 114 to a dynamic topic model 107, such as DCTM 201 or DWCTM 1501 for example. The service 102 can receive the inputs 114 directly from client devices 122 or indirectly from one or more third party services 124 that collect the inputs 114 from the client devices 122 or otherwise store the inputs 114 in a database, library, or archive, for example. The service 102 can receive the inputs 114 over a network 120.


The inputs 114 include the data collection and a quantity of topics to be modeled. The data collection is a set of data items that span a period of time, where the data items are comprised of a vocabulary that can be clustered or grouped to represent one or more topics. A topic is defined as a probability distribution over the vocabulary. As one example and as described with reference to FIGS. 2 to 10, the data collection is a set of textual documents, and words of the textual documents are the vocabulary. For example, each textual document is comprised of words that can be clustered or grouped to represent topics, where a topic is a probability distribution over the words. Additionally, one or more of the textual documents of the set have associated time stamps spanning a period of time. For example, DCTM 201 can dynamically model the topics over the period of time using continuous processes to understand how the popularity, representation, and correlation of the topics have evolved over time. As a further example, DWCTM 1501 can dynamically model the topics over the period of time using continuous processes to understand how the popularity, word embedding, and correlation of the topics have evolved over time.


The modeling system 106 can also be useful for processing other data types, such as a collection of timestamped user listening sessions comprised of media content items, to understand how user consumption of media evolves over time and apply that understanding to enhance future listening experiences. Therefore, as another example and as described with reference to FIGS. 11 to 14, the data collection is a set of user listening sessions comprised of media content items, and media content metadata is the vocabulary. For example, one or more types of media content metadata associated with the media content items, such as artists, genres, and mood, can be clustered or grouped to represent one or more topics, where a topic is defined as a probability distribution over a respective metadata type. DCTM 201 or DWCTM 1501 can be applied by modeling system 106 to generate such probability distributions as described below.


The modeling system 106 provides the inputs 114 to the dynamic topic model 107 for modeling. For example, modeling system 106 provides the inputs 114 to DCTM 201 or DWCTM 1501 for modeling. The dynamic topic model 107 is configured to analyze the data items and identify underlying topics by clustering the respective vocabulary into the specified quantity of topics. The outputs 116 of the dynamic topic model 107 include a list of topics, each topic defined by a cluster of the respective vocabulary representing the topic. When modeling system 106 deploys DCTM 201, the outputs 116 include, for each topic, a topic popularity, a topic representation, and a correlation with other topics at given time points across the period of time to illustrate the evolution thereof, thereby extending capabilities of traditional correlated topic models. When modeling system 106 deploys DWCTM 1501, the outputs 116 include, for each topic, a topic popularity, a word embedding, and a correlation with other topics at given time points across the period of time to illustrate the evolution thereof, also extending capabilities of traditional correlated topic models.


For example, in a traditional correlated topic model (CTM), a correlation in co-occurrence of topics is modeled in a non-dynamic manner. In some examples, one or more of the topic popularity and topic representation are also modeled in a non-dynamic manner by the traditional CTM. As described in greater detail with reference to FIG. 9, the DCTM 201 extends the capabilities of the traditional CTM by incorporating temporal dynamics through the use of continuous processes, including Gaussian processes and a generalized Wishart process, to jointly model the evolution of the popularity of topics, representations of topics, and their correlations over time. Additionally, as described in greater detail with reference to FIG. 9, a stochastic variational inference (SVI) method is used that enables mini-batch training of the DCTM 201 with stochastic gradient descent methods to enable faster training with lower processing requirements, and is scalable to large amounts of data such that the DCTM 201 can be used to analyze large data collections received as inputs 114.


As described in greater detail with reference to FIG. 17, the DWCTM 1501 further extends capabilities of topic models by incorporating temporal dynamics through the use of continuous processes by exploiting the correlation among the words in a vocabulary. The DWCTM 1501 incorporates a multi-output Gaussian process (MOGP) to correlate previously independent temporal processes for words and reliably estimate topic representations that contain less frequent words. For efficient inference, an amortized variational inference method is provided that dramatically reduces computational complexity of that MOGP as well as reducing a number of variational parameters. As described below, a self-normalized importance sampling approximation is applied to a word distribution in a variational lower bound to avoid sampling of all the words in the vocabulary, resulting in additional processor and memory savings.
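
As a non-limiting illustration of the general self-normalized importance sampling idea (and not the exact estimator used by the DWCTM 1501), the following sketch estimates an expectation under a softmax distribution over a large vocabulary while only evaluating a randomly sampled subset of words; the array names, sizes, and uniform proposal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 50_000                    # vocabulary size (illustrative)
scores = rng.normal(size=V)   # unnormalized word scores, e.g., logits of a word distribution
values = rng.normal(size=V)   # per-word quantity whose expectation under softmax(scores) is needed

# Exact expectation requires touching every word in the vocabulary.
p = np.exp(scores - scores.max())
p /= p.sum()
exact = np.sum(p * values)

# Self-normalized importance sampling with a uniform proposal over word indices:
# only S words are evaluated, avoiding a pass over the full vocabulary.
S = 2_000
idx = rng.integers(0, V, size=S)
log_w = scores[idx]           # log p_unnorm - log q; the uniform proposal contributes a constant
w = np.exp(log_w - log_w.max())
w /= w.sum()                  # self-normalization removes the unknown normalizing constants
estimate = np.sum(w * values[idx])
print(exact, estimate)
```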


The outputs 116 of the dynamic topic model 107 can be stored in the databases 112 and/or provided to the application system 110 as inputs for further processing. Processed outputs 118 generated by the application system 110 can be provided to the client devices 122 or the third party services 124. In some examples, the outputs 116 can be provided to the client devices 122 or the third party services 124, alternatively or in addition to the processed outputs 118.



FIG. 2 illustrates a conceptual diagram 200 of example inputs to and outputs from the DCTM 201. As one example, the inputs to the DCTM 201 include a set 202 of documents 204 and a quantity of topics 208 (e.g., N topics) for modeling. Each document 204 is a textual document comprised of a plurality of words 205 and having an associated timestamp 206, the timestamps 206 defining a period of time that the set 202 of documents 204 spans. In some examples, more than one document 204 can be associated with a same timestamp 206. A level of granularity at which the timestamp 206 is processed (e.g., by year, by month, by week, daily, hourly, etc.) can be variable depending on a type of document 204. The granularity can also be based on how rapidly topic popularity, topic representations, and topic correlations are likely to change. For example, if change is more likely, processing the documents 204 with smaller timestamp intervals (e.g., monthly versus yearly) can be desirable.
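
As a non-limiting illustration, the following sketch maps document timestamps 206 to a chosen granularity before modeling; the function name and granularity labels are illustrative assumptions.

```python
from datetime import date

def bucket_timestamp(ts: date, granularity: str) -> str:
    """Map a document timestamp to the time index used for modeling."""
    if granularity == "year":
        return f"{ts.year}"
    if granularity == "month":
        return f"{ts.year}-{ts.month:02d}"
    if granularity == "week":
        iso_year, iso_week, _ = ts.isocalendar()
        return f"{iso_year}-W{iso_week:02d}"
    raise ValueError(f"unsupported granularity: {granularity}")

print(bucket_timestamp(date(1987, 6, 15), "year"))   # '1987'
print(bucket_timestamp(date(1987, 6, 15), "month"))  # '1987-06'
```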


A topic is a probability distribution over a vocabulary. Here, the vocabulary is the words 205 of the documents 204. Thus, the topics identified by the DCTM 201 are each a cluster or grouping of a subset of the words 205, as illustrated and described in greater detail with reference to FIG. 4. The topics are not assumed to be independent of one another. Accordingly, two or more topics can be represented by common words (e.g., a same word can represent more than one topic).


The quantity of topics 208 received as input informs the DCTM 201 of a number of word clusters to identify. Depending on the quantity of topics 208 provided as input and a number of topics that can be drawn from the set 202 of documents 204, a list of topics 210 output by the DCTM 201 can include all topics inferred or the N most probable topics for the documents 204 where N corresponds to the quantity of topics 208. As the topics are clusters of the words 205, in some examples, the most probable words associated with a topic are provided as the topic within the list of topics 210.


Additionally, for each topic 212 in the list of topics 210 (e.g., for each topic 1 through topic N), the DCTM 201 can provide topic popularity 214, topic representation 216, and topic correlation 218 at given time points across the period of time that the set 202 of documents 204 span to illustrate how the popularity, representations, and correlations among topics have evolved over time for the set 202.


As described in more detail with reference to FIG. 5, the topic popularity 214 can be based on a document-topic distribution output by the DCTM 201. A document-topic distribution is a distribution over the topics for each document 204. For example, a document is comprised of x % of topic 1, y % of topic 2, z % of topic 3, etc., where the percentages across all topics sum to 100%. As described in more detail with reference to FIG. 6, the topic representation 216 can be based on a topic-word distribution output by the DCTM 201. A topic-word distribution is a distribution over the words for a given topic. For example, the distribution can include the words most frequently associated with the given topic. As described in more detail with reference to FIG. 7, the topic correlation 218 is a relationship strength between a topic and one or more other topics at a given point in time, which captures the evolution of topic correlations over time. The topic correlation 218 can be based on covariance matrices that yield a correlation coefficient measuring the relationship strength.
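
As a non-limiting illustration of how a correlation coefficient can be obtained from a covariance matrix at one time point, the following sketch normalizes an example 3-topic covariance matrix; the numeric values are illustrative.

```python
import numpy as np

def covariance_to_correlation(sigma: np.ndarray) -> np.ndarray:
    """Convert a K x K topic covariance matrix into correlation coefficients in [-1, 1]."""
    std = np.sqrt(np.diag(sigma))
    corr = sigma / np.outer(std, std)
    return np.clip(corr, -1.0, 1.0)

# Illustrative covariance among three topics at a single time point.
sigma_t = np.array([[1.0, 0.6, -0.2],
                    [0.6, 2.0,  0.1],
                    [-0.2, 0.1, 0.5]])
print(covariance_to_correlation(sigma_t))  # off-diagonal entries are the topic correlations
```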



FIG. 3 is an example method 300 of processing performed by the DCTM 201. Upon receipt of inputs, such as the set 202 of documents 204 and quantity of topics 208 for modeling described with reference to FIG. 2, the DCTM 201 initiates the method 300 at operation 302 by analyzing words from each document 204 with the set 202 of documents 204. Based on the analysis, the DCTM 201 identifies topics at operation 304. The topics identified are for the set 202 of documents 204. The topics are comprised of clusters of words that represent the topics. At operation 306, for each topic identified at operation 304, the DCTM 201 models topic popularity, topic representation, and topic correlation across the period of time, as described in more detail with reference to FIG. 9. In some examples, the DCTM 201 is an unsupervised machine learning model that performs one or more of operations 302, 304, and 306 iteratively to learn the underlying topics and dynamics of each topic over time as part of training, for example. The DCTM 201 can be trained on at least a portion of the set 202 of documents 204 provided as inputs.



FIG. 4 is a conceptual diagram 400 that illustrates the list of topics 210 output by the DCTM 201 (or list of topics 1510 output by the DWCTM 1501). In FIG. 4, a non-limiting example scenario is presented where the set 202(D) of documents 204(d) provided as input to the DCTM 201 are a set of conference papers related to a neural information processing systems conference, and the quantity of topics 208 provided as input to the DCTM 201 is 7 topics. The set of conference papers includes 5,804 conference papers from each year the conference was held spanning from 1987 to 2015, with an average of about 34 papers per year. Each conference paper is comprised of a plurality of words 205 (e.g., Nd words). Each conference paper is also associated with a timestamp 206 (td) reflecting the year that the paper was published. For example, the about 34 papers published for the 1987 conference are associated with 1987 via the timestamp 206.


The list of topics 210 provided as output of the DCTM 201 includes 7 topics (e.g., topics 1 through 7) corresponding to the quantity of topics 208 provided as input. As illustrated, the topics 402 are represented by clusters 404 of the plurality of words 205. For example, as shown in a blown up representation 406 of the list of topics 210, the top thirty most probable words associated with topics 1, 6, and 7 are displayed. The topics 402 are not assumed to be independent from one another. Accordingly, two or more topics can have common words 408, as illustrated by the common words 408 associated with at least two of the topics 1, 6, and 7 that are highlighted in the blown up representation 406. The DCTM 201 is robust enough to discriminate between the topics 402 that share common words 408, and is also able to consider multiple topics with similar interpretation, such as topics 6 and 7, and split facets of a single topic into more than one topic. Additionally, as described in greater detail below in FIG. 7, the presence of common words 408 among the topics 1, 6, and 7 can indicate a positive correlation between each of the topics.


In some examples, the topics can later be labeled or categorized based on a general subject matter reflected by the words. As one example, the topic 1 can be labeled or categorized as neural networks. As another example, topics 6 and 7 can be labeled or categorized as neuroscience.


The conference paper dataset scenario presented in FIG. 4 is continued in FIGS. 5, 6, and 7 to depict examples of the additional outputs of the DCTM 201, including topic popularity, topic representation, and topic correlation for the topics 402 in the list 210 across the period of time spanned by the set 202 of documents 204.



FIG. 5 is an example graph 500 depicting an evolution of topic popularity. An x-axis 502 of the graph 500 represents time, particularly each year spanning from 1987 to 2015 (e.g., the timeframe for the conference paper dataset). A y-axis 504 of the graph 500 represents a document-topic distribution at given time points across the period of time. Specifically, the y-axis 504 can represent a mean of the document-topic distribution (e.g., σ(ηd), described in more detail with reference to FIG. 9 below) for each document at a given time point yielded by the DCTM 201.


A document-topic distribution is a distribution over the topics for a document. An example distribution over the topics for a document at a given time point is as follows: x % of topic 1, y % of topic 2, z % of topic 3, etc., where the sum of the percentages for each topic equals 100%. For example, as shown in the graph 500, for a document from 1987, the document-topic distribution is likely to be 8% topic 7, 6% topic 6, 4% topic 5, and so on. A legend 506 indicates a visual scheme to distinguish a proportion of the document that each of the topics 1 through 7 make up as part of the document-topic distribution. For additional clarity, the corresponding data may also be labeled with each of the topics within the graph 500, as illustrated.


As illustrated by the graph 500, a distribution for each of the topics 1, 6 and 7 has generally decreased as time has progressed from 1987 to 2015. This decrease in distribution corresponds to a decrease in trends or decrease in popularity for these topics within conference papers related to neural information processing systems.



FIG. 6 is an example graph 600 depicting an evolution of topic representation. An x-axis 602 of the graph 600 represents time, particularly each year spanning from 1987 to 2015 (e.g., the timeframe for the conference paper dataset). A y-axis 604 of the graph 600 represents a word-topic probability for a given word being associated with a given topic (e.g., topic 3) at a given year. The word-topic probability is computed based on a distribution over the words for the given topic (e.g., the topic-word distribution output by the DCTM 201 that is derived from β as described with reference to FIG. 9 below, or the topic-word distribution output by the DWCTM 1501 as described with reference to FIG. 17 below). For example, the cluster of words that represents the given topic includes words associated with the topic accounting for an entirety of the time period across all documents (e.g., words that represent the topics from 1987 to 2015 shared by all documents). The word-topic probability is a likelihood that a given word from that cluster represents the given topic at the given year. The word-topic probability is a value from 0 to 1, where 0 indicates no likelihood, and 1 indicates a certain likelihood. The 50th percentile is shown in the solid line, with the credible interval between the 5th and 95th percentiles shown in the shaded areas. A legend 606 indicates a visual scheme to distinguish the word-topic probability for each of a subset of words that represent a given topic (e.g., topic 3) within the list of topics described with reference to FIG. 4.
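
As a non-limiting illustration of how such percentile bands can be produced, the following sketch assumes posterior samples of β are available as an array of shape (samples, time points, topics, vocabulary); that layout is an assumption for illustration only.

```python
import numpy as np

def word_probability_bands(beta_samples: np.ndarray, topic: int, word: int) -> np.ndarray:
    """Return the 5th/50th/95th percentiles of a word's probability within a topic over time."""
    scores = beta_samples[:, :, topic, :]                        # (samples, time, vocabulary)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)                   # softmax over the vocabulary
    word_probs = probs[:, :, word]                               # (samples, time)
    return np.percentile(word_probs, [5, 50, 95], axis=0)        # shape (3, time)
```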


As illustrated in FIG. 6, the graph 600 may depict an evolution of topic representation for topic 3 that shows how the most probable words that are associated with topic 3 have changed over time. Topic 3 generally relates to topic modeling. The subset of most probable words displayed includes dirichlet, expert, lda, topic, document, latent, mixture, and word. While the distribution of topic 3 remains relatively stable over time as shown in FIG. 5, the words associated with topic 3 change drastically over time as shown in FIG. 6. In other words, while topic 3 will consistently make up about 2-3% of a document at any given year from 1987 to 2015, the words that the document will include that are associated with topic 3 drastically change from 1987 to 2015. For example, shortly after 2000, words including "topic" became increasingly associated with topic 3, whereas words that were previously associated with topic 3, such as "expert," generally declined to a word-topic probability of 0. For additional clarity, labels are included within the graph 600 to indicate the word-topic probability corresponding to "expert" and "topic" in 1987 and 2015, respectively.



FIG. 7 is an example graph 700 depicting an evolution of topic correlations. An x-axis 702 of the graph 700 represents time, particularly each year spanning from 1987 to 2015 (e.g., the timeframe for the conference paper dataset). A y-axis 704 of the graph 700 represents a correlation coefficient measuring a correlation strength of a respective topic (e.g., topic 7) with another given topic at a given year. The correlation coefficient can be computed based on the covariance matrices (Σ) dynamically modeled by the DCTM 201, as described in greater detail with reference to FIG. 9. The correlation coefficient is a value from −1 to 1, where a value of −1 indicates that there is a perfect negative correlation between two topics, a value of 0 indicates that there is no linear relationship between the two topics, and a value of 1 indicates there is a perfect positive correlation. The 50th percentile is shown in the solid line, with the credible interval between the 5th and 95th percentiles shown in the shaded areas. A legend 706 indicates a visual scheme to distinguish the correlation between topic 7 and each of topics 1 through 6 within the list of topics described with reference to FIG. 4. For additional clarity, the corresponding data may also be labeled with each of the topics within the graph 700, as illustrated.


As illustrated in FIG. 7, the graph 700 may depict an evolution of topic correlation for topic 7 that shows how its relationships with other topics have changed over time. Topic 7 can be labeled as relating to neuroscience. Topic 7 exhibits a positive correlation with topic 6 (also relating to neuroscience) that has been consistent over time, as well as an increasing positive correlation with topic 1 relating to neural networks from 1987 to 2015. Topic 7 otherwise exhibits small or close to 0 correlations or negative correlations with the remaining topics 2, 3, 4, and 5.



FIG. 8 is a conceptual diagram 800 illustrating example phases of the DCTM 201 or DWCTM 1501. The example phases include a training phase 802 described in detail with respect to FIGS. 9 and 17, and a prediction phase 804 described in detail with respect to FIG. 10. For example, DCTM 201 or DWCTM 1501 may first be trained during the training phase 802 using a training dataset (e.g., a set of documents) to learn latent variables and parameters in order to estimate distribution over the topics for each document in the set and distribution over words for each topic. Once sufficiently trained, the DCTM 201 or DWCTM 1501 can then be used to predict a distribution over the topics for a new document in the prediction phase 804. In some embodiments, feedback based on the predictions output by the DCTM 201 or DWCTM 1501 during the prediction phase 804 may be used to further train the DCTM 201 or DWCTM 1501.



FIG. 9 illustrates an example training phase 802 of the DCTM 201. During the training phase 802, the DCTM 201 performs a generative process 902 and an inference process 906 using a training dataset 908. In some examples, the architecture of the DCTM 201 includes one or more deep neural networks utilizing unsupervised learning techniques to perform one or more of the generative process 902 and the inference process 906. The training can occur online (e.g., over a network such as network 120 described with reference to FIG. 1).


The generative process 902 defines a joint probability distribution over observed data and latent variables. The joint probability distribution can be decomposed into a likelihood for the observed data conditioned on the latent variables and a prior distribution from which the latent variables are drawn. In some examples, a probabilistic graphical model 904 is used to illustrate the generative process 902. For example, within the probabilistic graphical model 904, the shaded node represents the observed data, the unshaded nodes represent the latent variables, and the edges represent possible dependencies between the nodes. A goal of the training phase is to learn the latent variables and model parameters. In some examples, at least a portion of the training dataset 908 is held back to be used for testing. For example, 75% of the dataset can be used for training, and 25% can be withheld as a testing dataset.


Here, as illustrated by the probabilistic graphical model 904, the observed data is a word (wdn), which is the nth word of the dth document (e.g., the document d having Nd words) in a set of documents D comprising a corpus W. Corpus W is the training dataset 908. The latent variables, each described in greater detail below, include: distributions of words for topics (β); a topic assignment (zdn) drawn from a mixture of topics (ηd) over the corpus W, the mixture of topics (ηd) being dependent on a mean of the prior distribution of document-topic proportion (e.g., topic probability) (μ) and co-variance matrices (Σ); and the co-variance matrices (Σ) drawn from a plurality of Gaussian processes generated independently of one another (f), with the correlation between the Gaussian processes driven by (L).


The set of documents D comprising the corpus W are associated with one or more evolving indexes. Here, the set of documents D is associated with a single index of time (e.g., indicated by timestamps td associated with one or more documents d in the set of documents D). However, in other examples, the index can alternatively or additionally include a geographical location. Taking into account the temporal dynamics underlying the documents d in the training dataset 908, the DCTM 201 learns the latent variables and model parameters during the training phase 802 to infer a topic-word distribution 910 for each topic and a document-topic distribution 912. The topic-word distributions 910 include words that are most frequently associated with each of the topics, which can be determined based on β. The document-topic distribution 912 includes a proportion of each document d within the corpus W that is associated with each of the topics, which can be determined based on μ and Σ. Continuous processes are utilized for modeling to enable incorporation of the temporal dynamics into the DCTM 201. For example, as described in greater detail below, to incorporate temporal dynamics for each component of the DCTM 201, Gaussian processes (GP) are used to model β and μ and a generalized Wishart process (GWP) is used to model Σ.


The generative process 902 depicts how the variables and model parameters are learned. For example, the DCTM 201 assumes that a document d having a word count Nd at time td is generated according to the generative process 902, described as follows. A mixture of topics ηd ~ N(μtd, Σtd) is drawn. Then, for each word n=1, . . . , Nd: a topic assignment zdn is drawn from a multinomial distribution with the parameter σ(ηd), and a word wn|zn, β is drawn from a multinomial distribution with the parameter σ(βzn), where σ represents a softmax function. An example softmax function is as follows: σ(z)i = e^{zi} / ∑_{j=1}^{K} e^{zj}. Transformation using the softmax function, also referred to herein as the softmax transformation, is used for both ηd and βzn because each is assumed to be defined in an unconstrained space. The softmax transformation converts the parameters to probabilities to encode the proportion of topics for a document and the distribution of the words for a topic, respectively.
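
As a non-limiting illustration, the following sketch samples one document from the generative process just described; the dimensions and parameter values are illustrative assumptions rather than learned quantities.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def generate_document(mu_t, sigma_t, beta_t, n_words, rng):
    """Sample one document at time t: mu_t is (K,), sigma_t is (K, K), beta_t is (K, V)."""
    eta = rng.multivariate_normal(mu_t, sigma_t)      # mixture of topics eta_d ~ N(mu_t, Sigma_t)
    theta = softmax(eta)                              # document-topic proportions
    words = []
    for _ in range(n_words):
        z = rng.choice(len(theta), p=theta)                       # topic assignment z_dn
        w = rng.choice(beta_t.shape[1], p=softmax(beta_t[z]))     # word w_dn given its topic
        words.append(w)
    return words

rng = np.random.default_rng(0)
K, V = 4, 100
doc = generate_document(np.zeros(K), np.eye(K), rng.normal(size=(K, V)), n_words=20, rng=rng)
print(doc)  # indices of the sampled words
```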


Under the above-described generative process 902, the marginal likelihood for the corpus W of documents becomes:










p(W | μ, Σ, β) = ∏_{d=1}^{D} ∫ ( ∑_{zn=1}^{K} p(Wd | zn, βtd) p(zn | ηd) ) p(ηd | μtd, Σtd) dηd.  (1)







The individual documents of the set W are assumed to be independent and identically distributed (i.i.d) given the document-topic proportion and word-topic distribution.


In traditional correlated topic models (CTMs), the parameterization of η is relaxed by allowing topics to be correlated with each other (e.g., by allowing a non-diagonal Σtd). For example, a logistic normal distribution can be used to model η to allow the probabilities of the topics to be correlated with each other. However, particularly over a long period of time, the correlations among the topics are not likely to remain constant. Intuitively, the degree of correlation among topics changes over time, as the correlation simply reflects the co-occurrence of the concepts appearing in documents. Therefore, the dynamics of the covariance matrices (Σ) of the topics, from which correlations can be derived, are modeled, extending the capabilities of the traditional CTMs. The dynamics of μ and β are modeled in addition to the dynamics of Σ to enable evolutionary modeling of the document-topic distribution 912 and topic-word distribution 910 (and thus topic popularity and topic representation, respectively), along with dynamic correlations over time.


To model the dynamics, the topic probability (μtd)_{d=1}^{D} and the distributions of words for topics (βtd)_{d=1}^{D} are modeled as zero-mean Gaussian processes (e.g., p(μ) = GP(0, κμ) and p(β) = GP(0, κβ)). In some examples, a squared exponential kernel can be used for μ and a Matern kernel with parameter ½ can be used for β. The squared exponential kernel allows more freedom for the topic probability to change rapidly. The series of co-variance matrices (Σtd)_{d=1}^{D} are modeled using a generalized Wishart process, which is a generalization of Gaussian processes to positive semi-definite matrices. The generalized Wishart process is constructed from independent and identically distributed (i.i.d.) collections of Gaussian processes as follows. Let






fdi ~ GP(0, κθ), d ≤ D, i ≤ ν,  (2)


be D×ν i.i.d. Gaussian processes with zero mean function and (shared) kernel function κθ, where θ denotes any parameters of the kernel function. For example, in the case of κθ(x, y) = θ1^2 exp(−‖x−y‖^2 / (2θ2^2)), θ=(θ1, θ2) corresponds to the amplitude and length scale of the kernel (assumed to be independent from one another). In some examples, a squared exponential kernel can be used for Σ to allow more freedom for topic correlations to change rapidly. The amplitude and length scale of the kernels can be initialized as 1 and 0.1, respectively, which can then be learned using an approximate empirical Bayes approach.
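
As a non-limiting illustration, the following sketch evaluates the two kernel families mentioned above on a grid of time stamps; the default amplitude and length scale mirror the initialization values noted above and are otherwise illustrative.

```python
import numpy as np

def squared_exponential(x, y, amplitude=1.0, lengthscale=0.1):
    """Squared exponential kernel, e.g., for mu (and Sigma)."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return amplitude**2 * np.exp(-d2 / (2.0 * lengthscale**2))

def matern12(x, y, amplitude=1.0, lengthscale=0.1):
    """Matern kernel with parameter 1/2 (exponential kernel), e.g., for beta."""
    d = np.abs(x[:, None] - y[None, :])
    return amplitude**2 * np.exp(-d / lengthscale)

t = np.linspace(0.0, 1.0, 29)            # 29 normalized yearly time stamps
K_mu = squared_exponential(t, t)         # prior covariance over time for mu
K_beta = matern12(t, t)                  # prior covariance over time for beta
```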


The positive integer-valued ν≥D is denoted as the degrees of freedom parameter. Let Fndk := fdk(tn), and let Fn := (Fndk, d ≤ D, k ≤ ν) denote the D×ν matrix of collected function values, for every n ≥ 1. Then, consider





Σn = L Fn Fn^T L^T, n ≥ 1,  (3)


where L ∈ ℝ^{D×D} satisfies the condition that the symmetric matrix LL^T is positive definite. With such a construction, Σn is (marginally) Wishart distributed, and Σ is correspondingly called a Wishart process with degrees of freedom ν and scale matrix V = LL^T.


Σn ~ WP(V, ν, κθ) denotes that Σn is drawn from a Wishart process. The dynamics of the process of covariance matrices Σ are inherited from the Gaussian processes, controlled by the kernel function κθ. With this formulation, the dependency between the D Gaussian processes is static over time, and regulated by the matrix V.


L is a triangular Cholesky factor of the positive definite matrix V, with M = D(D+1)/2 free elements. The free elements can be vectorized into a vector ℓ = (ℓ1, . . . , ℓM), each assigned a spherical normal distribution p(ℓm) = N(0, 1), where the diagonal elements of L are positive. To ensure that the diagonal elements of L are positive, a change of variables can be applied to the prior distribution of the diagonal elements by applying a soft-plus transformation ℓi = log(1 + exp(ℓ̃i)), with ℓ̃i ~ N(0, 1).
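
As a non-limiting illustration, the following sketch draws one sample path of covariance matrices according to Equation (3), constructing Σ from i.i.d. zero-mean Gaussian processes and a lower-triangular L whose diagonal is made positive with a soft-plus; the kernel, dimensions, and degrees of freedom are illustrative assumptions.

```python
import numpy as np

def sample_wishart_process(times, D, nu, kernel, L, rng):
    """Draw covariance matrices Sigma_n = L F_n F_n^T L^T at each time point."""
    T = len(times)
    K = kernel(times) + 1e-8 * np.eye(T)                          # shared GP kernel over time
    F = rng.multivariate_normal(np.zeros(T), K, size=(D, nu))     # D x nu i.i.d. GP paths, (D, nu, T)
    sigmas = []
    for n in range(T):
        Fn = F[:, :, n]                                           # (D, nu) function values at time n
        sigmas.append(L @ Fn @ Fn.T @ L.T)                        # positive semi-definite by construction
    return np.stack(sigmas)                                       # (T, D, D)

rng = np.random.default_rng(0)
softplus = lambda x: np.log1p(np.exp(x))
D = 3
L = np.tril(rng.normal(size=(D, D)))
np.fill_diagonal(L, softplus(np.diag(L)))                         # soft-plus keeps the diagonal positive
kernel = lambda t: np.exp(-np.subtract.outer(t, t) ** 2 / (2 * 0.1**2))
Sigma = sample_wishart_process(np.linspace(0, 1, 10), D, nu=D + 1, kernel=kernel, L=L, rng=rng)
print(Sigma.shape)  # (10, 3, 3)
```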


Stochastic gradient estimation with discrete latent variables is difficult, and often results in significantly higher variance in gradient estimation even with state-of-the-art variance reduction techniques. To simplify stochastic gradient estimation, the discrete latent variables in the DCTM 201 can be marginalized out in closed form. For example, the resulting marginalized distribution p(Wd|ηd, βtd) becomes a multinomial distribution over the word counts in each document,






Wd ~ ∏_{n=1}^{Nd} Multinomial(1, σ(βtd ηd)).  (4)
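
As a non-limiting illustration of one reading of Equation (4), the following sketch evaluates the marginalized log-likelihood of a document's word counts; the exact form of the argument to the softmax is an assumption for illustration.

```python
import numpy as np
from scipy.special import gammaln

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def log_multinomial_likelihood(word_counts, eta_d, beta_t):
    """log p(W_d | eta_d, beta_t): word_counts is (V,), eta_d is (K,), beta_t is (K, V)."""
    probs = softmax(eta_d @ beta_t)                               # distribution over the vocabulary
    n = word_counts.sum()
    log_coeff = gammaln(n + 1) - gammaln(word_counts + 1).sum()   # multinomial coefficient
    return log_coeff + np.sum(word_counts * np.log(probs + 1e-12))
```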


As discussed above, the generative process 902 defines a joint probability distribution over observed data and latent variables that can be decomposed into a likelihood for the observed data conditioned on the latent variables and a prior distribution from which the latent variables are drawn.


The inference process 906 parametrizes the approximate posterior of the latent variables using variational inference techniques. For example, variational lower bounds are individually derived for the following variables (e.g., components of the DCTM 201): ηd, β, μ, and Σ; the derivation process for each is discussed in turn below. Once derived, the individually derived lower bounds of the components can be assembled together for a stochastic variational inference (SVI) method for the DCTM 201.


The SVI method for the DCTM 201 enables mini-batch training over the documents in the training dataset 908. This is facilitated by the use of amortized inference to derive the variational lower bound of ηd. After defining a variational posterior q(ηd) for each document, a variational lower bound of the log probability over the documents, denoted as the lower bound ℒW, can be derived as follows,











log p(W | μ, Σ, β) ≥ ∑_{d=1}^{D} ∫ q(ηd) log [ p(Wd | ηd, βtd) p(ηd | μtd, Σtd) / q(ηd) ] dηd
  = ∑_{d=1}^{D} ( 𝔼_{q(ηd)}[log p(Wd | ηd, βtd)] − KL(q(ηd) ‖ p(ηd | μtd, Σtd)) ).  (5)







As the lower bound ℒW is a summation over individual documents, it is straightforward to derive a stochastic approximation of the summation by sub-sampling the documents,












ℒW ≈ (D / B) ∑_{d∈𝒟B} ( 𝔼_{q(ηd)}[log p(Wd | ηd, βtd)] − KL(q(ηd) ‖ p(ηd | μtd, Σtd)) ),  (6)







where 𝒟B is a random sub-sampling of the document indices with size B. The above data sub-sampling enables mini-batch training, where the gradients of the variational parameters are stochastically approximated from a mini-batch. An issue with the above data sub-sampling is that only the variational parameters associated with the mini-batch get updated, which causes synchronization issues when running stochastic gradient descent. To avoid this, it is assumed that the variational posteriors q(ηd) for individual documents are generated according to parametric functions,






q(ηd) = N(ϕm(Wd), ϕS(Wd)),  (7)


where ϕm and ϕS are the parametric functions that generate the mean and variance of q(ηd), respectively. This is referred to as amortized inference. With this parameterization of the variational posteriors, a common set of parameters can be updated no matter which documents are sampled into the mini-batch, thus overcoming the synchronization issue.


The lower bound ℒW cannot be computed analytically. Instead, an unbiased estimate of ℒW is computed using Monte Carlo sampling. As the q(ηd) are normal distributions, a low-variance estimate of the gradients of the variational parameters is obtained via the reparameterization strategy.
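
As a non-limiting illustration, the following sketch pairs a toy amortized encoder in the spirit of Equation (7) with a reparameterized draw of ηd; the single-layer network, its weights, and all sizes are illustrative assumptions rather than the encoder actually used by the DCTM 201.

```python
import numpy as np

def encode(word_counts, W1, b1, W_m, b_m, W_s, b_s):
    """Toy encoder: map a bag-of-words vector to the mean and log-variance of q(eta_d)."""
    h = np.tanh(word_counts @ W1 + b1)
    return h @ W_m + b_m, h @ W_s + b_s               # mean, log-variance

def reparameterized_sample(mean, log_var, rng):
    """eta = mean + std * eps, eps ~ N(0, I), so gradients can flow through mean and std."""
    eps = rng.standard_normal(mean.shape)
    return mean + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
V, H, K = 1000, 64, 7
shapes = [(V, H), (H,), (H, K), (K,), (H, K), (K,)]
params = [rng.normal(scale=0.01, size=s) for s in shapes]
counts = rng.poisson(0.05, size=V).astype(float)      # toy bag-of-words vector W_d
mean, log_var = encode(counts, *params)
eta_sample = reparameterized_sample(mean, log_var, rng)
```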


Both of the word distributions of topics (β) and the mean of the prior distribution of the document-topic proportion (μ), also referred to as topic probability, follow Gaussian processes that take the time stamps of individual documents as inputs, i.e., p(β|t) and p(μ|t). A stochastic variational Gaussian process approach can be used to construct the variational lower bound of β and μ.


For example, each Gaussian process can be augmented with a set of auxiliary variables with a set of corresponding time stamps, i.e.,






p(β|t) = ∫ p(β | Uβ, t, zβ) p(Uβ | zβ) dUβ  (8)






p(μ|t) = ∫ p(μ | Uμ, t, zμ) p(Uμ | zμ) dUμ  (9)


where Uβ and Uμ are the auxiliary variables for β and μ, respectively, and zβ and zμ are the corresponding time stamps. Both p(β|Uβ, t, zβ) and p(Uβ|zβ) follow the same Gaussian process as the Gaussian process for p(β|t), each having the same mean and kernel functions. Similarly, both p(μ|Uμ, t, zμ) and p(Uμ|zμ) follow the same Gaussian process as the Gaussian process for p(μ|t), each having the same mean and kernel functions. Despite the augmentation, the prior distributions for β and μ are not changed.


Variational posteriors of β and μ are constructed in the following form: q(β, Uβ) = p(β|Uβ) q(Uβ) and q(μ, Uμ) = p(μ|Uμ) q(Uμ). Both q(Uβ) and q(Uμ) are multivariate normal distributions in which the mean and covariance are variational parameters. For example, q(Uβ) = N(Mβ, Sβ) and q(Uμ) = N(Mμ, Sμ). When β and μ are used in down-stream distributions, a lower bound can be derived,





log p(·|β) ≥ 𝔼_{q(β)}[log p(·|β)] − KL(q(Uβ) ‖ p(Uβ)),  (10)





log p(·|μ) ≥ 𝔼_{q(μ)}[log p(·|μ)] − KL(q(Uμ) ‖ p(Uμ)),  (11)


where q(β)=∫p(β|Uβ)q(Uβ)dUβ and q(μ)=∫p(μ|Uμ)q(Uμ)dUμ.


As previously discussed, the generalized Wishart process for Σ is derived from a set of Gaussian processes. At each time point, the covariance matrix is defined as Σt = L Ft Ft^T L^T. The vector stacking of each entry of the matrix Ft across all the time points, fij = ((F1)ij, . . . , (FT)ij), follows a Gaussian process p(fij|t) = GP(0, κ). A stochastic variational inference method for the Wishart process can be derived similarly to the stochastic variational inference method for Gaussian processes described with reference to β and μ. For example, each p(fij|t) in the Wishart process is augmented with a set of auxiliary variables having a set of corresponding time stamps,






p(fij|t) = ∫ p(fij | uij, t, zij) p(uij | zij) duij,  (12)


where uij is the auxiliary variable and zij is the corresponding time stamp. The variational posterior of fij is defined as q(fij, uij) = p(fij|uij) q(uij), where q(uij) = N(mij, sij). The variational posterior of ℓ can be defined as q(ℓ) = N(mℓ, S), where S is a diagonal matrix. A change of variables can also be applied to the variational posterior of the diagonal elements, ℓm = log(1 + exp(ℓ̃m)), with q(ℓ̃m) a normal distribution whose mean and variance are variational parameters.


With such a set of variational posteriors for all the entries {fij} and ℓ, a variational lower bound can be derived when Σ is used for down-stream distributions,











log p(·|Σ) ≥ 𝔼_{q(F) q(ℓ)}[log p(·|Σ)] − ∑_{i,j} KL(q(uij) ‖ p(uij)) − KL(q(ℓ) ‖ p(ℓ)),  (13)

where q(F) = ∏_{ij} q(fij) with q(fij) = ∫ p(fij|uij) q(uij) duij.


After deriving the variational lower bound for all the components, the lower bounds of the individual components can be assembled together for a stochastic variational inference for the DCTM 201. For example, the document-topic proportion for each document d follows a prior distribution p(ηd|μtd, Σtd), where the Gaussian process of μ provides the mean and the generalized Wishart process for Σ provides the covariance matrix at the time stamp td. The word distributions for individual topics are used in defining the distribution of individual words for each document d, p(Wd|ηd, βtd). Combining the lower bounds (10), (11) and (13), the complete variational lower bound ℒ of the DCTM 201 can be derived:










log p(W) ≥ 𝔼_{q(u) q(F) q(L) q(β)}[ℒW] − KL(q(Uβ) ‖ p(Uβ)) − KL(q(Uμ) ‖ p(Uμ)) − ∑_{i,j} KL(q(uij) ‖ p(uij)) − KL(q(ℓ) ‖ p(ℓ)) = ℒ,




where the first term of ℒ can be further decomposed by plugging in Equation (5),











𝔼_{q(u) q(F) q(L) q(β)}[ℒW] = ∑_{d=1}^{D} ( 𝔼_{q(ηd) q(βtd)}[log p(Wd | ηd, βtd)] − 𝔼_{q(ηd) q(μtd) q(Ftd) q(L)}[KL(q(ηd) ‖ p(ηd | μtd, Σtd))] ).  (14)







This formulation allows mini-batch training to be performed by data sub-sampling. For each mini-batch, the training dataset 908 is randomly sub-sampled and the term 𝔼_{q(u) q(F) q(L) q(β)}[ℒW] is re-weighted according to the ratio between the size of the training dataset 908 and the size of the mini-batch, as shown in Equation (6).
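
As a non-limiting illustration of that re-weighting, the following sketch rescales the sum of per-document terms over a mini-batch by the ratio between the corpus size and the mini-batch size, giving an unbiased estimate of the full sum; the per-document values are random stand-ins.

```python
import numpy as np

def minibatch_estimate(per_document_terms, batch_indices, num_documents):
    """Rescale a mini-batch sum by D / |batch| so its expectation matches the full-corpus sum."""
    batch = np.asarray(per_document_terms)[batch_indices]
    return (num_documents / len(batch_indices)) * batch.sum()

rng = np.random.default_rng(0)
D = 10_000
terms = rng.normal(size=D)                       # stand-ins for the per-document ELBO terms
batch = rng.choice(D, size=128, replace=False)
print(minibatch_estimate(terms, batch, D), terms.sum())  # stochastic estimate vs. exact sum
```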


To test or validate an output of the DCTM 201, a test document from the portion of the training dataset 908 withheld for testing can be provided as input to the DCTM 201. A perplexity is computed as the exponential of the average negative predictive log-likelihood per word, where the evidence lower bound (ELBO) for the test document is computed using Equation (14).
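
As a non-limiting illustration, the following sketch computes perplexity as the exponential of the average negative predictive log-likelihood per word, with per-document ELBO values standing in for the intractable log-likelihoods; the numbers are illustrative.

```python
import numpy as np

def perplexity(per_document_elbo, per_document_word_counts):
    """exp(- total predictive log-likelihood / total number of words)."""
    total_log_likelihood = np.sum(per_document_elbo)       # lower bound on sum_d log p(W_d)
    total_words = np.sum(per_document_word_counts)
    return float(np.exp(-total_log_likelihood / total_words))

print(perplexity(np.array([-3200.0, -2900.0]), np.array([500, 450])))  # ~ 614
```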


In a traditional correlated topic model, a prior distribution for mixtures of topics is derived from a multivariate distribution, in which the mean encodes the popularity of the topics while the covariance matrix encodes the co-occurrence of topics in a non-dynamic manner. As described in the generative process 902 and inference process 906, the DCTM 201 extends the prior distribution for mixtures of topics into a dynamic distribution by providing a set of Gaussian processes as the prior distribution for the mean, and a generalized Wishart process as the prior distribution for the covariance matrices. Accordingly, the evolution of the popularity of topics, representations of topics, and their correlations can be jointly modeled over time. Additionally, the SVI method for the DCTM 201 utilizes amortized inference to enable mini-batch training that is scalable to large amounts of data. For example, the DCTM 201 utilizes a deep neural network to encode the variational posterior of the mixtures of topics for individual documents. For the Gaussian processes and the generalized Wishart process, the DCTM 201 can be augmented with auxiliary variables to derive a scalable variational lower bound. Because the resulting lower bound is otherwise intractable, the discrete latent variables are marginalized out. Further, a Monte Carlo sampling approximation with the reparameterization trick can be applied to enable a low-variance estimate of the gradients.



FIG. 10 illustrates an example predicting phase 804 of the DCTM 201. During the predicting phase 804, a new document 1002 can be fit to the trained DCTM 201. The new document 1002 can be associated with a timestamp. The DCTM 201 can provide as output a predicted topic distribution 1004 for the new document 1002. The predicted topic distribution 1004 is a document-topic distribution that can include topics that the document 1002 is comprised of and the proportions thereof. For example, the predicted topic distribution 1004 of the new document 1002 may be: 50% topic 1, 40% topic 2, 8% topic 3, and 2% topic 4. When determining the predicted topic distribution 1004, the timestamp associated with the new document 1002 is utilized to account for any changes or evolution in topic popularity, representations, and correlations for the topics over time.



FIG. 11 illustrates an example of application system 110, also referred to hereafter as system 110. In this example, the system 110 is associated with a media streaming service, and includes a media playback device 1102 and a media delivery system 1104. The system 110 communicates across a network 1106. In some embodiments, a media-playback engine 1108 runs on the media playback device 1102, and a media content recommendation engine 1110 runs on the media delivery system 1104. In some embodiments, the DCTM 201 or DWCTM 1501 is run on the media delivery system 1104. In other embodiments, the DCTM 201 or DWCTM 1501 is an independent system that communicates with the media delivery system 1104 across the network 1106.


Also shown is a user U who uses the media playback device 1102 to continuously play back a plurality of media content items. In some examples, the media content items may be in a form of a playlist, where the playlist may be created based on recommendations from the media content recommendation engine 1110 informed from an output of the DCTM 201 or DWCTM 1501. In some examples, the DCTM 201 is trained as described with reference to FIG. 9 or DWCTM 1501 with reference to FIG. 17 based on data specific to the user U (e.g., based on past listening sessions of the user U) and is implemented for that user U. In other examples, the DCTM 201 is trained as described with reference to FIG. 9 or DWCTM 1501 with reference to FIG. 17 based on data received from a plurality of different users (e.g., listening sessions of all users that subscribe to the media streaming service).


The media playback device 1102 operates to play media content items to produce media output 1112. In some embodiments, the media content items are provided by the media delivery system 1104 and transmitted to the media playback device 1102 using the network 1106. A media content item is an item of media content, including audio, video, or other types of media content, which are stored in any format suitable for storing media content. Non-limiting examples of media content items include songs, albums, music videos, movies, television episodes, podcasts, other types of audio or video content, and portions or combinations thereof. In this document, the media content items can also be referred to as tracks.


The media-playback engine 1108 operates to facilitate the playing of media content items on the media playback device 1102. The media delivery system 1104 operates to provide the media content items to the media playback device 1102. In some embodiments, the media delivery system 1104 is connectable to a plurality of media playback devices 1102 and provides the media content items to the media playback devices 1102 independently or simultaneously. Additionally, the media delivery system 1104 operates to provide recommendations for playback of media content items (e.g., in a form of a playlist) to the media playback device 1102.


For example, the media content recommendation engine 1110 operates in conjunction with the DCTM 201 or DWCTM 1501 to determine media content items to recommend and provide to the user U for playback, among other recommendations. As described in greater detail with reference to FIG. 13, the DCTM 201 or DWCTM 1501 receives, as input, a set of listening sessions associated with the user U that are associated with timestamps spanning a period of time. The listening sessions can be recorded or collected by the media playback device 1102 and provided to the media delivery system 1104 over the network 1106 for storage. The media delivery system 1104 can then provide the listening sessions to the DCTM 201 or DWCTM 1501. A listening session is comprised of a plurality of media content items, which can also be referred to as tracks, each media content item having associated media content metadata such as artist, genre, mood, acoustic metadata, era, title, album, and length. For purposes of processing by the DCTM 201 or DWCTM 1501, media content metadata for a media content item of a listening session is analogous to words of a document of a set of documents. In other words, the media content metadata is a vocabulary over which topics are distributed.


The DCTM 201 or DWCTM 1501 can process the set of listening sessions of the user U similar to the set of documents, as discussed in detail above, to model an evolution of how the user U is consuming media content items from the media delivery system 1104 over that period of time. For example, as described in greater detail with reference to FIG. 13, the DCTM 201 or DWCTM 1501 can identify clusters or groupings of artists, genres, or moods for example, that represent topics over the period of time. The output of the DCTM 201 can include, for each topic, a popularity of the topic at a given time, a representation of the topic at a given time, and a correlation with other topics at a given time. The output of the DWCTM 1501 can include, for each topic, a popularity of the topic at a given time, a word embedding at a given time, and a correlation with other topics at a given time. In addition to the user-specific output, listening sessions from many different users can be analyzed as a whole such that the evolution modeling performed by the DCTM 201 or DWCTM 1501 can be reflective of how the artists and genres themselves, for example, have evolved over time.


The output of the DCTM 201 or DWCTM 1501 can be provided as input to the media content recommendation engine 1110. The output of the DCTM 201 or DWCTM 1501 can then be used to inform recommendations made by the media content recommendation engine 1110 such that recommended media content items more closely correspond to the evolving media content item preferences of the user U, while also taking into account evolution of the respective artists and genres over time to provide diverse recommendations. In some examples, the recommended media content items can be provided in a form of a playlist.



FIG. 12 illustrates in greater detail the example system 110 described with reference to FIG. 11 for generating media content item recommendations facilitated by evolution modeling performed by the DCTM 201 or DWCTM 1501. As described herein, the media playback device 1102 operates to play media content items. In some embodiments, the media playback device 1102 operates to play media content items that are provided (e.g., streamed, transmitted, etc.) by a system external to the media playback device, such as the media delivery system 1104, another system, or a peer device. Alternatively, in some embodiments, the media playback device 1102 operates to play media content items stored locally on the media playback device 1102. Further, in at least some embodiments, the media playback device 1102 operates to play media content items that are stored locally as well as media content items provided by other systems.


In some embodiments, the media playback device 1102 is a computing device, handheld entertainment device, smartphone, tablet, watch, wearable device, or any other type of device capable of playing media content. In yet other embodiments, the media playback device 1102 is a laptop computer, desktop computer, television, gaming console, set-top box, network appliance, Blu-ray or DVD player, media player, stereo, or radio.


In at least some embodiments, the media playback device 1102 includes a location-determining device 1202, a touch screen 1204, a processing device 1206, a memory device 1208, a content output device 1210, and a network access device 1212. Other embodiments may include additional, different, or fewer components. For example, some embodiments may include a recording device such as a microphone or camera that operates to record audio or video content. As another example, some embodiments do not include one or more of the location-determining device 1202 and the touch screen 1204.


The location-determining device 1202 is a device that determines the location of the media playback device 1102. In some embodiments, the location-determining device 1202 uses one or more of the following technologies: Global Positioning System (GPS) technology which may receive GPS signals from satellites S, cellular triangulation technology, network-based location identification technology, Wi-Fi positioning systems technology, and combinations thereof.


The touch screen 1204 operates to receive an input from a selector (e.g., a finger, stylus etc.) controlled by the user U. In some embodiments, the touch screen 1204 operates as both a display device and a user input device. In some embodiments, the touch screen 1204 detects inputs based on one or both of touches and near-touches. In some embodiments, the touch screen 1204 displays a user interface 1214 for interacting with the media playback device 1102. As noted above, some embodiments do not include a touch screen 1204. Some embodiments include a display device and one or more separate user interface devices. Further, some embodiments do not include a display device.


In some embodiments, the processing device 1206 comprises one or more central processing units (CPU). In other embodiments, the processing device 1206 additionally or alternatively includes one or more digital signal processors, field-programmable gate arrays, or other electronic circuits. The memory device 1208 operates to store data and instructions. In some embodiments, the memory device 1208 stores instructions for a media-playback engine 1108. The memory device 1208 typically includes at least some form of computer-readable media.


Computer readable media include any available media that can be accessed by the media playback device 1102. By way of example, computer-readable media include computer readable storage media and computer readable communication media. Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules, or other data. Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory and other memory technology, compact disc read only memory, Blu-ray discs, digital versatile discs or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the media playback device 1102. In some embodiments, computer readable storage media is non-transitory computer readable storage media.


Computer readable communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, computer readable communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.


The content output device 1210 operates to output media content. In some embodiments, the content output device 1210 generates media output 1112 (FIG. 11) for the user U. Examples of the content output device 1210 include a speaker, an audio output jack, a Bluetooth transmitter, a display panel, and a video output jack. Other embodiments are possible as well. For example, the content output device 1210 may transmit a signal through the audio output jack or Bluetooth transmitter that can be used to reproduce an audio signal by a connected or paired device such as headphones or a speaker.


The network access device 1212 operates to communicate with other computing devices over one or more networks, such as the network 1106. Examples of the network access device include wired network interfaces and wireless network interfaces. Wireless network interfaces include infrared, BLUETOOTH® wireless technology, 802.11a/b/g/n/ac, and cellular or other radio frequency interfaces in at least some possible embodiments.


The media-playback engine 1108 operates to play back one or more of the media content items (e.g., music) to the user U. As described herein, the media-playback engine 1108 is configured to communicate with the media delivery system 1104 to receive one or more media content items (e.g., through the stream media 1252), as well as recommendations (e.g., through communications 1254, 1256 or in the form of playlists received through the stream media 1252).


With still reference to FIG. 12, the media delivery system 1104 includes one or more computing devices and operates to provide media content items to the media playback device 1102 and, in some embodiments, other media playback devices as well. In some embodiments, the media delivery system 1104 operates to transmit the stream media 1252 to media playback devices such as the media playback device 1102.


In some embodiments, the media delivery system 1104 includes a media server 1216 and recommendation server 1218. In this example, the media server 1216 includes a media server application 1220, a processing device 1222, a memory device 1224, and a network access device 1226. The processing device 1222, memory device 1224, and network access device 1226 may be similar to the processing device 1206, memory device 1208, and network access device 1212 respectively, which have each been previously described.


In some embodiments, the media server application 1220 operates to stream music or other audio, video, or other forms of media content. The media server application 1220 includes a media stream service 1228, a media data store 1230, and a media application interface 1232.


The media stream service 1228 operates to buffer media content such as media content items 1234 (including 1234A, 1234B, and 1234Z) for streaming to one or more streams 1236A, 1236B, and 1236Z.


The media application interface 1232 can receive requests or other communication from media playback devices or other systems, to retrieve media content items from the media delivery system 1104. For example, in FIG. 12, the media application interface 1232 receives communications 1254 from the media playback device 1102.


In some embodiments, the media data store 1230 stores media content items 1234, media content metadata 1238, and playlists 1240. The media data store 1230 may comprise one or more databases and file systems. Other embodiments are possible as well. As noted above, the media content items 1234 may be audio, video, or any other type of media content, which may be stored in any format for storing media content.


The media content metadata 1238 operates to provide various pieces of information associated with the media content items 1234. In some embodiments, the media content metadata 1238 includes one or more of title, artist name, album name, length, genre, sub-genre, mood, era, etc. In addition, the media content metadata 1238 includes acoustic metadata which may be derived from analysis of the track. Acoustic metadata may include temporal information such as tempo, rhythm, beats, downbeats, tatums, patterns, sections, or other structures. Acoustic metadata may also include spectral information such as melody, pitch, harmony, timbre, chroma, loudness, vocalness, or other possible features.


One or more types of the media content metadata 1238 can be used by the DCTM 201 or DWCTM 1501 to model an evolution of the users' consumption of media content items. For example, artist names can be provided as input to the DCTM 201 or DWCTM 1501, and clusters or groupings of artists representing topics can be provided as output to show how the user's taste or preference in artists has changed over time, which can be helpful in predicting new media content items to recommend that align with the user's interest but are also diverse. As another example, genres or sub-genres can be provided as input to the DCTM 201 or DWCTM 1501, and clusters or groupings of genres representing topics can be provided as output to show how the user's taste or preference in genres has changed over time.


The playlists 1240 operate to identify one or more of the media content items 1234. In some embodiments, the playlists 1240 identify a group of the media content items 1234 in a particular order. In other embodiments, the playlists 1240 merely identify a group of the media content items 1234 without specifying a particular order. Some, but not necessarily all, of the media content items 1234 included in a particular one of the playlists 1240 are associated with a common characteristic such as a common genre, mood, or era. In some examples, the group of the media content items 1234 identified within the playlist 1240 may be based on recommendations facilitated by output of the DCTM 201 or DWCTM 1501 that are provided by the recommendation server 1218 (e.g., through communications 1256).


In this example, the recommendation server 1218 includes the media content recommendation engine 1110, a recommendation interface 1242, a recommendation data store 1244, a processing device 1246, a memory device 1248, and a network access device 1250. The processing device 1246, memory device 1248, and network access device 1250 may be similar to the processing device 1206, memory device 1208, and network access device 1212 respectively, which have each been previously described.


The media content recommendation engine 1110 operates to determine which of the media content items 1234 to recommend for playback to the user U (e.g., to enhance the listening experience of the user U). In some embodiments, the DCTM 201 or DWCTM 1501 facilitates the media content recommendation determinations. The DCTM 201 can be a component of the media content recommendation engine 1110 or a separate component communicatively coupled to the media content recommendation engine 1110.


The DCTM 201 can process listening sessions of the user U that span a period of time to model an evolution of how the user U is consuming media content items from the media delivery system 1104 over that period of time. For example, as described in greater detail with reference to FIG. 13, the DCTM 201 can identify clusters or groupings of artists, genres, and/or moods, among other examples, as topics over the period of time. The output of the DCTM 201 can include, for each topic, a popularity of the topic, a representation of the topic, and a correlation with other topics at given time points across the period of time. The output of the DWCTM 1501 can include, for each topic, a popularity of the topic, a word embedding, and a correlation with other topics at given time points across the period of time. The output can be communicated to the media content recommendation engine 1110 (e.g., through the communications 1258).


The recommendation interface 1242 can receive requests or other communication from other systems. For example, the recommendation interface 1242 receives communications 1258 from the DCTM 201 or DWCTM 1501, the communications 1258 including the above-discussed output of the DCTM 201 or DWCTM 1501 to facilitate a determination of media content recommendations. In some examples, the recommendation interface 1242 provides the media server application 1220 with the media content recommendations through communications 1256, such that the media server application 1220 can select media content items 1234 based on the recommendations to provide to the media-playback engine 1108 of the media playback device 1102 for playback (e.g., as stream media 1252). In some examples, the media content items 1234 selected based on the recommendations may be included in a playlist 1240 for provision to the media-playback engine 1108.


In other embodiments, the recommendation interface 1242 may request media content items corresponding to the media content recommendation from the media server application 1220 via the communications 1256. The recommendation interface 1242 can then provide the recommended media content items directly to the media-playback engine 1108 through communication 1260. In some examples, the recommended media content items are presented in a manner (e.g., via the user interface 1214) that notifies the user U that these media content items are recommendations.


In some embodiments, the recommendation data store 1244 stores the output received from the DCTM 201 or DWCTM 1501 and the recommendations determined. The recommendation data store 1244 may comprise one or more databases and file systems. Other embodiments are possible as well.


Referring still to FIG. 12, the network 1106 is an electronic communication network that facilitates communication between the media playback device 1102 and the media delivery system 1104. An electronic communication network includes a set of computing devices and links between the computing devices. The computing devices in the network use the links to enable communication among the computing devices in the network. The network 1106 can include routers, switches, mobile access points, bridges, hubs, intrusion detection devices, storage devices, standalone server devices, blade server devices, sensors, desktop computers, firewall devices, laptop computers, handheld computers, mobile telephones, and other types of computing devices.


In various embodiments, the network 1106 includes various types of links. For example, the network 1106 can include wired and/or wireless links, including Bluetooth, ultra-wideband (UWB), 802.11, ZigBee, cellular, and other types of wireless links. Furthermore, in various embodiments, the network 1106 is implemented at various scales. For example, the network 1106 can be implemented as one or more local area networks (LANs), metropolitan area networks, subnets, wide area networks (such as the Internet), or can be implemented at another scale. Further, in some embodiments, the network 1106 includes multiple networks, which may be of the same type or of multiple different types.


Although FIG. 12 illustrates only a single media playback device 1102 communicable with a single media delivery system 1104, in accordance with some embodiments, the media delivery system 1104 can support the simultaneous use of multiple media playback devices, and the media playback device can simultaneously access media content from multiple media delivery systems. Additionally, although FIG. 12 illustrates a streaming media based system for recommending media content items based on a modeled evolution of users' consumption of media content items, other embodiments are possible as well.



FIG. 13 is a conceptual diagram 1300 illustrating a DCTM 201 that models user consumption of media content items over time for facilitating media content item recommendations. The DCTM 201 receives a set 1302 of user listening sessions 1304 (e.g., listening sessions of user U) as input. A user listening session 1304 is comprised of tracks 1306, also referred to as media content items. Each track 1306 has metadata 1308 associated with the track 1306. Example metadata 1308 is similar to the media content metadata 1238 (FIG. 12), including an artist, genre (or sub-genre), mood, acoustic metadata, title, album, and length, among other examples. The user listening sessions 1304 are also associated with timestamps 1310 that reflect a point in time at which the user U was listening to the tracks 1306, where the set 1302 spans a period of time. More than one user listening session 1304 can be associated with a same timestamp based on a level of granularity selected for analysis by the DCTM 201. For example, all user listening sessions 1304 that occurred within a same month can be associated with a same timestamp for that month.
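For illustration only, the following sketch shows one way listening sessions might be converted into "documents" whose words are artist names, with a monthly timestamp granularity; the field names are assumptions made for the example.

```python
# Minimal sketch: each listening session becomes a "document" whose words are
# the artists of its tracks, and its timestamp is binned to monthly granularity.
# Field names are illustrative only.
from datetime import datetime

sessions = [
    {"timestamp": datetime(2021, 3, 2), "tracks": [{"artist": "Artist A"}, {"artist": "Artist B"}]},
    {"timestamp": datetime(2021, 3, 20), "tracks": [{"artist": "Artist A"}]},
    {"timestamp": datetime(2021, 4, 5), "tracks": [{"artist": "Artist C"}]},
]

documents = [
    {
        "timestamp": session["timestamp"].strftime("%Y-%m"),   # monthly granularity
        "words": [track["artist"] for track in session["tracks"]],
    }
    for session in sessions
]
print(documents[0])   # {'timestamp': '2021-03', 'words': ['Artist A', 'Artist B']}
```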


Additionally, the DCTM 201 receives a quantity of topics 1312 to be modeled as input. The quantity of topics 1312 may also specify which type of metadata 1308 is to be modeled as the topic. For example, the user listening session 1304 can be modeled based on one or more types of the metadata 1308 associated with the tracks 1306 therein. Here, the artists are to be modeled, and thus the quantity indicates N artist groupings. In other words, the artists are the vocabulary for the user listening session 1304, similar to words of a document, and a cluster or grouping of artists represents a topic (e.g., a distribution over the artists).


Depending on the quantity of topics 1312 (e.g., N artist groupings) provided as input and a number of clusters or groupings of the artists that can be drawn from the set 1302 of user listening sessions 1304, a list of artist groupings 1314 output by the DCTM 201 can include all artist groupings inferred or the N most probable artist groupings for the user listening sessions 1304. As the topics are groupings of artists, in some examples, the most probable artists associated with each grouping are provided within the list of artist groupings 1314. For example, the top 30 artists associated with artist grouping 1 (e.g., AG 1 in FIG. 13) can be provided to describe artist grouping 1. The artist groupings do not have to be independent of one another. A same artist can be included in more than one grouping. In some examples, an artist grouping can then be labeled or categorized. For example, if artist grouping 1 was represented by artists such as Britney Spears, Christina Aguilera, N*Sync, Backstreet Boys, and the like, artist grouping 1 can be labeled as late 90s Pop Artists.
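For illustration only, the following sketch reads the most probable artists per grouping off a grouping-artist probability matrix; the matrix values and artist names are toy placeholders.

```python
# Minimal sketch: list the top-N most probable artists for each artist grouping
# from a grouping-artist probability matrix (rows = groupings, columns = artists).
import numpy as np

artists = ["Britney Spears", "Christina Aguilera", "N*Sync", "Backstreet Boys", "Artist E"]
grouping_artist_probs = np.array([
    [0.35, 0.30, 0.20, 0.10, 0.05],   # artist grouping 1 (toy values)
    [0.05, 0.05, 0.10, 0.20, 0.60],   # artist grouping 2 (toy values)
])

top_n = 3
for k, row in enumerate(grouping_artist_probs, start=1):
    top = [artists[i] for i in np.argsort(row)[::-1][:top_n]]
    print(f"artist grouping {k}: {top}")
```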


Additionally, for each artist grouping 1316 in the list of artist groupings 1314, the DCTM 201 can provide a popularity 1318, representation 1320, and correlation 1322 for the artist grouping 1316 at given time points across the period of time that the set 1302 of user listening sessions 1304 spans to illustrate how the popularity, representations, and correlations among artist groupings have evolved over time for the set 1302.


For example, the popularity 1318 of the artist grouping 1316 can be based on a distribution over the artist groupings for each user listening session 1304 at a given time point (e.g., similar to a document-topic distribution). For example, a user listening session 1304 is comprised of x % of artist grouping 1, y % of artist grouping 2, z % of artist grouping 3, etc., where the sum of the percentages equals 100%. In other words, x % of the tracks 1306 for a user listening session 1304 are associated with an artist from artist grouping 1, y % of the tracks 1306 for the user listening session 1304 are associated with an artist from artist grouping 2, and z % of the tracks 1306 for the user listening session 1304 are associated with an artist from artist grouping 3. The representation 1320 of the artist grouping 1316 can be based on a distribution over the artists for a given artist grouping at a given time point (e.g., similar to a topic-word distribution). For example, the distribution can include the artists most frequently associated with the given artist grouping at the given time point. The correlation 1322 for the artist grouping 1316 is a relationship strength between a given artist grouping and one or more other artist groupings at a given time point. The strength of the relationship can be based, at least in part, on a number of common artists shared between the given artist grouping and another artist grouping.



FIG. 13 depicts a non-limiting example where the DCTM 201 identifies underlying artist groupings within the user listening sessions 1304 as the topics. In other examples, the DCTM 201 can identify other underlying track metadata groupings within the user listening sessions 1304 as the topics, such as genre or sub-genre groupings, to model the evolution of the popularity, embedding, and correlations thereof. As another example, the DWCTM 1501 can be used instead of the DCTM 201 to model the evolution of the popularity, metadata embedding, and correlations thereof. For example, when using the DWCTM 1501, the output of the DWCTM 1501 can be used to identify underlying artist groupings, or other underlying track metadata groupings such as genre or sub-genre groupings, within the user listening sessions 1304 as the topics.



FIG. 14 illustrates an example media content item recommendation 1402 facilitated by output of the DCTM 201 or DWCTM 1501. The media content recommendation engine 1110 of the application system 110, described in detail with reference to FIGS. 11 and 12, generates the media content item recommendation 1402 based on output of the DCTM 201 or DWCTM 1501. The output of the DCTM 201 or DWCTM 1501 can include at least the list of artist groupings 1314 described with reference to FIG. 13.


The user U is currently listening to a first media content item 1404 (e.g., a track). For example, the media-playback engine 1108 facilitates the playing of the first media content item 1404 on the media playback device 1102, which operates to play the first media content item 1404 to produce media output 1112. The first media content item 1404 is associated with a first artist. In some examples, a name of the first artist is included as part of metadata for the first media content item 1404.


The media-playback engine 1108 can communicate over the network 1106 with the media delivery system 1104 to indicate that the first media content item 1404 is currently being played back on the media playback device 1102. The media-playback engine 1108 can include associated metadata of the first media content item 1404, such as the first artist, within the communication. Additionally or alternatively, the media content recommendation engine 1110 can retrieve such metadata from the media data store 1230 (e.g., media content metadata 1238).


Using the output of the DCTM 201 or DWCTM 1501 described in detail with respect to FIG. 13, an artist grouping from the list of artist groupings 1314 that includes the first artist of the first media content item 1404 is identified. In some examples, a second artist can be selected from the same artist grouping that includes the first artist. In other examples, the second artist can be selected from a different artist grouping that is highly correlated with the artist grouping that includes the first artist. A second media content item from media content associated with the second artist can then be identified from the media data store 1230 and provided as the recommendation 1402. In other examples, more than one media content item can be identified and provided in a form of a playlist.
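For illustration only, the following sketch captures the selection logic described above: locate the artist grouping containing the first artist, then pick a second artist either from that grouping or from the most correlated other grouping. The grouping membership and correlation values are toy placeholders.

```python
# Minimal sketch of the recommendation selection logic; all data are toy values.
import random

artist_groupings = {
    "AG1": ["Artist A", "Artist B", "Artist C"],
    "AG2": ["Artist D", "Artist E"],
}
grouping_correlation = {("AG1", "AG2"): 0.8}   # symmetric, toy correlation value

def correlation(g1: str, g2: str) -> float:
    return grouping_correlation.get((g1, g2), grouping_correlation.get((g2, g1), 0.0))

def recommend_second_artist(first_artist: str, same_grouping: bool = True) -> str:
    source = next(g for g, members in artist_groupings.items() if first_artist in members)
    if same_grouping:
        candidates = [a for a in artist_groupings[source] if a != first_artist]
    else:
        other = max((g for g in artist_groupings if g != source), key=lambda g: correlation(source, g))
        candidates = artist_groupings[other]
    return random.choice(candidates)

print(recommend_second_artist("Artist A"))                       # from the same grouping
print(recommend_second_artist("Artist A", same_grouping=False))  # from a correlated grouping
```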


Use of the output of the DCTM 201 or DWCTM 1501 to facilitate media content item recommendations, such as the recommendation 1402, enables trend-sensitive recommendations. For example, as the DCTM 201 learns the latent representations or groupings (e.g., artist groupings, genre groupings, mood groupings, etc.) within user listening sessions, the media content recommendation engine 1110 can use those groupings not only to predict and recommend the next media content item to listen to, but also, and more importantly, to identify a set of media content items with diverse properties (for example, belonging to different groupings), which targets both satisfaction metrics of the user U and diversity of the recommendations. This is an improvement over current recommendation systems that often train on all user behavior data gathered collectively over time (rather than at individual time points across time like the DCTM 201), which causes those systems to become outdated and insensitive to emerging trends in user preference. Using the DCTM 201 or DWCTM 1501, it is possible to recommend media content items that are relevant to the current user taste and sensitive to emerging trends, as well as to possibly predict future ones.



FIGS. 11 through 14 provide a non-limiting example of the media delivery system 1104 as the application system 110 that uses an evolution of user consumption of media content items modeled by the DCTM 201 or DWCTM 1501 to inform recommendations of media content items for the user U.


Additionally, the media delivery system 1104 can utilize the DCTM 201 or DWCTM 1501 to discover emerging artists and artist groups to further inform the recommendations. By viewing the artists as the vocabulary, new related artists can be discovered based on a user's recent listening sessions, rather than based only on what other users are listening to (as with "similar artists"), providing further personalization. As every user listening session is a unique collection of topics, a recommendation based on the topics of the current session can offer a new personalized and diversified session creation.


In other examples, the media delivery system 1104 can utilize the DCTM 201 or DWCTM 1501 more generally to understand the evolution of artist groups to keep playlists up to date. For example, using the DCTM 201 to model user listening sessions, the media delivery system 1104 can understand an artist grouping from the point of view of the user (e.g., why artists are grouped together). As one example, the artists can be grouped not only by genre, but also based on geographical reasons (e.g., from a same country or region of a country) or shared themes in their tracks. The DCTM 201 can model the relationships within an artist grouping over time to understand how the grouping evolves and changes. Human or machine editors can use this information to create new playlists of media content items, or enable connections between emerging artists, while keeping the playlists up to date and aligned with users' interests. Additionally, this information can be utilized as part of business decisions to determine whether more editors should be dedicated to creating playlists for certain artist groups, or genres related to those artist groups, to keep up with user demand.


In further examples, the media delivery system 1104 can utilize the DCTM 201 or DWCTM 1501 to moderate content of media content items. For at least some types of media content items, such as podcasts, it is important to detect harmful content. However, a direct search of the podcasts for banned keywords is likely not an effective detection method, as synonyms or words with a different meaning are likely to be used to avoid detection. The DCTM 201 or DWCTM 1501 can be utilized to understand a relationship between the words and topics that are used in the podcasts, which can aid in detection of potential harmful content. For example, it is likely that the words and topics used to indicate harmful content would be used in a different context than usual, indicating that their meaning could be different and the content of the podcast is potentially harmful.


In yet further examples, the media delivery system 1104 can utilize the DCTM 201 or DWCTM 1501 for knowledge graph integration. Knowledge graphs provide a useful way to link together topics and keywords. However, topics in the knowledge graph are often static, meaning that two entities are linked by a fact (e.g., a family relationship), whereas the manner in which topics and keywords are going to be consumed together can depend on other dynamic factors. For example, based on recent news, topics related to "news" and "sports" can be recommended together if something recently occurred in the sports community, or, at other times, topics related to "news" and "politics" if something recently occurred in the political community. The DCTM 201 or DWCTM 1501 can account for these dynamic factors to ensure that the knowledge graph is kept up to date, reflecting the dynamic changes in trends.


The service 102 hosting the DCTM 201 or DWCTM 1501 and/or the application systems 110 that apply the output of the DCTM 201 or DWCTM 1501 to other processes are not limited to media streaming or delivery systems. Other example services 102 and/or application systems 110 can include social networking or professional networking systems. For example, social media posts can be modeled by the DCTM 201 or DWCTM 1501 to determine emerging or trending topics. As one example, in response to a global pandemic, many users that subscribe to the social networking system now have to work from home. The DCTM 201 or DWCTM 1501 can identify as a topic a distribution of words that are associated with working from home from the social media posts, which can be utilized to provide targeted advertising (e.g., noise canceling headphones to deal with those new co-workers) or other similar recommendations within the social networking system. Additionally, the identified topic from social media posts can be provided to other systems, such as a media streaming service, and utilized to inform types of media content recommendations (e.g., stress relieving media content items).


Moreover, in addition to modeling evolution of topic popularity, representation, and correlation over time, the DCTM 201 can also model the evolution thereof based on geography to provide a spatio-temporal perspective. For example, the Gaussian processes utilized for the dynamic modeling, described in detail with respect to FIG. 9, allow multiple indices for each document. Thus, one index can be based on temporal information, and another index can be based on geographic information. As a result, the DCTM 201 can reveal relationships between topics or the probability of a topic based on geographic information.



FIG. 15 illustrates a conceptual diagram 1500 of example inputs to and outputs from DWCTM 1501. As one example, the inputs to the DWCTM 1501 include a set 1502 of documents 1504 and a quantity of topics 1508 (e.g., N topics) for modeling. Each document 1504 is a textual document comprised of a plurality of words 1505 and having an associated timestamp 1506, the timestamps 1506 defining a period of time that the set 1502 of documents 1504 spans. In some examples, more than one document 1504 can be associated with a same timestamp 1506. A level of granularity at which the timestamp 1506 is processed (e.g., by year, by month, by week, daily, hourly, etc.) can be variable depending on a type of document 1504. The granularity can also be based on how rapidly one or more of topic popularity, word embedding, and topic correlations are likely to change. For example, if change is more likely, processing the documents 1504 with smaller timestamp intervals (e.g., monthly versus yearly) can be desirable.


A topic is a probability distribution over a vocabulary. Here, the vocabulary comprises the words 1505 of the documents 1504. Thus, the topics identified by the DWCTM 1501 are each a cluster or grouping of a subset of the words 1505 (see the example of FIG. 4). The topics are not assumed to be independent of one another. Accordingly, two or more topics can be represented by common words (e.g., a same word can represent more than one topic). Moreover, in contrast to conventional dynamic topic models, there is no assumption of independence between words with the DWCTM 1501.


The quantity of topics 1508 received as input informs the DWCTM 1501 of a number of word clusters to identify. Depending on the quantity of topics 1508 provided as input and a number of topics that can be drawn from the set 1502 of documents 1504, a list of topics 1510 output by the DWCTM 1501 can include all topics inferred during an inference process or the N most probable topics for the documents 1504, where N corresponds to the quantity of topics 1508. As the topics are clusters of words 1505, in some examples, the most probable words associated with a topic are provided as the topic within the list of topics 1510. For each topic 1512 in the list of topics 1510 (e.g., for each topic 1 through topic N), the DWCTM 1501 can provide outputs corresponding to a topic popularity 1514, a word embedding 1516, and a topic correlation 1518 at given time points across the period of time that the set 1502 of documents 1504 spans to illustrate how the topic popularity, word embeddings, and topic correlations have evolved over time for the set 1502.


The topic popularity 1514 can be based on a document-topic distribution output by the DWCTM 1501. A document-topic distribution is a distribution over the topics for each document 1504. For example, a document is comprised of x % of topic 1, y % of topic 2, z % of topic 3, etc., where the sum of the proportions over all topics equals 1. The topic correlation 1518 is a relationship strength between a topic and one or more other topics at a given point in time, which captures the evolution of topic correlations over time. The topic correlation 1518 can be based on covariance matrices that yield a correlation coefficient measuring the relationship strength.
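For illustration only, the following sketch converts a topic covariance matrix at one time point into correlation coefficients; the matrix values are toy placeholders.

```python
# Minimal sketch: derive topic correlation coefficients from a covariance matrix
# at a single time point. correlation[i, j] measures the relationship strength
# between topics i and j.
import numpy as np

covariance = np.array([
    [1.00, 0.60, -0.20],
    [0.60, 2.00,  0.10],
    [-0.20, 0.10, 0.50],
])

stddev = np.sqrt(np.diag(covariance))
correlation = covariance / np.outer(stddev, stddev)
print(np.round(correlation, 3))
```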


The word embedding 1516 can be based on a topic-word distribution output by the DWCTM 1501. A topic-word distribution is a distribution over the words for a given topic. For example, the distribution can include the words most frequently associated with a given topic. The DWCTM 1501 allows new words to be introduced at test time as well as reliable inference of low frequency words. The DWCTM 1501 automatically learns a word embedding that provides a proxy to model word correlation. As described below, the word embedding procedure allows the word embedding to be initialized with pre-trained embeddings, thereby speeding up the procedure while also accounting for low frequency words in the training vocabulary. The word embedding can include generating a word embedding matrix, wherein words of similar context are grouped together within the matrix.
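For illustration only, the following sketch initializes a word embedding matrix from pre-trained vectors where they exist, with a random fallback for words that lack one; the pre-trained lookup is a placeholder.

```python
# Minimal sketch: build an initial word embedding matrix, reusing pre-trained
# vectors when available and falling back to small random vectors otherwise.
import numpy as np

rng = np.random.default_rng(0)
embedding_dim = 4
vocabulary = ["music", "guitar", "rareword"]
pretrained = {"music": np.ones(embedding_dim), "guitar": 0.5 * np.ones(embedding_dim)}

embedding_matrix = np.stack([
    pretrained.get(word, rng.normal(scale=0.1, size=embedding_dim))
    for word in vocabulary
])
print(embedding_matrix.shape)   # (vocabulary size, embedding dimension)
```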


The DWCTM 1501 incorporates word correlation in part by augmenting a generative process of topic representations and removing the assumption of independence between words 1505 to leverage information from less frequent words. Accordingly, the DWCTM 1501 can obtain sufficient signals about less frequent words by observing the existence of similar words. The DWCTM 1501 correlates previously independent Gaussian processes (GPs) into a multi-output Gaussian process (MOGP) that explicitly captures word correlation in the form of a covariance matrix over all the words 1505. The DWCTM 1501 alleviates the high computational complexity and the large amount of data required for reliable estimates by embedding the words 1505 into a latent space and generating the covariance matrix via a covariance function. By applying a Bayesian treatment to the word representations (e.g., vectors) in the latent space, a reliable estimate of the word correlation can be obtained with a small amount of data, from which a word embedding may be inferred. When the DWCTM 1501 is applied to music, a track embedding may be inferred in the same way, which may dramatically reduce the cost of, and improve, inference over listening sessions, resulting in a more efficient use of processing and memory resources.


The DWCTM 1501 utilizes a meta-encoder 1503 for the variational inference (e.g., the variational posterior of topic mixing proportions) to manage document dynamics as they evolve over time, which improves the amortized inference formulation. The dynamic encoding ability of the meta-encoder 1503 differs from that of a static encoder because the meta-encoder 1503 is aware of a document's timestamp 1506 during inference operations. Conventional topic models are constrained by not considering document dynamics as they evolve over time. FIG. 18 is a diagram of a comparison analysis on the effect of the meta-encoder 1503.


The DWCTM 1501 overcomes the scalability problem of traditional topic modelling by implementing an approximate word normalization through importance sampling. An approximated normalization constant for the word distribution allows the DWCTM 1501 to scale to millions of words 1505. Rather than sampling all of the vocabulary (which can include millions of words) each time, the DWCTM 1501 considers a fixed subset of words 1505 that do not appear in a document 1504 (plus the words that do appear in the document) to compute a normalization constant.



FIG. 16 is an example method 1600 of processing performed by the DWCTM 1501. Upon receipt of inputs, such as the set 1502 of documents 1504 and the quantity of topics 1508 for modeling described with reference to FIG. 15, the DWCTM 1501 initiates the method 1600 at operation 1602 by analyzing words from each document 1504 within the set 1502 of documents 1504. Based on the analysis, the DWCTM 1501 identifies topics at operation 1604. The topics identified are for the set 1502 of documents 1504, and each topic is represented by a cluster of words. At operation 1606, for each topic identified at operation 1604, the DWCTM 1501 models topic popularity, word embedding, and topic correlation across the period of time, as described in more detail with reference to FIG. 17. In some examples, the DWCTM 1501 is an unsupervised machine learning model that performs one or more of operations 1602, 1604, and 1606 iteratively to learn the underlying topics and the dynamics of each topic over time as part of training, for example. The DWCTM 1501 can be trained on at least a portion of the set 1502 of documents 1504 provided as inputs. Moreover, the DWCTM 1501 can be initialized with pre-trained word embeddings, which improves the training process while accounting for low frequency words in the training vocabulary. In contrast to conventional topic models, the DWCTM 1501 allows for new words to be introduced at test time and reliable inference of low frequency words.



FIG. 17 illustrates an example training phase 1700 of the DWCTM 1501. During the training phase 1700, the DWCTM 1501 performs a generative process 1702 and an inference process 1706 using a training dataset 1708. In some examples, the architecture of the DWCTM 1501 includes one or more deep neural networks that utilize unsupervised learning to perform one or more of the generative process 1702 and inference process 1706. The training can occur online (e.g., over a network such as network 120 described with reference to FIG. 1).


The generative process 1702 defines a joint probability distribution over observed data and latent variables. The joint probability distribution can be decomposed into a likelihood for the observed data conditioned on the latent variables and a prior distribution from which the latent variables are drawn. In some examples, a probabilistic graphical model 1704 is used to illustrate the generative process 1702. For example, within the probabilistic graphical model 1704, the shaded node represents the observed data, the unshaded nodes represent the latent variables, and the edges represent possible dependencies between the nodes. A goal of the training phase is to learn the latent variables and model parameters. In some examples, at least a portion of the training dataset 1708 is held back to be used for testing (e.g., 75% of the dataset can be used for training, and 25% can be withheld as a testing dataset).


The DWCTM 1501 is a probabilistic generative model that assumes that each document d, associated with a specific time point x_d, is generated by sampling a set of words according to K topics. Each document has an unnormalized topic mixing proportion η_d sampled from a prior distribution, η_d ~ N(μ_{x_d}, Σ_{x_d}), where μ_{x_d} is the mean of the distribution of topic mixing proportions associated with the time point x_d, and Σ_{x_d} is the covariance matrix of the topics at x_d. When Σ_{x_d} is diagonal, the mixing proportions for the topics are independent of each other. Then, each word w_n in the document is assigned a topic z_n, which is sampled from the distribution σ(η_d), where σ(x)_i = exp(x_i)/Σ_j exp(x_j) is the softmax function. Finally, the word w_n is sampled by picking a word from the vocabulary following the unnormalized word distribution of the assigned topic z_n at the time x_d, β_{z_n}(x_d).


The generative process 1702 of an N_d-word document d is summarized as follows. First, draw a mixture of topics η_d ~ N(μ_{x_d}, Σ_{x_d}). Then, for each word n = 1, . . . , N_d:


1) Draw a topic assignment z_n | η_d from a categorical distribution with parameter σ(η_d);


2) Draw a word w_n | z_n, β from a categorical distribution with parameter σ(β_{z_n}(x_d)). A minimal numerical sketch of these two steps is given below.
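For illustration only, the following sketch runs the two generative steps numerically for one document, assuming fixed toy values for μ_{x_d}, Σ_{x_d}, and β(x_d).

```python
# Minimal numerical sketch of the generative process for one N_d-word document
# with K topics and a P-word vocabulary; mu_xd, Sigma_xd, and beta_xd are toy values.
import numpy as np

rng = np.random.default_rng(0)
K, P, N_d = 3, 5, 10

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

mu_xd = np.zeros(K)
Sigma_xd = np.eye(K)
beta_xd = rng.normal(size=(K, P))                  # unnormalized word distributions per topic

eta_d = rng.multivariate_normal(mu_xd, Sigma_xd)   # mixture of topics for the document
words = []
for _ in range(N_d):
    z_n = rng.choice(K, p=softmax(eta_d))          # 1) topic assignment ~ Cat(sigma(eta_d))
    w_n = rng.choice(P, p=softmax(beta_xd[z_n]))   # 2) word ~ Cat(sigma(beta_{z_n}(x_d)))
    words.append(int(w_n))
print(words)
```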


The individual documents are assumed to be independent and identically distributed (i.i.d.) given the document-topic proportion and topic-word distribution. Under this generative process 1702, the marginal likelihood for a given corpus W that contains D documents becomes:










$$p(W \mid \mu, \Sigma, \beta) = \prod_{d=1}^{D} \int \prod_{n=1}^{N_d} \left( \sum_{z_n=1}^{K} p\big(W_{dn} \mid z_n, \beta_{z_n}(x_d)\big)\, p(z_n \mid \eta_d) \right) p\big(\eta_d \mid \mu_{x_d}, \Sigma_{x_d}\big)\, d\eta_d \qquad (1)$$







To model the temporal dynamics of the topic mixing proportions η_d, temporal processes can be used as the prior distributions for μ and Σ. In particular, a zero-mean Gaussian process can be used to model the mean of the topic mixing proportions (μ_{x_d})_{d=1}^{D}, i.e., p(μ) = GP(0, κ_μ). Similarly, the covariance matrices (Σ_{x_d})_{d=1}^{D} can be modeled as a generalized Wishart process (GWP), indicated as





$$\Sigma_{x_d} \sim \mathrm{GWP}(V, \nu, \kappa_\theta).$$


Conventionally, the topic representations β of dynamic topic models are allowed to change over time by defining a GP prior over time independently for each word in each topic, so that there will be KP independent GPs, where P is the number of words in the vocabulary. The conventional process does not allow information sharing among similar words and results in a large number of variational parameters for inference.


According to an aspect, correlation among words can be defined by a correlated temporal process over all words. First, a latent representation h_i ∈ ℝ^Q can be defined for each word in the vocabulary. The latent representations are given an uninformative prior h_i ~ N(0, I). Then, an MOGP is defined for the topic representations over time for each topic:






$$p\big((\beta_k)_{:} \mid H, x\big) = \mathcal{N}\big((\beta_k)_{:} \mid 0,\; K_H \otimes K_x\big), \qquad (2)$$


where (·)_: denotes matrix vectorization, ⊗ denotes the Kronecker product, and β_k is a T×P matrix representing the unnormalized word probabilities over time for the topic k (T is the number of unique time points in the corpus). The covariance matrix K_x is computed using the kernel function κ_x over all the time points x, and the covariance matrix K_H is computed using the kernel function κ_H over all the word representations H = (h_1, . . . , h_P). With this formulation, all the words at all the time points are jointly modeled with a single GP, in which the word correlation is encoded in the TP×TP covariance matrix. The prior distributions among different topics are assumed to be independent: p(β | x, H) = Π_{k=1}^{K} p(β_k | x, H). The word correlation is encoded through the latent representations of words, which are static over time and shared across all the topics. Although the number of latent vectors is relatively large, they can be reliably estimated by conditioning on the whole corpus. The topic assignment variables {z_n}_{n=1}^{N_d} for the individual words of each document are latent and discrete, making them difficult to infer with variational inference. A solution involves marginalizing out these discrete variables to obtain a closed-form likelihood distribution,










$$p(W_d \mid \eta_d, \beta) = \prod_{n=1}^{N_d} \mathrm{Cat}\big(\sigma\big(\beta(x_d)\,\sigma(\eta_d)\big)\big) \qquad (3)$$







where β(x_d) denotes the representations of all the topics at the time x_d. With this formulation, a document can be represented in the form of word counts, allowing for a simplified variational inference procedure.


The MOGP formulation provides a framework to correlate both the temporal dimension and the words in the vocabulary for the topic representations under a single GP, but it also introduces a computational challenge because calculating the probability density function (PDF) of Equation (2) is O(P^3T^3). To overcome this computational challenge, an efficient variational inference method is provided based on the stochastic variational sparse GP formulation (SVGP), reducing the computational complexity to be linear with respect to P and T.
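To make the scale concrete, the following sketch builds the Kronecker-structured covariance K_H ⊗ K_x of equation (2) for toy sizes; the squared-exponential kernel is an assumption made only for the illustration.

```python
# Minimal sketch: the joint covariance of (beta_k): over all words and time points
# is K_H ⊗ K_x, a TP x TP matrix, which motivates the sparse approximation below.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(0)
T, P, Q = 4, 6, 3                       # time points, vocabulary size, embedding dimension
x = np.linspace(0.0, 1.0, T)[:, None]   # time points
H = rng.normal(size=(P, Q))             # latent word representations h_i

K_x = rbf(x, x)                         # T x T covariance over time
K_H = rbf(H, H)                         # P x P covariance over word representations
K = np.kron(K_H, K_x)                   # TP x TP joint covariance
print(K.shape)                          # (24, 24)
```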


The word correlation is encoded by the latent representations of the individual words, H. The variational posterior of H can be parameterized as q(H) = N(m_H, S_H) to derive a variational lower bound as:





$$\log p(W \mid x) \geq \mathbb{E}_{q(H)}\big[\log p(W \mid x, H)\big] - \mathrm{KL}\big(q(H)\,\|\,p(H)\big), \qquad (4)$$


where KL(·∥·) denotes the Kullback-Leibler divergence. The KL term in (4) can be computed in closed form because both q(H) and p(H) are normal distributions, but p(W|x, H) is intractable.


To derive a lower bound for the marginalized likelihood p(W | x, H), a variational lower bound is derived for log p(β_k | x, H) according to the SVGP formulation. Taking advantage of the Kronecker product structure in the covariance matrix, i.e., K_H ⊗ K_x, the inducing variables can be defined to be on a grid in the joint space of the word embedding and the temporal dimension. Let U_{β_k} be an M_x × M_H matrix which follows the distribution






$$p(U_{\beta_k} \mid Z_x, Z_H) = \mathcal{N}\big((U_{\beta_k})_{:} \mid 0,\; K_{uu}\big), \quad \text{where } K_{uu} = K_{uu}^{H} \otimes K_{uu}^{x}.$$


The rows of U_{β_k} correspond to a set of inducing inputs in the temporal dimension, denoted as Z_x, and the columns of U_{β_k} correspond to a set of inducing inputs in the word embedding space, denoted as Z_H. Then, K_{uu}^{H} is computed on the set of inducing inputs Z_H with κ_H, while K_{uu}^{x} is computed on the set Z_x with κ_x.


After defining the inducing variable U_{β_k}, p(β_k | x, H) is reformulated as:






$$p(\beta_k \mid x, H) = \int p(\beta_k \mid U_{\beta_k}, x, H, Z_x, Z_H)\, p(U_{\beta_k} \mid Z_x, Z_H)\, dU_{\beta_k}. \qquad (5)$$


The conditional distribution of βk is:






$$p(\beta_k \mid U_{\beta_k}, Z_x, Z_H, x, H) = \mathcal{N}\big(\beta_k \mid K_{fu} K_{uu}^{-1} U_{\beta_k},\; K_{ff} - K_{fu} K_{uu}^{-1} K_{uf}\big), \qquad (6)$$


where K_{fu} = K_{fu}^{H} ⊗ K_{fu}^{x} and K_{ff} = K_{ff}^{H} ⊗ K_{ff}^{x}. K_{ff}^{H} is the covariance matrix computed on H with κ_H, and K_{ff}^{x} is computed on x with κ_x.


With the augmented GP formulation, a variational lower bound can be derived. However, a naive parameterization of the variational posterior q(U_{β_k}) using a multivariate normal distribution has an M_xM_H × M_xM_H covariance matrix, which is too large for matrix inversion. Instead, q(U_{β_k}) is defined with a Kronecker product covariance matrix, similar to p(U_{β_k}), as:






$$q(U_{\beta_k}) = \mathcal{N}\big((U_{\beta_k})_{:} \mid M,\; \Sigma^{H} \otimes \Sigma^{x}\big) \qquad (7)$$


where M is the mean of the variational posterior, Σ^H is an M_H × M_H covariance matrix, and Σ^x is an M_x × M_x covariance matrix. With this formulation, the covariance matrix can be inverted efficiently by only inverting the two smaller covariance matrices,


$$(\Sigma^{H} \otimes \Sigma^{x})^{-1} = (\Sigma^{H})^{-1} \otimes (\Sigma^{x})^{-1}.$$ Such a parameterization dramatically reduces the number of variational parameters in the covariance matrix from M_x^2 M_H^2 to M_x^2 + M_H^2.
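For illustration only, the following sketch numerically checks the Kronecker inversion identity used above on small toy positive-definite factors.

```python
# Minimal numerical check: (Sigma_H ⊗ Sigma_x)^-1 equals Sigma_H^-1 ⊗ Sigma_x^-1,
# so only the two small factors ever need to be inverted.
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)   # symmetric positive definite

Sigma_H, Sigma_x = random_spd(3), random_spd(4)
lhs = np.linalg.inv(np.kron(Sigma_H, Sigma_x))
rhs = np.kron(np.linalg.inv(Sigma_H), np.linalg.inv(Sigma_x))
print(np.allclose(lhs, rhs))   # True
```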


With the variational posterior q(Uβk), the variational lower bound can be derived for any downstream variable that consumes βk:





$$\log p(\cdot \mid H) \geq \mathbb{E}_{q(\beta_k \mid H)}\big[\log p(\cdot \mid \beta_k)\big] - \mathrm{KL}\big(q(U_{\beta_k})\,\|\,p(U_{\beta_k})\big), \qquad (8)$$


where q(β_k | H) = ∫ p(β_k | U_{β_k}, H) q(U_{β_k}) dU_{β_k}. The expectation E_{q(β_k | H)}[log p(· | β_k)] may be approximated with Monte Carlo integration by drawing samples from q(β_k | H).


The multivariate normal distribution with a Kronecker product covariance matrix, like p(U_{β_k}) and q(U_{β_k}), is a matrix normal distribution. In matrix normal distribution notation, q(U_{β_k}) becomes MN(M, Σ^H, Σ^x). Sampling from the distribution and computing the KL divergence can both be done efficiently (see Supplemental analysis further below).


To compute the expectation in equation (8), samples are drawn from q(β_k | H). As q(β_k | H) is a multivariate normal distribution with a full covariance matrix, drawing a correlated sample of β_k is computationally very expensive, O(P^3T^3). Usually, drawing a fully correlated sample can be avoided if β_k in the downstream log PDF, log p(· | β_k), can be decomposed into a sum of individual entries, e.g., when p(· | β_k) is a normal distribution. However, such a decomposition is not applicable here due to the softmax function applied to β_k in equation (1). To efficiently sample from q(β_k | H), another sparse GP approximation, for example the "fully independent training conditional" (FITC) approximation, can be applied to the conditional distribution of β_k. The resulting formulation is:






$$p_{\mathrm{FITC}}(\beta_k \mid U_{\beta_k}, Z_x, Z_H, x, H) = \mathcal{N}\big(\beta_k \mid K_{fu} K_{uu}^{-1} (U_{\beta_k}^{\top})_{:},\; \mathrm{diag}(K_{ff} - K_{fu} K_{uu}^{-1} K_{uf})\big), \qquad (9)$$


where diag(·) returns a diagonal matrix while keeping the diagonal entries. Since K_{fu}, K_{ff}, and K_{uu} have a Kronecker structure, the mean and covariance can be rewritten so as to compute them efficiently. Sampling from (9) is efficient because the individual entries of β_k can be sampled independently. This reduces the computational complexity of sampling β_k from O(P^3T^3) to O(PTM_x^2M_H^2).
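For illustration only, the following sketch shows the diagonal (FITC-style) draw; the mean vector and diagonal variances are toy stand-ins for the quantities in equation (9).

```python
# Minimal sketch: with a diagonal predictive covariance, as in the FITC-style
# conditional of equation (9), the entries of beta_k are sampled independently,
# avoiding a TP x TP Cholesky factorization. Values are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
T, P = 4, 6
mean = rng.normal(size=T * P)                 # stand-in for K_fu K_uu^-1 (U^T):
diag_var = rng.uniform(0.1, 0.5, size=T * P)  # stand-in for diag(K_ff - K_fu K_uu^-1 K_uf)

beta_k_sample = mean + np.sqrt(diag_var) * rng.standard_normal(T * P)
print(beta_k_sample.reshape(T, P).shape)      # (4, 6)
```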


Regarding the variational inference for a mixture of topics, a variational posterior q(η_d) for each document can be used to derive a variational lower bound of the log probability over the documents as:










$$\log p(W \mid \mu, \Sigma, \beta) \geq \sum_{d=1}^{D} \Big( \mathbb{E}_{q(\beta \mid H)\, q(\eta_d)}\big[\log p\big(W_d \mid \eta_d, \beta(x_d)\big)\big] - \mathrm{KL}\big(q(\eta_d)\,\|\,p(\eta_d \mid \mu_{x_d}, \Sigma_{x_d})\big) \Big) = \mathcal{L}_W \qquad (10)$$







Since the lower bound is a summation over individual documents, the formulation allows for a stochastic approximation by sub-sampling the documents.


Regarding importance sampling, computing the expectation E_{q(β | H) q(η_d)}[log p(W_d | η_d, β(x_d))] is still problematic when the number of words in the vocabulary increases, as each word needs to be sampled to compute the normalization constant of the softmax function in equation (3).


First, let ξ_d := β(x_d)σ(η_d), and ξ_{d,n} = (ξ_d)_n. Equation (3) can then be rewritten as p(W_d | η_d, β) = Π_{n=1}^{N_d} Cat(σ(ξ_d)).


Then, the derivative can be explicitly written as (see Supplemental analysis further below):














$$\tilde{\mathcal{L}}_W = \mathbb{E}_{q(\beta \mid H)\, q(\eta_d)} \sum_{n=1}^{N_d} \left[ \xi_{d,n} - \sum_{i=1}^{P} \frac{\exp(\xi_d)_i}{\sum_{j=1}^{P} \exp(\xi_d)_j}\, \xi_{d,i} \right] \qquad (11)$$







In the sum inside the parentheses, sampling is performed over the entire vocabulary (of size P), which is inefficient and may even be unfeasible for a large vocabulary. To efficiently scale the DWCTM 1501 to an arbitrarily large set of words in the vocabulary, the normalization constant can be approximated with a fixed number of words using self-normalizing importance sampling. By considering the words appearing in the batch of documents under analysis as positive (e.g., as in a positive class in a classification problem), a random sample of M classes (e.g., words from the vocabulary) can be used to approximate the normalization constant.


Consider a sample vector s∈{1, . . . , P}^(M+Nd), which represents a sample of words in the vocabulary and stores the indices of the Nd positive words (words appearing in document d) and the indices of the M sampled words. Let ξ′d,i := ξd,i − ln(Qdi/P) if yi=0 (i.e., word i does not appear in document d), and ξ′d,i := ξd,i − ln(Qdi) otherwise, with Qdi the proposal distribution. The true logits are thus shifted by the expected number of occurrences of word i, ensuring that the sampled softmax is asymptotically unbiased. As one example, Q can be a uniform distribution over the subset of words considered, so Qdi=1/(Nd+M). Then:














∇L̃W ≈ E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ ∇ξd,n − Σ_{i=1..M+Nd} ( exp(ξ′d,i) / Σ_{j=1..M+Nd} exp(ξ′d,j) ) ∇ξd,i ]  (12)







As a result, the complexity of computing the expectation is further reduced from O(P T Mx² MH²) to O((M+Nd) T Mx² MH²), providing additional processing and memory resource conservation.
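As a non-limiting sketch of the self-normalizing sampled softmax described above, the following illustrates the shape of the computation; the uniform proposal, the logit corrections, and all names are illustrative assumptions.

import numpy as np

def sampled_softmax_loglik(xi_d, positive_idx, M, seed=0):
    # xi_d: length-P logit vector for document d; positive_idx: indices of the Nd words in d.
    rng = np.random.default_rng(seed)
    P, Nd = xi_d.shape[0], len(positive_idx)
    candidates = np.setdiff1d(np.arange(P), positive_idx)
    negatives = rng.choice(candidates, size=M, replace=False)
    s = np.concatenate([positive_idx, negatives])             # sample vector of size Nd + M
    Q = 1.0 / (Nd + M)                                        # uniform proposal over the subset
    # Shift the logits as in the definition of xi': ln(Q) for positives, ln(Q/P) for negatives.
    corr = np.where(np.isin(s, positive_idx), np.log(Q), np.log(Q / P))
    xi_prime = xi_d[s] - corr
    m = xi_prime.max()
    log_norm = m + np.log(np.exp(xi_prime - m).sum())         # stable log of the approximate constant
    return xi_d[positive_idx].sum() - Nd * log_norm           # approximates sum_n (xi_{d,n} - log Z)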


Referring again to FIG. 15 and the meta-encoder 1503, the variational posteriors q(ηd) for individual documents can be parameterized as:






q(ηd) = N( σm([Wd Mβ,xd]), σS([Wd Mβ,xd]) ),  (13)


where σm and σS are the parametric functions generating the mean and variance of q(ηd), respectively, Mβ,xd is the mean of the GP prediction at the inducing point locations ZH, and [A B] denotes the concatenation of the matrices A and B. Instead of implicitly learning the topic information into σm and σS, a summary of all the topic representations at the time point xd may be explicitly passed. Mβ,xd may be defined as the mean of the GP prediction at the inducing point locations ZH to keep the complexity constant with respect to the number of words in the vocabulary. The prediction can be treated as the summary of all the topic representations at input xd because the inducing variables in a sparse GP can be viewed as a summary of all the data. By having the topic representations as inputs, the meta-encoder 1503 does not need to "memorize" the information about topics but rather can link a document to the relevant topic representations. Thus, the meta-encoder 1503 may be representative of and correspond with σm and σS.
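As a non-limiting sketch of how the parameterization of equation (13) could be realized, a small dense network can map the concatenation [Wd Mβ,xd] to a mean and a positive variance; the layer structure, activation, and softplus variance head below are assumptions and not taken from the model description.

import numpy as np

def softplus(z):
    # Numerically stable softplus, used here only to keep the variance head positive.
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def meta_encoder(W_d, M_beta_xd, params):
    # W_d: bag-of-words vector; M_beta_xd: GP mean at the inducing locations Z_H for time x_d.
    h = np.concatenate([W_d, np.ravel(M_beta_xd)])
    for W, b in params["hidden"]:                    # shared trunk (e.g., three dense layers)
        h = np.tanh(W @ h + b)
    mean = params["Wm"] @ h + params["bm"]           # sigma_m head
    var = softplus(params["Ws"] @ h + params["bs"])  # sigma_S head, kept positive
    return mean, var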


The lower bound LW is intractable, so an unbiased estimate of LW can be computed via Monte Carlo sampling. As the q(ηd) are normal distributions, a low-variance estimate of the gradients can be obtained via reparameterization. The document-topic proportion for each document d follows a prior distribution p(ηd | μxd, Σxd), where the Gaussian process p(μ)=GP(0, κμ) provides the mean and the Wishart process p(Σ)=WP(V, ν, κθ) provides the covariance matrix at xd. To enable efficient inference for both the GP and the GWP, an SVGP approach can be used to construct a variational lower bound. The complete variational lower bound L of DWCTM 1501 may then be determined by combining the lower bounds (10), (4), (8) and the lower bounds for the GPs and the GWP (see Supplemental analysis further below).
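As a non-limiting sketch of the reparameterized Monte Carlo estimate mentioned above, each sample is drawn as ηd = mean + sqrt(var)·ε with ε ~ N(0, I); the loglik_fn argument below is a placeholder assumption for the document log-likelihood term.

import numpy as np

def reparam_sample_eta(mean, var, n_samples=8, seed=0):
    # Reparameterization trick: eta = mean + sqrt(var) * eps with eps ~ N(0, I).
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(size=(n_samples, mean.shape[0]))
    return mean[None, :] + np.sqrt(var)[None, :] * eps

def mc_expected_loglik(mean, var, loglik_fn, n_samples=8):
    # Monte Carlo estimate of E_q[log p(W_d | eta_d, beta)] over reparameterized samples.
    samples = reparam_sample_eta(mean, var, n_samples)
    return float(np.mean([loglik_fn(eta) for eta in samples]))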



FIG. 19 illustrates a comparison of DWCTM 1501 (i.e., MIST) with several static and dynamic topic models: (i) LDA with mean-field variational inference; (ii) CTM with variational inference; and (iii) a dynamic correlated topic model (DCTM). The comparison did not include dynamic models that only consider independent topics, e.g., DTM and FastDTM (which do not handle continuous dynamics) and gDTM (which only considers dynamics for β).


As shown in FIG. 20, a performance analysis empirically evaluated the benefit of DWCTM 1501 (i.e., MIST) on synthetic datasets, using the importance sampling to scale the model to tens of millions of words. DWCTM 1501 is compared against a DCTM model on a dataset with an increasing number of words (from 100 to 10M), while keeping a fixed small number of samples at different time points. DWCTM 1501 outperforms the DCTM model independently of the number of words considered, as shown by the average time to compute 5 epochs on a dataset with an increasing number of words. In particular, the computational benefit of DWCTM 1501 is more evident after reaching 100K (and 1M) words. On the dataset with 10M words, the DCTM model was unusable as being computationally intractable. DWCTM 1501 is able to scale to 10M words, with an average computational time that is lower than the time required by the DCTM model with 1M words (see also DWCTM 1501 with 1M words as compared to the DCTM model with 100K).


A quantitative analysis highlights the benefit of incorporating word correlation in topic modeling by comparing DWCTM 1501 with state-of-the-art topic models on public datasets. Full details on the data can be found in the Supplemental analysis section further below. In all datasets, there is a timestamp associated with each document. The static topic models (LDA and CTM) are optimized without considering timestamps, while DWCTM 1501 incorporates the timestamps into the inference.


Using a split dataset considering 75% of the samples as training and 25% as test, documents associated with the same timestamps were assigned to the same split. For each dynamic topic model, a Matérn 3/2 kernel was used for β, to allow topics to quickly incorporate new words (e.g., neologisms; this is particularly relevant for datasets such as the NeurIPS conference papers and the Elsevier corpus, where the names of novel models start to appear in citations, for example "LDA" appearing in publications together with "topic modeling" after its introduction in 2003). A squared exponential kernel was used for the parameters μ and f, expecting a smooth temporal evolution of both topic probabilities and their correlation. A full list of experimental settings can be found in the Supplemental analysis further below.


The average per-word perplexity computed on the held-out test set of each dataset for all the models is shown in FIG. 19. The dynamic models (without the FITC approximation) took around 6 s/epoch to analyze 7000 training samples in 3000 dimensions using a single NVIDIA Tesla V100 GPU, completing 5000 epochs in 8 hours (on average). Using the FITC approximation achieved a runtime of 2.5 s/epoch, completing 5000 epochs in 3.5 hours. The per-word perplexity is a measure of fit used to compare models, computed as the exponential of the average negative predictive log-likelihood per word. In cases where the perplexity is computationally intractable, it may be approximated using a subset of words (see Supplemental analysis further below). DWCTM 1501 consistently outperforms all the baselines on all the datasets. The benefit of using DWCTM 1501 is more evident for the datasets that have a larger time span (SotU) or a shorter document size (Blogs, News, Twitter). There is also a significant performance gap between static and dynamic topic models, which demonstrates the advantages of incorporating temporal information into topic modeling. Comparing DWCTM 1501 to a state-of-the-art dynamic topic model, the perplexity decreases on average by 10%, which shows that incorporating word correlation into topic modeling can significantly improve the quality of modeling. Furthermore, the benefit becomes more evident on large scale datasets (e.g., News and Twitter extended). DWCTM 1501 also achieved better (approximate) perplexity than the static topic models. The approximate perplexity metric, computed with 1000 randomly sampled words, is a reasonable approximation of the real perplexity; for NeurIPS (3000) the approximate perplexity was also compared against the true perplexity, with similar results (887.74 vs. 888.59).
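As a non-limiting sketch of the metric itself, the per-word perplexity can be computed as the exponential of the average negative predictive log-likelihood per word; the inputs below (per-document log-likelihood estimates and document lengths) are assumptions.

import numpy as np

def per_word_perplexity(per_doc_loglik, doc_lengths):
    # per_doc_loglik[d]: estimated log p(W_d | ...); doc_lengths[d]: N_d.
    per_doc = np.asarray(per_doc_loglik, dtype=float) / np.asarray(doc_lengths, dtype=float)
    return float(np.exp(-per_doc.mean()))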


To highlight the benefit of using the meta-encoder 1503, for comparison purposes the DWCTM 1501 was also trained using encoders that only consider the document representation as input. The comparison is shown in FIG. 18. Utilizing the meta-encoder 1503 resulted in better performance for five out of seven datasets. DoJ is the only dataset on which the meta-encoder 1503 was not as effective, because the topics in DoJ change very little across time. This result suggests that the benefit of the meta-encoder 1503 may be limited when the temporal dynamics of β do not change much.


A qualitative analysis on the NeurIPS dataset provides additional insight about the word correlation in DWCTM 1501 by visualizing the inferred word correlation. Four inferred popular topics were selected across all years on the NeurIPS dataset and the top-10 frequent words were collected for each topic (see Supplemental analysis further below). Then, the covariance matrix among these frequent words (duplicate words removed) was computed by applying the learnt kernel function κH to the mean of the variational posterior of the word representations H. The covariance matrix can be converted into a correlation matrix for better interpretability. Due to the choice of the kernel function (squared exponential), no anti-correlation is captured in the correlation matrix. A simple hierarchical clustering was applied to the correlation matrix. With only the word relation, the words associated with the same topic are roughly grouped together (topics are unknown to the clustering algorithm). For example, network, weight, neural and layer, which identify the topic neural network, have very similar embeddings. Word pairs that are often used together in some research area show interesting strong correlations, such as input-output, image-pixel, time-state. This indicates that the word correlation has contributed to the identification of these topics.
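As a non-limiting sketch of this qualitative analysis, a squared-exponential kernel can be applied to the posterior means of the word embeddings to obtain a covariance, which is normalized to a correlation matrix and then hierarchically clustered; the SciPy-based clustering and the kernel length scale below are illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def word_correlation(H_mean, lengthscale=1.0):
    # H_mean: (n_words, Q) mean of the variational posterior of the word embeddings.
    d2 = ((H_mean[:, None, :] - H_mean[None, :, :]) ** 2).sum(-1)
    cov = np.exp(-0.5 * d2 / lengthscale ** 2)         # kappa_H applied to the embedding means
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)                      # correlation matrix

def cluster_order(corr):
    # Hierarchical clustering on dissimilarity 1 - correlation; the resulting leaf ordering
    # roughly groups words of the same topic together.
    condensed = 1.0 - corr[np.triu_indices_from(corr, k=1)]
    Z = linkage(condensed, method="average")
    return leaves_list(Z)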


As discussed above, an efficient approach to model word correlation in dynamic topic modeling is provided that incorporates word dynamics through the use of MOGPs. The amortized inference is improved via the meta-encoder 1503, which allows DWCTM 1501 to be sensitive to changes of the topic representations. Scalable inference for large vocabularies is enabled by deriving an asymptotically unbiased estimator of the gradient that dramatically subsamples the number of words in computation. Incorporating word correlation into DWCTM 1501 significantly improves the modeling quality and allows for leveraging information from related words.


Supplemental Analysis:


Importance sampling uses the probability of the words in a document conditioned on the parameters ηd and β:










p(Wd | ηd, β) = Π_{n=1..Nd} Multi(1, σ(ξd)) = Π_{n=1..Nd} Cat(σ(ξd)) =: L̃W  (1)







Its derivative can be derived as:
















∇L̃W = E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ ∇ log Cat(σ(ξd)) ]  (2)

= E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ ∇ξd,n − ( 1 / Σ_{j=1..P} exp(ξd,j) ) ∇ Σ_{i=1..P} exp(ξd)i ]  (3)

= E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ ∇ξd,n − ( 1 / Σ_{j=1..P} exp(ξd,j) ) Σ_{i=1..P} exp(ξd)i ∇ξd,i ]  (4)

= E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ ∇ξd,n − Σ_{i=1..P} ( exp(ξd)i / Σ_{j=1..P} exp(ξd)j ) ∇ξd,i ]  (5)







To approximate this derivative, a random sample of M words from the vocabulary can be used to approximate the normalization constant. Consider a sample vector s∈{1, . . . ,P}^(M+Nd), which represents a sample of words in the vocabulary and stores the indices of the Nd positive words (words appearing in document d) and the indices of the M sampled words.


Let ξ′d,i := ξd,i − ln(Qdi/P) if yi=0 (i.e., word i does not appear in document d), and ξ′d,i := ξd,i − ln(Qdi) otherwise, with Qdi the proposal distribution. The true logits can be shifted by the expected number of occurrences of word i, ensuring that the sampled softmax is asymptotically unbiased. Q can be a uniform distribution over the subset of words considered, so Qdi=1/(Nd+M). Then:















∇L̃W ≈ E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ ∇ξd,n − Σ_{i=1..M+Nd} ( exp(ξ′d,i) / Σ_{j=1..M+Nd} exp(ξ′d,j) ) ∇ξd,i ]  (6)







The FITC approximation for the multi-output Gaussian process results in the following formulation:






pFITC(β | U, ZX, ZH, X, H) = N( β | Kfu Kuu⁻¹ (Uᵀ): , diag(Kff − Kfu Kuu⁻¹ Kuf) ),


where diag(·) returns a diagonal matrix while keeping the diagonal entries, and A: denotes vec(A), the column-wise vectorization of the matrix A. Since Kfu, Kff and Kuu have a Kronecker structure, the mean and covariance can be rewritten to compute them efficiently as follows:













Kfu Kuu⁻¹ (Uᵀ): = ( KfuX (KuuX)⁻¹ Uᵀ (KuuH)⁻ᵀ (KfuH)ᵀ ):

diag( Kff − Kfu Kuu⁻¹ Kuf ) = diag(Kff) − ( diag( KfuH (KuuH)⁻¹ (KfuH)ᵀ ) diag( KfuX (KuuX)⁻¹ (KfuX)ᵀ ) ):






Note that the last line becomes a vectorized outer product between the vectors of diagonal entries and can be computed efficiently. A similar analysis can be used for diag(Kff). The full derivation is as follows:











Kfu Kuu⁻¹ (Uᵀ): = ( KfuH ⊗ KfuX ) ( KuuH ⊗ KuuX )⁻¹ (Uᵀ):
= ( KfuH ⊗ KfuX ) ( (KuuH)⁻¹ ⊗ (KuuX)⁻¹ ) (Uᵀ):
= ( KfuH (KuuH)⁻¹ ⊗ KfuX (KuuX)⁻¹ ) (Uᵀ):
= ( KfuX (KuuX)⁻¹ Uᵀ ( KfuH (KuuH)⁻¹ )ᵀ ):
= ( KfuX (KuuX)⁻¹ Uᵀ (KuuH)⁻ᵀ (KfuH)ᵀ ):

diag( Kff − Kfu Kuu⁻¹ Kuf ) = diag(Kff) − diag( Kfu Kuu⁻¹ Kuf )
= diag(Kff) − diag( ( KfuH (KuuH)⁻¹ ⊗ KfuX (KuuX)⁻¹ ) ( KfuH ⊗ KfuX )ᵀ )
= diag(Kff) − diag( KfuH (KuuH)⁻¹ (KfuH)ᵀ ⊗ KfuX (KuuX)⁻¹ (KfuX)ᵀ )
= diag(Kff) − ( diag( KfuH (KuuH)⁻¹ (KfuH)ᵀ ) diag( KfuX (KuuX)⁻¹ (KfuX)ᵀ ) ):







The matrix normal is related to the multivariate normal distribution as:






X ~ MN_{n×p}(M, U, V),  (7)





if and only if,






vec(X) ~ N_{np}( vec(M), V ⊗ U ),  (8)


where ⊗ denotes the Kronecker product and vec(M) denotes the vectorization of M.


Samples from the distribution and the KL divergence can both be computed efficiently. Uβk can be sampled by the following procedure: (i) sample C ~ MN_{h×x}(0, I, I), C ∈ R^{h×x}, a collection of independent samples from a standard normal distribution; then (ii) let Uβk = M + A C B, where ΣH = A Aᵀ and ΣX = Bᵀ B. The KL divergence between q(Uβk) and p(Uβk) can also be computed efficiently.


Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let X be an n by p matrix of np independent samples from the standard normal distribution, so that:






X ~ MN_{n×p}(0, I, I).  (9)





Then let,






Y = M + A X B, so that Y ~ MN_{n×p}(M, A Aᵀ, Bᵀ B),  (10)


where A and B can be chosen by Cholesky decomposition or a similar matrix square root operation.
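As a non-limiting sketch of this sampling recipe, Cholesky factors can be used as the matrix square roots; the function name and argument layout below are illustrative.

import numpy as np

def sample_matrix_normal(M, Sigma_H, Sigma_X, seed=0):
    # Draw Y ~ MN(M, Sigma_H, Sigma_X) as Y = M + A C B, with Sigma_H = A A^T and Sigma_X = B^T B.
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(Sigma_H)             # lower-triangular factor of the row covariance
    B = np.linalg.cholesky(Sigma_X).T           # so that B^T B = Sigma_X
    C = rng.standard_normal(size=M.shape)       # independent standard normal entries
    return M + A @ C @ B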


The KL divergence between two matrix-variate normal distributions, e.g., q(Uβk) and p(Uβk), can be analytically computed as:









KL( q(Uβk) ∥ p(Uβk) ) = ½ ( Mx log( |KuuH| / |ΣH| ) + MH log( |KuuX| / |ΣX| ) + tr( Mᵀ (KuuX)⁻¹ M (KuuH)⁻¹ ) + tr( (KuuH)⁻¹ ΣH ) tr( (KuuX)⁻¹ ΣX ) − MH Mx ).




To implement tr[ Mᵀ (KX)⁻¹ M (KH)⁻¹ ], use KX = LX LXᵀ, KH = LH LHᵀ, and A = LX⁻¹ M LH⁻ᵀ; then tr[ Mᵀ (KX)⁻¹ M (KH)⁻¹ ] = tr(Aᵀ A).
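The following short numerical check illustrates the trace identity above with random symmetric positive definite matrices; the sizes and names are arbitrary assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 4
M = rng.normal(size=(n, p))
Kx = np.cov(rng.normal(size=(n, 3 * n)))       # random SPD matrix of size n x n
Kh = np.cov(rng.normal(size=(p, 3 * p)))       # random SPD matrix of size p x p
Lx, Lh = np.linalg.cholesky(Kx), np.linalg.cholesky(Kh)

A = np.linalg.solve(Lx, M) @ np.linalg.inv(Lh).T                       # Lx^{-1} M Lh^{-T}
lhs = np.trace(M.T @ np.linalg.solve(Kx, M) @ np.linalg.inv(Kh))       # tr[M^T Kx^{-1} M Kh^{-1}]
rhs = np.trace(A.T @ A)
assert np.isclose(lhs, rhs)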


Then,






KL(q∥p)=∫q(x)(log q(x)−log p(x))dx  (11)


which, in the case of two multivariate Gaussian distributions, say q(x)=N(m1, S1) and p(x)=N(m2, S2), is equal to












∫ [ ½ log( |S2| / |S1| ) − ½ (x − m1)ᵀ S1⁻¹ (x − m1) + ½ (x − m2)ᵀ S2⁻¹ (x − m2) ] q(x) dx  (12)

= ½ log( |S2| / |S1| ) − ½ tr{ E[ (x − m1)(x − m1)ᵀ ] S1⁻¹ } + ½ E[ (x − m2)ᵀ S2⁻¹ (x − m2) ]  (13)

= ½ log( |S2| / |S1| ) − ½ tr{ Id } + ½ E[ (x − m2)ᵀ S2⁻¹ (x − m2) ]  (14)

= ½ log( |S2| / |S1| ) − ½ tr{ Id } + ½ (m1 − m2)ᵀ S2⁻¹ (m1 − m2) + ½ tr{ S2⁻¹ S1 }  (15)

= ½ [ log( |S2| / |S1| ) − d + tr{ S2⁻¹ S1 } + (m1 − m2)ᵀ S2⁻¹ (m1 − m2) ]  (16)







Now, a Kronecker representation of S1 and S2 can be used, with S1 = Sh ⊗ Sx and S2 = Kh ⊗ Kx. Let M = m1 − m2, and let M: denote the vectorization of M. Then, using |V ⊗ U| = |V|ⁿ |U|ᵖ and the mixed-product property, the KL divergence becomes:










½ [ log( |Kh ⊗ Kx| / |Sh ⊗ Sx| ) − d + tr{ (Kh ⊗ Kx)⁻¹ (Sh ⊗ Sx) } + M:ᵀ (Kh ⊗ Kx)⁻¹ M: ]  (17)

= ½ [ n log( |Kh| / |Sh| ) + p log( |Kx| / |Sx| ) − np + tr{ (Kh⁻¹ ⊗ Kx⁻¹)(Sh ⊗ Sx) } + M:ᵀ (Kh ⊗ Kx)⁻¹ M: ]  (18)

= ½ [ n log( |Kh| / |Sh| ) + p log( |Kx| / |Sx| ) − np + tr{ (Kh⁻¹ Sh) ⊗ (Kx⁻¹ Sx) } + M:ᵀ ( (Kh⁻¹ ⊗ Kx⁻¹) M: ) ]  (19)

= ½ [ n log( |Kh| / |Sh| ) + p log( |Kx| / |Sx| ) − np + tr( Kh⁻¹ Sh ) tr( Kx⁻¹ Sx ) + M:ᵀ ( Kx⁻¹ M Kh⁻¹ ): ]  (20)

= ½ [ n log( |Kh| / |Sh| ) + p log( |Kx| / |Sx| ) − np + tr( Kh⁻¹ Sh ) tr( Kx⁻¹ Sx ) + tr[ Mᵀ Kx⁻¹ M Kh⁻¹ ] ]  (21)







The variational inference for the Gaussian and Wishart processes is described below. The inference for μ first augments the Gaussian process with a set of auxiliary variables and a set of corresponding time stamps, i.e.,






p(μ | X) = ∫ p(μ | Uμ, X, zμ) p(Uμ | zμ) dUμ  (22)


where Uμ is the auxiliary variable for μ and zμ is the corresponding index. Both p(μ|Uμ,X, zμ) and p(Uμ|zμ) follow the same Gaussian processes as the one for p(μ|X), i.e., these Gaussian processes have the same mean and kernel functions. As shown in Equation (22), the above augmentation does not change the prior distributions for μ.


The variational posterior of μ is constructed in a special form to enable efficient inference: q(μ,Uμ)=p(μ|Uμ)q(Uμ). q(Uμ)=N(Mμ,Sμ) is a multivariate normal distribution, in which the mean and covariance are variational parameters. p(μ|Uμ) is a conditional Gaussian process.


When μ is used in the down-stream distributions, a lower bound can be derived,





log p(· | μ) ≥ E_{q(μ)}[ log p(· | μ) ] − KL( q(Uμ) ∥ p(Uμ) ),  (23)


where q(μ) = ∫ p(μ | Uμ) q(Uμ) dUμ.
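As a non-limiting sketch, the marginal q(μ) can be evaluated at new time inputs with the standard sparse-GP predictive equations, assuming a zero prior mean; the kernel argument and variable names below are assumptions.

import numpy as np

def q_mu_predict(X, Z_mu, M_mu, S_mu, kernel):
    # M_mu, S_mu: mean and covariance of q(U_mu); Z_mu: inducing inputs; kernel(A, B) -> Gram matrix.
    Kzz = kernel(Z_mu, Z_mu) + 1e-6 * np.eye(len(Z_mu))
    Kxz = kernel(X, Z_mu)
    Kxx = kernel(X, X)
    A = np.linalg.solve(Kzz, Kxz.T).T                    # Kxz Kzz^{-1}
    mean = A @ M_mu                                      # predictive mean of q(mu) at X
    cov = Kxx - A @ Kxz.T + A @ S_mu @ A.T               # Kxx - Kxz Kzz^{-1} Kzx + A S_mu A^T
    return mean, cov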


A similar stochastic variational inference method can be derived for the Wishart Process by augmenting each GP p(fij|X) in the Wishart process with a set of auxiliary variables and a set of the corresponding inputs,






p(fij | X) = ∫ p(fij | uij, X, zij) p(uij | zij) duij  (24)


where uij is the auxiliary variable, zij are the corresponding inputs, and p(fij | uij) is a conditional Gaussian process. The variational posterior of fij can be defined to be q(fij, uij) = p(fij | uij) q(uij), where q(uij) = N(mij, sij). The variational posterior of L can be defined to be q(L) = N(mL, SL), where SL is a diagonal matrix. As the diagonal elements of L need to be positive, a change of variable can be applied to the variational posterior of the diagonal elements, i.e., Lmm = log(1 + exp(L̃mm)) with a Gaussian posterior q(L̃mm). Note that zμ and zij are variational parameters instead of random variables and may be omitted from the notation. A variational lower bound can be derived with such a set of variational posteriors for all the entries {fij} and L, when Σ is used for some down-stream distributions,










log p(· | Σ) ≥ E_{q(F) q(L)}[ log p(· | Σ) ] − Σ_{i,j} KL( q(uij) ∥ p(uij) ) − KL( q(L) ∥ p(L) )  (25)







where q(F) = Π_{ij} ∫ p(fij | uij) q(uij) duij.
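As a non-limiting illustration of how a covariance sample can be assembled from these posteriors, the sketch below follows a generalized Wishart process construction Σ = L F Fᵀ Lᵀ with the diagonal of L made positive by the softplus change of variable described above; this particular construction is an assumption consistent with the text rather than the model's stated implementation.

import numpy as np

def softplus(z):
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def wishart_covariance(F_sample, L_offdiag, L_diag_raw):
    # F_sample: (K, nu) matrix of GP draws f_ij at one time point; L_offdiag, L_diag_raw: variational parameters.
    L = np.tril(L_offdiag, k=-1) + np.diag(softplus(L_diag_raw))  # positive diagonal via softplus
    return L @ F_sample @ F_sample.T @ L.T                        # Sigma(x) = L F F^T L^T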


After deriving the variational lower bounds for the individual components of DWCTM 1501, the components can be combined to form the final variational lower bound. The word distributions for individual topics are used in defining the distribution of individual words for each document d, p(Wd | ηd, β(xd)). The complete variational lower bound L of DWCTM 1501 can be derived by combining the lower bounds (10), (4), (8), (23) and (25):








log p(W) ≥ E_{q(μ) q(L) q(F) q(β)}[ LW ] − KL( q(Uβ) ∥ p(Uβ) ) − KL( q(H) ∥ p(H) ) − KL( q(Uμ) ∥ p(Uμ) ) − KL( q(L) ∥ p(L) ) − Σ_{i,j} KL( q(uij) ∥ p(uij) ) =: L.





The first term of L can be further decomposed by plugging in (3),








E_{q(μ) q(L) q(F) q(β)}[ LW ] = Σ_{d=1..D} ( E_{q(ηd) q(β)}[ log p(Wd | ηd, β(Xd)) ] − E_{q(ηd) q(μXd) q(ΣXd)}[ KL( q(ηd) ∥ p(ηd | μXd, ΣXd) ) ] )






Note that all variational parameters of q(μ), q(L), q(F), q(β), and q(η) are optimized. The following datasets were considered: State of the Union corpus (SotU), Department of Justice press releases (DoJ), Elsevier corpus (Abstracts), Blog Authorship Corpus (Blogs), NeurIPS conference papers (NeurIPS), A Million News Headlines (News), and Twitter sentiment classification (Twitter).


State of the Union corpus (1790-2018) dataset includes a yearly address of the US president, from 1790 to 2018 (229 years). The vocabulary includes 1442 words after preprocessing, and the data is split into 170 documents for training and 57 documents as test data.


Department of Justice press releases (2009-2018) dataset includes 13087 press releases from the Department of Justice from 2009 to 2018 (115 unique timestamps), preprocessed to include 2622 unique words. Documents were split into 9674 for training and 3413 for testing.


Elsevier OA CC-BY Corpus dataset includes 40 k open access (OA) CC-BY abstracts taken from articles from across Elsevier's journals, published from 2010 to 2019. A random sample of 6898 abstracts was considered for training and the same number for testing, with 3000 words in the vocabulary.


Blog Authorship Corpus consists of the posts of 19 k bloggers gathered from blogger.com from June 1999 to August 2004. The corpus incorporates a total of 681 k posts, from which a random sample of 5649 posts was drawn for training and 5650 for testing. After preprocessing, 3000 words were considered in the vocabulary.


NeurIPS conference papers (1987-2015) dataset includes 5804 conference papers from 1987 to 2015, including an average of 34 papers per year. The dataset was preprocessed leading to a vocabulary of 4799 words (large dataset). In both cases, 4237 documents were used as training data and 1567 as test data.


A Million News Headlines dataset includes 1.2M news headlines published over a period of 17 years (from 2003 to 2019). After preprocessing, a random sample of size 8526 was used for training and 2822 for test purposes, with a vocabulary size of 3000.


Twitter sentiment classification dataset contains 1.6 M tweets, from April to May 2009. 4525 tweets were randomly sampled for training and the same number for testing, for computational efficiency. The samples were preprocessed using a tweet tokenizer, removing usernames and replacing repeated character sequences (length 3 or more) with sequences of length 3. After preprocessing, 3000 tokens were considered.


For the last two datasets, an extended version was considered with the highest number of tokens available, that is, 22459 for the headlines dataset and 83582 for Twitter, subsampling 1 M documents from each dataset in the experiments. Such datasets are referred to as extended; however, as both the samples and the dimensionality are different, they are effectively different datasets (hence not comparable) with respect to their smaller counterparts.


Experimentally, each dataset was split considering 75% of the samples as training and 25% as test. Documents associated with the same time stamps were assigned to the same split. For each dynamic topic model, a Matérn 3/2 kernel was used for β, to allow topics to quickly incorporate new words. A squared exponential kernel was used for μ and f, expecting a smooth temporal evolution of both topic probabilities and their correlation. The amplitude and length scale of the kernels were initialized as 1 and 0.5, respectively, and optimized using the approximate empirical Bayes approach.


Experiments were conducted using the Adam optimizer with learning rate 0.001 and up to 10 k epochs until convergence. Experiments included different numbers of topics, reporting the results using a default choice of 30 for all datasets (20 for SotU) to maintain consistency with previous works. Experiments also included different numbers of inducing points for the three components β, μ and f, thus controlling the complexity of the variational posterior. The numbers of inducing points used for these components are 15, 20 and 15, respectively. DWCTM 1501 has an additional component for the latent embedding of words in β; MH=200 inducing points in Q=10 dimensions were used. The posterior for H was initialized by transforming the words in the vocabulary using ELMO embeddings pre-trained on the 1 Billion Word Benchmark, taking the first Q principal components using a PCA transformation. For the posterior of η, when using a static encoder, a dense neural network was considered with three layers of size 500, 300 and 200, respectively. To account for the increased input dimensionality in the meta-encoder 1503, a dense neural network was used with three layers of size 1000, 600 and 400, respectively.
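For convenience, the experimental settings listed above can be summarized as the following configuration sketch; the values are copied from the text, while the dictionary structure and key names are illustrative.

experiment_config = {
    "optimizer": {"name": "Adam", "learning_rate": 0.001, "max_epochs": 10_000},
    "num_topics": {"default": 30, "SotU": 20},
    "inducing_points": {"beta": 15, "mu": 20, "f": 15},
    "word_embedding": {"M_H": 200, "Q": 10, "init": "ELMO embeddings + PCA to Q components"},
    "kernels": {"beta": "Matern 3/2", "mu": "squared exponential", "f": "squared exponential",
                "amplitude_init": 1.0, "lengthscale_init": 0.5},
    "encoders": {"static": [500, 300, 200], "meta_encoder": [1000, 600, 400]},
}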


The perplexity metric can be computed as:









perplexity = exp{ − (1/D) Σ_{d=1..D} (1/Nd) E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ log p(Wd | ξd) ] }  (26)







However, in the case of a large vocabulary the log probability cannot be computed exactly, so it is approximated by sampling M random negative words that do not appear in the document:











E_{q(β|H) q(ηd)} Σ_{n=1..Nd} [ log p(Wd | ξd) ] ≈ E_{q(β|H) q(ηd)} [ Σ_{n=1..Nd} ξd,n − log Σ_{n=1..M+Nd} exp( ξd,n − log(rnd) ) ]  (27)







where rnd = M/(P − Nd) if n is one of the negative words (and rnd = 1 otherwise) is the uniform probability of picking word n.



FIG. 21 illustrates the mean of the topics over time as computed by DWCTM 1501 on the NeurIPS dataset. The distribution shows a decreasing trend for topic 8, associated with neural networks (consistent with prior knowledge and previous results on this dataset). Similar considerations can be made for topic 1, associated with neuroscience. Topic 19 (associated with topic modeling) has a spike between the years 2004 and 2007. This may be attributed to the interest in topic modeling after the introduction of LDA in 2003, and the following publications of CTM and DTM in 2006.



FIGS. 22 and 23 depict visualizations of the most probable words of the “Bayesian inference” topic, sorted by the word correlation from the word “bayesian.” FIG. 22 is a visualization of word counts of a sample dataset used with DWCTM 1501 (i.e., MIST). FIG. 23 is a visualization of a word probability for the “Bayesian inference” topic associated with the dataset of FIG. 22.


The most correlated word as found by DWCTM 1501 is "wishart" (correlation 0.947), even though "wishart" is a relatively rare word in the dataset (see FIG. 22). Such an outcome may be due to the fact that only a restricted subset of documents mention "wishart" (e.g., those concerning Wishart distributions), yet "wishart" is a very relevant word in the Bayesian inference literature (e.g., as the conjugate prior for the covariance matrix of a multivariate normal distribution). Additionally, words such as "posterior" are slightly less correlated to "bayesian" (correlation 0.850), even though "posterior" has a very high probability of appearing in the "Bayesian inference" topic.


The various examples and teachings described above are provided by way of illustration only and should not be construed to limit the scope of the present disclosure. Those skilled in the art will readily recognize various modifications and changes that may be made without following the examples and applications illustrated and described herein, and without departing from the true spirit and scope of the present disclosure.

Claims
  • 1. A method for dynamic topic modeling with word correlation, the method comprising: receiving, as input to a dynamic word correlated topic model (DWCTM), a set of documents each comprised of a plurality of words and having associated timestamps, wherein the timestamps of the set of documents span a period of time;identifying, as input to the DWCTM, a quantity of topics for modeling;providing the set of documents as input to the DWCTM for modeling according to the quantity of topics identified;for each topic, modeling via the DWCTM: a document-topic distribution across the period of time to yield a popularity of each of the topics across the period of time;a topic-word distribution across the period of time that captures a correlation among the plurality of words to yield a word embedding; anda series of covariance matrices to yield a correlation of each topic with other topics across the period of time; andproviding, as output of the DWCTM: the popularity of each of the topics across the period of time;the word embedding across the period of time; andthe correlation of each topic with other topics across the period of time.
  • 2. The method of claim 1, further comprising: for each topic, modeling via the DWCTM: the document-topic distribution at given time points across the period of time using a Gaussian process (GP) to yield the popularity of each of the topics across the period of time;the topic-word distribution at given time points across the period of time using a multi-output Gaussian process (MOGP) that captures the correlation among the plurality of words to yield the word embedding; andthe series of covariance matrices using a Wishart process to yield the correlation of each topic with other topics across the period of time.
  • 3. The method of claim 1, further comprising generating a word embedding matrix that corresponds to the word embedding, wherein words of similar context are grouped together within the word embedding matrix.
  • 4. The method of claim 1, further comprising using the word embedding to drive a topic correlation by including correlation between the topics as an additional parameter for the DWCTM.
  • 5. The method of claim 1, further comprising defining inducing variables as part of a joint space that includes the word embedding and a temporal dimension.
  • 6. The method of claim 1, further comprising scaling the DWCTM by approximating a normalization constant with a fixed number of words using a self-normalizing importance sampling.
  • 7. The method of claim 1, further comprising applying a meta-encoder to provide an amortized inference to sensitize the DWCTM to changes of topic representations.
  • 8. The method of claim 7, wherein the meta-encoder uses a document representation and a summary of all the topic representations as inputs enabling the DWCTM to infer a posterior of topic mixing proportions of a particular document.
  • 9. The method of claim 1, further comprising applying an asymptotically unbiased estimator as a gradient of a lower bound of the DWCTM to enable training of the DWCTM to be independent from a vocabulary size.
  • 10. The method of claim 1, further comprising using a latent space embedding formulation of the MOGP to correlate temporal processes of individual words in a topic representation.
  • 11. The method of claim 1, further comprising jointly modeling words with a single GP across the period of time including encoding the word correlation in a covariance matrix.
  • 12. A system for dynamic topic modeling with word correlation, the system comprising: a dynamic word correlated topic model (DWCTM); anda server communicatively coupled to the DWCTM, the server comprising at least one processing device and a memory coupled to the at least one processing device and storing instructions, that when executed by the at least one processing device, cause the at least one processing device to: receive a set of documents each comprised of a plurality of words and having associated timestamps, wherein the timestamps for the set of documents span a period of time;identify, as input to the DWCTM, a quantity of topics for modeling;provide the set of documents as input to the DWCTM for modeling according to the quantity of topics identified; andfor each topic, model via the DWCTM: a document-topic distribution across the period of time to yield a popularity of each of the topics across the period of time;a topic-word distribution at given time points across the period of time that captures a correlation among the plurality of words to yield a word embedding; anda series of covariance matrices to yield a correlation of each topic with other topics across the period of time; andprovide, as output of the DWCTM: the popularity of each of the topics across the period of time;the word embedding across the period of time; andthe correlation of each topic with other topics across the period of time.
  • 13. The system of claim 12, wherein the DWCTM is configured to: for each topic, model: the document-topic distribution at given time points across the period of time using a Gaussian process (GP) to yield the popularity of each of the topics across the period of time;the topic-word distribution at given time points across the period of time using a multi-output Gaussian process (MOGP) that captures the correlation among the plurality of words to yield the word embedding; andthe series of covariance matrices using a Wishart process to yield the correlation of each topic with other topics across the period of time.
  • 14. The system of claim 12, wherein the DWCTM generates a word embedding matrix that corresponds to the word embedding, wherein words of similar context are grouped together within the word embedding matrix.
  • 15. The system of claim 12, further configured to define inducing variables as part of a joint space that includes the word embedding and a temporal dimension.
  • 16. The system of claim 12, further configured to scale the DWCTM by approximating a normalization constant with a fixed number of words using a self-normalizing importance sampling.
  • 17. The system of claim 12, wherein the set of documents are a set of user listening sessions each comprised of a plurality of media content items, and the words are one or more types of media content metadata such that clusters of one or more of the types of media content metadata represent the topics.
  • 18. A system for dynamic topic modeling with word correlation related to user consumption of media content items over time, the system comprising: a dynamic word correlated topic model (DWCTM); anda server communicatively coupled to the DWCTM, the server comprising at least one processing device and a memory coupled to the at least one processing device and storing instructions, that when executed by the at least one processing device, cause the at least one processing device to: receive a set of user listening sessions each comprised of a plurality of media content items and having associated timestamps, wherein the plurality of media content items include one or more types of media content metadata and timestamps for the set of user listening sessions that span a period of time;identify, as input to the DWCTM, a quantity of topics for modeling;provide the set of user listening sessions as input to the DWCTM for modeling based on the quantity of topics identified;for each topic, model via the DWCTM: a media content item-topic distribution across the period of time to yield a popularity of each of the topics across the period of time;a topic-media content metadata distribution across the period of time that captures a correlation among the set of user listening sessions comprising a media content metadata embedding; anda series of covariance matrices to yield a correlation of each topic with other topics across the period of time; andprovide, as output of the DWCTM: the popularity of each of the topics across the period of time;the media content metadata embedding across the period of time; andthe correlation of each topic with other topics across the period of time.
  • 19. The system of claim 18, wherein the DWCTM is configured to: for each topic, model: the media content item—topic distribution at given time points across the period of time using a Gaussian process (GP) to yield the popularity of each of the topics across the period of time;the topic—media content metadata distribution at given time points across the period of time using a multi-output Gaussian process (MOGP) that captures the correlation among the set of user listening sessions comprising the media content metadata embedding; andthe series of covariance matrices using a Wishart process to yield the correlation of each topic with other topics across the period of time.
  • 20. The system of claim 19, wherein the media content metadata includes an artist, and when groupings of the artists represent the topics, the at least one processing device, to identify the new media content item, is further caused to: determine the user is playing back a first media content item associated with a first artist;identify a grouping of the artists that includes the first artist;identify a second artist relating to the first artist by selecting the second artist from one of the same grouping of the artists that includes the first artist or another grouping of the artists determined to be correlated to the grouping of the artists that includes the first artist based on the output of the DWCTM;identify a second media content item from media content associated with the second artist; andpresent the second media content item to the user.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. application Ser. No. 16/932,323, filed Jul. 17, 2020, which is hereby incorporated by reference in its entirety.

Continuation in Parts (1)
Number Date Country
Parent 16932323 Jul 2020 US
Child 17526845 US