The present invention generally relates to the field of information technology and user interface technologies and, more particularly, to methods and systems for ranking media contents.
Ranking is a classical research area in information science. Traditional ranking methods simply use metadata, such as titles, authors, or keywords, as entries to rank items. With the explosive growth of information, people require more efficient ranking methods to help them discover relevant content more accurately and quickly.
However, many traditional ranking algorithms used in current social review systems impose substantial limitations on input features. Social media information is characterized by large volume, high velocity, wide variety, and constant variability. Taking the well-known PageRank algorithm as an example, PageRank requires page source information and link information among different pages. In many situations, such information is unavailable. For example, if a user wants to rank a list of reviews according to their helpfulness, PageRank is of little use, because it is hard to obtain an arbitrary review's authority and link information.
The disclosed methods and systems are directed to solve one or more problems set forth above and other problems.
One aspect of the present disclosure includes a method for ranking media contents. The method includes receiving media contents through a network and extracting feature values of the received media contents. The method also includes implementing a parameter reinforcement learning process to automatically obtain a distribution over relativeness and irrelativeness of the received media contents. Further, the method includes ranking the received media contents by a multi-armed bandit algorithm based on the obtained distribution over relativeness and irrelativeness of the received media contents.
Another aspect of the present disclosure includes a system for ranking media contents. The system includes a feature extraction module configured to extract feature values of the received media contents. The system also includes a self-learning module configured to implement a parameter reinforcement learning process to automatically obtain a distribution over relativeness and irrelativeness of the received media contents. Further, the system includes a ranking module configured to rank the received media contents by a multi-armed bandit algorithm based on the obtained distribution over relativeness and irrelativeness of the received media contents.
Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure.
Reference will now be made in detail to exemplary embodiments of the invention, which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
TV 102 may include any appropriate type of TV, such as a plasma TV, an LCD TV, a projection TV, a non-smart TV, or a smart TV. TV 102 may also include any other computing system, such as a personal computer (PC), a tablet or mobile computer, or a smart phone, etc. Further, TV 102 may be any appropriate content-presentation device capable of presenting multiple programs in one or more channels.
Smart phone 104 may be an iOS phone, an Android phone, a BlackBerry phone, or any other mobile computing device capable of performing a web browsing function.
Further, the server 106 may include any appropriate type of server computer or a plurality of server computers for providing personalized media contents to the user 108. The server 106 may also facilitate communication, data storage, and data processing for the smart phone 104 and/or TV 102. TV 102 and/or smart phone 104, and server 106 may communicate with each other through one or more communication networks 110, such as a cable network, a phone network, and/or a satellite network, etc.
The user 108 may interact with TV 102 and/or smart phone 104 to watch various programs, browse webpages and perform other activities of interest. The user 108 may be a single user or a plurality of users, such as family members watching TV programs together.
TV 102, smart phone 104, and/or server 106 may be implemented on any appropriate computing circuitry platform.
As shown in
Processor 202 may include any appropriate processor or processors. Further, processor 202 can include multiple cores for multi-thread or parallel processing. Storage medium 204 may include memory modules, such as ROM, RAM, and flash memory modules, and mass storage, such as CD-ROM and hard disk, etc. Storage medium 204 may store computer programs that implement various processes when the computer programs are executed by processor 202.
Further, peripherals 212 may include various sensors and other I/O devices, such as keyboard and mouse, and communication module 208 may include certain network interface devices for establishing connections through communication networks. Database 210 may include one or more databases for storing certain data and for performing certain operations on the stored data, such as database searching.
Online social review systems may be integrated into smart TV systems and/or smart phones to help organize and share socially produced information that is valuable in making purchasing decisions, choosing movies, choosing services and shops, renting DVDs, buying books, etc.
As shown in
The viewer discovery module 302 is configured to detect a viewing activity of at least one user of a content-presentation device capable of presenting multiple programs in one or more channels, and to determine a plurality of user identities of the at least one user.
The feature extraction module 304 is configured to extract feature values of the received media contents. The feature extraction module 304 may include a range scaling unit 3042 and a feature scaling unit 3044. The range scaling unit 3042 is configured to generate a reasonable range based on feature lists of entities. The entities may include any appropriate type of source for media contents and may contain various video sources (i.e., video source 1, video source 2, . . . video source n). The contents from the entities may include both video data and reviews of the entities (e.g., movies). The feature scaling unit 3044 is configured to scale feature values into a reasonable range to distinguish different entities.
The self-learning module 306 may be configured to implement a parameter reinforcement learning process to automatically obtain a distribution over relativeness and irrelativeness of the received media contents. The self-learning module 306 may include a probabilistic model generating unit 3062 and a Restricted Boltzmann Machine (RBM) processing unit 3064. The probabilistic model generating unit 3062 is configured to construct a probabilistic model and infer the parameters by Markov Chain Monte Carlo. The Restricted Boltzmann Machine (RBM) processing unit 3064 is configured to implement a self-learning process by RBM.
The ranking module 308 is configured to rank the received media contents by a multi-armed bandit algorithm based on the obtained distribution over relativeness and irrelativeness. The ranking module 308 may include an expectation calculation unit 3082, a deviation calculation unit 3084, and a potential reward calculation and ranking unit 3086. The expectation calculation unit 3082 is configured to calculate each entity's estimated expectation E_r in R reviews. The deviation calculation unit 3084 is configured to calculate each entity's standard deviation σ_r in the R reviews. An upper confidence bound is E_r + λ·σ_r, where λ is a confidence level (or a confidence coefficient). For simplicity, λ is set as 1. The potential reward calculation and ranking unit 3086 is configured to calculate the upper confidence bound of each review and rank the R reviews according to the upper confidence bounds of the R reviews.
Based on ranked results generated by the ranking module 308, the recommendation engine 310 may select personalized contents to recommend to the user. That is, once the ranked results are generated, the recommendation engine 310 may be configured to handle video content selection and to recommend preferred contents for the user 108. In certain embodiments, the recommendation engine 310 may further provide video content selection and recommendation information to streaming source discovery module 312 to stream video data to the user.
Based on information from the recommendation engine 310, the streaming source discovery module 312 may select the best source to obtain the video stream and control the video stream renderer to play back the video stream from the selected source. That is, the streaming source discovery module 312 may implement a user-adaptive streaming source discovery mechanism to enable streaming data source selection optimization according to various constraints from the user 108, such as a home network condition, a terminal condition, a video-on-demand (VOD) service subscription, etc., and/or from a service provider or server 106, such as a regional constraint and a cloud computational capability constraint, etc.
The user interaction module 314 may be configured to implement interactions between the system 300 and the user 108 based on any appropriate interaction mechanisms, such as keyboard/mouse, remote control, sensors, and/or gesture/voice control, etc.
Further, the video stream renderer 316 may be configured to generate a personalized video stream and to transmit the personalized video stream to the user 108 (e.g., to TV 102) based on the configuration from the streaming source discovery module 312 and from the entities.
In certain embodiments, the video stream renderer 316 together with the streaming source discovery module 312 may deliver the personalized video stream over a particular program channel on TV 102. That is, for a particular user 108, a program channel can be configured to recommend video contents to the user based on the ranked results from the online reviews, and to deliver the personalized video contents to the user over that particular channel.
In operation, personalized content delivery system 300 may perform certain processes to deliver personalized contents to users.
As shown in
For example, if a user carries a mobile or wearable device, such as a smart phone, the device may interact with TV 102 to exchange certain user data. If the user just turns on the TV, certain program selections of the user may also be obtained.
Further, the identity of the user or users may be determined (S406). For example, when the users have wearable devices, such as a bracelet, a watch, or a mobile phone, the devices may be wirelessly connected to TV 102 and the user identity may be communicated to TV 102. This way, the user identity can be easily determined. The user identity may also be easily determined if TV 102 is equipped with face or user recognition technology. Further, when a smart remote control is used by a user, the identity of the user who is using the remote control can be obtained with reasonably high accuracy. However, other viewers who may also be sitting nearby cannot be detected.
When there are no supporting devices available, the TV viewer information is not directly traceable, but the viewing history may reveal certain viewing patterns. The identity of the user may be determined based on content correlation and relevance. For example, a user typically watches a soap opera every other day, but sometimes he/she controls the remote control and sometimes others take control. In such a case, the viewing patterns of a user can be obtained by performing pattern mining.
After the user identity is determined, the available video contents may be discovered or determined based on the user identity (S408). That is, a content discovery may be performed by the system 300 (e.g., server 106).
Further, the system 300 may select candidate video contents based on the discovered video contents (S410).
In addition, the system 300 may use a Self-Rank algorithm to rank reviews associated with the selected candidate video contents and make a recommendation on personalized video contents to the user or users based on the ranked results generated by the Self-Rank algorithm (S412).
The Self-Rank algorithm has no limitation on the input features; users can define any kind of feature for ranking. For example, in order to rank online reviews, users can use the review length, the review's entropy, the review's sentiment polarity, and the review's readability as the features. To rank movies, the users can use favorite actors'/actresses' information, plot description, and publish time as features. Thus, each entity is represented as a list of features as follows:
E_n = (f_{n,1}, f_{n,2}, . . . , f_{n,J})
where E_n is the nth entity, and f_{n,j} is the jth feature of the nth entity.
Traditionally, a binary feature value mechanism (namely 0 and 1) is used to represent features. If an entity meets a criterion, the entity has a value of 1 on that feature; otherwise the entity has a value of 0 on that feature. The problem with this mechanism is that many entities share the same feature list, especially when there is a limited number of features but a huge number of entities to analyze. Moreover, a binary mechanism is too coarse to distinguish entities. For example, a review with 10 words has the same value as a review with 100 words when the length threshold is 9 words. This may be unreasonable.
The Self-Rank allows users to scale feature values into a reasonable range. Taking review length as an example, if there are 1000 reviews with an average length of μ_len and a standard deviation of σ_len, a normal distribution M_len(μ_len, σ_len) for the length distribution can be constructed under the common assumption that such quantities approximately follow a normal distribution. Therefore, each review can be given a value according to the Cumulative Distribution Function (CDF).
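As a brief illustration of this scaling step, the following sketch (in Python, with NumPy and SciPy; all function and variable names are illustrative and not part of the disclosure) fits a normal distribution to a raw feature such as review length and maps each value through the normal CDF.

```python
import numpy as np
from scipy.stats import norm

def scale_feature(values):
    """Scale raw feature values (e.g., review lengths) into (0, 1)
    using the CDF of a normal distribution fitted to the values.
    A minimal sketch of the range/feature scaling described above."""
    values = np.asarray(values, dtype=float)
    mu = values.mean()        # average value, e.g. mu_len
    sigma = values.std()      # deviation, e.g. sigma_len
    if sigma == 0:            # all entities identical on this feature
        return np.full_like(values, 0.5)
    # Each entity's scaled value is the normal CDF at its raw value,
    # so a longer-than-average review maps closer to 1.
    return norm.cdf(values, loc=mu, scale=sigma)

# A 10-word and a 100-word review now receive clearly different values.
print(scale_feature([10, 25, 40, 60, 100]))
```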
As shown in
A probabilistic model is constructed to realize the parameter reinforcement learning, and Markov Chain Monte Carlo is used to infer the parameters.
In order to determine whether an entity is relative or not, a latent variable h ∈ {0,1} is introduced to denote the entity's relativeness. Because relativeness/irrelativeness is a binary (two-outcome) problem, a Beta distribution is selected. Thus, it is defined that the distribution over the latent variable h obeys a Beta distribution.
The generating process of this model may include the following steps.
Step 1: for each value l of the latent variable, a distribution φ_l is generated according to the hyper parameter η.
φ_l ~ Dir(η) (4)
Step 2: for each entity r, a distribution θ_r over relativeness/irrelativeness is generated from a Beta distribution according to the hyper parameter τ.
θ_r ~ Beta(τ) (5)
Step 2-1: the hyper parameters are updated by update(τ) and update(η).
Step 2-2: for each feature position f in the review, a label lr,f is generated according to this review's distribution on relativeness/irrelativeness.
l_{r,f} ~ Bern(θ_r) (6)
Step 2-3: for each feature position, a feature is generated according to the helpful label lr,f, and this feature's distribution φl on relativeness/irrelativeness.
f ~ Mult(φ_{l_{r,f}}) (7)
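The sketch below gives one possible reading of the generative process in steps 1 through 2-3, using NumPy samplers; the hyper parameter values, dimensions, and names are hypothetical and serve only to make the steps concrete (the actual model learns the hyper parameters rather than fixing them).

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_entity(eta, tau, n_features, n_positions):
    """Illustrative sketch of steps 1, 2, 2-2, and 2-3 above."""
    # Step 1: one feature distribution phi_l per latent label l
    # (l = 1 relative, l = 0 irrelative), drawn from Dir(eta).
    phi = rng.dirichlet(np.full(n_features, eta), size=2)

    # Step 2: the entity's distribution theta_r over
    # relativeness/irrelativeness, drawn from Beta(tau).
    theta_r = rng.beta(tau, tau)

    labels, features = [], []
    for _ in range(n_positions):
        # Step 2-2: a label l_{r,f} for this feature position.
        l = rng.binomial(1, theta_r)
        # Step 2-3: a feature drawn from that label's distribution phi_l.
        f = rng.choice(n_features, p=phi[l])
        labels.append(int(l))
        features.append(int(f))
    return theta_r, labels, features

print(generate_entity(eta=0.1, tau=2.0, n_features=50, n_positions=8))
```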
Gibbs sampling is used to conduct the inference. Gibbs sampling is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations approximated from a specified multivariate probability distribution when direct sampling is difficult. According to the model described above, the probabilistic formula of this model can be defined by:
Then, by a Bayesian transformation, the following formula (9) can be obtained.
where N_{r,l} is the number of features in review r that are assigned to a helpful label l; τ_{r,l} is the hyper parameter for the rth review on the helpfulness label l; and N_{l,f} is the number of times feature f is assigned to the helpful label l.
Further, for each entity, the parameter vector π_r = (π_{r,α}, π_{r,β}) can be obtained by counting how many features are assigned to the relative label, and how many features are assigned to the irrelative one.
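As a hedged illustration of this counting step, the sketch below derives a Beta parameter vector π_r for each review from the per-feature labels produced by the sampler; the additive smoothing by hyper parameters is an assumption for illustration, not something stated in the text.

```python
import numpy as np

def estimate_pi(label_assignments, tau_alpha=1.0, tau_beta=1.0):
    """For each review, count feature positions assigned to the relative
    label (1) and the irrelative label (0) to form pi_r = (pi_alpha, pi_beta).
    tau_alpha and tau_beta are hypothetical smoothing terms."""
    pi = []
    for labels in label_assignments:
        labels = np.asarray(labels)
        n_relative = int((labels == 1).sum())
        n_irrelative = int((labels == 0).sum())
        pi.append((n_relative + tau_alpha, n_irrelative + tau_beta))
    return pi

# Three reviews with sampled per-feature labels from the Gibbs sampler.
print(estimate_pi([[1, 1, 0, 1], [0, 0, 1], [1, 0, 0, 0, 1]]))
```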
Further, in the above probabilistic model, the problem here is how to decide the values of the hyper parameters τ and η in the model. In addition, the values of the hyper parameters can affect the final results. The Self-Rank algorithm can learn the values of the hyper parameters by itself. The Restricted Boltzmann Machine (RBM) is implemented to facilitate such learning.
The classical RBM is an extension of a neural network, including two layers of units: one layer of hidden units (the latent factors that the system tries to learn) and one layer of visible units (e.g., users' movie preferences whose states the system knows and sets). Furthermore, each visible unit is connected to all the hidden units (this connection is undirected, so each hidden unit is also connected to all the visible units). Between the hidden layer and the visible layer, there is a symmetric matrix of weights W = (w_{i,j}) that connects the visible unit v_i and the hidden unit h_j. In addition, there are two other kinds of variables, a_i and b_j. The bias weight a_i is for the visible units, and the bias weight b_j is for the hidden units.
In the RBM, the hidden unit activations are mutually independent given the visible unit activations and, conversely, the visible unit activations are mutually independent given the hidden unit activations. The vector v_1 is set as the observed data (i.e., a training sample); w_{i,j} is the weight of the connection between visible unit i and hidden unit j and is initiated according to a normal distribution N(0, 0.01); a_i is initiated as 1.0/N, where N is the total number of visible nodes; and b_j is initiated as 0. σ(x) denotes the logistic sigmoid function σ(x) = 1/(1 + exp(−x)). Then, each iteration process of the RBM includes the following steps.
Step 1: for each hidden unit, the individual activation probability (that is, the conditional probability of a configuration of the hidden unit h_{1,j}, given a configuration of the visible units v_1) is calculated by:
P(h_{1,j} = 1 | v_1) = σ(b_j + Σ_i v_{1,i}·w_{i,j}) (10)
where v_1 is set as the observed data; the weight w_{i,j} is initiated according to a normal distribution N(0, 0.01); σ denotes the logistic sigmoid; and the bias weight b_j for the hidden units is initiated as 0.
Step 2: for each visible unit, the individual activation probability (that is, the conditional probability of a configuration of the visible unit v_{2,i}, given a configuration of the hidden units h_1) is calculated by:
P(v_{2,i} = 1 | h_1) = σ(a_i + Σ_j h_{1,j}·w_{i,j}) (11)
where the weight w_{i,j} is initiated according to a normal distribution N(0, 0.01); σ denotes the logistic sigmoid; and the bias weight a_i for the visible units is initiated as 1.0/N.
Step 3: for each hidden unit, the individual activation probability (that is, the conditional probability of a configuration of the hidden unit h2,j, given a configuration of the visible units v2) is calculated by:
P(h_{2,j} = 1 | v_2) = σ(b_j + Σ_i v_{2,i}·w_{i,j}) (12)
where the weight w_{i,j} is initiated according to a normal distribution N(0, 0.01); σ denotes the logistic sigmoid; and the bias weight b_j for the hidden units is initiated as 0.
Therefore, the updated latent variables can be represented by:
W = W + lr·(P(h_1 = 1 | v_1)·v_1^T − P(h_2 = 1 | v_2)·v_2^T) (13)
a = a + lr·(v_1 − v_2) (14)
b = b + lr·(P(h_1 = 1 | v_1) − P(h_2 = 1 | v_2)) (15)
where lr is a learning rate. P(h_1 = 1 | v_1)·v_1^T measures the association between the visible units and the hidden units that the system wants the network to learn from the training sample. Because the RBM generates the states of the visible units based on its hypotheses about the hidden units alone in step 3, P(h_2 = 1 | v_2)·v_2^T measures the association that the network itself generates when no units are fixed to training data.
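The sketch below shows one contrastive-divergence iteration following equations (10) through (15), with the initialization described above (W ~ N(0, 0.01), a_i = 1.0/N, b_j = 0). Whether the intermediate hidden states are sampled or kept as probabilities is an implementation choice not fixed by the text; the names here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v1, W, a, b, lr=0.1, rng=None):
    """One RBM iteration per equations (10)-(15).
    v1: observed visible vector, W: weights (visible x hidden),
    a: visible biases, b: hidden biases, lr: learning rate."""
    rng = np.random.default_rng(0) if rng is None else rng

    # Step 1, eq. (10): hidden activation probabilities given v1.
    p_h1 = sigmoid(b + v1 @ W)
    h1 = (rng.random(p_h1.shape) < p_h1).astype(float)

    # Step 2, eq. (11): visible activation probabilities given h1.
    p_v2 = sigmoid(a + h1 @ W.T)
    v2 = p_v2  # reconstruction of the visible units

    # Step 3, eq. (12): hidden activation probabilities given v2.
    p_h2 = sigmoid(b + v2 @ W)

    # Updates, eqs. (13)-(15); outer products oriented so W[i, j]
    # links visible unit i to hidden unit j.
    W += lr * (np.outer(v1, p_h1) - np.outer(v2, p_h2))
    a += lr * (v1 - v2)
    b += lr * (p_h1 - p_h2)
    return W, a, b

N, H = 6, 3                                    # visible / hidden units
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(N, H))         # W ~ N(0, 0.01)
a = np.full(N, 1.0 / N)                        # a_i = 1.0 / N
b = np.zeros(H)                                # b_j = 0
v1 = rng.integers(0, 2, size=N).astype(float)  # a training sample
W, a, b = cd1_step(v1, W, a, b)
```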
The weight matrix W is used to infer the hyper parameter η for the feature-helpfulness distribution. For a feature f_i, the prior distribution on a helpful label l_j may be calculated by:
η_{i,j} = e^{w_{i,j}}·κ
where κ is the magnification coefficient that scales η_{i,j} to a suitable magnitude.
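As a brief, hedged illustration, the snippet below maps the learned weights to the hyper parameter η under the assumption that the (truncated) formula above has the form η_{i,j} = e^{w_{i,j}}·κ; the value of κ is illustrative.

```python
import numpy as np

def weights_to_eta(W, kappa=10.0):
    """Assumed mapping from RBM weights to the Dirichlet hyper parameter
    eta for the feature/label distribution: eta_{i,j} = exp(w_{i,j}) * kappa.
    kappa is the magnification coefficient that scales eta to a suitable
    magnitude; 10.0 is an illustrative choice."""
    return np.exp(W) * kappa

W = np.random.default_rng(0).normal(0.0, 0.01, size=(5, 2))
print(weights_to_eta(W))   # one prior weight per (feature, label) pair
```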
The values of P(h_1 = 1 | v_1) and P(h_2 = 1 | v_2) are used to infer the hyper parameter τ. For a review r, the prior distribution on a helpful label l_j can be calculated by:
Thus, every entity's distribution over relativeness/irrelativeness can be automatically obtained. The process for ranking these entities includes the following.
Because each entity is treated as an independent distribution, a Multi-Armed Bandit (MAB) algorithm is used to rank the items. The MAB is an algorithm for gambling problems that helps a gambler decide in which sequence to play the gambling machines in order to maximize the total reward. There are many ways to realize a MAB process. The upper confidence bound 1 (UCB1) algorithm is a classical one. The UCB1 may achieve logarithmic regret uniformly over the reviews and requires no preliminary knowledge about the reward distribution.
The principle of UCB1 is that the R reviews are treated as R independent machines to play with, and each machine i can be described by a distribution P_i. Each time, the machine with the maximum upper confidence bound is selected to play. Thus, the selected machine either has a high estimated reward or a high uncertainty. However, the UCB1 here does not care much about the uncertainty part and mainly aims to obtain the reviews with high reward. Thus, an average reward is used as the upper confidence bound.
Actually, each entity's average reward μ_r can be inferred from its estimated expectation E_r and its standard deviation σ_r. According to Chebyshev's inequality, the probability P(|μ_r − E_r| ≥ λ·σ_r) is at most 1/λ². Further, if λ is large enough, the following formulas hold with high probability.
|μ_r − E_r| ≤ λ·σ_r (19)
μ_r − E_r ≤ λ·σ_r (20)
μ_r ≤ E_r + λ·σ_r (21)
Therefore, the upper confidence bound is E_r + λ·σ_r. For simplicity, λ is set as 1. If every entity's estimated expectation E_r and its standard deviation σ_r can be obtained, the reviews can be ranked according to their upper confidence bounds.
From the probabilistic model, each entity's distribution over relativeness/irrelativeness, which obeys a Beta distribution Beta(π_r), can be obtained. Thus, this distribution can be used to help accomplish the ranking task. Each entity has a Beta distribution parameter vector π_r = (π_{r,α}, π_{r,β}). π_{r,α} and π_{r,β} are independent, where π_{r,α} indicates the probability of this review being helpful, and π_{r,β} indicates the probability of it being unhelpful. When the parameter vector π_r of each review is known, the estimated expectation and standard deviation of this review can be calculated by:
E_r = π_{r,α} / (π_{r,α} + π_{r,β})
σ_r = sqrt( π_{r,α}·π_{r,β} / ((π_{r,α} + π_{r,β})^2·(π_{r,α} + π_{r,β} + 1)) )
where the shape parameters π_{r,α}, π_{r,β} > 0.
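A minimal sketch of this ranking step, assuming each entity's Beta parameters are available as (π_{r,α}, π_{r,β}) pairs and using the Beta mean and standard deviation with λ = 1:

```python
import numpy as np

def rank_by_ucb(pi, lam=1.0):
    """Rank entities by the upper confidence bound E_r + lam * sigma_r,
    where E_r and sigma_r are the mean and standard deviation of a
    Beta(pi_alpha, pi_beta) distribution."""
    pi = np.asarray(pi, dtype=float)
    alpha, beta = pi[:, 0], pi[:, 1]
    mean = alpha / (alpha + beta)
    std = np.sqrt(alpha * beta /
                  ((alpha + beta) ** 2 * (alpha + beta + 1.0)))
    ucb = mean + lam * std
    return np.argsort(-ucb), ucb   # entity indices, best first

# Three entities with (relative, irrelative) parameters.
order, scores = rank_by_ucb([(8, 2), (3, 1), (50, 30)])
print(order, scores)
```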
Returning to
In addition, the video stream may be generated based on certain conditions of the user or users. For example, in regions with low network bandwidth, high-definition (HD) content may be unsuitable, and transcoding may be performed by server 106 to guarantee that the received video stream can be played back smoothly and in a reasonable viewing condition. Other conditions may also be used to configure the video stream.
Further, additionally or optionally, the system 300 may detect video quality and other related conditions (S416). For example, the system 300 may probe the network condition of a household and the capability of the devices that the family members are using, so that the constraints of streaming quality and content resolution are considered in the recommendation content selection. Such conditions are fed back to the system 300 such that the contents can be configured within the constraints of the conditions.
The system 300 may also determine whether the user continues viewing the personalized content channel (S418). If system 300 determines that the user continues the personalized content delivery (S418, Yes), the process 400 continues from S404. On the other hand, if system 300 determines that the user does not want to continue the personalized content delivery (S418, No), the process 400 completes.
The disclosed systems and methods can also be applied to other devices with displays, such as smart phones, tablets, PCs, smart watches, and so on. That is, the disclosed methods not only can be used for systems for delivering the personalized video contents, but also can be applied as the core function for other systems, such as social media systems, other content recommendation systems, information retrieval systems, or any user interactive systems, and so on.
By using the disclosed methods and systems, after receiving media contents or information entities (e.g., images, webpages, documents, etc.) through a network (e.g., the Internet), the feature extraction module may extract the feature values of the received entities. For example, in a social media content system, after the system receives media content entities, the feature extraction module may scale the feature values into a reasonable range to distinguish different entities based on a normal cumulative distribution function. The self-learning module may implement the parameter reinforcement learning process. The parameter reinforcement learning process, without external interference, is implemented using a probabilistic model whose parameters are inferred by Markov Chain Monte Carlo, such that the distribution over relativeness and irrelativeness of the received entities can be obtained automatically.
That is, based on the obtained distribution over relativeness and irrelativeness of the received entities, the ranking module may rank the received entities by the multi-armed bandit algorithm. Specifically, the entities are ranked based on the upper confidence bound E_r + λ·σ_r, where λ is a confidence coefficient, E_r is the estimated expectation of an entity, and σ_r is the standard deviation of the entity.
Provided that each entity has a Beta distribution parameter vector π_r = (π_{r,α}, π_{r,β}), the estimated expectation and the standard deviation of the entity are calculated respectively by:
E_r = π_{r,α} / (π_{r,α} + π_{r,β})
σ_r = sqrt( π_{r,α}·π_{r,β} / ((π_{r,α} + π_{r,β})^2·(π_{r,α} + π_{r,β} + 1)) )
where the parameter vector π_r of each entity is known; π_{r,α} indicates the probability of the entity being helpful; π_{r,β} indicates the probability of the entity being unhelpful; and the shape parameters π_{r,α}, π_{r,β} > 0.
Other steps may be referred to the above descriptions with respect to the personalized video content delivery system. Further, based on the ranked entities, the system may recommend top-ranked entities to at least one user or may present the ranked media contents to the user. For example, in a social media recommendation system, personalized social media information (e.g., Facebook likes, Twitter posts, etc.) may be recommended to a user. In a question and answer system, personalized answers may be provided for the user to solve his/her question.
Other applications, advantages, alterations, modifications, or equivalents to the disclosed embodiments are obvious to those skilled in the art.