©2002-2003 Strands, Inc. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. 37 CFR §1.71(d).
This invention pertains to systems and methods for making recommendations using model-based collaborative filtering with user communities and item collections.
It has become a cliché that attention, not content, is the scarce resource in any internet market model. Search engines are an imperfect means of dealing with attention scarcity because they require that a user has already reasoned enough about the items to which he or she would like to devote attention to attach descriptive keywords to them. Recommender engines seek to replace the need for this user reasoning by inferring a user's interests and preferences, implicitly or explicitly, and recommending appropriate content items for display to and attention by the user.
Exactly how a recommender engine infers a user's interests and preferences remains an active research topic linked to the broader problem of understanding in machine learning. In the last two years, as large-scale web applications have incorporated recommendation technology, these areas of machine learning have evolved to include problems of data-center scale, massively concurrent computation. At the same time, the sophistication of recommender architectures has increased to include model-based representations for the knowledge used by the recommender, and in particular models that shape recommendations based on the social networks and other relationships between users as well as a priori specified or learned relationships between items, including complementary or substitute relationships.
In accordance with these recent trends, we describe systems and methods for making recommendations using model-based collaborative filtering with user communities and item collections that are suited to data-center scale, massively concurrent computations.
a) is a user-item-factor graph.
b) is an item-item-factor graph.
Additional aspects and advantages of this invention will be apparent from the following detailed description of preferred embodiments, which proceeds with reference to the accompanying drawings.
We begin with a brief review of memory-based systems and a more detailed description of model-based systems and methods. We end with a description of adaptive model-based systems and methods that compute time-varying conditional probabilities.
A Formal Description of the Recommendation Problem
Tripartite graph USF shown in
The function c(u; τ) represents a vector of measured user interests over the categories for user u at time instant τ. Similarly, the function a(s; τ) represents a vector of item attributes for item s at time instant τ. The edge weights h(u, s; τ) are measured data that in some way indicate the interest user u has in item s at time instant τ. Frequently h(u, s; τ) is visitation data but may be other data, such as purchasing history. For expressive simplicity, we will ordinarily omit the time index τ unless it is required to clarify the discussion.
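By way of illustration only, the measured quantities c(u; τ), a(s; τ), and h(u, s; τ) might be held in simple structures such as the following Python sketch; the category and attribute names, and the choice of dictionaries keyed by (u, s, τ), are assumptions made for the example rather than elements of the described embodiments.

    from collections import defaultdict

    # Illustrative category and attribute vocabularies (assumed for the example).
    CATEGORIES = ["news", "sports", "music"]
    ATTRIBUTES = ["genre:rock", "genre:jazz", "format:video"]

    c = {}                   # c[(u, tau)] -> list of floats, one entry per category
    a = {}                   # a[(s, tau)] -> list of floats, one entry per attribute
    h = defaultdict(float)   # h[(u, s, tau)] -> measured interest (e.g., visit count)

    def record_visit(u, s, tau, weight=1.0):
        # Visitation or purchase events accumulate into the edge weight h(u, s; tau).
        h[(u, s, tau)] += weight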
The octagonal nodes Z={z1, z2, . . . , zK} in the USF graph are factors in an underlying model for the relationship between user interests and items. Intuition suggests that the value of recommendations traces to the existence of a model that represents a useful clustering or grouping of users and items. Clustering provides a principled means for addressing the collaborative filtering problems of identifying items of interest to a user from other users whose interests are related to the user's, and of identifying items related to items known to be of interest to the user.
Modeling the relationship between user interests and items may involve one of two types of collaborative filtering algorithms. Memory-based algorithms consider the graph US without the octagonal factor nodes in USF of
Memory-Based Algorithms
As defined above, a memory-based algorithm fits the raw data used to train the algorithm with some form of nearest-neighbor regression that relates items and users in a way that has utility for making recommendations. One significant class of these systems can be represented by the non-linear form
X=f(h(u1,s1), . . . ,h(uM,sN),c(u1), . . . ,c(uM),a(s1), . . . ,a(sN),X) (1)
where X is an appropriate set of relational measures. This form can be interpreted as an embedding of the recommender problem as a fixed-point problem in an (|U|+|S|)-dimensional data space.
Implicit Classification Via Linear Embeddings
The embedding approach seeks to represent the strength of the affinities between users and items by distances in a metric space. High affinities correspond to smaller distances so that users and items are implicitly classified into groupings of users close to items and groupings of items close to users. A linear convex embedding may be generalized as
where H is a matrix representation of the weights, with submatrices HUS and HSU such that hUS;mn=h(um, sn) and hSU;mn=h(sn, um). The desired affinity measures describing the affinity of user um for items s1, . . . , sN are given by the m-th row of the submatrix XUS. Similarly, the desired measures describing the affinity of users u1, . . . , uM for item sn are given by the n-th row of the submatrix XSU. The submatrices XUU=HUSXSU and XSS=HSUXUS are user-user and item-item affinities, respectively.
If a non-zero X exists that satisfies (2) for a given H, it provides a basis for building the item-item companion graph shown in
so the entire set of relationships can be represented in matrix form as V=HSUHUS. The affinity of sl and sn then satisfies
XSS=H′XSS=HSUHUSXSS
which can be derived directly from (2) since
In memory-based recommenders, the proposed embedding does not exist for an arbitrary weighted bipartite graph US. In fact, an embedding in which X has rank greater than 1 exists for a weighted bipartite graph US if and only if the adjacency matrix has a defective eigenvalue. This is because H has the decomposition
where Y is a non-singular matrix, λ1, . . . , λk are the distinct eigenvalues of H, and T1, . . . , Tk are upper-triangular submatrices with 0's on the diagonal. In addition, the rank of the null-space of Ti is equal to the number of independent eigenvectors of H associated with eigenvalue λi. Now, if λ1=1 is a non-defective eigenvalue with algebraic multiplicity greater than 1, T1=0.
Q is a real, orthogonal matrix and Λ is a diagonal matrix with the eigenvalues of H on the diagonal. The form (2) implies that W has the single eigenvalue “1” so that Λ=I and
H=QIQT=I
Now, an arbitrary defective H can be expressed as
H=Y[I+T]Y−1=I+YTY−1
where Y is non-singular and T is block upper-triangular with "0"'s on the diagonal. The rank of the null-space of T is equal to the number of independent eigenvectors of H. If H is non-defective, which includes the symmetric case, T must be the 0 matrix and we see again that H=I.
Now on the other hand, if H is defective, from (2) we have (H−I)X=0 and we see that
YTY−1X=0
where the rank of the null-space of T is less than N+M. For an X to exist that satisfies the embedding (2), there must exist a graph, call it the augmented graph, with the singular adjacency matrix H−I. This is simply the original graph US with a self-edge having weight −1 added to each node. The augmented graph is no longer bipartite, but it still has a bipartite quality: if there is no edge between two distinct nodes in US, there is no edge between those two nodes in the augmented graph. Various structural properties of the augmented graph can result in a singular adjacency matrix H−I. For the matrix X to be non-zero and the proposed embedding to exist, H must have properties that correspond to strong assumptions on users' preferences.
The Adsorption Algorithm
The linear embedding (2) of the recommendation problem establishes a structural isomorphism between solutions to the embedding problem and the solutions generated by the adsorption algorithm for some recommenders. In a generalized approach, the recommender associates vectors pC(um) and pA(sn), representing probability distributions Pr(c; um) and Pr(a; sn) over the interest categories C and the item attributes A, respectively, with the vectors c(um) and a(sn) such that
The matrices PSA and PUC are composed of the distributions pA(sn) and the distributions pC(um) written as row vectors. The distributions pA(um) and pC(sn) that form the row vectors of the matrices PUA and PSC are the projections of the distributions in PSA and PUC, respectively, under the linear embedding (2).
Although P is an (M+N)×(|C|+|A|) matrix, it bears a specific relationship to the matrix X that implies that if the 0 matrix is the only solution for X then the 0 matrix is the only solution for P. The columns of P must have the columns of X as a basis and therefore the column space has dimension M+N at most. If X does not exist, then the null space of YTY−1 has dimension M+N and P must be the 0 matrix if W is not the identity matrix.
Conversely, if X exists, even though a non-zero P that meets the row-scaling constraints on P in (3) may not exist, a non-zero
PR=r−1[X|X| . . . |X]
composed of
r=⌈(|C|+|A|)/(M+N)⌉
replications of X that meets the row-scaling constraints does exist. From this we deduce that an entire subspace of matrices PR exists. A P with |C|+|A| columns selected from any matrix in this subspace and rows re-normalized to meet the row-scaling constraints may be a sufficient approximation for many applications.
Embedding algorithms, including the adsorption algorithm, are learning methods for a class of recommender algorithms. The key idea behind the adsorption algorithm, that similar item nodes will have similar component metric vectors pA(sn), does provide the basis for an adsorption-based recommendation algorithm. The component metrics pA(sn) can be approximated by several rounds of an iterative MapReduce computation with run-time O(M+N). The component metrics may be compared to develop lists of similar items. If these comparisons are limited to a fixed-sized neighborhood, they can be easily parallelized as a MapReduce computation with run-time O(N). The resulting lists are then used by the recommender to generate recommendations.
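By way of illustration only, the following Python sketch conveys the general label-propagation flavor of such a computation; it is a simplified serial stand-in for the MapReduce rounds described above, not the exact update prescribed by (3), and the alternating weighted averaging with plain normalization is an assumption made for brevity.

    def propagate(edges, p_items, rounds=3):
        """Toy adsorption-style propagation over the user-item graph.

        edges:   dict mapping (u, s) -> weight h(u, s)
        p_items: dict mapping item s -> {attribute: probability}, i.e. p_A(s)
        Returns per-user and per-item attribute distributions after a few rounds
        of alternating user<-item and item<-user weighted averaging."""
        p_users = {}
        for _ in range(rounds):
            # Each user absorbs the weighted average of its items' distributions.
            p_users = {}
            for (u, s), w in edges.items():
                acc = p_users.setdefault(u, {})
                for attr, p in p_items.get(s, {}).items():
                    acc[attr] = acc.get(attr, 0.0) + w * p
            _normalize(p_users)
            # Each item then absorbs the weighted average of its users' distributions.
            p_next = {}
            for (u, s), w in edges.items():
                acc = p_next.setdefault(s, {})
                for attr, p in p_users.get(u, {}).items():
                    acc[attr] = acc.get(attr, 0.0) + w * p
            _normalize(p_next)
            p_items = p_next
        return p_users, p_items

    def _normalize(dists):
        # Scale each node's attribute weights so they form a probability distribution.
        for dist in dists.values():
            total = sum(dist.values())
            if total > 0.0:
                for k in dist:
                    dist[k] /= total

In a data-center setting, each of the two averaging passes corresponds naturally to one MapReduce round keyed on the receiving node.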
Model-Based Algorithms
Memory-based solutions to the recommender problem may be adequate for many applications. As shown here though, they can be awkward and have weak mathematical foundations. The memory-based recommender adsorption algorithm proceeds from the simple concept that the items a user might find interesting should display some consistent set of properties, characteristics, or attributes and the users to whom an item might appeal should have some consistent set of properties, characteristics, or attributes. Equation (3) compactly expresses this concept. Model-based solutions can offer more principled and mathematically sound grounds for solutions to the recommender problem. The model-based solutions of interest here represent the recommender problem with the full graph USF that includes the octagonal factor nodes shown in
Explicit Classification In Collaborative Filters
To further clarify the conceptual difference between the particular family of memory-based algorithms that we describe above and the particular family of model-based algorithms that we describe below, we focus on how each algorithm classifies users and items. The family of adsorption algorithms we discuss above explicitly computes vectors of probabilities pC(u) and pA(s) that describe how much the interests in set C apply to user u and the attributes in set A apply to item s, respectively. These probability vectors implicitly define communities of users and items, which a specific implementation may make explicit by computing similarities between users and between items in a post-processing step.
Recommenders incorporating model-based algorithms explicitly classify users and items into latent clusters or groupings, represented by the octagonal factor nodes Z={z1, . . . , zK} in
Probabilistic Latent Semantic Indexing Algorithms
A recommender may implement a user-item co-occurrence algorithm from a family of probabilistic latent semantic indexing (PLSI) recommendation algorithms. This family also includes versions that incorporate ratings. In simplest terms, given T user-item data pairs={(um
where bus is the number of occurrences of the user-item pair (u, s) in the input data set. Maximizing the PMLE is equivalent to minimizing the empirical logarithmic loss function
The PLSI algorithm treats users um and items sn as distinct states of a user variable u and an item variable s, respectively. A factor variable z with the factors zk as states is associated with each user and item pair so that the input actually consists of triples (um, sn, zk), where zk is a hidden data value such that the user variable u conditioned on z and the item variable s conditioned on z are independent and
The conditional probability Pr(s|u, θ), which describes how much item s ∈ S is likely to be of interest to user u ∈ U, then satisfies the relationship
The parameter vector θ is just the conditional probabilities Pr(z|u) that describe how much user u's interests correspond to factor z ∈ Z and the conditional probabilities Pr(s|z) that describe how likely item s is to be of interest to users associated with factor z. The full data model is Pr(s, z|u)=Pr(s|z) Pr(z|u) with a loss function
where the input data D actually consists of triples (u, s, z) in which z is hidden. Using Jensen's Inequality and (5) we can derive an upper bound on R(θ) as
Combining (6) and (7) we see that
Unlike the Latent Semantic Indexing (LSI) algorithm, which estimates a single optimal zk for every pair (um, sn), the PLSI algorithm [5], [6] estimates the probability of each state zk for each (um, sn) by computing the conditional probabilities in (5) with, for example, an Expectation Maximization (EM) algorithm as we describe below. The upper bound (7) on R(θ) can be re-expressed as
where Q(z|u, s, θ) is a probability distribution. The PLSI algorithm may minimize this upper bound by expressing the optimal Q*(z|u, s, θ) in terms of the components Pr(s|z) and Pr(z|u) of θ, and then finding the optimal values for these conditional probabilities.
E-step: The "Expectation" step computes the optimal Q*(z|u, s, θ−)+=Pr(z|u, s, θ−) that minimizes F(Q), taking as the values of θ− for this iteration the values of θ+ from the M-step of the previous iteration
M-step: The "Maximization" step then computes new values for the conditional probabilities θ+={Pr(s|z)+, Pr(z|u)+} that minimize R(θ, Q) directly from the Q*(z|u, s, θ−)+ values from the E-step as
where D(u, ·) and D(·, s) denote the subsets of D for user u and item s, respectively.
Since Q*(z|u, s, θ) results in the optimal upper bound on the minimum value of R(θ), and the second component of the expression (8) for F(Q) does not depend on θ, these values for the conditional probabilities θ={Pr(s|z), Pr(z|u)} are the optimal estimates we seek.1 The new values for the conditional probabilities θ+={Pr(s|z)+, Pr(z|u)+} that maximize Q*(z, u, s, θ), and therefore minimize R(θ, Q), are then computed. 1 It happens that the adsorption algorithm of the memory-based recommender we describe above can be viewed as a degenerate EM algorithm. The loss function to be minimized is R(X)=X−MX. There is no E-step because there are no hidden variables, and the M-step is just the computation of the matrix X of point probabilities that satisfy (2).
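By way of illustration only, one E-step/M-step round of this co-occurrence model might be written as the following Python sketch; the dictionary representation, the argument names, and the implicit assumption that the conditionals have already been initialized to non-zero values are choices made for the example only.

    from collections import defaultdict

    def plsi_em_round(counts, p_s_given_z, p_z_given_u, factors):
        """One E-step/M-step round of the PLSI user-item co-occurrence model.

        counts:      dict mapping (u, s) -> b_us, the observed co-occurrence count
        p_s_given_z: dict mapping (s, z) -> Pr(s|z)
        p_z_given_u: dict mapping (z, u) -> Pr(z|u)
        factors:     iterable of factor states z_1 .. z_K
        Returns updated (p_s_given_z, p_z_given_u)."""
        # E-step: Q*(z|u, s) is proportional to Pr(s|z) Pr(z|u).
        q = {}
        for (u, s) in counts:
            w = {z: p_s_given_z.get((s, z), 0.0) * p_z_given_u.get((z, u), 0.0)
                 for z in factors}
            total = sum(w.values()) or 1.0
            q[(u, s)] = {z: w[z] / total for z in factors}
        # M-step: re-estimate the conditionals from the weighted responsibilities.
        num_sz, den_z = defaultdict(float), defaultdict(float)
        num_zu, den_u = defaultdict(float), defaultdict(float)
        for (u, s), b in counts.items():
            for z in factors:
                r = b * q[(u, s)][z]
                num_sz[(s, z)] += r
                den_z[z] += r
                num_zu[(z, u)] += r
                den_u[u] += r
        new_p_s_given_z = {(s, z): v / (den_z[z] or 1.0) for (s, z), v in num_sz.items()}
        new_p_z_given_u = {(z, u): v / (den_u[u] or 1.0) for (z, u), v in num_zu.items()}
        return new_p_s_given_z, new_p_z_given_u

Iterating such a round until the conditionals stop changing yields the estimates θ={Pr(s|z), Pr(z|u)} discussed above.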
One insight that might further understanding of how the EM algorithm minimizes the loss function R(θ, Q) with regard to a particular data set is that the EM iteration is only done for the pairs (um
As new items are added, the approximate algorithm does not re-compute the probabilities Pr(s|z) by the EM algorithm. Instead, the algorithm keeps a count for each item sn in each factor zk and increments the count for sn in each factor zk for which Pr(zk|um) is large, indicating user um has a strong probability of membership, for each item sn user um accesses. The counts for the sn in each factor zk are normalized to serve as the value Pr(sn|zk), rather than the formal value, between re-computations of the model by the EM algorithm.
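By way of illustration only, that approximate bookkeeping might look like the following sketch; the membership threshold and the count tables are assumptions of the example, not prescribed values.

    from collections import defaultdict

    factor_item_counts = defaultdict(float)   # (z_k, s_n) -> running count
    factor_totals = defaultdict(float)        # z_k -> total count in that factor

    def record_access(u, s, p_z_given_u, threshold=0.2):
        # When user u accesses item s, increment the count of s in every factor z
        # for which Pr(z|u) indicates strong membership (threshold is illustrative).
        for z, p in p_z_given_u.get(u, {}).items():
            if p >= threshold:
                factor_item_counts[(z, s)] += 1.0
                factor_totals[z] += 1.0

    def approx_p_s_given_z(s, z):
        # Normalized counts stand in for Pr(s|z) between full EM re-computations.
        total = factor_totals[z]
        return factor_item_counts[(z, s)] / total if total else 0.0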
Like the adsorption algorithm, the EM algorithm is a learning algorithm for a class of recommender algorithms. Many recommenders are continuously trained from the sequence of user-item pairs (um
A Classification Algorithm With Prescribed Constraints
In an embodiment, an alternate data model for user-item pairs and a nonparametric empirical likelihood estimator (NPMLE) for the model can serve as the basis for a model-based recommender. Rather than estimate the solution for a simple model for the data, the proposed estimator actually admits additional assumptions about the model that in effect specify the family of admissible models and that also incorporate ratings more naturally. The NPMLE can be viewed as a nonparametric classification algorithm which can serve as the basis for a recommender system. We first describe the data model and then detail the nonparametric empirical likelihood estimator.
A User Community and Item Collection Constrained Data Model
a) conceptually represents a generalized data model. In this embodiment, however, we assume the input data set consists of three bags of lists:
By accepting input data in the form of lists, we seek to endow the model with knowledge about the complementary and substitute nature of items gained from users and item collections, and with knowledge about user relationships. For data sources that only produce triples (u, s, h), we assume the set of lists that capture this information about complementary or substitute items can be built by selecting lists of triples from an accumulated pool based on relevant shared attributes. The most important of these attributes would be the context in which the items were selected or experienced by the user, such as a defined (short) temporal interval.
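By way of illustration only, grouping raw triples into context lists by a shared temporal interval might be done as in the following sketch; the 30-minute gap, the tuple layout, and the per-user grouping are assumptions made for the example.

    from itertools import groupby

    def triples_to_lists(triples, gap_seconds=1800):
        """Group (user, item, weight, timestamp) events into per-user lists,
        starting a new list whenever consecutive events for the same user are
        separated by more than gap_seconds (an illustrative context window)."""
        lists = []
        events = sorted(triples, key=lambda e: (e[0], e[3]))
        for _, user_events in groupby(events, key=lambda e: e[0]):
            current, last_t = [], None
            for u, s, h, t in user_events:
                if last_t is not None and t - last_t > gap_seconds:
                    lists.append(current)
                    current = []
                current.append((u, s, h))
                last_t = t
            if current:
                lists.append(current)
        return lists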
A useful data model should include an alternate approach to identifying factors that reflects the complementary or substitute nature of items inferred from the user lists D and the item collections F, as well as the perceived value of recommendations based on a user's social or other relationships inferred from the user communities ε, as approximately represented by the graph GHEF depicted in
As for the PLSI model with ratings, our goal is to estimate the distribution Pr(h, s|S, u) given the observed data D, ε, and F. Because user ratings may not be available for a given user in a particular application, we re-express this distribution as
Pr(h,s|S,u)=Pr(h|s,S,u)Pr(s|S,u) (12)
where S={sn
To formally relate these two distributions, we first define the set D(U, S, H) ⊂ D of lists that include any triple (u, s, h) ∈ U×S×H, and let S be a set of seed items. Then
The primary task then is to derive a data model for D and estimate the parameters of that model to maximize the probability
given the observed data D, ε, and F.
Estimating the Recommendation Conditionals
As a practical approach to maximizing the probability R, we first focus on estimating Pr(s|S, u) by maximizing Pr(s, S, u) for the data sets D, ε, and F. We do this by introducing latent variables y and z such that
so we can express the joint probability Pr(s, S, u) in terms of independent conditional probabilities. We assume that s, S, and y are conditionally independent with respect to z, and that u and z are conditionally independent with respect to y
Pr(s,S,y|z)=Pr(s,S|z)Pr(y|z)=Pr(s,S|y,z)Pr(y|z)
Pr(u,z|y)=Pr(u|y)Pr(z|y)=Pr(u|z,y)Pr(z|y)
We can then rewrite the joint probability
Finally, we can derive an expression for Pr(s|S, u) by first summing (15) over z and y to compute the marginal Pr(s, S, u) and factoring out Pr(u)
and then expanding the conditional as
Equation (16) expresses the distribution Pr(s, S|u) as a product of three independent distributions. The conditional distribution Pr(s|z) expresses the probability that item s is a member of the latent item collection z. The conditional distribution Pr(y|u) similarly expresses the probability that the latent user community y is representative for user u. Finally, the probability that items in collection z are of interest to users in community y is specified by the distribution Pr(z|y). We compose these relationships between users and items into the full data model by the graph GUCIC shown in
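By way of illustration only, and setting the seed set S aside so the example stays short, a recommendation score built from the three conditionals in (16) might be computed as in the following sketch; the nested-dictionary representation of the conditionals is an assumption of the example.

    def score(u, s, p_y_given_u, p_z_given_y, p_s_given_z):
        """Approximate the interest of user u in item s as the mixture
        sum over y and z of Pr(s|z) Pr(z|y) Pr(y|u), ignoring the seeds S."""
        total = 0.0
        for y, p_yu in p_y_given_u.get(u, {}).items():
            for z, p_zy in p_z_given_y.get(y, {}).items():
                total += p_s_given_z.get(z, {}).get(s, 0.0) * p_zy * p_yu
        return total

    def recommend(u, candidate_items, p_y_given_u, p_z_given_y, p_s_given_z, k=10):
        # Rank the candidate items by the mixture score and keep the top k.
        ranked = sorted(candidate_items,
                        key=lambda s: score(u, s, p_y_given_u, p_z_given_y, p_s_given_z),
                        reverse=True)
        return ranked[:k]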
User Community and Item Collection Conditionals
The estimation problem for the user community conditional distribution Pr(y|u) and for the item collection conditional distribution Pr(s|z) is essentially the same. They are both computed from lists that imply some relationship between the users or items on the lists that is germane to making recommendations. Given the set ε of lists of users and the set F of lists of items, we can compute the conditionals Pr(y|u) and Pr(s|z) in several ways.
One very simple approach is to match each user community εl with a latent factor yl and each item collection Fk with a latent factor zk. The conditionals could be the uniform distributions
While this approach is easily implemented, it potentially results in a large number of user community factors y ∈ γ and item collection factors z ∈ Z. Estimating Pr(z|y) is a correspondingly large computation task. Also, recommendations cannot be made for users in a community εl if D does not include a list for at least one user in εl. Similarly, items in a collection Fk cannot be recommended if no item on Fk occurs on a list in D.
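By way of illustration only, one plausible reading of the uniform assignment just described is the following sketch; splitting a user's probability evenly across the communities that contain the user is an assumption of the example.

    def uniform_conditionals(user_communities, item_collections):
        """Match each community E_l with a factor y_l and each collection F_k with
        a factor z_k, assigning uniform conditional probabilities.

        user_communities: list of lists of users (the bag of communities)
        item_collections: list of lists of items (the bag of collections)
        Returns (Pr(y_l|u) as u -> {l: prob}, Pr(s|z_k) as k -> {s: prob})."""
        memberships = {}
        for l, community in enumerate(user_communities):
            for u in set(community):
                memberships.setdefault(u, []).append(l)
        p_y_given_u = {u: {l: 1.0 / len(ls) for l in ls}
                       for u, ls in memberships.items()}
        p_s_given_z = {k: {s: 1.0 / len(set(collection)) for s in set(collection)}
                       for k, collection in enumerate(item_collections) if collection}
        return p_y_given_u, p_s_given_z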
Another approach is simply to use the previously described EM algorithm to derive the conditional probabilities. For each list εl in ε we can construct M2 pairs (u, v) ∈ εl×εl.3 We can also construct N2 pairs (t, s) ∈ Fk×Fk from each list Fk in F. We can estimate the pairs of conditional probabilities Pr(v|y), Pr(y|u) and Pr(s|z), Pr(z|t) using the EM algorithm. For Pr(v|y) and Pr(y|u) we have 3 If u and v are two distinct members of εl, we would construct the pairs (u, v), (v, u), (u, u), and (v, v).
E-Step:
M-Step:
where Dε is the collection of all co-occurrence pairs (u, v) constructed from all lists εl ∈ ε. Dε(u,·) and Dε(·, v) denote the subsets of such pairs with the specified user u as the first member and the specified user v as the second member, respectively. Similarly, for Pr(s|z) and Pr(z|t) we have
E-Step:
M-Step:
While the preceding two approaches may be adequate for many applications, neither explicitly incorporates the incremental addition of new input data. The iterative computations (18), (19), (20) and (21), (22), (24) assume the input data set is known and fixed at the outset. As we noted above, some recommenders incorporate new input data in an ad hoc fashion. We can extend the basic PLSI algorithm to more effectively incorporate sequential input data for another approach to computing the user community and item collection conditionals.
Focusing first on the conditionals Pr(v|y) and Pr(y|u), there are several ways we could incorporate sequential input data into an EM algorithm for computing time-varying conditionals Pr(v|y; τn)+, Pr(y|u; τn)+, and Q*(y|u, v, θ−; τn)+. We describe only one simple method here, in which we also gradually de-emphasize older data as we incorporate new data. We first define two time-varying co-occurrence matrices ΔE(τn) and ΔF(τn) of the data pairs received since time τn−1 with elements
Δevu(τn)=|{(u,v)|(u,v)∈Dε(τn)−Dε(τn−1)}|
Δfst(τn)=|{(t,s)|(t,s)∈DF(τn)−DF(τn−1)}|
We then add two additional initial steps to the basic EM algorithm so that the extended computation consists of four steps. The first two steps are done only once before the E and M steps are iterated until the estimates for Pr(v|y; τn) and Pr(y|u; τn) converge:
W-Step: The initial “Weighting” step computes an appropriate weighted estimate for the co-occurrence matrix E(τn). The simplest method for doing this is to compute a suitably weighted sum of the older data with the latest data
E(τn)=αεE(τn−1)+βεΔE(τn) (25)
This difference equation has the solution
(25) is just a scaled discrete integrator for αε=1. Choosing 0≤αε<1 and setting βε=1−αε gives a simple linear estimator for the mean value of the co-occurrence matrix that emphasizes the most recent data (a sketch of one complete time step of this extended computation appears following the M-step below).
I-Step: In the next "Input" step, the estimated co-occurrence data is incorporated into the EM computation. This can be done in multiple ways; one straightforward approach is to adjust the starting values for the EM phase of the algorithm by re-expressing the M-step computations (19) and (20) in terms of E(τn), and then re-estimating the conditionals Pr(v|y; τn)− and Pr(y|u; τn)− at time τn
E-Step: The EM iteration consists of the same E-step and M-step as the basic algorithm. The E-step computation is
M-step: Finally, the M-step computation is
Convergence of the EM iteration in this extended algorithm is guaranteed since this algorithm only changes the starting values for the EM iteration.
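By way of illustration only, one time step τn of this extended computation can be organized as in the following skeleton; the callables init_step and em_round stand in for the I-step re-estimation and one basic E-step/M-step round, respectively, and the convergence test is an assumption of the example.

    def extended_em_time_step(E_prev, delta_E, init_step, em_round,
                              alpha=0.9, tol=1e-4, max_iter=50):
        """One time step tau_n of the extended algorithm.

        E_prev:  dict (u, v) -> weighted co-occurrence count at tau_{n-1}
        delta_E: dict (u, v) -> new co-occurrence counts since tau_{n-1}
        init_step(E_now) -> starting conditionals for the EM phase (I-step)
        em_round(E_now, theta) -> (new_theta, change) for one E/M round."""
        # W-step (25): discounted estimate of the co-occurrence matrix.
        beta = 1.0 - alpha
        keys = set(E_prev) | set(delta_E)
        E_now = {k: alpha * E_prev.get(k, 0.0) + beta * delta_E.get(k, 0.0)
                 for k in keys}
        # I-step: starting values for the EM phase from the weighted data.
        theta = init_step(E_now)
        # E-step and M-step: iterate until the conditionals settle.
        for _ in range(max_iter):
            theta, change = em_round(E_now, theta)
            if change < tol:
                break
        return E_now, theta

The same skeleton applies unchanged to the item-item data F(τn) and ΔF(τn) described next.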
The extended algorithm for computing Pr(s|z) and Pr(z|t) is analogous to the algorithm for computing Pr(v|y) and Pr(y|u):
W-Step: Given input data ΔF(τn), the estimated co-occurrence data is computed as
F(τn)=αFF(τn−1)+βFΔF(τn) (31)
I-Step:
E-Step:
M-Step:
Association Conditionals
Once we have estimates for Pr(s|z; τn) and Pr(y|u; τn), we can derive estimates for the association conditionals Pr(z|y; τn) expressing the probabilistic relationships between the user communities y ∈ γ and the item collections z ∈ Z. These estimates must be derived from the lists D since this is the only observed data that relates users and items. A key simplifying assumption in the model we build here is that
Appendix C presents a full derivation of E-step (49) and M-step (53) of the basic EM algorithm for estimating Pr(z|y). Defining the list of seeds S in the triples (u, s, S) is needed in the M-step computation. In some cases, the seeds S could be independent and supplied with the list. For these cases, the input data from the user lists would be
={(ui*,si
In other cases, the seeds might be inferred from the items in the user list Hi itself. These could be just the items preceding each item in the list so that the input data would be
={(ui*,si
The seeds for each (u, s) pair in the list could also be every other item in the list, in this case
i={(ui*,si
As we did for the user community conditional Pr(y|u) and item collection conditional Pr(s|z), we can also extend this EM algorithm to incorporate sequential input data. However, instead of forming data matrices, we define two time-varying data lists ΔD(τn) and ΔD̄(τn) from the bag of lists D(τn)
ΔD(τn)={(u,s,S,h)|(u,s,h)∈Di, Di∈D(τn), Di∉D(τn−1)}
ΔD̄(τn)={(u,s,S,1)|(u,s,S,h)∈ΔD(τn)}
where the seeds S for each item are computed by one of the methods (40), (41), (42) or any other desired method. We also note that ΔD(τn) and ΔD̄(τn) are bags, meaning they include an instance of the appropriate tuple for each instance of the defining tuple in the description. The extended EM algorithm for computing Pr(z|y; τ) then incorporates appropriate versions of the initial W-step and I-step computations into the basic EM computations:
W-Step: The weighting factors are applied directly to the list D̄(τn−1) and the new data list ΔD̄(τn) to create the new list
D̄(τn)={(u,s,S,αa)|(u,s,S,a)∈D̄(τn−1)}∪{(u,s,S,βa)|(u,s,S,a)∈ΔD̄(τn)} (43)
I-Step: The weighted data at time τn is incorporated into the EM computation via the weighting coefficient a from each tuple (u, s, S, a) to re-estimate Pr(z|y; τn−1)+ as Pr(z|y; τn)−
We note, however, that we may have Q*(z, y|s, S, u, θ−; τn−1)+=0 for (u, s, S, a) that are in D̄(τn) but such that (u, s, S, a′) is not in D̄(τn−1). This missing data is filled by the first iteration of the following E-step; a sketch of one such E-step/M-step round appears after the M-step below.
E-Step:
M-Step:
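By way of illustration only, one E-step/M-step round for the association conditionals can be sketched as follows, with Pr(s|z) and Pr(y|u) held fixed and the seed set S omitted so the example stays short; the tuple layout (u, s, a), where a is the weight carried from the W-step, and the nested-dictionary conditionals are assumptions of the example.

    from collections import defaultdict

    def association_em_round(weighted_tuples, p_s_given_z, p_y_given_u, p_z_given_y):
        """One E/M round estimating Pr(z|y) from weighted (u, s, a) tuples,
        holding Pr(s|z) and Pr(y|u) fixed and ignoring the seeds S.

        p_s_given_z: dict z -> {s: prob}
        p_y_given_u: dict u -> {y: prob}
        p_z_given_y: dict y -> {z: prob} (current estimate)"""
        num = defaultdict(float)   # (y, z) -> accumulated weighted responsibility
        den = defaultdict(float)   # y -> total accumulated weight
        for u, s, a in weighted_tuples:
            # E-step for this tuple: Q(z, y) proportional to Pr(s|z) Pr(z|y) Pr(y|u).
            q = {}
            for y, p_yu in p_y_given_u.get(u, {}).items():
                for z, p_zy in p_z_given_y.get(y, {}).items():
                    q[(y, z)] = p_s_given_z.get(z, {}).get(s, 0.0) * p_zy * p_yu
            total = sum(q.values())
            if total == 0.0:
                continue  # the first full E-step fills such missing entries later
            # M-step accumulation, weighted by the coefficient a from the W-step.
            for (y, z), val in q.items():
                num[(y, z)] += a * val / total
                den[y] += a * val / total
        new_p_z_given_y = defaultdict(dict)
        for (y, z), val in num.items():
            new_p_z_given_y[y][z] = val / den[y]
        return dict(new_p_z_given_y)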
Memory-based recommenders are not well suited to explicitly incorporating independent, a priori knowledge about user communities and item collections. One type of user community and item collection information is implicit in some model-based recommenders. However, some recommenders' data models do not provide the flexibility needed to accommodate notions for such clusters or groupings other than item selection behavior. In some recommenders, additional knowledge about item collections is incorporated in an ad hoc way via supplementary algorithms.
In an embodiment, the model-based recommender we describe above allows user community and item collection information to be specified explicitly as a priori constraints on recommendations. The probabilities that users in a community are interested in the items in a collection are independently learned from collections of user communities, item collections, and user selections. In addition, the system learns these probabilities by an adaptive EM algorithm that extends the basic EM algorithm to better capture the time-varying nature of these sources of knowledge. The recommender that we describe above is inherently massively scalable. It is well suited to implementation as a data-center scale Map-Reduce computation. The computations that produce the knowledge base can be run as an off-line batch operation, with only the recommendations computed on-line in real time, or the entire process can be run as a continuous update operation. Finally, it is possible and practical to run multiple recommendation instances with knowledge bases built from different sets of user communities and item collections as a multi-criteria meta-recommender.
Exemplary Pseudo Code
Process: INFER_COLLECTIONS
Description:
To construct time-varying latent collections c1(τn), c2(τn), . . . , cK(τn), given a time-varying list D(τn) of pairs (ai, bj). The collections ck(τn) are implicitly specified by the probabilities Pr(ck|ai; τn) and Pr(bj|ck; τn).
Input:
Output:
Exemplary Method:
Notes:
Process: INFER_ASSOCIATIONS
Description:
To construct time-varying association probabilities Pr(zk|yl; τn) between two sets of collections, z1(τn), z2(τn), . . . , zK(τn) of items and y1(τn), y2(τn), . . . , yL(τn) of users, given the probabilities Pr(yl|ui; τn) that the ui are members of the communities yl(τn), the probabilities Pr(sj|zk; τn) that the collections zk(τn) include the sj as members, and a time-varying list D(τn) of triples (ui, sj, So).
Input:
Output:
Exemplary Method:
Notes:
Process: CONSTRUCT_MODEL
Description:
To construct a model for time-varying lists Duv(τn) of user-user pairs (ui, vj), Dts(τn) of item-item pairs (ti, sj), and Dus(τn) of user-item triples (ui, sj, So) that groups users ui into communities yl and items sj into collections zk. The model is specified by the probabilities Pr(yl|ui; τn) that the ui are members of the communities yl(τn), the probabilities Pr(sj|zk; τn) that the collections zk(τn) include the sj as members, and the probabilities Pr(zk|yl; τn) that the communities yl(τn) are associated with the collections zk(τn).
Input:
Output:
Exemplary Method:
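By way of illustration only, the process can be organized as the following skeleton, in which infer_collections and infer_associations are callables implementing the INFER_COLLECTIONS and INFER_ASSOCIATIONS processes described above; their exact signatures here are assumptions of the example.

    def construct_model(duv_pairs, dts_pairs, dus_triples,
                        infer_collections, infer_associations, state=None):
        """One time step tau_n of CONSTRUCT_MODEL.

        duv_pairs:   user-user pairs (u_i, v_j) observed since tau_{n-1}
        dts_pairs:   item-item pairs (t_i, s_j) observed since tau_{n-1}
        dus_triples: user-item triples (u_i, s_j, S_o) observed since tau_{n-1}
        state:       model carried over from tau_{n-1}, or None at start-up."""
        state = state or {}
        # Group users into communities y_l from the user-user data.
        p_y_given_u, p_v_given_y = infer_collections(duv_pairs,
                                                     state.get("user_model"))
        # Group items into collections z_k from the item-item data.
        p_z_given_t, p_s_given_z = infer_collections(dts_pairs,
                                                     state.get("item_model"))
        # Associate communities with collections from the user-item data.
        p_z_given_y = infer_associations(dus_triples, p_y_given_u, p_s_given_z,
                                         state.get("associations"))
        return {
            "user_model": (p_y_given_u, p_v_given_y),
            "item_model": (p_z_given_t, p_s_given_z),
            "associations": p_z_given_y,
        }

The returned probabilities Pr(yl|ui; τn), Pr(sj|zk; τn), and Pr(zk|yl; τn) are the model outputs listed in the description above.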
Notes:
Exemplary System
The recommenders we describe above may be implemented on any number of computer systems, for use by one or more users, including the exemplary system 400 shown in
Moreover, a person of reasonable skill in the art will recognize that the recommender we describe above may be implemented on other computer system configurations, including hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, application-specific integrated circuits, and the like. Similarly, a person of reasonable skill in the art will recognize that the recommender we describe above may be implemented in a distributed computing system in which various computing entities or devices, often geographically remote from one another, perform particular tasks or execute particular instructions. In distributed computing systems, application programs or modules may be stored in local or remote memory.
The general purpose or personal computer 402 comprises a processor 404, memory 406, device interface 408, and network interface 410, all interconnected through bus 412. The processor 404 represents a single, central processing unit, or a plurality of processing units in a single or two or more computers 402. The memory 406 may be any memory device including any combination of random access memory (RAM) or read only memory (ROM). The memory 406 may include a basic input/output system (BIOS) 406A with routines to transfer data between the various elements of the computer system 400. The memory 406 may also include an operating system (OS) 406B that, after being initially loaded by a boot program, manages all the other programs in the computer 402. These other programs may be, e.g., application programs 406C. The application programs 406C make use of the OS 406B by making requests for services through a defined application program interface (API). In addition, users can interact directly with the OS 406B through a user interface such as a command language or a graphical user interface (GUI) (not shown).
Device interface 408 may be any one of several types of interfaces including a memory bus, peripheral bus, local bus, and the like. The device interface 408 may operatively couple any of a variety of devices, e.g., hard disk drive 414, optical disk drive 416, magnetic disk drive 418, or the like, to the bus 412. The device interface 408 represents either one interface or various distinct interfaces, each specially constructed to support the particular device that it interfaces to the bus 412. The device interface 408 may additionally interface input or output devices 420 utilized by a user to provide direction to the computer 402 and to receive information from the computer 402. These input or output devices 420 may include keyboards, monitors, mice, pointing devices, speakers, stylus, microphone, joystick, game pad, satellite dish, printer, scanner, camera, video equipment, modem, and the like (not shown). The device interface 408 may be a serial interface, parallel port, game port, firewire port, universal serial bus, or the like.
The hard disk drive 414, optical disk drive 416, magnetic disk drive 418, or the like may include a computer readable medium that provides non-volatile storage of computer readable instructions of one or more application programs or modules 406C and their associated data structures. A person of skill in the art will recognize that the system 400 may use any type of computer readable medium accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, cartridges, RAM, ROM, and the like.
Network interface 410 operatively couples the computer 402 to one or more remote computers 402R on a local area network 422 or a wide area network 432. The computers 402R may be geographically remote from computer 402. The remote computers 402R may have the structure of computer 402, or may be a server, client, router, switch, peer device, network node, or other networked device, and typically include some or all of the elements of computer 402. The computer 402 may connect to the local area network 422 through a network interface or adapter included in the interface 410. The computer 402 may connect to the wide area network 432 through a modem or other communications device included in the interface 410. The modem or communications device may establish communications to remote computers 402R through global communications network 424. A person of reasonable skill in the art should recognize that application programs or modules 406C might be stored remotely through such networked connections.
We describe some portions of the recommender using algorithms and symbolic representations of operations on data bits within a memory, e.g., memory 406. A person of skill in the art will understand these algorithms and symbolic representations as most effectively conveying the substance of their work to others of skill in the art. An algorithm is a self-consistent sequence of steps leading to a desired result. The sequence requires physical manipulations of physical quantities. Usually, but not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. For expressive simplicity, we refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. The terms are merely convenient labels. A person of skill in the art will recognize that terms such as computing, calculating, determining, displaying, or the like refer to the actions and processes of a computer, e.g., computers 402 and 402R. The computer 402 or 402R manipulates and transforms data represented as physical electronic quantities within the computer 402's memory into other data similarly represented as physical electronic quantities within the computer 402's memory. The algorithms and symbolic representations we describe above
The recommender we describe above explicitly incorporates a co-occurrence matrix to define and determine similar items and utilizes the concepts of user communities and item collections, drawn as lists, to inform the recommendation. The recommender more naturally accommodates substitute or complementary items and implicitly incorporates the intuition that two items should be more similar if more paths between them exist in the co-occurrence matrix. The recommender segments users and items and is massively scalable for direct implementation as a Map-Reduce computation.
A person of reasonable skill in the art will recognize that they may make many changes to the details of the above-described embodiments without departing from the underlying principles. The following claims, therefore, define the scope of the present systems and methods.