SYSTEMS AND METHODS FOR MAKING RECOMMENDATIONS USING MODEL-BASED COLLABORATIVE FILTERING WITH USER COMMUNITIES AND ITEMS COLLECTIONS

Information

  • Patent Application
  • 20100169328
  • Publication Number
    20100169328
  • Date Filed
    December 31, 2008
  • Date Published
    July 01, 2010
Abstract
Massively scalable, memory- and model-based techniques are an important approach to practical large-scale collaborative filtering. We describe a massively scalable, model-based recommender system and method that extends these collaborative filtering techniques by explicitly incorporating knowledge of user communities and item collections. In addition, we extend the Expectation-Maximization algorithm for learning the conditional probabilities in the model to coherently accommodate time-varying training data.
Description
COPYRIGHT NOTICE

©2002-2003 Strands, Inc. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. 37 CFR §1.71(d).


TECHNICAL FIELD

This invention pertains to systems and methods for making recommendations using model-based collaborative filtering with user communities and items collections.


BACKGROUND

It has become a cliché that attention, not content, is the scarce resource in any internet market model. Search engines are an imperfect means of dealing with attention scarcity, since they require that a user has already reasoned enough about the items to which he or she would like to devote attention to attach some type of descriptive keywords. Recommender engines seek to replace the need for such user reasoning by inferring a user's interests and preferences implicitly or explicitly and recommending appropriate content items for display to, and attention by, the user.


Exactly how a recommender engine infers a user's interests and preferences remains an active research topic linked to the broader problem of understanding in machine learning. In the last two years, as large-scale web applications have incorporated recommendation technology, these areas of machine learning have evolved to include problems in data-center scale, massively concurrent computation. At the same time, the sophistication of recommender architectures has increased to include model-based representations for knowledge used by the recommender, and in particular models that shape recommendations based on the social networks and other relationships between users as well as a priori specified or learned relationships between items, including complementary or substitute relationships.


In accordance with these recent trends, we describe systems and methods for making recommendations using model-based collaborative filtering with user communities and item collections that are suited to data-center scale, massively concurrent computation.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1(a) is a user-item-factor graph.



FIG. 1(b) is an item-item-factor graph.



FIG. 2 is an embodiment of a data model including user communities and item collections for use in a system and method for making recommendations.



FIG. 3 is an embodiment of a data model including user communities and item collections for use in a system and method for making recommendations.



FIG. 4 is an embodiment of a system and method for making recommendations.





DETAILED DESCRIPTION

Additional aspects and advantages of this invention will be apparent from the following detailed description of preferred embodiments, which proceeds with reference to the accompanying drawings.


We begin with a brief review of memory-based systems and a more detailed description of model-based systems and methods. We end with a description of adaptive model-based systems and methods that compute time-varying conditional probabilities.


A Formal Description of the Recommendation Problem


The tripartite graph USF shown in FIG. 1(a) models the matching of users to items. The square nodes U={u1, u2, . . . , uM} represent users and the round nodes S={s1, s2, . . . , sN} represent items. In this context, a user may be a physical person. A user may also be a computing entity that will use the recommended content items for further processing. Two or more users may form a cluster or group having a common property, characteristic, or attribute. Similarly, an item may be any good or service. Two or more items may form a cluster or group having a common property, characteristic, or attribute. The common property, characteristic, or attribute of an item group may be connected to a user or a cluster of users. For example, a recommender engine may recommend books to a user based on books purchased by other users having similar book purchasing histories.


The function c(u; τ) represents a vector of measured user interests over the interest categories C for user u at time instant τ. Similarly, the function a(s; τ) represents a vector of item attributes for item s at time instant τ. The edge weights h(u, s; τ) are measured data that in some way indicate the interest user u has in item s at time instant τ. Frequently h(u, s; τ) is visitation data but may be other data, such as purchasing history. For expressive simplicity, we will ordinarily omit the time index τ unless it is required to clarify the discussion.


The octagonal nodes Z={z1, z2, . . . , zK} in the USF graph are factors in an underlying model for the relationship between user interests and items. Intuition suggests that the value of recommendations traces to the existence of a model that represents a useful clustering or grouping of users and items. Clustering provides a principled means for addressing the collaborative filtering problem of identifying items of interest to other users whose interests are related to the user's, and for identifying items related to items known to be of interest to a user.


Modeling the relationship between user interests and items may involve one of two types of collaborative filtering algorithms. Memory-based algorithms consider the graph US, that is, the graph USF of FIG. 1(a) without the octagonal factor nodes, and essentially fit nearest-neighbor regressions to the high-dimensional data. In contrast, model-based algorithms propose that solutions for the recommender problem actually exist on a lower-dimensional manifold represented by the octagonal nodes.


Memory-Based Algorithms


As defined above, a memory-based algorithm fits the raw data used to train the algorithm with some form of nearest-neighbor regression that relates items and users in a way that has utility for making recommendations. One significant class of these systems can be represented by the non-linear form






X=f(h(u1,s1), . . . ,h(uM,sN),c(u1), . . . ,c(uM),a(s1), . . . ,a(sN),X)   (1)


where X is an appropriate set of relational measures. This form can be interpreted as an embedding of the recommender problem as a fixed-point problem in an (|U|+|S|)-dimensional data space.


Implicit Classification Via Linear Embeddings


The embedding approach seeks to represent the strength of the affinities between users and items by distances in a metric space. High affinities correspond to smaller distances so that users and items are implicitly classified into groupings of users close to items and groupings of items close to users. A linear convex embedding may be generalized as












X = [ 0     HUS ] [ XUU  XUS ]
    [ HSU   0   ] [ XSU  XSS ]
  = H X,    with Σ_{n=1}^{M+N} Xmn = 1    (2)
where H is the matrix representation of the weights, with submatrices HUS and HSU such that hUS;mn=h(um, sn) and hSU;mn=h(sn, um). The desired affinity measures describing the affinity of user um for items s1, . . . , sN are the m-th row of the submatrix XUS. Similarly, the desired measures describing the affinity of users u1, . . . , uM for item sn are the n-th row of the submatrix XSU. The submatrices XUU=HUSXSU and XSS=HSUXUS are user-user and item-item affinities, respectively.


If a non-zero X exists that satisfies (2) for a given H, it provides a basis for building the item-item companion graph shown in FIG. 1(b). There are a number of ways that the edge weights h′(sl, sn) representing the similarities of the item nodes sl and sn in the graph can be computed. One straightforward solution is to consider h(um, sn) and h(sn, um) to be proportional to the strength of the relationship between user um and item sn, and the relationship between sn and um, respectively. Then we can let the strength of the relationship between sl and sn be








h′(sl, sn) = Σ_{m=1}^{M} h(sl, um) h(um, sn)
so that the entire set of relationships can be represented in matrix form as H′=HSUHUS. The affinity of sl and sn then satisfies






XSS = H′ XSS = HSU HUS XSS
which can be derived directly from (2) since






X = [ HUS HSU      0      ] X = H^2 X
    [    0      HSU HUS   ]
In memory-based recommenders, the proposed embedding does not exist for an arbitrary weighted bipartite graph US. In fact, an embedding in which X has rank greater than 1 exists for a weighted bipartite graph US if and only if the adjacency matrix has a defective eigenvalue. This is because H has the decomposition






H = Y [ λ1 I + T1       0      ]
      [         ⋱              ] Y^−1
      [     0       λk I + Tk  ]
where Y is a non-singular matrix, λ1, . . . , λk are the eigenvalues of H, and T1, . . . , Tk are upper-triangular submatrices with 0's on the diagonal. In addition, the rank of the null-space of Ti is equal to the number of independent eigenvectors of H associated with eigenvalue λi. Now, if λi=1 is a non-defective eigenvalue with algebraic multiplicity greater than 1, Ti=0.


For a symmetric H, the decomposition reduces to H=QΛQT, where Q is a real, orthogonal matrix and Λ is a diagonal matrix with the eigenvalues of H on the diagonal. The form (2) implies that H has the single eigenvalue 1, so that Λ=I and






H = Q I QT = I


Now, an arbitrary defective H can be expressed as






H = Y[I + T]Y^−1 = I + Y T Y^−1


where Y is non-singular and T is block upper-triangular with 0's on the diagonal. The rank of the null-space of T is equal to the number of independent eigenvectors of H. If H is non-defective, which includes the symmetric case, T must be the 0 matrix and we see again that H=I.


Now on the other hand, if H is defective, from (2) we have (H−I)X=0 and we see that





Y T Y^−1 X = 0


where the rank of the null-space of T is less than N+M. For an X to exist that satisfies the embedding (2), there must exist a graph, namely the original graph US with a self-edge of weight −1 added to each node, whose adjacency matrix H−I is singular. This modified graph is no longer bipartite, but it still has a bipartite quality: if there is no edge between two distinct nodes in the original graph US, there is no edge between those two nodes in the modified graph. Various structural properties of US can result in a singular adjacency matrix H−I. For the matrix X to be non-zero and the proposed embedding to exist, H must have properties that correspond to strong assumptions on users' preferences.


The Adsorption Algorithm


The linear embedding (2) of the recommendation problem establishes a structural isomorphism between solutions to the embedding problem and the solutions generated by the adsorption algorithm used in some recommenders. In a generalized approach, the recommender associates vectors pC(um) and pA(sn), representing probability distributions Pr(c; um) and Pr(a; sn) over the interest categories C and the item attributes A, respectively, with the vectors c(um) and a(sn) such that













P = [ 0     HUS ] [ PUA  PUC ]
    [ HSU   0   ] [ PSA  PSC ]
  = H P,    with Σ_{n=1}^{|A|+|C|} Pmn = 1

where







PUA = [ pA^T(u1) ]    PUC = [ pC^T(u1) ]
      [    ⋮     ]          [    ⋮     ]
      [ pA^T(uM) ]          [ pC^T(uM) ]

PSA = [ pA^T(s1) ]    PSC = [ pC^T(s1) ]
      [    ⋮     ]          [    ⋮     ]
      [ pA^T(sN) ]          [ pC^T(sN) ]    (3)


The matrices PSA and PUC are composed of the distributions pA(sn) and the distributions pC(um), respectively, written as row vectors. The distributions pA(um) and pC(sn) that form the row vectors of the matrices PUA and PSC are the projections of the distributions in PSA and PUC, respectively, under the linear embedding (2).


Although P is an (M+N)×(|A|+|C|) matrix, it bears a specific relationship to the matrix X that implies that if the 0 matrix is the only solution for X, then the 0 matrix is the only solution for P. The columns of P must have the columns of X as a basis, and therefore the column space has dimension M+N at most. If X does not exist, then the null space of YTY−1 has dimension M+N and P must be the 0 matrix if H is not the identity matrix.


Conversely, if X exists, even though a non-zero P that meets the row-scaling constraints on P in (3) may not exist, a non-zero






PR = r^−1 [X | X | . . . | X]


composed of






r = ⌈(|A|+|C|)/(M+N)⌉


replications of X that meets the row-scaling constraints does exist. From this we deduce that an entire subspace of matrices PR exists. A P with |A|+|C| columns selected from any matrix in this subspace and rows re-normalized to meet the row-scaling constraints may be a sufficient approximation for many applications.


Embedding algorithms, including the adsorption algorithm, are learning methods for a class of recommender algorithms. The key idea behind the adsorption algorithm, that similar item nodes will have similar component metric vectors pA(sn), does provide the basis for an adsorption-based recommendation algorithm. The component metrics pA(sn) can be approximated by several rounds of an iterative MapReduce computation with run-time O(M+N). The component metrics may then be compared to develop lists of similar items. If these comparisons are limited to a fixed-sized neighborhood, they can be easily parallelized as a MapReduce computation with run-time O(N). The resulting lists are then used by the recommender to generate recommendations.
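As an illustration only, the following is a minimal single-machine sketch of this idea in Python, assuming a small dense visitation matrix in place of a MapReduce cluster; the array names and toy data are hypothetical. Attribute distributions are propagated through the user-item graph for a few rounds and the resulting component metrics are compared to produce fixed-size similar-item lists.

import numpy as np

def adsorption_similar_items(h_us, p_sa, rounds=5, k=3):
    # Row-normalize the user-item and item-user weight matrices.
    w_us = h_us / h_us.sum(axis=1, keepdims=True)
    w_su = h_us.T / h_us.T.sum(axis=1, keepdims=True)
    for _ in range(rounds):
        p_ua = w_us @ p_sa                    # users inherit their items' attribute mix
        p_sa = w_su @ p_ua                    # items inherit their users' attribute mix
        p_sa = p_sa / p_sa.sum(axis=1, keepdims=True)
    # Compare component metrics (cosine similarity) to build similar-item lists.
    unit = p_sa / np.linalg.norm(p_sa, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)
    return np.argsort(-sim, axis=1)[:, :k]    # top-k neighbors per item

# Toy data: 4 users x 5 items visitation counts; items 0 and 2 carry seed attributes.
h = np.array([[3., 1., 0., 0., 1.],
              [0., 2., 4., 0., 0.],
              [1., 0., 0., 5., 2.],
              [0., 0., 1., 1., 3.]])
p0 = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9], [0.5, 0.5], [0.5, 0.5]])
print(adsorption_similar_items(h, p0))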


Model-Based Algorithms


Memory-based solutions to the recommender problem may be adequate for many applications. As shown here though, they can be awkward and have weak mathematical foundations. The memory-based recommender adsorption algorithm proceeds from the simple concept that the items a user might find interesting should display some consistent set of properties, characteristics, or attributes and the users to whom an item might appeal should have some consistent set of properties, characteristics, or attributes. Equation (3) compactly expresses this concept. Model-based solutions can offer more principled and mathematically sound grounds for solutions to the recommender problem. The model-based solutions of interest here represent the recommender problem with the full graph USF that includes the octagonal factor nodes shown in FIG. 1(a).


Explicit Classification In Collaborative Filters


To further clarify the conceptual difference between the particular family of memory-based algorithms that we describe above and the particular family of model-based algorithms that we describe below, we focus on how each algorithm classifies users and items. The family of adsorption algorithms we discuss above explicitly computes vectors of probabilities pC(u) and pA(s) that describe how much the interests in set C apply to user u and the attributes in set A apply to item s, respectively. These probability vectors implicitly define communities of users and items, which a specific implementation may make explicit by computing similarities between users and between items in a post-processing step.


Recommenders incorporating model-based algorithms explicitly classify users and items into latent clusters or groupings, represented by the octagonal factor nodes Z={z1, . . . , zK} in FIG. 1(b), which match user communities with the item collections of interest to the factor zk. The degree to which user um and item sn belong to factor zk is explicitly computed, but generally no other descriptions of the properties of users and items, corresponding to the probability vectors in the adsorption algorithms that can be used to compute similarities, are explicitly computed. The relative importance of the interests in C for similar users and the relative importance of the attributes in A for similar items can be implicitly inferred from the characteristic descriptions for users and items in the factors zk.


Probabilistic Latent Semantic Indexing Algorithms


A recommender may implement a user-item co-occurrence algorithm from the family of probabilistic latent semantic indexing (PLSI) recommendation algorithms. This family also includes versions that incorporate ratings. In simplest terms, given T user-item data pairs D={(um1, sn1), . . . , (umT, snT)}, the recommender estimates a conditional probability distribution Pr(s|u, θ) that maximizes a parametric maximum likelihood estimator (PMLE)








R̂(θ) = Π_{(u,s)∈D} Pr(s|u, θ) = Π_{u∈U} Π_{s∈S} Pr(s|u, θ)^bus

where bus is the number of occurrences of the user-item pair (u, s) in the input data set. Maximizing the PMLE is equivalent to minimizing the empirical logarithmic loss function










R(θ) = −(1/T) log R̂(θ) = −(1/T) Σ_{u∈U} Σ_{s∈S} bus log Pr(s|u, θ)    (4)

The PLSI algorithm treats users um and items sn as distinct states of a user variable u and an item variable s, respectively. A factor variable z with the factors zk as states is associated with each user and item pair, so that the input actually consists of triples (um, sn, zk), where zk is a hidden data value such that the user variable u conditioned on z and the item variable s conditioned on z are independent and











Pr(z|u, s) Pr(s|u) Pr(u) = Pr(u, s|z) Pr(z)
                         = Pr(s|z) Pr(u|z) Pr(z)
                         = Pr(s|z) Pr(z|u) Pr(u)
                         = Pr(s, z|u) Pr(u)


The conditional probability Pr(s|u, θ), which describes how likely item s ∈ S is to be of interest to user u ∈ U, then satisfies the relationship










Pr(s|u, θ) = Σ_{z∈Z} Pr(s|z) Pr(z|u)    (5)

The parameter vector θ is just the conditional probabilities Pr(z|u) that describe how much user u's interests correspond to factor z ∈ Z and the conditional probabilities Pr(s|z) that describe how likely item s is to be of interest to users associated with factor z. The full data model is Pr(s, z|u)=Pr(s|z) Pr(z|u) with a loss function











R̄(θ) = −(1/T) Σ_{(u,s,z)∈D} log Pr(s, z|u)
      = −(1/T) Σ_{(u,s,z)∈D} [log Pr(s|z) + log Pr(z|u)]    (6)

where the input data D actually consists of triples (u, s, z) in which z is hidden. Using Jensen's Inequality and (5), we can derive an upper bound on R(θ) as










R(θ) = −(1/T) Σ_{(u,s)∈D} log Σ_{z∈Z} Pr(s|z) Pr(z|u)
     ≤ −(1/T) Σ_{(u,s)∈D} Σ_{z∈Z} [log Pr(s|z) + log Pr(z|u)]    (7)


Combining (6) and (7), we see that both R(θ) and R̄(θ) are bounded above by

−(1/T) Σ_{(u,s)∈D} Σ_{z∈Z} [log Pr(s|z) + log Pr(z|u)]


Unlike the Latent Semantic Indexing (LSI) algorithm, which estimates a single optimal zk for every pair (um, sn), the PLSI algorithm [5], [6] estimates the probability of each state zk for each (um, sn) by computing the conditional probabilities in (5) with, for example, an Expectation Maximization (EM) algorithm as we describe below. The upper bound (7) on R(θ) can be re-expressed as













F(Q) = −(1/T) Σ_{(u,s)∈D} Σ_{z∈Z} Q(z|u, s, θ) {[log Pr(s|z) + log Pr(z|u)] − log Q(z|u, s, θ)}
     = R(θ, Q) + (1/T) Σ_{(u,s)∈D} Σ_{z∈Z} Q(z|u, s, θ) log Q(z|u, s, θ)    (8)

where Q(z|u, s, θ) is a probability distribution. The PLSI algorithm may minimize this upper bound by expressing the optimal Q*(z|u, s, θ) in terms of the components Pr(s|z) and Pr(z|u) of θ, and then finding the optimal values for these conditional probabilities.


E-step: The “Expectation” step computes the optimal Q*(z|u, s, θ−)+=Pr(z|u, s, θ−) that minimizes F(Q), taking as the values of θ for this iteration the values θ− produced by the M-step of the previous iteration












Q*(z|u, s, θ−)+ = Pr(s|z)− Pr(z|u)− / Pr(s|u)−
                = Pr(s|z)− Pr(z|u)− / Σ_{z∈Z} Pr(s|z)− Pr(z|u)−    (9)


M-step: The “Maximization” step then computes new values for the conditional probabilities θ+={Pr(s|z), Pr(z|u)} that minimize R(θ, Q) directly from the Q*(z|u, s, θ)+ values from the E-step as











Pr(s|z)+ = [Σ_{(u,s)∈D(·,s)} Q*(z|u, s, θ−)+] / [Σ_{(u,s)∈D} Q*(z|u, s, θ−)+]    (10)

Pr(z|u)+ = [Σ_{(u,s)∈D(u,·)} Q*(z|u, s, θ−)+] / [Σ_{z∈Z} Σ_{(u,s)∈D(u,·)} Q*(z|u, s, θ−)+]    (11)


where D(u, ·) and D(·, s) denote the subsets of D for user u and item s, respectively.


Since Q*(z|u, s, θ) results in the optimal upper bound on the minimum value of R(θ), and the second component of the expression (8) for F(Q) does not depend on θ, these values for the conditional probabilities θ={Pr(s|z), Pr(z|u)} are the optimal estimates we seek.1 The new values for the conditional probabilities θ+={Pr(s|z)+, Pr(z|u)+} that maximize Q*(z|u, s, θ), and therefore minimize R(θ, Q), are then computed. 1 It happens that the adsorption algorithm of the memory-based recommender we describe above can be viewed as a degenerate EM algorithm. The loss function to be minimized is R(X)=X−HX. There is no E-step because there are no hidden variables, and the M-step is just the computation of the matrix X of point probabilities that satisfies (2).
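For concreteness, a small in-memory sketch of the E-step (9) and M-step (10)-(11) in Python follows; the pair list, array names, and factor count are illustrative assumptions rather than part of the specification, and a production implementation would instead distribute these sums as a Map-Reduce computation as discussed next.

import numpy as np

def plsi_em(pairs, n_users, n_items, n_factors, iters=50, seed=0):
    # Randomly initialize the conditionals Pr(s|z) and Pr(z|u).
    rng = np.random.default_rng(seed)
    p_s_z = rng.random((n_items, n_factors))
    p_s_z /= p_s_z.sum(axis=0)
    p_z_u = rng.random((n_factors, n_users))
    p_z_u /= p_z_u.sum(axis=0)
    for _ in range(iters):
        # E-step (9): Q*(z|u,s) proportional to Pr(s|z) Pr(z|u).
        q = np.empty((len(pairs), n_factors))
        for i, (u, s) in enumerate(pairs):
            w = p_s_z[s, :] * p_z_u[:, u]
            q[i] = w / w.sum()
        # M-step (10) and (11): re-estimate the conditionals from Q*.
        new_s_z = np.zeros_like(p_s_z)
        new_z_u = np.zeros_like(p_z_u)
        for i, (u, s) in enumerate(pairs):
            new_s_z[s, :] += q[i]
            new_z_u[:, u] += q[i]
        p_s_z = new_s_z / new_s_z.sum(axis=0, keepdims=True)
        p_z_u = new_z_u / new_z_u.sum(axis=0, keepdims=True)
    return p_s_z, p_z_u

# Toy training pairs; repeated (user, item) pairs stand in for the edge weights.
pairs = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 3), (2, 4), (3, 3), (3, 4)]
p_s_z, p_z_u = plsi_em(pairs, n_users=4, n_items=5, n_factors=2)
print(np.round(p_s_z, 3))
print(np.round(p_z_u, 3))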


One insight that might further understanding of how the EM algorithm minimizes the loss function R(θ, Q) with regard to a particular data set is that the EM iteration is only done for the pairs (umi, sni) that occur in the data, with the users u ∈ U, the items s ∈ S, and the number of factors z ∈ Z fixed at the start of the computation. Multiple occurrences of (um, sn), typically reflected in the edge weight function h(um, sn), are indirectly factored into the minimization by multiple iterations of the EM algorithm.2 To match the expected slow rate of increase in the number of users, but the relatively faster expected rate of increase in items, an implementation of the EM iteration as a Map-Reduce computation actually is an approximation that fixes the users U and the number of factors in Z in advance, but allows the number of items in S to increase. 2 Modifications to the model are presented in [6] that deal with potential over-fitting problems due to sparseness of the data set.


As new items are added, the approximate algorithm does not re-compute the probabilities Pr(s|z) by the EM algorithm. Instead, the algorithm keeps a count for each item sn in each factor zk, and increments the count for sn in each factor zk for which Pr(zk|um) is large, indicating user um has a strong probability of membership, for each item sn user um accesses. The counts for the sn in each factor zk are normalized to serve as the value Pr(sn|zk), rather than the formal value, in between re-computations of the model by the EM algorithm.
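As an illustrative sketch only (the threshold, array names, and data are assumptions, not from the specification), the interim counting scheme might look like this in Python:

import numpy as np

def update_item_counts(counts, p_z_u, events, n_factors, threshold=0.2):
    # For each access event, increment item s's count in every factor z
    # in which the accessing user u has a strong membership Pr(z|u).
    for u, s in events:
        row = counts.setdefault(s, np.zeros(n_factors))
        row[p_z_u[:, u] >= threshold] += 1.0
    return counts

def interim_p_s_z(counts, n_factors):
    # Normalize the accumulated counts per factor to stand in for Pr(s|z)
    # until the model is next re-computed by the full EM algorithm.
    items = sorted(counts)
    c = np.array([counts[s] for s in items])
    return items, c / np.maximum(c.sum(axis=0, keepdims=True), 1e-12)

# Pr(z|u) from the last full EM run (2 factors x 4 users) and new access events.
p_z_u = np.array([[0.9, 0.1, 0.6, 0.2],
                  [0.1, 0.9, 0.4, 0.8]])
events = [(0, 7), (1, 8), (2, 7), (3, 8)]      # items 7 and 8 are newly added
items, approx = interim_p_s_z(update_item_counts({}, p_z_u, events, 2), 2)
print(items, np.round(approx, 2))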


Like the adsorption algorithm, the EM algorithm is a learning algorithm for a class of recommender algorithms. Many recommenders are continuously trained from the sequence of user-item pairs (umi, sni). The values of Pr(s|z) and Pr(z|u) are used to compute factors zk linking user communities and item collections that can be used in a simple recommender algorithm. The specific factors zk associated with the user communities for which user u has the most affinity are identified from Pr(z|u), and recommended items s are then selected from those item collections most associated with those communities based on the values Pr(s|z).
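A minimal sketch of that selection step, assuming dense arrays for Pr(z|u) and Pr(s|z) and arbitrary illustrative cut-offs:

import numpy as np

def recommend(u, p_z_u, p_s_z, seen, top_factors=2, top_items=3):
    # Identify the factors (user communities / item collections) with the most
    # affinity for user u, then score items drawn from those collections.
    top_z = np.argsort(-p_z_u[:, u])[:top_factors]
    scores = p_s_z[:, top_z] @ p_z_u[top_z, u]     # Σ_z Pr(s|z) Pr(z|u) over the top factors
    scores[list(seen)] = -np.inf                   # do not re-recommend known items
    return np.argsort(-scores)[:top_items]

p_s_z = np.array([[0.4, 0.0], [0.3, 0.1], [0.2, 0.1], [0.1, 0.4], [0.0, 0.4]])
p_z_u = np.array([[0.8, 0.3, 0.1, 0.5],
                  [0.2, 0.7, 0.9, 0.5]])
print(recommend(0, p_z_u, p_s_z, seen={0, 1}))    # recommendations for user 0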


A Classification Algorithm With Prescribed Constraints


In an embodiment, an alternate data model for user-item pairs and a nonparametric empirical likelihood estimator (NPMLE) for the model can serve as the basis for a model-based recommender. Rather than estimate the solution for a simple model for the data, the proposed estimator actually admits additional assumptions about the model that in effect specify the family of admissible models, and it also incorporates ratings more naturally. The NPMLE can be viewed as a nonparametric classification algorithm which can serve as the basis for a recommender system. We first describe the data model and then detail the nonparametric empirical likelihood estimator.


A User Community and Item Collection Constrained Data Model



FIG. 1(a) conceptually represents a generalized data model. In this embodiment, however, we assume the input data set consists of three bags of lists:

    • 1. a bag D of lists Hi={(ui*, si1, hi1), . . . , (ui*, sin, hin)} of triples, where hin is a rating that user ui* implicitly or explicitly assigns item sin,
    • 2. a bag ε of user communities εl={ul1, . . . , ulm}, and
    • 3. a bag F of item collections Fk={sk1, . . . , skn}.


By accepting input data in the form of lists, we seek to endow the model with knowledge about the complementary and substitute nature of items gained from user lists and item collections, and with knowledge about user relationships. For data sources that only produce triples (u, s, h), we assume the set of lists that capture this information about complementary or substitute items can be built by selecting lists of triples from an accumulated pool based on relevant shared attributes. The most important of these attributes would be the context in which the items were selected or experienced by the user, such as a defined (short) temporal interval.
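One possible way to assemble such lists from a pool of raw (user, item, rating, timestamp) triples, grouping each user's selections that fall within a shared temporal context; the 30-minute window and the tuple layout are illustrative assumptions:

from collections import defaultdict

def build_lists(events, window_seconds=1800):
    # Group (user, item, rating, timestamp) events into per-user lists of
    # (user, item, rating) triples whose timestamps share a short interval.
    by_user = defaultdict(list)
    for user, item, rating, ts in events:
        by_user[user].append((ts, item, rating))
    lists = []
    for user, rows in by_user.items():
        rows.sort()
        current, last_ts = [], None
        for ts, item, rating in rows:
            if last_ts is not None and ts - last_ts > window_seconds:
                lists.append(current)
                current = []
            current.append((user, item, rating))
            last_ts = ts
        if current:
            lists.append(current)
    return lists

events = [("u1", "s1", 1.0, 0), ("u1", "s2", 0.5, 600), ("u1", "s3", 1.0, 90000),
          ("u2", "s2", 1.0, 100), ("u2", "s4", 1.0, 200)]
print(build_lists(events))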


A useful data model should include an alternate approach to identifying factors that reflects the complementary or substitute nature of items inferred from the user lists D and item collections F, as well as the perceived value of recommendations based on a user's social or other relationships inferred from the user communities ε, as approximately represented by the graph GHEF depicted in FIG. 2.


As for the PLSI model with ratings, our goal is to estimate the distribution Pr(h, s|S, u) given the observed data D, ε, and F. Because user ratings may not be available for a given user in a particular application, we re-express this distribution as






Pr(h,s|S,u)=Pr(h|s,S,u)Pr(s|S,u)   (12)


where S={sn1, . . . , snj} is a set of seed items, and we design our data model to support estimation of Pr(s|S, u) and Pr(h|s, S, u) as separate sub-problems. The observed data has the generative conditional probability distribution










Pr(D|ε, F) = Pr(D, ε, F) / Pr(ε, F)    (13)

To formally relate these two distributions, we first define the set D(U, S, H) ⊂ D of lists that include any triple (u, s, h) ∈ U×S×H, and let S be a set of seed items. Then







Pr(s|S, u) = Pr(s, S|u) / Pr(S|u) = Pr(s, S, u) / Pr(S, u)
           = [Σ_{Hl∈D({u}, {s}∪S, H)} Pr(Hl|ε, F)] / [Σ_{Hl∈D({u}, S, H)} Pr(Hl|ε, F)]

Pr(h|s, S, u) = Pr(h, s|S, u) / Pr(s|S, u) = Pr(h, s, S, u) / Pr(s, S, u)
              = [Σ_{Hl∈D({u}, {s}∪S, h)} Pr(Hl|ε, F)] / [Σ_{Hl∈D({u}, {s}∪S, H)} Pr(Hl|ε, F)]


The primary task then is to derive a data model for D and to estimate the parameters of that model to maximize the probability









R = Π_l Π_i Π_j Pr(Hl, εi, Fj)
  = Π_l Π_i Π_j Pr(Hl|εi, Fj) Pr(εi) Pr(Fj)    (14)

given the observed data D, ε, and F.


Estimating the Recommendation Conditionals


As a practical approach to maximizing the probability R, we first focus on estimating Pr(s|S, u) by maximizing Pr(s, S, u) for the data sets D, ε, and F. We do this by introducing latent variables y and z such that







Pr(s, S, u) = Σ_{z∈Z} Σ_{y∈Y} Pr(s, S, u, z, y)

so we can express the joint probability Pr(s, S, u) in terms of independent conditional probabilities. We assume that s, S, and y are conditionally independent with respect to z, and that u and z are conditionally independent with respect to y






Pr(s, S, y|z) = Pr(s, S|z) Pr(y|z) = Pr(s, S|y, z) Pr(y|z)    Pr(u, z|y) = Pr(u|y) Pr(z|y) = Pr(u|z, y) Pr(z|y)


We can then rewrite the joint probability














Pr(s, S, u, y, z) = Pr(s, S, z, y|u) Pr(u)
                  = Pr(z, y|s, S, u) Pr(s, S|u) Pr(u)
as











Pr(z, y|s, S, u) Pr(s, S|u) Pr(u) = Pr(u, s, S|z, y) Pr(z, y)
                                  = Pr(s, S|z, y) Pr(u|z, y) Pr(z, y)
                                  = Pr(s, S|z, y) Pr(z|y, u) Pr(y|u) Pr(u)
                                  = Pr(s, S|z) Pr(z|y) Pr(y|u) Pr(u)
                                  = Pr(s|z) [Π_{s′∈S} Pr(s′|z)] Pr(z|y) Pr(y|u) Pr(u)    (15)
Finally, we can derive an expression for Pr(s|S, u) by first summing (15) over z and y to compute the marginal Pr(s, S, u) and factoring out Pr(u)










Pr(s, S|u) = Σ_{z∈Z} Σ_{y∈Y} Pr(s|z) [Π_{s′∈S} Pr(s′|z)] Pr(z|y) Pr(y|u)    (16)



and then expanding the conditional as










Pr(s|S, u) = [Σ_{z∈Z} Σ_{y∈Y} Pr(s|z) (Π_{s′∈S} Pr(s′|z)) Pr(z|y) Pr(y|u)]
             / [Σ_{z∈Z} Σ_{y∈Y} (Π_{s′∈S} Pr(s′|z)) Pr(z|y) Pr(y|u)]    (17)



Equation (16) expresses the distribution Pr(s, S|u) in terms of three independent distributions. The conditional distribution Pr(s|z) expresses the probability that item s is a member of the latent item collection z. The conditional distribution Pr(y|u) similarly expresses the probability that the latent user community y is representative for user u. Finally, the probability that items in collection z are of interest to users in community y is specified by the distribution Pr(z|y). We compose these relationships between users and items into the full data model represented by the graph GUCIC shown in FIG. 3. We describe next how these distributions can be estimated from the input item collections F, the user communities ε, and the user lists D, respectively, using variants of the expectation maximization algorithm.
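To make the scoring concrete, a direct (if naive) evaluation of (17) is sketched below, assuming the three conditionals are available as small dense arrays indexed [item, collection], [collection, community], and [community, user]; all names and values are illustrative.

import numpy as np

def p_s_given_seeds_and_user(p_s_z, p_z_y, p_y_u, seeds, u):
    # Score every candidate item s by Pr(s|S,u) as in equation (17).
    seed_factor = np.prod(p_s_z[list(seeds), :], axis=0)   # Π_{s'∈S} Pr(s'|z), per z
    weight = p_z_y @ p_y_u[:, u]                           # Σ_y Pr(z|y) Pr(y|u), per z
    numer = (p_s_z * (seed_factor * weight)).sum(axis=1)   # numerator of (17), per s
    denom = (seed_factor * weight).sum()                   # denominator of (17)
    return numer / denom

p_s_z = np.array([[0.5, 0.1], [0.3, 0.1], [0.1, 0.3], [0.1, 0.5]])   # Pr(s|z)
p_z_y = np.array([[0.8, 0.2], [0.2, 0.8]])                           # Pr(z|y)
p_y_u = np.array([[0.9, 0.3], [0.1, 0.7]])                           # Pr(y|u)
print(np.round(p_s_given_seeds_and_user(p_s_z, p_z_y, p_y_u, seeds={0}, u=0), 3))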


User Community and Item Collection Conditionals


The estimation problem for the user community conditional distribution Pr(y|u) and for the item collection conditional distribution Pr(s|z) is essentially the same. They are both computed from lists that imply some relationship between the users or items on the lists that is germane to making recommendations. Given the set ε of lists of users and the set F of lists of items, we can compute the conditionals Pr(y|u) and Pr(s|z) in several ways.


One very simple approach is to match each user community εl with a latent factor yl and each item collection Fk with a latent factor zk. The conditionals could be the uniform distributions







Pr(yl|u) = 1 / |{εl | u ∈ εl}|    Pr(s|zk) = 1 / |Fk|

While this approach is easily implemented, it potentially results in a large number of user community factors y ∈ Y and item collection factors z ∈ Z. Estimating Pr(z|y) is a correspondingly large computation task. Also, recommendations cannot be made for users in a community εl if D does not include a list for at least one user in εl. Similarly, items in a collection Fk cannot be recommended if no item in Fk occurs on a list in D.


Another approach is simply to use the previously described EM algorithm to derive the conditional probabilities. For each list εl in ε we can construct M2 pairs (u, v) ∈ εl×εl.3 We can also construct N2 pairs (t, s) ∈ Fk×Fk from each item collection Fk in F. We can then estimate the pairs of conditional probabilities Pr(v|y), Pr(y|u) and Pr(s|z), Pr(z|t) using the EM algorithm. For Pr(v|y) and Pr(y|u) we have 3 If u and v are two distinct members of εl, we would construct the pairs (u, v), (v, u), (u, u), and (v, v).
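A tiny sketch of that pair construction for the user communities, following the convention in the footnote that ordered pairs and self-pairs are both included; item-item pairs from the collections Fk would be built the same way.

from itertools import product

def community_pairs(communities):
    # Build the bag of ordered user co-occurrence pairs (u, v), including
    # self-pairs, from each community list (this is the bag Dε used below).
    pairs = []
    for members in communities:
        pairs.extend(product(members, members))
    return pairs

print(community_pairs([["u1", "u2"], ["u2", "u3", "u4"]]))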


E-Step:












Q*(y|u, v, θ−)+ = Pr(v|y)− Pr(y|u)− / Σ_{y∈Y} Pr(v|y)− Pr(y|u)−    (18)


M-Step:











Pr(v|y)+ = [Σ_{(u,v)∈Dε(·,v)} Q*(y|u, v, θ−)+] / [Σ_{(u,v)∈Dε} Q*(y|u, v, θ−)+]    (19)

Pr(y|u)+ = [Σ_{(u,v)∈Dε(u,·)} Q*(y|u, v, θ−)+] / [Σ_{y∈Y} Σ_{(u,v)∈Dε(u,·)} Q*(y|u, v, θ−)+]    (20)



where Dε is the collection of all co-occurrence pairs (u, v) constructed from all lists εl ∈ ε, and Dε(u, ·) and Dε(·, v) denote the subsets of such pairs with the specified user u as the first member and the specified user v as the second member, respectively. Similarly, for Pr(s|z) and Pr(z|t) we have


E-Step:












Q*(z|t, s, ψ−)+ = Pr(s|z)− Pr(z|t)− / Σ_{z∈Z} Pr(s|z)− Pr(z|t)−    (21)
M-Step:











Pr(s|z)+ = [Σ_{(t,s)∈DF(·,s)} Q*(z|t, s, ψ−)+] / [Σ_{(t,s)∈DF} Q*(z|t, s, ψ−)+]    (22)

Pr(z|t)+ = [Σ_{(t,s)∈DF(t,·)} Q*(z|t, s, ψ−)+] / [Σ_{z∈Z} Σ_{(t,s)∈DF(t,·)} Q*(z|t, s, ψ−)+]    (23)
While the preceding two approaches may be adequate for many applications, neither explicitly incorporates incremental addition of new input data. The iterative computations (18), (19), (20) and (21), (22), (23) assume the input data set is known and fixed at the outset. As we noted above, some recommenders incorporate new input data in an ad hoc fashion. We can extend the basic PLSI algorithm to more effectively incorporate sequential input data, giving another approach to computing the user community and item collection conditionals.


Focusing first on the conditionals Pr(v|y) and Pr(y|u), there are several ways we could incorporate sequential input data into an EM algorithm for computing time-varying conditionals Pr(v|y; τn)+, Pr(y|u; τn)+, and Q*(y|u, v, θ; τn)+. We describe only one simple method here, in which we also gradually de-emphasize older data as we incorporate new data. We first define two time-varying co-occurrence matrices ΔE(τn) and ΔF(τn) of the data pairs received since time τn−1 with elements





Δevu(τn) = |{(u, v) | (u, v) ∈ Dε(τn) − Dε(τn−1)}|    Δfst(τn) = |{(t, s) | (t, s) ∈ DF(τn) − DF(τn−1)}|


We then add two additional initial steps to the basic EM algorithm so that the extended computation consists of four steps. The first two steps are done only once before the E and M steps are iterated until the estimates for Pr(v|y; τn) and Pr(y|u; τn) converge:


W-Step: The initial “Weighting” step computes an appropriate weighted estimate for the co-occurrence matrix E(τn). The simplest method for doing this is to compute a suitably weighted sum of the older data with the latest data






E(τn) = αε E(τn−1) + βε ΔE(τn)    (25)


This difference equation has the solution







E(τn) = βε Σ_{i=0}^{n} αε^(n−i) ΔE(τi)


Equation (25) is just a scaled discrete integrator for αε=1. Choosing 0≤αε<1 and setting βε=1−αε gives a simple linear estimator for the mean value of the co-occurrence matrix that emphasizes the most recent data.
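A small numeric sketch of the W-step (25); the decay constant is an arbitrary illustration.

import numpy as np

def w_step(e_prev, delta_e, alpha=0.8):
    # E(τn) = αε E(τn−1) + βε ΔE(τn), with βε = 1 − αε: an exponentially
    # weighted running estimate that emphasizes the most recent pairs.
    return alpha * e_prev + (1.0 - alpha) * delta_e

e = np.zeros((3, 3))                              # user-user co-occurrence counts
for delta in [np.eye(3), np.ones((3, 3)), 2 * np.eye(3)]:
    e = w_step(e, delta)                          # fold in each new batch of pairs
print(np.round(e, 3))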


I-Step: In the next “Input” step, the estimated co-occurrence data is incorporated in the EM computation. This can be done in multiple ways; one straightforward approach is to adjust the starting values for the EM phase of the algorithm by re-expressing the M-step computations (19) and (20) in terms of E(τn), and then re-estimating the conditionals Pr(v|y; τn)− and Pr(y|u; τn)− at time τn











Pr(v|y; τn)− = [Σ_u evu(τn) Q*(y|u, v, θ−; τn−1)+] / [Σ_v Σ_u evu(τn) Q*(y|u, v, θ−; τn−1)+]    (26)

Pr(y|u; τn)− = [Σ_v evu(τn) Q*(y|u, v, θ−; τn−1)+] / [Σ_{y∈Y} Σ_v evu(τn) Q*(y|u, v, θ−; τn−1)+]    (27)





E-Step: The EM iteration consists of the same E-step and M-step as the basic algorithm. The E-step computation is












Q*(y|u, v, θ−; τn)+ = Pr(v|y; τn)− Pr(y|u; τn)− / Σ_{y∈Y} Pr(v|y; τn)− Pr(y|u; τn)−    (28)






M-step: Finally, the M-step computation is











Pr(v|y; τn)+ = [Σ_u evu(τn) Q*(y|u, v, θ−; τn)+] / [Σ_v Σ_u evu(τn) Q*(y|u, v, θ−; τn)+]    (29)

Pr(y|u; τn)+ = [Σ_v evu(τn) Q*(y|u, v, θ−; τn)+] / [Σ_{y∈Y} Σ_v evu(τn) Q*(y|u, v, θ−; τn)+]    (30)






Convergence of the EM iteration in this extended algorithm is guaranteed since this algorithm only changes the starting values for the EM iteration.
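Condensing the four steps, one full W/I/E/M update for Pr(v|y; τn) and Pr(y|u; τn) might be sketched as follows, assuming the weighted co-occurrence matrix and the previous conditionals fit in dense arrays; the I-step is approximated here by recomputing Q* from the previous conditionals, and the sizes, decay constant, and iteration count are illustrative.

import numpy as np

def adaptive_em_update(e_prev, delta_e, p_v_y, p_y_u, alpha=0.8, iters=20):
    e = alpha * e_prev + (1.0 - alpha) * delta_e       # W-step (25)

    def e_step(p_v_y, p_y_u):
        # E-step (28): Q*(y|u,v) proportional to Pr(v|y) Pr(y|u); shape (n_v, n_u, n_y).
        q = p_v_y[:, None, :] * p_y_u.T[None, :, :]
        return q / q.sum(axis=2, keepdims=True)

    def m_step(q):
        # M-step (29)-(30): weight Q* by the co-occurrence counts e_vu and normalize.
        w = e[:, :, None] * q
        num_v = w.sum(axis=1)                          # sum over u, shape (n_v, n_y)
        num_u = w.sum(axis=0)                          # sum over v, shape (n_u, n_y)
        return (num_v / num_v.sum(axis=0, keepdims=True),
                (num_u / num_u.sum(axis=1, keepdims=True)).T)

    p_v_y, p_y_u = m_step(e_step(p_v_y, p_y_u))        # I-step (26)-(27), approximated
    for _ in range(iters):                             # E/M iterations until convergence
        p_v_y, p_y_u = m_step(e_step(p_v_y, p_y_u))
    return e, p_v_y, p_y_u

rng = np.random.default_rng(0)
e0 = np.zeros((5, 4))
delta = rng.integers(1, 4, (5, 4)).astype(float)       # new co-occurrence counts
p_v_y = rng.random((5, 2)); p_v_y /= p_v_y.sum(axis=0)
p_y_u = rng.random((2, 4)); p_y_u /= p_y_u.sum(axis=0)
e1, p_v_y, p_y_u = adaptive_em_update(e0, delta, p_v_y, p_y_u)
print(np.round(p_v_y, 3))
print(np.round(p_y_u, 3))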


The extended algorithm for computing Pr(s|z) and Pr(z|t) is analogous to the algorithm for computing Pr(v|y) and Pr(y|u):


W-Step: Given input data ΔF(τn), the estimated co-occurrence data is computed as






F(τn) = αF F(τn−1) + βF ΔF(τn)    (31)


I-Step:











Pr(s|z; τn)− = [Σ_t fst(τn) Q*(z|t, s, ψ−; τn−1)+] / [Σ_s Σ_t fst(τn) Q*(z|t, s, ψ−; τn−1)+]    (32)

Pr(z|t; τn)− = [Σ_s fst(τn) Q*(z|t, s, ψ−; τn−1)+] / [Σ_{z∈Z} Σ_s fst(τn) Q*(z|t, s, ψ−; τn−1)+]    (33)






E-Step:












Q*(z|t, s, ψ−; τn)+ = Pr(s|z; τn)− Pr(z|t; τn)− / Σ_{z∈Z} Pr(s|z; τn)− Pr(z|t; τn)−    (35)






M-Step:











Pr(s|z; τn)+ = [Σ_t fst(τn) Q*(z|t, s, ψ−; τn)+] / [Σ_s Σ_t fst(τn) Q*(z|t, s, ψ−; τn)+]    (36)

Pr(z|t; τn)+ = [Σ_s fst(τn) Q*(z|t, s, ψ−; τn)+] / [Σ_{z∈Z} Σ_s fst(τn) Q*(z|t, s, ψ−; τn)+]    (37)




Association Conditionals


Once we have estimates for Pr(s|z; τn) and Pr(y|u; τn), we can derive estimates for the association conditionals Pr(z|y; τn) expressing the probabilistic relationships between the user communities y ∈ Y and item collections z ∈ Z. These estimates must be derived from the lists D, since this is the only observed data that relates users and items. A key simplifying assumption in the model we build here is that










Pr(s, S|z) = Pr(s|z) Π_{s′∈S} Pr(s′|z)    (39)
Appendix C presents a full derivation of the E-step (49) and M-step (53) of the basic EM algorithm for estimating Pr(z|y). Defining the list of seeds S in the triples (u, s, S) is needed for the M-step computation. In some cases, the seeds S could be independent and supplied with the list. For these cases, the input data from the user lists would be






Hi = {(ui*, si1, S), . . . , (ui*, sin, S)}    (40)


In other cases, the seeds might be inferred from the items in the user list Hi itself. These could be just the items preceding each item in the list so that the input data would be






Hi = {(ui*, si1, Si1=∅), (ui*, si2, Si2={si1}), . . . , (ui*, sin, Sin={si1, . . . , si,n−1})}    (41)


The seeds for each (u, s) pair in the list could also be every other item in the list, in this case







Hi = {(ui*, si1, Si1=S−{si1}), . . . , (ui*, sin, Sin=S−{sin})}    (42)
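For illustration, the three seed-assignment options (40), (41), and (42) might be realized for a single user list as follows; the list format (a user plus the items in order) is an assumed simplification.

def seeds_supplied(user, items, S):
    # (40): an independently supplied seed set S is attached to every item.
    return [(user, s, set(S)) for s in items]

def seeds_preceding(user, items):
    # (41): the seeds for each item are the items that precede it in the list.
    return [(user, s, set(items[:i])) for i, s in enumerate(items)]

def seeds_all_others(user, items):
    # (42): the seeds for each item are all the other items on the list.
    return [(user, s, set(items) - {s}) for s in items]

print(seeds_preceding("u1", ["s1", "s2", "s3"]))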


As we did for the user community conditional Pr(y|u) and the item collection conditional Pr(s|z), we can also extend this EM algorithm to incorporate sequential input data. However, instead of forming data matrices, we define two time-varying data lists ΔD(τn) and ΔA(τn) from the bag of lists D(τn)





ΔD(τn) = {(u, s, S, h) | (u, s, h) ∈ Hi, Hi ∈ D(τn) − D(τn−1)}    ΔA(τn) = {(u, s, S, 1) | (u, s, S, h) ∈ ΔD(τn)}


where the seeds S for each item are computed by one of the methods (40), (41), (42) or any other desired method. We also note that ΔD(τn) and ΔA(τn) are bags, meaning they include an instance of the appropriate tuple for each instance of the defining tuple in the description. The extended EM algorithm for computing Pr(z|y; τn) then incorporates appropriate versions of the initial W-step and I-step computations into the basic EM computations:


W-Step: The weighting factors are applied directly to the list A(τn−1) and the new data list ΔA(τn) to create the new list






A(τn) = {(u, s, S, αa) | (u, s, S, a) ∈ A(τn−1)} ∪ {(u, s, S, βa) | (u, s, S, a) ∈ ΔA(τn)}    (43)


I-Step: The weighted data at time τn is incorporated into the EM computation via the weighting coefficient a from each tuple (u, s, S, a) to re-estimate Pr(z|y; τn−1)+ as Pr(z|y; τn)−











Pr(z|y; τn)− = [Σ_{(u,s,S,a)∈A(τn)} a Q*(z, y|s, S, u, φ−; τn−1)+]
               / [Σ_{z∈Z} Σ_{(u,s,S,a)∈A(τn)} a Q*(z, y|s, S, u, φ−; τn−1)+]    (44)


We note, however, that we may have Q*(z, y|s, S, u, φ−; τn−1)+=0 for tuples (u, s, S, a) that are in A(τn) but for which no corresponding tuple (u, s, S, a′) is in A(τn−1). This missing data is filled in by the first iteration of the following E-step.


E-Step:












Q*(z, y|s, S, u, φ−; τn)+ = [Pr(s|z; τn) (Π_{s′∈S} Pr(s′|z; τn)) Pr(y|u; τn)] Pr(z|y; τn)−
                            / [Σ_{z∈Z} Σ_{y∈Y} [Pr(s|z; τn) (Π_{s′∈S} Pr(s′|z; τn)) Pr(y|u; τn)] Pr(z|y; τn)−]    (45)


M-Step:











Pr(z|y; τn)+ = [Σ_{(u,s,S,a)∈A(τn)} a Q*(z, y|s, S, u, φ−; τn)+]
               / [Σ_{z∈Z} Σ_{(u,s,S,a)∈A(τn)} a Q*(z, y|s, S, u, φ−; τn)+]    (46)
Memory-based recommenders are not well suited to explicitly incorporating independent, a priori knowledge about user communities and item collections. One type of user community and item collection information is implicit in some model-based recommenders. However, some recommenders' data models do not provide the needed flexibility to accommodate notions of such clusters or groupings other than item selection behavior. In some recommenders, additional knowledge about item collections is incorporated in an ad hoc way via supplementary algorithms.


In an embodiment, the model-based recommender we describe above allows user community and item collection information to be specified explicitly as a priori constraints on recommendations. The probabilities that users in a community are interested in the items in a collection are independently learned from collections of user communities, item collections, and user selections. In addition, the system learns these probabilities by an adaptive EM algorithm that extends the basic EM algorithm to better capture the time-varying nature of these sources of knowledge. The recommender that we describe above is inherently massively scalable. It is well suited to implementation as a data-center scale Map-Reduce computation. The computations to produce the knowledge base can be run as an off-line batch operation, with only recommendations computed on-line in real time, or the entire process can be run as a continuous update operation. Finally, it is possible and practical to run multiple recommendation instances with knowledge bases built from different sets of user communities and item collections as a multi-criteria meta-recommender.


Exemplary Pseudo Code


Process: INFER_COLLECTIONS


Description:


To construct time-varying latent collections c1(τn), c2(τn), . . . , cK(τn), given a time-varying list D(τn) of pairs (ai, bj). The collections ck(τn) are implicitly specified by the probabilities Pr(ck|ai; τn) and Pr(bj|ck; τn). A compact illustrative sketch in Python follows the Notes below.


Input:

    • A) List D(τn).
    • B) Previous probabilities Pr(ck|ai; τn−1) and Pr(bj|ck; τn−1).
    • C) Previous conditional probabilities Q*(ck|ai, bj; τn−1).
    • D) Previous list E(τn−1) of triples (ai, bj, eij) representing weighted, accumulated input lists.


Output:

    • A) Updated probabilities Pr(ck|ai; τn) and Pr(bj|ck; τn).
    • B) Conditional probabilities Q*(ck|ai, bj; τn).
    • C) Updated list E(τn) of triples (ai, bj, eij) representing weighted, accumulated input lists.


Exemplary Method:

    • 1) (W-step) Create the updated list E(τn) incorporating the new pairs D(τn) into E(τn−1):
      • a) Let E(τn) be the empty list.
      • b) For each triple (ai, bj, eij) in E(τn−1), add (ai, bj, αeij) to E(τn).
      • c) For each pair (ai, bj) in D(τn):
        • i. If (ai, bj, eij) in E(τn), replace (ai, bj, eij) with (ai, bj, eij +β).
        • ii. Otherwise, add (ai, bj, β) to E(τn).
    • 2) (I-step) Initially re-estimate the probabilities Pr(ck|ai; τn) and Pr(bj|ck; τn) using E(τn) and the conditional probabilities Q*(ck|ai, bj; τn−1):
      • a) For each ck and each (ai, bj, eij) in E(τn), estimate Pr(bj|ck; τn):
        • i. Let PrN be the sum across ai′ of eij Q*(ck|ai′, bj; τn−1).
        • ii. Let PrD be the sum across ai′ and bj′ of eij Q*(ck|ai′, bj′; τn−1).
        • iii. Let Pr(bj|ck; τn)− be PrN/PrD.
      • b) For each ck and each (ai, bj, eij) in E(τn), estimate Pr(ck|ai; τn):
        • i. Let PrN be the sum across bj′ of eij Q*(ck|ai, bj′; τn−1).
        • ii. Let PrD be the sum across ck ′ and bj′ of eij Q*(ck′|ai, bj′; τn−1).
        • iii. Let Pr(ck|ai; τn) be PrN/PrD.
    • 3) (E-step) Estimate the new conditionals Q*(ck|ai, bj; τn):
      • a) For each ck and each (ai, bj, eij) in E(τn), estimate the conditional probability Q*(ck|ai, bj; τn):
        • i. Let Q*D be the sum across ck′ of Pr(bj|ck′; τn)Pr(ck′|ai; τn).
        • ii. Let Q*(ck|ai, bj; τn) be Pr(bj|ck; τn)Pr(ck|ai; τn)/Q*D.
    • 4) (M-step) Estimate the new probabilities Pr(ck|ai; τn)+ and Pr(bj|ck; τn)+:
      • a) For each ck and each (ai, bj, eij) in E(τn), estimate Pr(bj|ck; τn):
        • i. Let PrN be the sum across ai′ of eij Q*(ck|ai′, bj; τn).
        • ii. Let PrD be the sum across ai′ and bj′ of eij Q*(ck|ai′, bj′; τn).
        • iii. Let Pr(bj|ck; τn)+ be PrN/PrD.
      • b) For each ck and each (ai, bj, eij) in E(τn), estimate Pr(ck|ai; τn)+:
        • i. Let PrN be the sum across bj′ of eij Q*(ck|ai, bj′; τn).
        • ii. Let PrD be the sum across ck′ and bj′ of eij Q*(ck′|ai, bj′; τn).
        • iii. Let Pr(ck|ai; τn)+ be PrN/PrD.
    • 5) If |Pr(bj|ck; τn)−Pr(bj|ck; τn)+|>d or |Pr(ck|ai; τn)−Pr(ck|ai; τn)+|>d for a pre-specified d<<1, repeat E-step (3.) and M-step (4.) with Pr(bj|ck; τn)=Pr(bj|ck; τn)+ and Pr(ck|ai; τn)=Pr(ck|ai; τn)+.
    • 6) Return updated probabilities Pr(ck|ai; τn)=Pr(ck|ai; τn)+ and Pr(bj|ck; τn) =Pr(bj|ck; τn)+, along with conditional probabilities Q*(ck|ai, bj; τn), and updated list E(τn) of triples (ai, bj, eij).


Notes:

    • A) In one embodiment, α and β in the W-step (1. ) are assumed to be constants specified a priori.
    • B) In the I-step (2.), Q*(ck|ai, bj; τn−1)=0 if Q*(ck|ai, bj; τn−1) does not exist from the previous iteration.
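The following is a compact dense-matrix sketch of INFER_COLLECTIONS in Python, offered as an illustration only: it assumes the accumulated weights E(τn) fit in a single array indexed [ai, bj], folds the I-step into the first E/M pass, and omits the convergence test of step 5.

import numpy as np

def infer_collections(pairs, e_prev, p_c_a, p_b_c, alpha=0.8, beta=0.2, iters=25):
    # W-step: decay the accumulated pair weights and add the new pair counts.
    delta = np.zeros_like(e_prev)
    for a, b in pairs:
        delta[a, b] += 1.0
    e = alpha * e_prev + beta * delta
    for _ in range(iters):
        # E-step: Q*(c|a,b) proportional to Pr(b|c) Pr(c|a); shape (n_a, n_b, n_c).
        q = p_b_c[None, :, :] * p_c_a.T[:, None, :]
        q /= q.sum(axis=2, keepdims=True)
        # M-step: re-estimate the conditionals from the e-weighted Q*.
        w = e[:, :, None] * q
        p_b_c = w.sum(axis=0) / w.sum(axis=(0, 1))            # Pr(bj|ck), columns sum to 1
        num_a = w.sum(axis=1)                                 # shape (n_a, n_c)
        p_c_a = (num_a / num_a.sum(axis=1, keepdims=True)).T  # Pr(ck|ai), columns sum to 1
    return e, p_c_a, p_b_c

rng = np.random.default_rng(1)
e0 = np.zeros((4, 5))                                         # 4 a-elements, 5 b-elements
p_c_a = rng.random((2, 4)); p_c_a /= p_c_a.sum(axis=0)        # Pr(ck|ai)
p_b_c = rng.random((5, 2)); p_b_c /= p_b_c.sum(axis=0)        # Pr(bj|ck)
pairs = [(0, 0), (0, 1), (1, 1), (2, 3), (2, 4), (3, 3)]
e, p_c_a, p_b_c = infer_collections(pairs, e0, p_c_a, p_b_c)
print(np.round(p_c_a, 3))
print(np.round(p_b_c, 3))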


Process: INFER_ASSOCIATIONS


Description:


To construct time-varying association probabilities Pr(zk|yl; τn) between two sets of collections z1(τn), z2(τn), . . . , zK(τn) and y1(τn), y2(τn), . . . , yL(τn), given the probabilities Pr(yl|ui; τn) that the ui are members of the collections yl(τn), the probabilities Pr(sj|zk; τn) that the collections zk(τn) include the sj as members, and a time-varying list D(τn) of triples (ui, sj, So).


Input:

    • A) Probabilities Pr(yl|ui; τn) and Pr(sj|zk; τn).
    • B) List D(τn).
    • C) Previous probabilities Pr(zk|yl; τn−1).
    • D) Previous list E(τn−1) of 4-tuples (ui, sj, So, eijo) representing weighted, accumulated input lists.
    • E) Previous conditional probabilities Q*(zk, yl|ui, sj, So; τn−1).


Output:

    • A) Updated probabilities Pr(zk|yl; τn).
    • B) Updated list E(τn) of 4-tuples (ui, sj, So, eijo) representing weighted, accumulated input lists.
    • C) Conditional probabilities Q*(zk, yl|ui, sj, So; τn).


Exemplary Method:

    • 1) (W-step) Create the updated list E(τn) incorporating the new triples D(τn) into E(τn−1):
      • a) Let E(τn) be the empty list.
      • b) For each 4-tuple (ui, sj, So, eijo) in E(τn−1), add (ui, sj, So, αeijo) to E(τn).
      • c) For each triple (ui, sj, So) in D(τn):
        • i. If (ui, sj, So, eijo) in E(τn), replace (ui, sj, So, eijo) with (ui, sj, So, eijo+β).
        • ii. Otherwise, add (ui, sj, So, β) to E(τn).
    • 2) (I-step) Initially estimate the probabilities Pr(zk|yl; τn) using E(τn) and the conditional probabilities Q*(zk, yl|ui, sj, So; τn−1).
      • a) For each yl and zk, estimate Pr(zk|yl; τn):
        • i. Let PrN be the sum across ui, sj, and So of eijo Q*(zk,yl|ui, sj, So; τn−1).
        • ii. Let PrD be the sum across ui, sj, So and zk′ of eijo Q*(zk′, yl|ui, sj, So; τn−1).
        • iii. Let Pr(zk|yl; τn)− be PrN/PrD.
    • 3) (E-step) Estimate the new conditionals Q*(zk, yl|ui, sj, So; τn):
      • a) For each yl and zk, estimate the conditional probability Q*(zk, yl|ui, sj, So; τn):
        • i. Let Q*s be the total product of Pr(sj|zk; τn), the product across sj′ of Pr(sj′|zk; τn), and Pr(yl|ui; τn).
        • ii. Let Q*D be the sum across yl′ and zk′ of the corresponding Q*s Pr(zk′|yl′; τn).
        • iii. Let Q*(zk, yl|ui, sj, So; τn) be Q*s Pr(zk|yl; τn)/Q*D.
    • 4) (M-step) Estimate the new probabilities Pr(zk|yl; τn)+:
      • a) For each yl and zk, estimate Pr(zk|yl; τn)+:
        • i. Let PrN be the sum across ui, sj, and So of eijo Q*(zk, yl|ui, sj, So; τn).
        • ii. Let PrD be the sum across ui, sj, So and zk′ of eijo Q*(zk′, yl|ui, sj, So; τn).
        • iii. Let Pr(zk|yl; τn)+ be PrN/PrD.
    • 5) If, for any pair (zk, yl), |Pr(zk|yl; τn)−Pr(zk|yl; τn)+|>d for a pre-specified d<<1, and the E-step (3.) and M-step (4.) have not been repeated more than some number R times, repeat E-step (3.) and M-step (4.) with Pr(zk|yl; τn)=Pr(zk|yl; τn)+.
    • 6) If, for any pair (zk, yl), |Pr(zk|yl; τn)−Pr(zk|yl; τn)+|>d for a pre-specified d<<1, let Pr(zk|yl; τn)+ = [Pr(zk|yl; τn) + Pr(zk|yl; τn)+]/2.
    • 7) Return updated probabilities Pr(zk|yl; τn)=Pr(zk|yl; τn)+, along with conditional probabilities Q*(zk, yl|ui, sj, So; τn), and updated list E(τn) of 4-tuples (ui, sj, So, eijo).


Notes:

    • A) There potentially are combinations of triples (ui, sj, So) such that the process does not produce valid Pr(zk|yl; τn).
    • B) The α and β in the W-step (1.) are assumed to be constants specified a priori.
    • C) In the I-step (2.), Q*(zk, yl|ui, sj, So; τn−1)=0 if Q*(zk, yl|ui, sj, So; τn−1) does not exist from the previous iteration.


Process: CONSTRUCT_MODEL


Description:


To construct a model for time-varying lists Duv(τn) of user-user pairs (ui, vj), Dts(τn) of item-item pairs (ti, sj), and Dus(τn) of user-item triples (ui, sj, So) that groups users ui into user communities yl and items sj into item collections zk. The model is specified by the probabilities Pr(yl|ui; τn) that the ui are members of the communities yl(τn), the probabilities Pr(sj|zk; τn) that the collections zk(τn) include the sj as members, and the probabilities Pr(zk|yl; τn) that the communities yl(τn) are associated with the collections zk(τn).


Input:

    • A) Lists Duv(τn), Dts(τn), and Dus(τn).
    • B) Previous probabilities Pr(yl|ui; τn−1), Pr(zk|yl; τn−1), and Pr(sj|zk; τn−1).
    • C) Previous lists Euv(τn−1) of triples (ui, vj, eij), Ets(τn−1) of triples (ti, sj, eij), and Eus(τn−1) of 4-tuples (ui, sj, So, eijo) representing weighted, accumulated input lists.
    • D) Previous conditional probabilities Q*(yl|ui, vj; τn−1), Q*(zk|ti, sj; τn−1), and Q*(zk|ui, sj, So; τn−1).


Output:

    • A) Updated probabilities Pr(yl|ui; τn), Pr(zk|yl; τn), and Pr(sj|zk; τn).
    • B) Conditional probabilities Q*(yl|ui, vj; τn), Q*(zk|ti, sj; τn), and Q*(zk, yl|ui, sj, So; τn).
    • C) Updated lists Euv(τn) of triples (ui, vj, eij), Ets(τn) of triples (ti, sj, eij), and Eus(τn) of 4-tuples (ui, sj, So, eijo) representing weighted, accumulated input lists.


Exemplary Method:

    • 1) Construct user communities y1(τn), y2(τn), . . . , yL(τn) by the process INFER_COLLECTIONS.
      • Let Duv(τn), Pr(yl|ui; τn−1), Pr(vj|yl; τn−1), Q*(yl|ui, vj; τn−1), and Euv(τn−1) be the inputs D(τn), Pr(ck|ai; τn−1), Pr(bj|ck; τn−1), Q*(ck|ai, bj; τn−1), and E(τn−1), respectively.
      • Let Pr(yl|ui; τn), Pr(vj|yl; τn), Q*(yl|ui, vj; τn), and Euv(τn) be the outputs Pr(ck|ai; τn), Pr(bj|ck; τn), Q*(ck|ai, bj; τn), and E(τn), respectively.
    • 2) Construct item collections z1(τn), z2(τn), . . . , zK(τn) by the process INFER_COLLECTIONS.
      • Let Dts(τn), Pr(zk|ti; τn−1), Pr(sj|zk; τn−1), Q*(zk|ti, sj; τn−1), and Ets(τn−1) be the inputs D(τn), Pr(ck|ai; τn−1), Pr(bj|ck; τn−1), Q*(ck|ai, bj; τn−1), and E(τn−1), respectively.
      • Let Pr(zk|ti; τn), Pr(sj|zk; τn), Q*(zk|ti, sj; τn), and Ets(τn) be the outputs Pr(ck|ai; τn), Pr(bj|ck; τn), Q*(ck|ai, bj; τn), and E(τn), respectively.
    • 3) Estimate the associations between user communities and item collections by the process INFER_ASSOCIATIONS:
      • Let Pr(yl|ui; τn), Pr(sj|zk; τn), Dus(τn), Pr(zk|yl; τn−1), Eus(τn−1), and Q*(zk, yl|ui, sj, So; τn−1) be the inputs.
      • Let Pr(zk|yl; τn), Eus(τn), and Q*(zk, yl|ui, sj, So; τn) be the outputs.


Notes:

    • A) The process may optionally be initialized with estimates for the user communities and item collections, in the form of the probabilities Pr(yl|ui; τ−1), Pr(vj|yl; τ−1) and the probabilities Pr(zk|ti; τ−1), Pr(sj|zk; τ−1), and using the process INFER_COLLECTIONS without inputs Duv(τn) and Dts(τn) to re-estimate the probabilities Pr(yl|ui; τ−1), Pr(vj|yl; τ−1), Q*(yl|ui, vj; τ−1), and the probabilities Pr(zk|ti; τ−1), Pr(sj|zk; τ−1), Q*(zk|ti, sj; τ−1).
    • B) Alternatively, the estimated user communities and item collections may be supplemented with additional fixed user communities and item collections, in the form of fixed probabilities Pr(yl|ui; ·), Pr(sj|zk; ·), in the input to the INFER_ASSOCIATIONS process.


Exemplary System


The recommenders we describe above may be implemented on any number of computer systems, for use by one or more users, including the exemplary system 400 shown in FIG. 4. Referring to FIG. 4, the system 400 includes a general purpose or personal computer 402 that executes one or more instructions of one or more application programs or modules stored in system memory, e.g., memory 406. The application programs or modules may include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. A person of reasonable skill in the art will recognize that many of the methods or concepts associated with the above recommender, which we at times describe algorithmically, may be instantiated or implemented as computer instructions, firmware, or software in any of a variety of architectures to achieve the same or equivalent result.


Moreover, a person of reasonable skill in the art will recognize that the recommender we describe above may be implemented on other computer system configurations including hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, application-specific integrated circuits, and the like. Similarly, a person of reasonable skill in the art will recognize that the recommender we describe above may be implemented in a distributed computing system in which various computing entities or devices, often geographically remote from one another, perform particular tasks or execute particular instructions. In distributed computing systems, application programs or modules may be stored in local or remote memory.


The general purpose or personal computer 402 comprises a processor 404, memory 406, device interface 408, and network interface 410, all interconnected through bus 412. The processor 404 represents a single central processing unit or a plurality of processing units in one or more computers 402. The memory 406 may be any memory device including any combination of random access memory (RAM) or read only memory (ROM). The memory 406 may include a basic input/output system (BIOS) 406A with routines to transfer data between the various elements of the computer system 400. The memory 406 may also include an operating system (OS) 406B that, after being initially loaded by a boot program, manages all the other programs in the computer 402. These other programs may be, e.g., application programs 406C. The application programs 406C make use of the OS 406B by making requests for services through a defined application program interface (API). In addition, users can interact directly with the OS 406B through a user interface such as a command language or a graphical user interface (GUI) (not shown).


Device interface 408 may be any one of several types of interfaces including a memory bus, peripheral bus, local bus, and the like. The device interface 408 may operatively couple any of a variety of devices, e.g., hard disk drive 414, optical disk drive 416, magnetic disk drive 418, or the like, to the bus 412. The device interface 408 represents either one interface or various distinct interfaces, each specially constructed to support the particular device that it interfaces to the bus 412. The device interface 408 may additionally interface input or output devices 420 utilized by a user to provide direction to the computer 402 and to receive information from the computer 402. These input or output devices 420 may include keyboards, monitors, mice, pointing devices, speakers, styluses, microphones, joysticks, game pads, satellite dishes, printers, scanners, cameras, video equipment, modems, and the like (not shown). The device interface 408 may be a serial interface, parallel port, game port, FireWire port, universal serial bus, or the like.


The hard disk drive 414, optical disk drive 416, magnetic disk drive 418, or the like may include a computer readable medium that provides non-volatile storage of computer readable instructions of one or more application programs or modules 406C and their associated data structures. A person of skill in the art will recognize that the system 400 may use any type of computer readable medium accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, cartridges, RAM, ROM, and the like.


Network interface 410 operatively couples the computer 402 to one or more remote computers 402R on a local area network 422 or a wide area network 432. The computers 402R may be geographically remote from computer 402. A remote computer 402R may have the structure of computer 402, or may be a server, client, router, switch, peer device, network node, or other networked device, and typically includes some or all of the elements of computer 402. The computer 402 may connect to the local area network 422 through a network interface or adapter included in the interface 410. The computer 402 may connect to the wide area network 432 through a modem or other communications device included in the interface 410. The modem or communications device may establish communications to remote computers 402R through the global communications network 424. A person of reasonable skill in the art should recognize that application programs or modules 406C might be stored remotely through such networked connections.


We describe some portions of the recommender using algorithms and symbolic representations of operations on data bits within a memory, e.g., memory 406. A person of skill in the art will understand these algorithms and symbolic representations as most effectively conveying the substance of their work to others of skill in the art. An algorithm is a self-consistent sequence of steps leading to a desired result. The sequence requires physical manipulations of physical quantities. Usually, but not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. For expressive simplicity, we refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. The terms are merely convenient labels. A person of skill in the art will recognize that terms such as computing, calculating, determining, displaying, or the like refer to the actions and processes of a computer, e.g., computers 402 and 402R. The computers 402 or 402R manipulate and transform data represented as physical electronic quantities within the computer's memory into other data similarly represented as physical electronic quantities within the computer's memory. The algorithms and symbolic representations we describe above are simply convenient ways of expressing, and reasoning about, those physical operations.


The recommender we describe above explicitly incorporates a co-occurrence matrix to define and determine similar items and utilizes the concepts of user communities and item collections, represented as weighted lists, to inform the recommendation. The recommender more naturally accommodates substitute or complementary items and implicitly incorporates the intuition that two items should be more similar if more paths exist between them in the co-occurrence matrix. The recommender segments users and items and is massively scalable, admitting direct implementation as a Map-Reduce computation.
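

As one non-limiting illustration of that scalability, each E-step posterior and each M-step numerator contribution depends on a single weighted pair, so a re-estimation pass decomposes into a map over the accumulated pair list followed by a keyed reduction. The sketch below uses plain Python generators to stand in for the mapper and reducer; the helper names (map_pair, reduce_sums, run_pass) and the choice of any particular Map-Reduce runtime are assumptions made here for illustration only.

    from collections import defaultdict

    def map_pair(record, pr_c_given_a, pr_b_given_c, clusters):
        # Mapper: for one weighted pair ((a, b), e), compute the E-step posterior
        # Q*(c|a, b) locally and emit its weighted contributions to the M-step sums.
        (a, b), e = record
        scores = {c: pr_b_given_c.get(c, {}).get(b, 1e-12)
                     * pr_c_given_a.get(a, {}).get(c, 1e-12) for c in clusters}
        total = sum(scores.values())
        for c, s in scores.items():
            q = s / total
            yield ("c|a", a, c), e * q  # contribution to the Pr(c|a) update
            yield ("b|c", c, b), e * q  # contribution to the Pr(b|c) update

    def reduce_sums(emitted):
        # Reducer: sum every contribution that shares a key.
        sums = defaultdict(float)
        for key, value in emitted:
            sums[key] += value
        return sums

    def run_pass(E, pr_c_given_a, pr_b_given_c, clusters):
        # One map-then-reduce pass over the accumulated pair list, followed by
        # normalization of the keyed sums into updated conditional probabilities.
        emitted = (kv for record in E.items()
                   for kv in map_pair(record, pr_c_given_a, pr_b_given_c, clusters))
        sums = reduce_sums(emitted)
        new_ca, new_bc = defaultdict(dict), defaultdict(dict)
        for (kind, first, second), value in sums.items():
            (new_ca if kind == "c|a" else new_bc)[first][second] = value

        def normalize(d):
            total = sum(d.values())
            return {k: v / total for k, v in d.items()}

        return ({a: normalize(cs) for a, cs in new_ca.items()},
                {c: normalize(bs) for c, bs in new_bc.items()})

Because every pair contributes to the keyed sums independently, the pair list may be partitioned arbitrarily across workers and the partial sums combined associatively, which is the property that permits data-center scale implementations.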


A person of reasonable skill in the art will recognize that they may make many changes to the details of the above-described embodiments without departing from the underlying principles. The following claims, therefore, define the scope of the present systems and methods.

Claims
  • 1. A computer-implemented method, comprising: programming one or more processors to: access a list of users stored in one or more user databases and a list of items stored in one or more item databases; construct user communities of two or more users having an association therebetween; construct item collections of two or more items having an association therebetween; estimate associations between the user communities and the item collections; and provide one or more recommendations responsive to estimating the associations; and displaying the one or more recommendations on a display.
  • 2. The computer-implemented method of claim 1 further comprising programming the one or more processors to access the list of users or list of items in one or more memories.
  • 3. The computer-implemented method of claim 1 further comprising programming the one or more processors to construct the user communities by constructing time-varying user communities responsive to a time-varying list of user-user pairs.
  • 4. The computer-implemented method of claim 3 further comprising programming the one or more processors to construct the user communities responsive to time-varying relational probabilities between the user communities and the list of users, the list of items, item collections, or combinations thereof.
  • 5. The computer-implemented method of claim 3 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by creating an updated list Euv(τn) at a time τ incorporating a time-varying list of user-user pairs Duv(τn) into Euv(τn−1), where l and n are integers.
  • 6. The computer-implemented method of claim 5 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by: adding (ui, vj, αeij) to Euv(τn) for each triple (ui, vj, eij) in Euv(τn−1); and for each pair (ui, vj) in Duv(τn), replacing (ui, vj, eij) with (ui, vj, eij+β) if (ui, vj, eij) is in Euv(τn), otherwise adding (ui, vj, β) to Euv(τn); where β is a predetermined variable; and where l, n, i, and j are integers.
  • 7. The computer-implemented method of claim 5 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by estimating at least one of the probabilities Pr(yl|ui; τn)− or Pr(vj|yl; τn)− using the updated list Euv(τn) and conditional probabilities Q*(yl|ui, vj; τn−1), where l, n, i, and j are integers.
  • 8. The computer-implemented method of claim 7 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by, for each yl and each (ui, vj, eij) in Euv(τn), estimating Pr(vj|yl; τn)− as PrN/PrD, where PrN is a sum across ui′ of eijQ*(yl|ui′, vj; τn−1) and where PrD is a sum across ui′ and vj′ of eijQ*(yl|ui′, vj′; τn−1).
  • 9. The computer-implemented method of claim 7 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by, for each yl and each (ui, vj, eij) in Euv(τn), estimating Pr(yl|ui; τn)− as PrN/PrD where PrN is a sum across vj′ of eijQ*(yl|ui, vj′; τn−1) and where PrD is a sum across yl′ and vj′ of eijQ*(yl′|ui, vj′; τn−1).
  • 10. The computer-implemented method of claim 7 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by estimating conditional probabilities Q*(yl|ui, vj; τn) for each yl and each (ui, vj, eij) in Euv(τn).
  • 11. The computer-implemented method of claim 10 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by setting Q*(yl|ui, vj; τn) to Pr(vj|yl; τn)−Pr(yl|ui; τn)−/Q*D where Q*D is a sum across yl′ of Pr(vj|yl′; τn)−Pr(yl′|ui; τn)−.
  • 12. The computer-implemented method of claim 10 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by estimating probabilities Pr(yl|ui; τn)+ and Pr(vj|yl; τn)+ for each yl and each (ui, vj, eij) in Euv(τn).
  • 13. The computer-implemented method of claim 12 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by setting Pr(vj|yl; τn)+ to PrN1/PrD1 where PrN1 is a sum across ui′ of eijQ*(yl|ui′, vj; τn) and PrD1 is a sum across ui′ and vj′ of eijQ*(yl|ui′, vj′; τn).
  • 14. The computer-implemented method of claim 13 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by setting Pr(yl|ui; τn)+ to PrN2/PrD2 where PrN2 is a sum across vj′ of eijQ*(yl|ui, vj′; τn) and PrD2 is a sum across yl′ and vj′ of eijQ*(yl′|ui, vj′; τn).
  • 15. The computer-implemented method of claim 14 further comprising programming the one or more processors to construct the user communities y1(τn), y2(τn), . . . , yl(τn) by: repeating the estimating conditional probabilities Q*(yl|ui, vj; τn) and the estimating probabilities Pr(yl|ui; τn)+ and Pr(vj|yl; τn)+ with Pr(vj|yl; τn)−=Pr(vj|yl; τn)+ and Pr(yl|ui; τn)−=Pr(yl|ui; τn)+ if |Pr(vj|yl; τn)−−Pr(vj|yl; τn)+|>d or |Pr(yl|ui; τn)−−Pr(yl|ui; τn)+|>d for a predetermined d<<1; and returning the probabilities Pr(yl|ui; τn)=Pr(yl|ui; τn)+ and Pr(vj|yl; τn)=Pr(vj|yl; τn)+, the conditional probabilities Q*(yl|ui, vj; τn), and the list Euv(τn) of triples (ui, vj, eij), where d is a predetermined number.
  • 16. The computer-implemented method of claim 1 further comprising programming the one or more processors to construct the item collections by constructing time-varying item collections responsive to a time-varying list of item-item pairs.
  • 17. The computer-implemented method of claim 16 further comprising programming the one or more processors to construct item collections responsive to time-varying relational probabilities between the item collections and the list of users, the list of items, user communities, or combinations thereof.
  • 18. The computer-implemented method of claim 16 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by creating an updated list Est(τn) at a time τ incorporating a time-varying list of item-item pairs Dst(τn) into Est(τn−1), where k and n are integers.
  • 19. The computer-implemented method of claim 16 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by: adding (si, tj, αeij) to Est(τn) for each triple (si, tj, eij) in Est(τn−1); and for each pair (si, tj) in Dst(τn), replacing (si, tj, eij) with (si, tj, eij+β) if (si, tj, eij) is in Est(τn), otherwise adding (si, tj, β) to Est(τn); where β is a predetermined variable; and where k, n, i, and j are integers.
  • 20. The computer-implemented method of claim 16 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by estimating at least one of the probabilities Pr(zk|si; τn)− or Pr(tj|zk; τn)− using the updated list Est(τn) and conditional probabilities Q*(zk|si, tj; τn−1), where k, n, i, and j are integers.
  • 21. The computer-implemented method of claim 20 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by, for each zk and each (si, tj, eij) in Est(τn), estimating Pr(tj|zk; τn)− as PrN/PrD, where PrN is a sum across si′ of eijQ*(zk|si′, tj; τn−1) and where PrD is a sum across si′ and tj′ of eijQ*(zk|si′, tj′; τn−1).
  • 22. The computer-implemented method of claim 20 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by, for each zk and each (si, tj, eij) in Est(τn), estimating Pr(zk|si; τn)− as PrN/PrD where PrN is a sum across tj′ of eijQ*(zk|si, tj′; τn−1) and where PrD is a sum across zk′ and tj′ of eijQ*(zk′|si, tj′; τn−1).
  • 23. The computer-implemented method of claim 20 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by estimating conditional probabilities Q*(zk|si, tj; τn) for each zk and each (si, tj, eij) in Est(τn).
  • 24. The computer-implemented method of claim 23 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by setting Q*(zk|si, tj; τn) to Pr(tj|zk; τn)−Pr(zk|si; τn)−/Q*D where Q*D is a sum across zk′ of Pr(tj|zk′; τn)−Pr(zk′|si; τn)−.
  • 25. The computer-implemented method of claim 23 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by estimating probabilities Pr(zk|si; τn)+ and Pr(tj|zk; τn)+ for each zk and each (si, tj, eij) in Est(τn).
  • 26. The computer-implemented method of claim 25 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by setting Pr(tj|zk; τn)+ to PrN1/PrD1 where PrN1 is a sum across si′ of eijQ*(zk|si′, tj; τn) and PrD1 is a sum across si′ and tj′ of eijQ*(zk|si′, tj′; τn).
  • 27. The computer-implemented method of claim 26 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by setting Pr(zk|si; τn)+ to PrN2/PrD2 where PrN2 is a sum across tj′ of eijQ*(zk|si, tj′; τn) and PrD2 is a sum across zk′ and tj′ of eijQ*(zk′|si, tj′; τn).
  • 28. The computer-implemented method of claim 27 further comprising programming the one or more processors to construct item collections z1(τn), z2(τn), . . . , zk(τn) by: repeating the estimating conditional probabilities Q*(zk|si, tj; τn) and the estimating probabilities Pr(zk|si; τn)+ and Pr(tj|zk; τn)+ with Pr(tj|zk; τn)−=Pr(tj|zk; τn)+ and Pr(zk|si; τn)−=Pr(zk|si; τn)+ if |Pr(tj|zk; τn)−−Pr(tj|zk; τn)+|>d or |Pr(zk|si; τn)−−Pr(zk|si; τn)+|>d for a predetermined d<<1; and returning the probabilities Pr(zk|si; τn)=Pr(zk|si; τn)+ and Pr(tj|zk; τn)=Pr(tj|zk; τn)+, the conditional probabilities Q*(zk|si, tj; τn), and the list Est(τn) of triples (si, tj, eij), where d is a predetermined number.
  • 29. The computer-implemented method of claim 1 further comprising programming the one or more processors to estimate associations by constructing time-varying association probabilities between at least one item collection and at least one user community.
  • 30. The computer-implemented method of claim 1 further comprising programming the one or more processors to estimate associations by constructing time-varying association probabilities between item collections z1(τn), z2(τn), . . . , zk(τn) and user communities y1(τn), y2(τn), . . . , yl(τn) responsive to probabilities Pr(yl|ui; τn) that the ui are members of the user community yl(τn), probabilities Pr(tj|zk; τn) that the item collection zk(τn) includes the tj as members, and a time-varying list D(τn) of triples (ui, tj, So).
  • 31. The computer-implemented method of claim 30 further comprising programming the one or more processors to estimate associations by creating an updated list E(τn) at a time τ incorporating a time-varying list of triples D(τn) into E(τn−1), where l and n are integers.
  • 32. The computer-implemented method of claim 31 further comprising programming the one or more processors to estimate associations by: adding (ui, tj, So, αeijo) to E(τn) for each 4-tuple (ui, tj, So, eijo) in E(τn−1); and for each triple (ui, tj, So) in D(τn), replacing (ui, tj, So, eijo) with (ui, tj, So, eijo+β) if (ui, tj, So, eijo) is in E(τn), otherwise adding (ui, tj, So, β) to E(τn); where β is a predetermined variable; and where l, n, i, j, and o are integers.
  • 33. The computer-implemented method of claim 31 further comprising programming the one or more processors to estimate associations by estimating probabilities Pr(zk|yl; τn)− using the updated list E(τn) and conditional probabilities Q*(zk, yl|ui, tj, So; τn−1), where l, n, i, j, and o are integers.
  • 34. The computer-implemented method of claim 33 further comprising programming the one or more processors to estimate associations by, for each yl and zk, estimating Pr(zk|yl; τn)− as PrN/PrD, where PrN is a sum across ui, tj, and So of eijoQ*(zk, yl|ui, tj, So; τn−1) and where PrD is a sum across ui, tj, So and zk′ of eijoQ*(zk′, yl|ui, tj, So; τn−1).
  • 35. The computer-implemented method of claim 33 further comprising programming the one or more processors to estimate associations by estimating conditional probabilities Q*(zk, yl|ui, tj, So; τn).
  • 36. The computer-implemented method of claim 35 further comprising programming the one or more processors to estimate associations by, for each yl and zk, estimating probabilities Pr(zk|yl; τn)− as PrN/PrD, where PrN is a sum across ui, tj, and So of eijoQ*(zk, yl|ui, tj, So; τn−1) and where PrD is a sum across ui, tj, So and zk′ of eijoQ*(zk′, yl|ui, tj, So; τn−1).
  • 37. The computer-implemented method of claim 35 further comprising programming the one or more processors to estimate associations by estimating the probabilities Pr(zk|yl; τn)+.
  • 38. The computer-implemented method of claim 37 further comprising programming the one or more processors to estimate associations by, for each yl and zk, estimating probabilities Pr(zk|yl; τn)+ as PrN/PrD, where PrN is a sum across ui, tj, and So of eijoQ*(zk, yl|ui, tj, So; τn) and where PrD is a sum across ui, tj, So and zk′ of eijoQ*(zk′, yl|ui, tj, So; τn).
  • 39. The computer-implemented method of claim 37 further comprising programming the one or more processors to estimate associations by, for any pair (zk, yl), if |Pr(zk|yl; τn)−−Pr(zk|yl; τn)+|>d for a predetermined d<<1 and the estimating probabilities Pr(zk|yl; τn)− and the estimating probabilities Pr(zk|yl; τn)+ have not been repeated more than R times, repeat the estimating probabilities Pr(zk|yl; τn)− and the estimating probabilities Pr(zk|yl; τn)+ with Pr(zk|yl; τn)−=Pr(zk|yl; τn)+, where d is a predetermined variable and R is an integer.
  • 40. The computer-implemented method of claim 38 further comprising programming the one or more processors to estimate associations by, for any pair (zk, yl) and for |Pr(zk|yl; τn)−−Pr(zk|yl; τn)+|>d for a predetermined d<<1, let Pr(zk|yl; τn)+=[Pr(zk|yl; τn)++Pr(zk|yl; τn)+]/2 where d is an predetermined variable.