METHOD AND SYSTEM OF CONTENT RECOMMENDATION

Information

  • Patent Application
  • Publication Number
    20110258196
  • Date Filed
    December 30, 2008
  • Date Published
    October 20, 2011
Abstract
A method of content recommendation includes: generating a first digital mathematical representation of contents to associate the contents with a first plurality of words describing the contents; generating a second digital mathematical representation of text documents different from the contents to associate the documents with a second plurality of words; processing the first and second pluralities of words to determine a common plurality of words; processing the first and second digital mathematical representations to generate a common digital mathematical representation of the contents and the text documents based on the common plurality of words; and providing content recommendation by processing the common digital mathematical representation.
Description
BACKGROUND

1. Technical Field


The present invention relates to the field of content recommendations, i.e. methods that attempt to present contents, such as films, music files, videos, books, news, images and web pages, that are likely to be of interest to the user.


2. Description of the Related Art


Many recommendation techniques are known: some of them provide recommendations based on the user's predilections expressed by means of explicit votes, while other techniques are based on the observation of the contents that a user chooses. The online DVD rental service Netflix (www.netflix.com) is of the first type and operates by recommending to users a list of films which they have not yet rented. The list of films is estimated by comparing the user's previous votes with those of other users.


The Internet Movie Database (IMDb) is an online database of information related to films, actors, television shows, production crew personnel, video games, and most recently, fictional characters featured in visual entertainment media. This database employs another recommendation technique which is based on the content and does not exploit the user's predilections.


Recommendation techniques based on the content generally employ texts describing the contents in written form and use information retrieval methods to determine relations between contents. The document “Hybrid pre-query term expansion using Latent Semantic Analysis”, Laurence A. F. Park, Kotagiri Ramamohanarao, Proceedings of the Fourth IEEE International Conference on Data Mining (ICDM'04), describes an information retrieval method in which a query map is built using Singular Value Decomposition and Latent Semantic Analysis.


The document “Recommending from Content: Preliminary Results from an E-Commerce Experiment”, Mark Rosenstein, Carol Lochbaum, Conference on Human Factors in Computing Systems (CHI '00), discloses the effects of various forms of recommendations on consumer behaviour at a web site, also using Latent Semantic Indexing. H. Zha and H. Simon in “On updating problems in latent semantic indexing”, SIAM Journal of Scientific Computing, vol. 21, pp. 782-791, 1999, and M. Brand in “Fast low-rank modifications of the thin singular value decomposition”, Linear Algebra and its Applications, vol. 415, pp. 20-30, 2006, have disclosed an incremental LSA technique in which additive modifications of a singular value decomposition (SVD) are developed to reflect updates, downdates and edits of the data matrix.


Rocha Luis M. and Johan Bollen in “Biologically Motivated Distributed Designs for Adaptive Knowledge Management”, Design Principles for the Immune System and other Distributed Autonomous Systems, L. Segel and I. Cohen (Eds.), Santa Fe Institute Series in the Sciences of Complexity, Oxford University Press, pp. 305-334, 2001, discuss the adaptive recommendation systems TalkMine and @ApWeb that allow users to obtain an active, evolving interaction with information resources.


Badrul M. Sarwar, George Karypis, Joseph A. Konstan and John T. Riedl in “Application of Dimensionality Reduction in Recommender System—A Case Study”, ACM WebKDD Workshop, 2000, disclose a technique based on the Singular Value Decomposition that improves the scalability of recommender systems. Document US2006/0259481 refers to a method of analysing documents in the field of information retrieval which uses a Latent Semantic Analysis to infer semantic relations between terms and to achieve dimensionality reduction of the processed matrices.


BRIEF SUMMARY OF THE INVENTION

The Applicant has noticed that the known recommendation techniques based on the textual content description show unsatisfying accuracy. With particular reference to video contents, the Applicant has observed that the known systems can produce video recommendations that are not properly linked by the same or an analogous theme. This inaccuracy increases when the texts describing the contents are very short and include insufficient information.


The Applicant has noticed that an improved accuracy can be obtained by considering in the recommendation processing not only the set of texts describing the contents but also a number of external texts, such as for instance newspaper articles, discussing general topics not exclusively connected to the contents that are the object of the recommendations. According to a first object, the present invention relates to a method of content recommendation as defined in the appended claim 1. Particular embodiments of such method are described by the dependent claims 2-11. The invention also relates to software comprising code for carrying out the method of claim 1. According to a further object, the invention relates to a content recommendation system as defined by the appended claim 12 and to embodiments of the system as described by the dependent claims 13-15.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Further characteristics and advantages will be more apparent from the following description of a preferred embodiment and of its alternatives, given by way of example with reference to the enclosed drawings, in which:



FIG. 1 shows an embodiment of a content recommendation system;



FIG. 2 illustrates a flow chart for an embodiment of a method of content recommendation;



FIG. 3 shows by a flow chart an example of a method of generating first and second document-word matrices;



FIG. 4 depicts a flow chart of a first example of generation of the third document-word matrix from said first and second document-word matrices;



FIG. 5 shows a flow chart representing an embodiment of a recommendation step;



FIG. 6 depicts a flow chart of a generation step alternative to the one of FIG. 4;



FIG. 7 illustrates a flow chart of a matrix decomposition process included in the generation step of FIG. 6;



FIG. 8 depicts a flow chart representing a matrices merging step included in the generation step of FIG. 6.





DETAILED DESCRIPTION
Content Recommendation System


FIG. 1 shows a content recommendation system 100 in accordance with an embodiment of the invention. Particularly, the content recommendation system 100 is employable to provide recommendations to users in connection with any type of contents such as, for instance, films, videos, reportages, audio contents or books, which can be associated with corresponding content text documents 200 providing a description of the contents, such as content summaries. These content text documents 200 include a first plurality of words. Moreover, a plurality of context documents 300, which are different from the content documents 200 and their describing texts, is defined. Such context documents can be, as an example, newspaper articles describing a subject, also taking into account the point of view of people located in a specific area. Newspaper articles are preferred since they are a proper representation of the topics and subjects known by people of a particular area and of their culture. The plurality of context documents 300 includes a second plurality of words. Moreover, it is observed that these context documents 300 do not represent votes, opinions or feedback information provided by the users in using the system 100.


The content recommendation system 100 comprises a processor 101 (PRC), a memory 102 (MRY), a database 103 (DTB) and a user interface 104 (INTRF). Particularly, software modules configured to perform processing and apply algorithms in accordance with the described example of the invention are stored in the memory 102. In accordance with said example, the following modules are stored in the memory 102: a first generation module 201, a second generation module 202, a first processing module 203, a second processing module 204 and a recommendation module 205. The database 103 may store data corresponding to the content text documents 200 and the context documents 300. The first generation module 201 is configured to generate a first mathematical representation of the content text documents 200. The second generation module 202 is configured to generate a second mathematical representation of the context documents 300.


The first processing module 203 is configured to process the first and second pluralities of words to determine a common plurality of words including significant words of both first and second pluralities. The second processing module 204 is structured to process the first and second mathematical representations to generate a third mathematical representation defining the content text documents 200 and the context documents 300 which is based on the common plurality of words. The recommendation module 205 is configured to provide content recommendation by processing the third mathematical representation.


The mathematical representations indicated above can be expressed and stored in digital form in order to allow processing by the processor 101. It has to be observed that, according to a particular embodiment, these mathematical representations are matrix representations in which each document is represented by a vector containing weights associated with specific words of the represented document text. However, any other mathematical representation is possible which allows defining and determining a similarity between documents.


Method of Content Recommendation



FIG. 2 shows by means of a flow chart a method of content recommendation 400, which is implementable by the recommendation system 100. The method of content recommendation 400 is, preferably, of the content-based type and therefore it does not require any information, vote, opinion or feedback provided by users during the running of the recommendation method 400. The method of content recommendation 400 comprises a definition step 401 (DEF) in which the content text documents 200 (e.g. summaries of videos or films) and the plurality of context documents 300, corresponding preferably to newspaper articles, are defined. In a first generation step 402 (CNT-GEN) the first plurality of words associated with the content text documents 200 is processed and a first document-word matrix G1 is generated to represent the content text documents 200. This first generation step 402 can be implemented by the first generation module 201 of FIG. 1. In a second generation step 403 (DOC-GEN) the second plurality of words associated with the context documents 300 is processed and a second document-word matrix G2 is generated to represent the context documents 300. The second generation step 403 can be implemented by the second generation module 202 of the content recommendation system 100.


The recommendation method 400 includes a first processing step 404 (WRD-UNN) in which the first plurality of words of the content text documents 200 and the second plurality of words of the context documents 300 are processed to create a common plurality of words, such as a single vocabulary. This first processing step 404 can be carried out by the first processing module 203. Moreover, the recommendation method 400 includes a second processing step 405 (MTRX-UNN) wherein the first and second document-word matrices G1 and G2 are processed to generate a third document-word matrix G3 defining a specific merging of the content text documents 200 and the context documents 300 which is based on the common plurality of words defined in the first processing step 404. The second processing step 405 can be carried out by the second processing module 204.


Particularly, as will be explained in greater detail later, the generation of the first document-word matrix G1, the second document-word matrix G2 and the third document-word matrix G3 is performed by computing weights to be associated with each word of the common plurality, evaluating Term Frequency factors and Inverse Document Frequency factors.


In a recommendation step 406 (RECM) the third document-word matrix G3 is processed so as to provide a content recommendation which allows linking a content to another one, as an example, by means of indexes that are stored in the database 103. In accordance with a particular example, the recommendation step 406 includes processing the third document-word matrix G3 by using a Latent Semantic Analysis (LSA), which allows obtaining a document-word matrix structured in such a way that semantic relations between words are better identifiable. Moreover, the Latent Semantic Analysis permits a dimensionality reduction which corresponds to a reduction of computing complexity. The link between contents is determined by applying a similarity function to vectors included in the third document-word matrix G3 as processed by the LSA.


The Applicant has noticed that adding selected context documents to content text documents allows strengthening word relations and creating new word relations that permit recommendation results which are enriched and improved in comparison with the ones obtainable exclusively from the content words.


Steps 402 and 403 of Generation of First and Second Document-Word Matrices



FIG. 3 shows by a flow chart an example of the first generation step 402 in which the first document-word matrix G1 is computed. The method described with reference to FIG. 3 is also valid for the second generation step 403 in which the second document-word matrix G2 is computed.


According to the particular first generation step 402 of FIG. 3, a vectorial representation of each content text document is employed. The first generation step 402 includes the following steps: a stopwords removing step 501 (STPREM), a lemmatisation step 502 (LEMM), a vectorial representation step 503 and a matrix generation step 508 (MTX). In greater detail, in the stopwords removing step 501 the words that occur so frequently in a language that they can be considered to carry no information content are removed. As an example, the words to be removed are articles, prepositions and pronouns, but also common verbs such as “to do” or “to go”, and so on. With reference to the Italian language, a standard list of stopwords is available on the web page: http://members.unine.ch/jacques.savoy/clef/italianST.txt. The stopwords removal allows maximising the useful information included in the vectorial representation to be evaluated. The word set comprised in the content text document i is Ci and the set of stopwords for the particular language considered is S. The stopwords removing step 501 creates a new set of words Di associated with the i-th content text document:






Di=Ci−S


Lemmatisation 502 is the process of grouping together the different inflected forms of a word (e.g. good and better) so that they can be analysed as a single item. With reference to the Italian language, a lemmas vocabulary is available on the web page: http://sslmitdev-online.sslmit.unibo.it/linguistics/morph-it.php. The lemmatisation step 502 is performed by a non-invertible function ƒL: ƒL(p)=l, wherein p is the word to lemmatise and l is its reference lemma.


The vectorial representation step 503 provides for the representation of each content text document 200 by means of a vector expressed in the keywords space. A keyword is one of the terms resulting from the stopwords removing step 501 and lemmatisation step 502. In accordance with the above definitions, a keyword k is:






k=ƒL(p):p∈Di wherein Di=Ci−S  (1)


A vocabulary is a word set consisting of all or some of the keywords as defined by expression (1). A vocabulary contains no repeated words. The content text documents 200 define a first vocabulary V1 and the context documents 300 define a second vocabulary V2. The vectorial representation is performed, as an example, by applying the term frequency-inverse document frequency (tf-idf) method.


In accordance with the tf-idf method, a computation of the TF factor (step 504) and a computation of the IDF factor (step 505) are performed. With reference to the TF factor, Pi is the keyword set included in a content text document i; the TF factor for a keyword k is defined as:











TFi(k) = nk / Σj∈Pi nj  (2)







In accordance with expression (2), the TF factor is the internal normalised frequency, i.e. the ratio between the number of occurrences nk of the keyword k in the i-th document and the number of occurrences of all keywords in the i-th document. The TF factor indicates the actual importance of that keyword in the document.


The inverse document frequency IDF is a measure of the general importance of the word within a collection of documents and is obtained by dividing the number of all documents N by the number dk of documents containing the keyword k, and then taking the logarithm of that quotient:










IDF(k) = log(N/dk)  (3)







In a weight calculation step 506, the weight wi of keyword k in content text document i is computed as the product of the TF and IDF factors:












wi(k) = TFi(k) × IDF(k)

wi(k) = (nk / Σj∈Pi nj) × log(N/dk)  (4)







Formula (4) is repeated for all keywords in connection with the content text document i. Each content text document i is described through a corresponding vector having a length equal to the number of keywords in the vocabulary. Each component of vector Ωi is a keyword weight. Defining V1 as the vocabulary associated with the content text documents, in a vector representation step 507 the keywords vector Ωi associated with the content text document i is defined as:





Ωi=[wi(k1),wi(k2),…,wi(km)]  k1,k2,…,km∈V1  (5)


In the matrix generation step 508 (MTX), all the vectors Ωi, each associated with a corresponding content text document of the set 200, are organised as rows of the first document-word matrix G1. In the first document-word matrix G1 each row is associated with a content text document and each column with a respective word; each element of the matrix is a weight computed in accordance with the tf-idf method. In an analogous way, the second document-word matrix G2 can be determined in the second generation step 403.
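By way of illustration only (the application itself discloses no program code), the generation steps 501-508 can be sketched in Python as follows. This is a minimal sketch under stated assumptions: the stopword set and the lemma dictionary implementing ƒL are supplied by the caller, and all function names are hypothetical.

```python
# Minimal sketch of steps 501-508: stopword removal, lemmatisation and
# tf-idf weighting. The stopword set S and the lemma dictionary are
# assumed to be supplied externally (e.g. from the lists cited above).
import math

def keywords(text, stopwords, lemmas):
    """Steps 501-502: Di = Ci - S, then k = fL(p) for each surviving word."""
    words = [w.strip('.,!?"\u201c\u201d').lower() for w in text.split()]
    return [lemmas.get(w, w) for w in words if w and w not in stopwords]

def tf_idf_matrix(documents, stopwords, lemmas):
    """Steps 503-508: one row per document, one column per vocabulary word."""
    docs_kw = [keywords(d, stopwords, lemmas) for d in documents]
    vocab = sorted({k for kws in docs_kw for k in kws})
    n = len(documents)
    # dk: number of documents containing keyword k (denominator of the IDF)
    dk = {k: sum(1 for kws in docs_kw if k in kws) for k in vocab}
    rows = []
    for kws in docs_kw:
        total = len(kws)  # occurrences of all keywords in the document
        rows.append([(kws.count(k) / total) * math.log(n / dk[k]) if total else 0.0
                     for k in vocab])
    return rows, vocab
```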


As an example, a document for which a document-word matrix has to be determined is:


“Centuries ago there lived . . . ‘A king!’ my little readers will say immediately.”


After stopword removal (step 501), the sentence is:


“centuries ago lived king little readers say immediately”


the lemmatisation step 502 converts the sentence into:


“century ago live king little reader say immediately”


The vocabulary to be used in this example consists of sixteen words, numbered from 1 to 16: little (1), live (2), do (3), king (4), ago (5), carpenter (6), concrete (7), airplane (8), milk (9), spaceship (10), immediately (11), house (12), reader (13), century (14), master (15), say (16).


The document indicated above includes only some of the vocabulary words, specifically the ones having the following positions: 1, 2, 4, 5, 11, 13, 14, 16; each of these words appears once. Table 1 shows the number of occurrences of all the vocabulary words in the above document:











TABLE 1

 1   little        1
 2   live          1
 3   do            0
 4   king          1
 5   ago           1
 6   carpenter     0
 7   concrete      0
 8   airplane      0
 9   milk          0
10   spaceship     0
11   immediately   1
12   house         0
13   reader        1
14   century       1
15   master        0
16   say           1










The third column of Table 1 is a vector which describes the document in a non-weighted manner:





























[1 1 0 1 1 0 0 0 0 0 1 0 1 1 0 1]










Considering that the vector indicated above includes eight words having non-zero occurrences (eight keyword occurrences in total), the TF vector can be obtained by dividing each element by 8:





























[1/8 1/8 0 1/8 1/8 0 0 0 0 0 1/8 0 1/8 1/8 0 1/8]










According to the example, a set of ten documents is considered and the number of documents containing each of the defined words is:
















 1   little        4
 2   live          1
 3   do            8
 4   king          3
 5   ago           6
 6   carpenter     1
 7   concrete      1
 8   airplane      2
 9   milk          1
10   spaceship     5
11   immediately   8
12   house         4
13   reader        2
14   century       1
15   master        1
16   say           1










It is clear from expression (3) that the vector containing the IDF factors for the considered set of documents is:














[0.916 2.302 0.223 1.203 0.510 2.302 2.302 1.609 2.302 0.693 0.223 0.916 1.609 2.302 2.302 2.302]










According to expression (4), the tf-idf vector associated with the document introduced above is:





























[0.114 0.287 0 0.150 0.063 0 0 0 0 0 0.027 0 0.201 0.287 0 0.287]










The ten vectors, each associated with a respective document of the considered set, form the document-word matrix.
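As a check, the worked example can be reproduced with a short numpy script (an editorial illustration, not part of the application). The values 0.916, 2.302, 0.223, ... indicate that the natural logarithm is used, and the published weights appear truncated, not rounded, to three decimals:

```python
import numpy as np

occurrences = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1])
tf = occurrences / occurrences.sum()        # every non-zero entry equals 1/8

dk = np.array([4, 1, 8, 3, 6, 1, 1, 2, 1, 5, 8, 4, 2, 1, 1, 1])
idf = np.log(10 / dk)                       # 0.916, 2.302, 0.223, 1.203, ...

tf_idf = tf * idf
print(np.trunc(tf_idf * 1000) / 1000)       # truncated to three decimals
# [0.114 0.287 0.    0.15  0.063 0.    0.    0.    0.    0.
#  0.027 0.    0.201 0.287 0.    0.287]
```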


First Processing Step 404: Creation of the Common Plurality of Words


According to the described example, in the first processing step 404 a common vocabulary V3 is created starting from the first plurality of words, i.e. the first vocabulary V1 of the content text documents 200, and the second plurality of words, i.e. the second vocabulary V2 of the context documents 300. The first vocabulary V1 and the second vocabulary V2 are not equal. The common vocabulary V3 can be obtained by a union of the first and second vocabularies, by an intersection of the first and second vocabularies V1 and V2, or by choosing only the first vocabulary V1 or only the second vocabulary V2.


Preferably, the common vocabulary V3 is obtained by a union (in accordance with set theory) of the first and second vocabularies V1 and V2, and therefore by taking into account all the keywords included in such vocabularies. As an example, the union technique allows finding that two words that are uncorrelated in the first vocabulary V1 turn out to be linked to each other in the common vocabulary V3 due to the fact that they show a relation to the same word belonging to the second vocabulary V2.


A method for converting the first vocabulary V1 and the second vocabulary V2 into the common vocabulary V3 is described hereinafter. The numbers N1 and N2 are the number of words of the first and second vocabularies, respectively. The indexes identifying the words of the two vocabularies to be merged are W1 and W2:





W1={1,…,N1}





W2={1,…,N2}


The number of words included in the common vocabulary V3 is Nb:





Wb={1,…,Nb}


The conversion of the first and second vocabularies V1 and V2 into the common vocabulary V3 is performed, as an example, by applying a first conversion function ƒ1 and a second conversion function ƒ2:





ƒ1:W1→Wb∪{0}





ƒ2:W2→Wb∪{0},


the two conversion functions ƒ1 and ƒ2 are such that the word having index i is converted into the same word having an index j belonging to the set Wb, or into 0 (null) if the word is not present in the common vocabulary (this situation can occur when the latter is not obtained by union of the two vocabularies). As an example, the first vocabulary includes the words: bread (index 1), butter (index 2), milk (index 3). The second vocabulary includes the words: water (index 1), wine (index 2), bread (index 3). The common vocabulary includes the following words: bread (index 1), butter (index 2), milk (index 3), water (index 4), wine (index 5). Therefore, the word “water” having index 1 in the second vocabulary assumes index 4 in the common vocabulary, while the word “bread” keeps the same index as in the first vocabulary.
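The bread/butter/milk example can be sketched in a few lines of Python (again an editorial illustration; the union is built keeping the V1 order first, and the variable names are hypothetical):

```python
v1 = ["bread", "butter", "milk"]                 # indices 1..N1
v2 = ["water", "wine", "bread"]                  # indices 1..N2

vb = v1 + [w for w in v2 if w not in v1]         # common vocabulary V3 (union)
position = {w: j + 1 for j, w in enumerate(vb)}  # 1-based indices of Wb

# with a union f1/f2 never map to 0; the 0 case arises only when V3
# is built by intersection or by choosing a single vocabulary
f1 = {i + 1: position.get(w, 0) for i, w in enumerate(v1)}
f2 = {i + 1: position.get(w, 0) for i, w in enumerate(v2)}

print(vb)  # ['bread', 'butter', 'milk', 'water', 'wine']
print(f2)  # {1: 4, 2: 5, 3: 1}: "water" moves to index 4, "bread" keeps index 1
```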


First Embodiment of Second Processing Step 405: Generation of the Third Document-Word Matrix G3.


A first embodiment of the second processing step 405, wherein the third document-word matrix G3 is obtained, is now described with reference to FIG. 4. The first embodiment of the processing step 405 includes a step 601 (CONV-MTX) of defining a first conversion matrix F1 and a second conversion matrix F2. The conversion matrices F1 and F2 can be obtained by means of the above defined conversion functions ƒ1 and ƒ2 used for the construction of the common vocabulary V3. The conversion matrices F1 and F2 include elements equal to 1 or to 0 and have dimensions N1×Nb and N2×Nb, respectively.





F1∈{0,1}N1×Nb

F2∈{0,1}N2×Nb


If the word having index i is included in the common vocabulary V3, i.e. the first conversion function ƒ1(i)≠0, then the first conversion matrix F1 assumes the value 1:






F1(i,ƒ1(i))=1


and the other elements of the row are null. If the word of index i is not present in the common vocabulary, the row having index i of the first conversion matrix F1 contains only null elements. Moreover, if a word of index j is present in the common vocabulary V3 but is not present in the first vocabulary V1, the column having index j of the first conversion matrix F1 contains only null elements. Analogous definitions can be given for the second conversion matrix F2 based on the second conversion function ƒ2.


In a first matrix generation step 602 a first non-weighted matrix O and a second non-weighted matrix B are defined. A non-weighted matrix is a matrix in which each row is associated with a document and each column is associated with a word. Each element (i,j) of a non-weighted matrix indicates the number of occurrences of the word j in the document i. The first non-weighted matrix O is based on the content text documents 200 and the second non-weighted matrix B is based on the context documents 300. A non-weighted common matrix Gn is defined by using the first and second non-weighted matrices O and B and the first and second conversion matrices F1 and F2:







Gn = [ OF1
       BF2 ]





In a second matrix generation step 603 the non-weighted common matrix Gn is transformed according to the tf-idf technique and the above defined third document-word matrix G3 is obtained:





Gn ―tf-idf→ G3  (6)
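A compact sketch of this first embodiment follows (editorial illustration with hypothetical names and toy sizes; the tf-idf reweighting follows expressions (2)-(4)):

```python
import numpy as np

def conversion_matrix(f, n_words, nb):
    """F has a 1 in row i, column f(i), when word i survives (f(i) != 0)."""
    F = np.zeros((n_words, nb))
    for i, j in f.items():                  # 1-based indices, as in the text
        if j != 0:
            F[i - 1, j - 1] = 1.0
    return F

def tf_idf_reweight(Gn):
    """Expression (6): turn the occurrence matrix Gn into tf-idf weights."""
    tf = Gn / np.maximum(Gn.sum(axis=1, keepdims=True), 1)
    idf = np.log(Gn.shape[0] / np.maximum((Gn > 0).sum(axis=0), 1))
    return tf * idf

O = np.array([[2, 0, 1], [0, 1, 1]])        # content docs, occurrences over V1
B = np.array([[1, 1, 0], [0, 2, 1]])        # context docs, occurrences over V2
F1 = conversion_matrix({1: 1, 2: 2, 3: 3}, 3, 5)
F2 = conversion_matrix({1: 4, 2: 5, 3: 1}, 3, 5)
G3 = tf_idf_reweight(np.vstack([O @ F1, B @ F2]))   # Gn = [OF1; BF2] -> G3
```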


First Embodiment of the Recommendation Step 406


FIG. 5 shows a flow chart representing a first embodiment of the recommendation step 406, comprising a projecting step 701, a step of applying a similarity function 702 and a storing step 703. In the projecting step 701, the third document-word matrix G3 is projected into a lower dimensional space by applying an LSA technique. According to this technique, a lower rank approximation of the third document-word matrix G3 is computed using a factorisation based on the Singular Value Decomposition (SVD) method. It has to be noticed that the rank of the third document-word matrix G3 can be, as an example, of the order of tens of thousands, while the LSA technique allows reducing the rank down to 100-400. As an example, the LSA technique using the SVD method is described by M. W. Berry, S. T. Dumais, G. W. O'Brien in “Using Linear Algebra for Intelligent Information Retrieval,” SIAM Review, Vol. 37, No. 4, pp. 573-595, 1995.


Therefore, in the projecting step 701 the third document-word matrix G3 is expressed as the product of three matrices, i.e. the factorisation G3=USVT, wherein U is a matrix in which each row corresponds to a document, S is a diagonal matrix containing the singular values and V is a word matrix (VT is the transpose of matrix V) in which each row corresponds to a word. Matrices U and V are orthonormal.


The approximation of the third document-word matrix G3 can be expressed by a truncation of the above factorisation at a low rank G3K, maintaining the first K components:





G3≈G3K=UKSKVKT  (7)


wherein the matrices UK and VK consist of the first K columns of matrices U and V, respectively, and SK consists of the first K rows and columns of matrix S.


The above truncation of the G3 factorisation can be computed by processing algorithms known to the skilled in the art, which can be stored, as an example, in the form of software in an LSA module in the memory 102 of the recommendation system 100 of FIG. 1.


It has to be noticed that the truncation expressed by the product G3≈G3K=UKSKVKT allows establishing relations between documents and words which were not present in the original third document-word matrix G3 or which originally showed a low strength. Moreover, it is observed that the similarity computing performed on the third document-word matrix G3 has a complexity of the order of Nb·M², where Nb is the number of words of the third vocabulary V3 and M is the number of documents in the set comprising the content text documents 200 and the context documents 300. The number Nb can be, as an example, of the order of many tens of thousands. Thanks to the above LSA technique the similarity computing has a complexity of the order of K·M², where K is of the order of hundreds (K<<Nb).
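A minimal numpy sketch of the projecting step 701 (the rank-K truncation of expression (7)); at the matrix sizes stated above a sparse SVD routine would be used in practice, and all names here are hypothetical:

```python
import numpy as np

def lsa_truncate(G3, K):
    """Expression (7): G3 ~ UK SK VK^T, keeping the first K components."""
    U, s, Vt = np.linalg.svd(G3, full_matrices=False)
    return U[:, :K], np.diag(s[:K]), Vt[:K, :].T   # UK, SK, VK

rng = np.random.default_rng(0)
G3 = rng.random((6, 8))            # toy matrix: 6 documents, 8 words
UK, SK, VK = lsa_truncate(G3, 2)
G3K = UK @ SK @ VK.T               # rank-2 approximation of G3
```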


With further reference to FIG. 5, the step of applying a similarity function 702 includes the definition of a function that allows obtaining relations between the content text documents 200. This function is based on the computing of the angle between the vectors corresponding to the documents to be compared. In accordance with the described example, the similarity function can be applied to the above computed low rank factorisation G3K of the third document-word matrix G3. In greater detail, with reference to expression (7) it has to be noticed that the approximation g3i of document di of the third document-word matrix G3 is





diT≈g3iT=uiTSKVKT.


The row uiT of the matrix UK corresponds to the i-th document







UK = [ u1T
       ⋮
       uMT ]





The similarity between the i-th document and the j-th document can be expressed by the following similarity function, i.e. the cosine of the angle between the corresponding vectors in the low rank factorisation G3K:













cos(g3i,g3j) = g3iTg3j / (‖g3i‖·‖g3j‖) = uiTSKVKTVKSKTuj / (‖VKSKTui‖·‖VKSKTuj‖) = uiTSKSKTuj / (‖SKTui‖·‖SKTuj‖)  (8)







By using expression (8) a similarity matrix can be defined wherein the (i,j) element represents the similarity between the i-th document and the j-th document. Expression (8) corresponds to a sum of products of tf-idf weights associated with the documents to be compared.


With reference to the storing step 703, starting from the similarity matrix the pairs of different content text documents having similarity greater than a threshold value are determined. Further contents Cj, . . . , Cp (FIG. 5) to be recommended are determined in connection with a reference content Ci. These results are stored in the database 103 (FIG. 1) by using, as an example, indexes that link the reference content Ci to contents Cj and Cp. When the user of the recommendation system 100 retrieves data from the database 103 concerning content Ci, the recommendation system 100 also recommends contents Cj and Cp. In an alternative and preferred embodiment, the contents are ordered in a list according to decreasing values of their similarities with content Ci and only a pre-established number of contents is selected and recommended to the user (e.g. the first five contents of the list).


It has to be observed that, in accordance with a further embodiment, the LSA technique can be avoided and the similarity function computing the cosine of the angle between documents can be applied directly to the third document-word matrix G3.
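Steps 702-703 may be sketched as follows (editorial illustration with hypothetical names; with P = UK @ SK.T the pairwise cosines coincide with expression (8), and the same function applies directly to G3 when the LSA projection is skipped):

```python
import numpy as np

def cosine_rows(P):
    """Pairwise cosine of the rows of P; with P = UK @ SK.T this is (8)."""
    P = P / np.maximum(np.linalg.norm(P, axis=1, keepdims=True), 1e-12)
    return P @ P.T

def recommend(sim, i, n=5):
    """Step 703: indices of the n contents most similar to content i."""
    order = np.argsort(sim[i])[::-1]
    return [j for j in order if j != i][:n]

# with the LSA projection:  sim = cosine_rows(UK @ SK.T)
# without it (see above):   sim = cosine_rows(G3)
```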


Second Embodiment of the Recommendation Step 406


A second embodiment of the recommendation step 406 employs an LSA technique based on an incremental approach as described by H. Zha and H. Simon in “On updating problems in latent semantic indexing”, SIAM Journal of Scientific Computing, vol. 21, pp. 782-791, 1999, and by M. Brand in “Fast low-rank modifications of the thin singular value decomposition”, Linear Algebra and its Applications, vol. 415, pp. 20-30, 2006.


In accordance with this approach, the third document-word matrix G3 is not entirely processed by the LSA method as performed in the projecting step 701 of FIG. 5; instead, the decomposition technique is applied in an iterative manner by processing only L rows (i.e. L documents) at each step. According to this iterative method, at each step an i-th submatrix GSMi (having size L×Nb) of the third document-word matrix G3 is processed, producing an updated factorisation of the first L×i rows (G3(L×i)) of the third document-word matrix G3:





G3(L×i)≈UK(i)SK(i)VK(i)T


At each step the submatrices UK(i), SK(i) and VK(i) are used to update the matrices UK, SK and VK which express the factorisation of the third document-word matrix G3 in accordance with formula (7). When all the submatrices have been processed, the three matrices UK, SK and VK have been obtained and the similarity function step 702 (FIG. 5) and the storing step 703 can be carried out as described above.


It has to be noticed that the matrix SK has size K×K (e.g. 400×400) and so it occupies a reduced memory portion. As an example, if the matrix V has size Nb×K with Nb=10^5 and K=400, and each scalar value is represented by 4 bytes, then matrix V occupies 160 megabytes of memory. It has to be observed that matrix V occupies a constant memory portion during the iterative method. Matrix U has size M×K, where M is the cumulative number of documents considered from the first step of the iterative processing to the (M/L)-th one. Since the total number of the content text documents and the context documents can be very large, the processing of matrix U can require a large memory portion and the processing unit resources could be insufficient. The incremental LSA method described above allows overcoming this situation; indeed it allows limiting the computing of the U matrix to a submatrix U0 concerning the content documents about which a recommendation has to be provided, avoiding the computing of the portion UB of matrix UK concerning the context documents. According to this method the memory assigned to matrix UK contains only the rows corresponding to matrix U0 (having size N0×K) and the memory occupation can be reduced (e.g. N0=10^4).
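A simplified sketch of the incremental processing follows. This is an editorial illustration under stated assumptions: each update recomputes a small SVD of the (K+L)×Nb stack [SK VK^T; block], which is cheaper than refactorising everything but less refined than Brand's update; names are hypothetical.

```python
import numpy as np

def incremental_lsa(blocks, K):
    """Feed G3 in blocks of L rows, maintaining a rank-K factorisation."""
    UK, SK, VKt = None, None, None
    for block in blocks:                              # block: L x Nb
        M = block if SK is None else np.vstack([SK @ VKt, block])
        U2, s2, V2t = np.linalg.svd(M, full_matrices=False)
        U2, s2, V2t = U2[:, :K], s2[:K], V2t[:K, :]
        if UK is None:
            UK = U2
        else:
            # [old docs; new docs] = blockdiag(UK, I) @ M, hence the new
            # left factor is blockdiag(UK, I) @ U2
            K_prev = UK.shape[1]
            UK = np.vstack([UK @ U2[:K_prev, :], U2[K_prev:, :]])
        SK, VKt = np.diag(s2), V2t
    return UK, SK, VKt
```

Since each row of UK is updated independently of the others (row i simply becomes row i times U2[:K]), the rows corresponding to context documents can be discarded as soon as they are no longer needed, which is the memory saving described above.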


Second Embodiment of Second Processing Step 405: Alternative Generation of the Third Document-Word Matrix G3.


It has to be noticed that the decomposition using incremental factorisation as described in the above paragraph could be computationally complex and need a lot of processing time. There are situations in which an updating of the context documents, by adding new newspaper articles or replacing them with other information sources, is required and therefore the incremental factorisation should be repeated. In accordance with the embodiment hereinafter described, the LSA methods are applied separately to the first and second document-word matrices G1 and G2 and afterwards the corresponding resulting factorisations are merged.


In greater detail and with reference to FIG. 6, the second embodiment of the processing step 405 includes: a first LSA step 801 in which the first document-word matrix G1 is processed, a second LSA step 810 in which the second document-word matrix G2 is processed, and a merging step 820 wherein the matrices resulting from the above defined steps 801 and 810 are merged.



FIG. 7 illustrates an example of the first LSA step 801, which includes a tf factor modification step 802, an idf factor modification step 803 and an intermediate matrices computing step 804. In the tf factor modification step 802 a matrix D representing the modification of the first document-word matrix G1, due to the modification of the first vocabulary V1 into the common vocabulary V3, is determined. When the vocabulary changes, the total number of words can change and therefore the tf factor has to be amended. The value Ni represents the number of words of the i-th document using the first vocabulary V1; the value Ni′ is the number of words included in the same document using the common vocabulary V3; the original tf factor is indicated as:







λ(i,j) = nij/Ni






where nij is the number of times that the j-th word appears in the i-th document. The modified tf factor is:











λ′(i,j) = nij/Ni′ = (nij/Ni)·(Ni/Ni′) = λ(i,j)·(Ni/Ni′)











It is observed that the above expression shows that the modified tf factor is expressed by the original tf factor multiplied by the ratio Ni/Ni′. Therefore, the first document-word matrix G1 as modified by the change of the tf factor can be expressed by the following factorisation:





USVT → DUSVT  (9)


where the elements of the diagonal matrix D are evaluated in accordance with the following formula:







Dii = Ni/Ni′







The elements of matrix D are therefore computed by evaluating the ratio Ni/Ni′ for each row.


With reference to the idf factor modification step 803, it has to be observed that, because of the union of the content text documents 200 with the context documents 300, the total number of documents increases and the idf factor changes consequently. In the idf factor modification step 803 a matrix C representing the idf factor modification is computed. The number of documents included in the content text documents 200 is M, while the total number of documents in the common set of documents (i.e. the union of the content text documents 200 and the context documents 300) to be used for the final factorisation is M′. The number of documents in which the j-th word appears is dj and the number of documents in which the same word appears in the common set of documents to be used in the final factorisation is dj′.


The new idf factor associated with the common set of documents is:








γ′(j) = log(M′/dj′) = γ(j)·(γ′(j)/γ(j)).







According to the above formula, the new idf factor is proportional to the original idf factor γ(j) by the ratio γ′(j)/γ(j).




Therefore, the first document-word matrix G1 as modified by the change of the idf factor can be expressed by the following factorisation:





USVT → USVTC  (10)


where the elements of the diagonal matrix C are evaluated in accordance with the following formula:







Cjj = γ′(j)/γ(j)







The elements of matrix C are therefore computed by evaluating the ratio γ′(j)/γ(j).




In the intermediate matrices computation step 804 the following factorisation of the first modified document-word matrix G1′ is evaluated, to take into account the modifications of both the tf and idf factors:





G1≈USVT → G1′≈DUSVTC  (11)


It is noticed that the product DUSVTC is not a valid SVD decomposition, since the matrices DU and CTV do not have orthonormal columns. The Applicant notices that it is possible to express the product including matrix D, namely DUS, by using the per se known QR factorisation. A useful explanation of the QR factorisation can be found in the book L. N. Trefethen and D. Bau, Numerical Linear Algebra, The Society for Industrial and Applied Mathematics, 1997. Therefore, the following factorisation is computed:





DUS=Q1R1  (12)


Moreover, the product including matrix C, namely CTV, can be expressed by another QR factorisation. Therefore, the following further factorisation is computed:





CTV=Q2R2  (13)


wherein matrices Q1 and Q2 have orthonormal columns, and matrices R1 and R2 are upper-triangular matrices. From expressions (12) and (13) it is obtained that:





DUSVTC=Q1R1R2TQ2T


The central product R1R2T is factorised by the following SVD decomposition:





R1R2T=ŨS̃ṼT  (14)


It has to be observed that the factorisation expressed by formula (14) can be computed in a fast manner thanks to the fact that the product R1R2T is a square matrix having a reduced size, equal to the size of the truncation of the first document-word matrix G1.


From expressions (11), (12), (13) and (14), the factorisation of the first document-word matrix G1 as modified by the changes of the tf and idf factors is:





DUSVTC=Q1ŨS̃ṼTQ2T


wherein the products Q1Ũ and Q2Ṽ are matrices having orthonormal columns, and matrix S̃ is a non-negative diagonal matrix having ordered values. Then, the factorisation of formula (11) can be expressed by:





G1′≈DUSVTC=U′S′V′T  (15)


where the matrix factors U′, S′ and V′ are computed according to the following expressions:





U′=Q1Ũ,

S′=S̃,

V′=Q2Ṽ.  (16)


The intermediate matrices computation step 804 allows expressing the first modified matrix G1′, that is to say the first document-word matrix G1 modified because of the union of the content text documents 200 and the context documents 300, by a factorisation employing the intermediate matrices U′, S′ and V′.
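The three steps 802-804 may be condensed as follows (an editorial sketch, not the application's disclosure; d and c are assumed to hold the diagonals Ni/Ni′ and γ′(j)/γ(j) of D and C):

```python
import numpy as np

def refactorise(U, S, Vt, d, c):
    """D U S V^T C = U' S' V'^T via (12)-(16); d, c are the diagonals of D, C."""
    Q1, R1 = np.linalg.qr(np.diag(d) @ U @ S)    # (12): DUS   = Q1 R1
    Q2, R2 = np.linalg.qr(np.diag(c) @ Vt.T)     # (13): C^T V = Q2 R2
    Ut, st, Vtt = np.linalg.svd(R1 @ R2.T)       # (14): small K x K SVD
    return Q1 @ Ut, np.diag(st), Q2 @ Vtt.T      # (16): U', S', V'
```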


In the second LSA step 810 (FIG. 6) a second modified matrix G2′, representing the second document-word matrix G2 as modified because of the changes in the tf and idf factors, can be expressed by the following factorisation:





U″S″V″T  (17)


wherein further intermediate matrices U″, S″ and V″ are computed in a way analogous to the one described with reference to FIG. 7.


An example of the merging step 820 is now described with reference to a flow chart shown in FIG. 8. The merging step 820 is based on the intermediate matrices of expressions (16) and (17) and includes a step of creating a union matrix 901 by lining up the two factorisations:










[U′S′V′T ; U″S″V″T] = [U′ 0 ; 0 U″][S′V′T ; S″V″T]  (18)

(here the semicolon separates the block rows of a stacked matrix)







Furthermore, the factor on the right side of expression (18) is factorised using the QR method (first factorisation step 902):










[S′V′T ; S″V″T] = RTQT  (19)







The matrix RT (having, advantageously, a small size) is decomposed by using the SVD factorisation in a second factorisation step 903:





RT=ÛŜV̂T  (20)


and the following result is obtained:










[U′S′V′T ; U″S″V″T] = [U′ 0 ; 0 U″]ÛŜV̂TQT  (21)







The following three factors U′″, S′″ and V′″ are then computed in the computing step 904:











U′″ = [U′ 0 ; 0 U″]Û,   S′″ = Ŝ,   V′″ = QV̂   (22)







As derivable from expressions (22), (21) and (18), the three factors U′″, S′″ and V′″ allow expressing the third document-word matrix G3 in a truncated factorised manner, in accordance with an innovative LSA technique. It has to be noticed that the computation of matrix V′″ can be avoided, considering that it is not employed in the similarity evaluation performed to recommend contents. Moreover, only the rows of matrix U′″ corresponding to the documents of the set of content text documents 200 are computed, considering that the recommendation concerns only such documents.
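A sketch of the merging step 820 follows (editorial illustration; the row stacking of (18) is written with vstack, and the merged rank is the sum of the two truncation ranks, which can afterwards be truncated back to K):

```python
import numpy as np

def merge_factorisations(U1, S1, V1t, U2, S2, V2t):
    """Expressions (18)-(22): merge U'S'V'^T and U''S''V''^T."""
    Q, R = np.linalg.qr(np.vstack([S1 @ V1t, S2 @ V2t]).T)    # (19)
    Uh, sh, Vht = np.linalg.svd(R.T)                          # (20): R^T
    K1 = U1.shape[1]
    block = np.zeros((U1.shape[0] + U2.shape[0], K1 + U2.shape[1]))
    block[:U1.shape[0], :K1] = U1            # blockdiag(U', U'') of (18)
    block[U1.shape[0]:, K1:] = U2
    return block @ Uh, np.diag(sh), Q @ Vht.T                 # (22)
```

Only the top rows of block @ Uh (those of the content documents) need to be formed when the result is used solely for recommendation, as noted above.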


Moreover, it has to be underlined that the second embodiment of the second processing step 405 described with reference to FIGS. 6, 7 and 8 is particularly advantageous not only for merging factorisations of matrices concerning the contents and the context documents, but also for merging matrices associated with different context documents or different contents. The procedure of FIGS. 6, 7 and 8 is computationally convenient in comparison to the LSA technique applied directly to the document-word matrix resulting from the union of two matrices when an updating of the set of documents has to be performed. In fact, by applying the above described method a re-factorisation of the whole third document-word matrix G3 is avoided and only the smaller matrix corresponding to the updated set of documents is re-factorised.


In accordance with the second embodiment of the second processing step 405 described with reference to FIGS. 6, 7 and 8, the recommendation step 406 (FIG. 2) does not include the projecting step 701 (FIG. 5); the similarity function step 702 (FIG. 5) is applied directly to the documents belonging to the U′″ matrix expressed by formula (22), and subsequently the storing step 703 is performed.


Comparison Example

The Applicant has considered a set of 1432 content text documents describing films available on a video on demand system and a set of 20000 context documents consisting of newspaper articles. The reference content used for the recommendation is the film The Dead Pool, and the associated text is: “Inspector Callaghan returns with a new mission: to stop a dangerous serial killer who eliminates his victims according to a list linked to a TV show. But this time he's among the targets”. In accordance with this example, only a pre-established number of contents (i.e. the first four contents) is selected and recommended to the user in a profile concerning the above indicated reference content.


Table 2, Table 3 and Table 4 refer to a standard LSA technique applied directly to the document-word matrix G1 resulting from the first generation step 402 of FIG. 2. Table 2 shows the recommended films.









TABLE 2
(Standard LSA)

Reference content       The Dead Pool
Recommended contents    Garfield 2
                        Teenage Mutant Ninja Turtles III
                        Dressed to Kill
                        Rocky Balboa







It has to be observed that only the film Dressed to Kill shows an analogy with the reference content. The following Table 3 shows the words that are shared by the reference content and the recommended contents.









TABLE 3
(Standard LSA)

Garfield 2: new; return; time, as in “this time”
Teenage Mutant Ninja Turtles III: this; new; return; time, as in “this time”
Dressed to Kill: killer; victim
Rocky Balboa: death; return










Table 3 shows that the shared words are not relevant to a thematic similarity for Garfield 2 and Teenage Mutant Ninja Turtles III; the common words selected for Rocky Balboa are more relevant than in the previous case but are still not sufficient for a thematic relation; only in the case of Dressed to Kill is there an actual thematic link.


Table 4 shows the most relevant words for the similarity computing. By analysing Table 4 it is noticed that the words of the lists do not exactly describe the theme of the corresponding films.









TABLE 4
(Standard LSA)

Garfield 2: new; time, as in “this time”; life; return; death; decide; year; American; leave; find again
Teenage Mutant Ninja Turtles III: new; time, as in “this time”; time; return; decide; death; victim; woman; police; leave
Dressed to Kill: new; do; time, as in “this time”; death; decide; woman; victim; life; come; part
Rocky Balboa: death; new; year; time, as in “this time”; decide; life; return; leave; police; time









Table 5, Table 6 and Table 7 refer to the recommendation method described with reference to FIG. 2, in accordance with the embodiments of FIGS. 6, 7 and 8.









TABLE 5
(Embodiment of the invention)

Reference content       The Dead Pool
Recommended contents    The Spreading Ground
                        Dressed to Kill
                        In the Line of Fire
                        Basic Instinct 2: Risk Addiction











Table 5 allows appreciating that each of the recommended films shows a thematic similarity with the reference content.


This similarity is also appreciable from Table 6 wherein the words that are shared by the reference content and the recommended contents are shown.









TABLE 6
(Embodiment of the invention)

The Spreading Ground: eliminate; killer; new; serial
Dressed to Kill: killer; victim
In the Line of Fire: stop; killer
Basic Instinct 2: Risk Addiction: this; time, as in “this time”










It is clear from the following Table 7, showing the most relevant words for the similarity computing, that the listed words refer to the same theme, that is to say crime and investigation.









TABLE 7
(Embodiment of the invention)

The Spreading Ground: new; murderer; come; police; kill; agent; death; decide; time, as in “this time”
Dressed to Kill: new; murderer; death; kill; time, as in “this time”; agent; decide; woman; police
In the Line of Fire: agent; new; secret; police; murderer; time, as in “this time”; death; kill; assign
Basic Instinct 2: Risk Addiction: agent; time, as in “this time”; new; police; murderer; secret; kill; investigate; mysterious










Even if the above description of the comparison example has been given in English, it has to be considered that the corresponding processing steps were performed using texts and words in the Italian language.


The Applicant has observed further situations in which the teaching of the invention gives better results than those obtainable with a standard LSA technique. The advantages of the teaching of the invention have also been noticed for a small number of contents and/or short content texts.

Claims
  • 1-15. (canceled)
  • 16. A method of content recommendation, comprising: generating a first digital mathematical representation of contents to associate the contents with a first plurality of words describing the contents; generating a second digital mathematical representation of text documents different from said contents to associate said text documents with a second plurality of words; processing the first and second pluralities of words to determine a common plurality of words; processing the first and second digital mathematical representations to generate a common digital mathematical representation of the contents and the text documents based on the common plurality of words; and providing content recommendation by processing the common digital mathematical representation.
  • 17. The method of claim 16, wherein: generating the first digital mathematical representation comprises: defining further text documents describing the contents by said first plurality of words, processing the first plurality of words to generate a first document-word matrix representing said contents; generating the second digital mathematical representation comprises: processing the second plurality of words to generate a second document-word matrix representing said text documents; and processing the first and second digital mathematical representations comprises: processing the first and second document-word matrices and the common plurality of words to generate a third document-word matrix representing the text documents and the further text documents.
  • 18. The method of claim 16, wherein providing content recommendation comprises: processing the common digital mathematical representation to apply a similarity function to associate a first content with a second content; and storing in a database an index linking said first content to said second content.
  • 19. The method of claim 17, wherein processing the first and second document-word matrices comprises: computing weights capable of being associated with each word of the common plurality of words by computing term frequency factors and inverse document frequency factors.
  • 20. The method of claim 17, wherein providing content recommendation by processing the common digital mathematical representation comprises: projecting the third document-word matrix onto a lower dimensional space using latent semantic analysis to determine a transformed document-word matrix; and applying a similarity function to the transformed document-word matrix to determine significant semantic relations between contents.
  • 21. The method of claim 20, wherein projecting the third document-word matrix onto a lower dimensional space comprises: decomposing the third document-word matrix into a product of at least three matrices; and applying dimensional reduction to the product of the at least three matrices and defining a transformed document-word matrix.
  • 22. The method of claim 21, wherein decomposing and applying dimensional reduction comprises using a singular value decomposition technique.
  • 23. The method of claim 21, wherein decomposing and applying dimensional reduction comprises: using an incremental singular value decomposition technique by iteratively applying latent semantic analysis to submatrices of the third document-word matrix.
  • 24. The method of claim 17, wherein processing the first and second document-word matrices and the common plurality of words to generate a third document-word matrix comprises: decomposing the first document-word matrix into a product of three first matrices taking into account the common plurality of words; decomposing the second document-word matrix into a product of three second matrices taking into account the common plurality of words; defining a common matrix based on the first matrices and the second matrices; decomposing the common matrix into a product of three common matrices; and defining the third document-word matrix from said product of the three common matrices.
  • 25. The method of claim 24, wherein each product of three first matrices and each product of three second matrices incorporates modifications of term frequency factors and inverse document frequency factors due to updating of said text documents and/or said further text documents.
  • 26. The method of claim 25, wherein decomposing the first document-word matrix, decomposing the second document-word matrix and decomposing the common matrix comprises using a respective QR factorisation technique.
  • 27. A content recommendation system, comprising: a first generation module capable of being configured to generate a first digital mathematical representation of contents associated with a first plurality of words describing the contents; a second generation module capable of being configured to generate a second digital mathematical representation of text documents different from said contents and associated with a second plurality of words; a first processing module capable of being structured to process the first and second pluralities of words to determine a common plurality of words; a second processing module capable of being structured to process the first and second digital mathematical representations to generate a common digital mathematical representation defining a common representation of the contents and the text documents based on the common plurality of words; and a recommendation module capable of being configured to provide content recommendation by processing the common digital mathematical representation.
  • 28. The system of claim 27, wherein: the first digital mathematical representation comprises a first document-word matrix representing said contents; the second digital mathematical representation comprises a second document-word matrix representing said text documents; and the common digital mathematical representation comprises a third document-word matrix representing a union of the text documents and the further text documents.
  • 29. The system of claim 28, wherein said recommendation module is capable of being configured to: process the common digital mathematical representation by applying a similarity function and associating a first content with a second content; and store in a database an index linking said first content to said second content.
  • 30. The system of claim 28, wherein the second processing module comprises: a first factorization module capable of being configured to decompose the first document-word matrix into a product of three first matrices taking into account the common plurality of words; a second factorization module capable of being configured to decompose the second document-word matrix into a product of three second matrices taking into account the common plurality of words; and a third factorization module capable of being configured to define a common matrix based on the first matrices and the second matrices, decompose the common matrix into the product of three common matrices and determine the third document-word matrix from said product of the three common matrices.
PCT Information
Filing Document: PCT/EP2008/068355
Filing Date: 12/30/2008
Country: WO
Kind: 00
371(c) Date: 6/28/2011