User to user recommender

Information

  • Patent Grant
  • 7962505
  • Patent Number
    7,962,505
  • Date Filed
    Tuesday, December 19, 2006
  • Date Issued
    Tuesday, June 14, 2011
Abstract
Disclosed are embodiments of systems and methods for recommending relevant users to other users in a user community. In one implementation of such a method, two different sets of data are considered: a) music (or other items) that users have been listening to (or otherwise engaging), and b) music (or other items) recommendations that users have been given. In some embodiments, pre-computation methods allow the system to efficiently compare item sets and recommended item sets among the users in the community. Such comparisons may also comprise metrics that the system can use to figure out which users should be recommended for a given target user.
Description
BRIEF DESCRIPTION OF THE DRAWINGS

Understanding that drawings depict only certain preferred embodiments of the invention and are therefore not to be considered limiting of its scope, the preferred embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 is a diagram showing the basic components and sources for a user profile according to one embodiment.



FIG. 2 depicts a graph showing the position of a target user “X” in ADG Space according to one embodiment.



FIG. 3 is a diagram showing a basic architecture schema for a user recommender according to one embodiment.



FIG. 4 depicts a qualitative scale indicative of the relevance of particular items to a particular user within a user community.



FIG. 5 is a diagram showing a Servlet View of one embodiment of a user recommender system.



FIG. 6 is a diagram showing a Recommender View of one embodiment of a user recommender system.



FIG. 7 is a diagram showing a Manager View of one embodiment of a user recommender system.



FIG. 8 is a core Unified Modeling Language (UML) diagram of one embodiment of a user recommender system.



FIG. 9 is a diagram depicting an example of the information that can be extracted from a GraphPlotter tool used with one embodiment of a user recommender system.



FIG. 10 is a graph representing relationships between types of listeners according to a “Frequency/Knowledge” model.



FIG. 11A is a representation in matrix form of a metric describing the similarity values between collections of media items.



FIG. 11B provides a weighted graph representation for the associations within a collection of media items. Each edge between two media items is annotated with a weight representing the value of the metric for the similarity between the media items.



FIG. 12 is a block diagram of one method for selecting a set of media items corresponding to an initial set of media items in accordance with an embodiment of the invention.



FIG. 13 is a simplified, conceptual diagram of a knowledge base or database comprising a plurality of mediasets.







DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

In the following description, certain specific details of programming, software modules, user selections, network transactions, database queries, database structures, etc., are provided for a thorough understanding of the specific preferred embodiments of the invention. However, those skilled in the art will recognize that embodiments can be practiced without one or more of the specific details, or with other methods, components, materials, etc.


In some cases, well-known structures, materials, or operations are not shown or described in detail in order to avoid obscuring aspects of the preferred embodiments. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in a variety of alternative embodiments. In some embodiments, the methodologies and systems described herein may be carried out using one or more digital processors, such as the types of microprocessors that are commonly found in PC's, laptops, PDA's and all manner of other desktop or portable electronic appliances.


Disclosed are embodiments of systems and methods for recommending users to other users in a user community. As used herein, a “user recommender” is a module integrated in a community of users, the main function of which is to recommend users to other users in that community. There may be a set of items in the community for the users of the community to interact with.


There may also be an item recommender to recommend other items to the users. Examples of recommender systems that may be used in connection with the embodiments set forth herein are described in U.S. patent application Ser. No. 11/346,818 titled “Recommender System for Identifying a New Set of Media Items Responsive to an Input Set of Media Items and Knowledge Base Metrics,” and U.S. patent application Ser. No. 11/048,950 titled “Dynamic Identification of a New Set of Media Items Responsive to an Input Mediaset,” both of which are hereby incorporated by reference. A description of the former item recommender system, application Ser. No. 11/346,818, is set forth below with reference to drawing FIGS. 11A, 11B, 12 and 13.


As used herein, the term “media data item” is intended to encompass any media item or representation of a media item. A “media item” is intended to encompass any type of media file which can be represented in a digital media format, such as a song, movie, picture, e-book, newspaper, segment of a TV/radio program, game, etc. Thus, it is intended that the term “media data item” encompass, for example, playable media item files (e.g., an MP3 file), as well as metadata that identifies a playable media file (e.g., metadata that identifies an MP3 file). It should therefore be apparent that in any embodiment providing a process, step, or system using “media items,” that process, step, or system may instead use a representation of a media item (such as metadata), and vice versa.


The user recommender may be capable of selecting relevant users for a given target user. To do so, users should be comparable entities. The component that defines a user in a community may be referred to as the user profile. Thus, a user profile may be defined by defining two sets, such that comparing two users will be a matter of intersecting their user profile sets. For example, with reference to FIG. 1, the first set may be the “items set,” referenced at 110 in FIG. 1, which may contain the most relevant items 115 for a particular user 118. The second set may be the “recommendations set,” referenced at 120 in FIG. 1, which may contain the most relevant recommended items for user 118. The items set 110 can be deduced by the item usage and/or interaction of a certain user with certain items, whereas the recommendations set can be deduced by using an item recommender 130. In some cases, the items set 110 can be used as the input for the recommender 130, thereby obtaining a recommendations set as the output.
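By way of illustration only, the two-set profile just described might be modeled as follows. This is a minimal sketch; the class, field, and method names are assumptions for exposition and are not taken from the patent.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical two-set user profile: an "items set" the user engages with and a
// "recommendations set" produced by an item recommender (all names are illustrative).
public class UserProfile {
    final String userId;
    final Set<String> itemsSet;            // e.g. artists or tracks the user listens to
    final Set<String> recommendationsSet;  // items an item recommender returned for this user

    UserProfile(String userId, Set<String> itemsSet, Set<String> recommendationsSet) {
        this.userId = userId;
        this.itemsSet = itemsSet;
        this.recommendationsSet = recommendationsSet;
    }

    // Comparing two profiles reduces to intersecting their sets, e.g. Ma ∩ Mb.
    static Set<String> intersect(Set<String> a, Set<String> b) {
        Set<String> result = new HashSet<>(a);
        result.retainAll(b);
        return result;
    }
}
```

Intersecting the items sets of two such profiles then corresponds to the Ma∩Mb style of comparison discussed later in this description.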


AN EXAMPLE OF AN ITEM RECOMMENDER

A system identifies a new set of recommended media items in response to an input set of media items. The system employs a knowledge base consisting of a collection of mediasets. Mediasets are sets of media items, which are naturally grouped by users. They reflect the users' subjective judgments and preferences. The recommendation is computed using metrics among the media items of a knowledge base of the system. This knowledge base comprises collections of mediasets from a community of users. A mediaset is not a collection of media items or content. Rather, it is a list of such items, and may include various metadata.


The mediasets of the knowledge base define metrics among items. Such metrics indicate the extent of correlation among media items in the mediasets of the knowledge base. Preferably, the methods of the present invention are implemented in computer software.


Various different metrics between and among media items can be generated from the knowledge base of mediasets. Such metrics can include, but are not limited to, the following examples (a counting sketch for two of these metrics appears after the list):

    • a) Pre-concurrency (for ordered mediasets) between two items is computed as the number of times a given item precedes the other item in the mediasets of the knowledge base.
    • b) Post-concurrency (for ordered mediasets) between two items is computed as the number of times an item follows another item in the mediasets of the knowledge base.
    • c) Co-concurrency between two items is computed as the number of times the items appear together in a mediaset.
    • d) Metadata similarities may be computed as well by considering keywords associated with the media items such as artist, actor, date, etc.
    • e) Combinations of the previous metrics can be useful.
    • f) Combinations of the previous metrics applying transitivity.
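As a concrete illustration of how two of the listed metrics might be accumulated, the sketch below counts co-concurrency and pre-concurrency over a small knowledge base of ordered mediasets. The data structures and item names are illustrative assumptions, not part of the patent.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical accumulation of co-concurrency and pre-concurrency counts over a
// knowledge base of ordered mediasets (each mediaset is a list of item ids).
public class ConcurrencyCounter {

    static String key(String i, String j) { return i + "|" + j; }

    // co(i,j):  number of mediasets in which i and j appear together (symmetric).
    // pre(i,j): number of times i precedes j within a mediaset (ordered).
    public static void main(String[] args) {
        List<List<String>> knowledgeBase = List.of(
            List.of("songA", "songB", "songC"),
            List.of("songB", "songA"),
            List.of("songA", "songC"));

        Map<String, Integer> co = new HashMap<>();
        Map<String, Integer> pre = new HashMap<>();

        for (List<String> mediaset : knowledgeBase) {
            for (int x = 0; x < mediaset.size(); x++) {
                for (int y = x + 1; y < mediaset.size(); y++) {
                    String i = mediaset.get(x), j = mediaset.get(y);
                    co.merge(key(i, j), 1, Integer::sum);
                    co.merge(key(j, i), 1, Integer::sum);  // co-concurrency is symmetric
                    pre.merge(key(i, j), 1, Integer::sum); // i appears before j
                }
            }
        }
        System.out.println("co(songA,songB)  = " + co.get(key("songA", "songB")));  // 2
        System.out.println("pre(songA,songB) = " + pre.get(key("songA", "songB"))); // 1
    }
}
```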


Such metrics can be represented in an explicit form that directly associates media items with other media items. For each media item of the input set, the system retrieves n media items with highest metrics. These media items are called candidates. Then, the recommended set of media items is a subset of the candidates that maximize an optimization criterion. Such criterion can be simply defined using the metrics of the knowledge base of the system. Furthermore, such criterion can also include filters including but not limited to:

    • a) Filters that the user expresses to focus the recommendation only on a determined type of items.
    • b) Filters that the user expresses to focus the recommendations on items that meet certain keyword-based criteria, such as a specific artist/s, year/s, genre/s, etc.
    • c) Filters that personalize the recommendations to the user. This kind of filtering includes recommending only items that the user knows about, or only items that the user does not know about, etc.


Additional aspects and advantages will be apparent from the following detailed description of preferred embodiments, which proceeds with reference to the accompanying drawings.


The item recommender preferably comprises or has access to a knowledge base which is a collection of mediasets. A mediaset is a list of media items that a user has grouped together. A media item can be almost any kind of content; audio, video, multi-media, etc., for example a song, a book, a newspaper or magazine article, a movie, a piece of a radio program, etc. Media items might also be artists or albums. If a mediaset is composed of a single type of media items it is called a homogeneous mediaset, otherwise it is called a heterogeneous mediaset. A mediaset can be ordered or unordered. An ordered mediaset implies a certain order with respect to the sequence in which the items are used (depending on the nature of the item, it will be played, viewed, read, etc.) by the user. Note again that a mediaset, in a preferred embodiment, is a list of media items, i.e. meta data, rather than the actual content of the media items. In other embodiments, the content itself may be included. Preferably, a knowledge base is stored in a machine-readable digital storage system. It can employ well-known database technologies for establishing, maintaining and querying the database.


In general, mediasets are based on the assumption that users group media items together following some logic or reasoning, which may be purely subjective, or not. For example, in the music domain, a user may be selecting a set of songs for driving, hence that is a homogeneous mediaset of songs. In this invention, we also consider other kinds of media items such as books, movies, newspapers, and so on. For example, if we consider books, a user may have a list of books for the summer, a list of books for bus riding, and another list of books for the weekends. A user may be interested in expressing a heterogeneous mediaset with a mix of books and music, expressing (impliedly) that the listed music goes well with certain books.


A set of media items is not considered the same as a mediaset. The difference is mainly about the intention of the user in grouping the items together. In the case of a mediaset the user is expressing that the items in the mediaset go together well, in some sense, according to her personal preferences. A common example of a music mediaset is a playlist. On the other hand, a set of media items does not necessarily express the preferences of a user. We use the term set of media items to refer to the input of the system of the invention as well as to the output of the system.


A metric M between a pair of media items i and j for a given knowledge base k expresses some degree of relation between i and j with respect to k. A metric may be expressed as a “distance,” where smaller distance values (proximity) represent stronger association values, or as a similarity, where larger similarity values represent stronger association values. These are functionally equivalent, but the mathematics are complementary. The most immediate metric is the co-concurrency (i, j, k) that indicates how many times item i and item j appear together in any of the mediasets of k. The metric pre-concurrency (i, j, k) indicates how many times item i and item j appear together, with i before j, in any of the mediasets of k. The metric post-concurrency (i, j, k) indicates how many times item i and item j appear together, with i after j, in any of the mediasets of k. The previously defined metrics can also be applied to consider the immediate sequence of i and j. So, the system might be considering co/pre/post-concurrency metrics but only if items i and j are consecutive in the mediasets (i.e., the mediasets are ordered). Other metrics can be considered and also new ones can be defined by combining the previous ones.


A metric may be computed based on any of the above metrics and applying transitivity. For instance, consider co-concurrency between item i and j, co(i,j), and between j and k, co(j,k), and consider that co(i,k)=0. We could create another metric to include transitivity, for example d(i,k)=1/co(i,j)+1/co(j,k). These types of transitivity metrics may be efficiently computed using standard branch and bound search algorithms. This metric reveals an association between items i and k notwithstanding that i and k do not appear within any one mediaset in the knowledge base.


A matrix representation of metric M, for a given knowledge base K can be defined as a bidimensional matrix where the element M(i, j) is the value of the metric between the media item i and media item j.


A graph representation for a given knowledge base k, is a graph where nodes represent media items, and edges are between pairs of media items. Pairs of media items i, j are linked by labeled directed edges, where the label indicates the value of the similarity or distance metric M(i,j) for the edge with head media item i and tail media item j.


One embodiment of the recommender is illustrated by the flow diagram shown in FIG. 12. This method accepts an input set 301 of media items. Usually, this is a partial mediaset, i.e. a set of media items (at least one item) that a user grouped together as a starting point with the goal of building a mediaset. A first collection of candidate media items most similar to the input media items is generated by process 302 as follows.


As a preliminary matter, a pre-processing step may be carried out to analyze the contents of an existing knowledge base. This can be done in advance of receiving any input items. As noted above, the knowledge base comprises an existing collection of mediasets. This is illustrated in FIG. 13, which shows a simplified conceptual illustration of a knowledge base 400. In FIG. 13, the knowledge base 400 includes a plurality of mediasets, delineated by rectangles (or ovals) and numbered 1 through 7. Each mediaset comprises at least two media items. For example, mediaset 2 has three items, while mediaset 7 has five items. The presence of media items within a given mediaset creates an association among them.


Pre-processing analysis of a knowledge base can be conducted for any selected metric. In general, the metrics reflect and indeed quantify the association between pairs of media items in a given knowledge base. The process is described by way of example using the co-concurrency metric mentioned earlier. For each item in a mediaset, the process identifies every other item in the same mediaset, thereby defining all of the pairs of items in that mediaset. For example, in FIG. 13, one pair in set 1 is the pair M(1,1)+M(1,3). Three pairs are defined that include M(1,1). This process is repeated for every mediaset in the knowledge base, thus every pair of items that appears in any mediaset throughout the knowledge base is defined.


Next, for each pair of media items, a co-concurrency metric is incremented for each additional occurrence of the same pair of items in the same knowledge base. For example, if a pair of media items, say the song “Uptown Girl” by Billy Joel and “Hallelujah” by Jeff Buckley, appear together in 42 different mediasets in the knowledge base (not necessarily adjacent to one another), then the co-concurrency metric might be 42 (or some other figure depending on the scaling selected, normalization, etc.). In some embodiments, this figure or co-concurrency “weight” may be normalized to a number between zero and one.


Referring now to FIG. 11A, matrix 100 illustrates a useful method for storing the metric values or weights for any particular metric. Here, individual media items in the knowledge base, say m1, m2, m3 . . . mk are assigned corresponding rows and columns in the matrix. In the matrix, the selected metric weight for every pair of items is entered at row, column location x,y corresponding to the two media items defining the pair. In FIG. 11A, the values are normalized.


Now we assume an input set of media items is received. Referring again to process step 302, a collection of “candidate media items” most similar to the input media items is generated, based on a metric matrix like matrix 100 of FIG. 11A. For instance, for each media item, say (item m2) in the input set 301, process 302 could add to a candidate collection of media items every media item (m1, m3 . . . mk in FIG. 11A) that has a non-zero similarity value, or exceeds a predetermined threshold value, in the corresponding row 102 of metric matrix 100 for the media item m2, labeling each added media item with the corresponding metric value (0.7, 0.4 and 0.1, respectively). See the edges in FIG. 11B. For each media item in the input set of size m, process 302 selects n media items as candidates; thus the aggregation of all the candidates produces a set of at most m*n media items.
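Under the assumption that the metric matrix is stored as per-item rows of similarity values, the candidate-generation step just described might be sketched as follows; all type and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Hypothetical candidate generation: for each input item, take the n most similar
// items from its row of the metric matrix, keeping the metric value as a label.
public class CandidateGenerator {

    record Candidate(String item, double similarity) {}

    // metricMatrix.get(i) maps every other item j to M(i, j).
    static List<Candidate> candidates(List<String> inputSet,
                                      Map<String, Map<String, Double>> metricMatrix,
                                      int n, double threshold) {
        List<Candidate> result = new ArrayList<>();
        for (String inputItem : inputSet) {
            metricMatrix.getOrDefault(inputItem, Map.of()).entrySet().stream()
                .filter(e -> e.getValue() > threshold)               // non-zero / above threshold
                .sorted(Map.Entry.<String, Double>comparingByValue(Comparator.reverseOrder()))
                .limit(n)                                            // n candidates per input item
                .forEach(e -> result.add(new Candidate(e.getKey(), e.getValue())));
        }
        return result;  // at most inputSet.size() * n candidates (m * n)
    }
}
```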


Process 303 receives the candidate set from process 302 which contains at the most m*n media items. This component selects p elements from the m*n items of the candidate set. This selection can be done according to various criteria. For example, the system may consider that the candidates should be selected according to the media item distribution that generated the candidate set. This distribution policy may be used to avoid having many candidates coming from very few media items. Also, the system may consider the popularity of the media items in the candidate set. The popularity of a media item with respect to a knowledge base indicates the frequency of such media item in the mediasets of the knowledge base.


Finally, from the second collection of p media items, a third and final output set 305 of some specified number of media items is selected that satisfy any additional desired external constraints by a filter process 304. For instance, this step could ensure that the final set of media items is balanced with respect to the metrics among the media sets of the final set. For example, the system may maximize the sum of the metrics among each pair of media items in the resulting set. Sometimes, the system may use optimization techniques when an exact computation would otherwise be too expensive. Filtering criteria such as personalization or other preferences expressed by the user may also be considered in this step. In some applications, because of some possible computational constraints, these filtering steps may be done in the process 303 instead of 304. Filtering in other embodiments might include genre, decade or year of creation, vendor, etc. Also, filtering can be used to demote, rather than remove, a candidate output item.


In another embodiment or aspect of the invention, explicit associations including similarity values between a subset of the full set of media items known to the system, as shown in graph form in FIG. 11B, may be used. To illustrate, if the similarity value between a first media item 202, generally denoted below by the index i, and a second media item, say 214, generally denoted below by the index j, is not explicitly specified, an implicit similarity value can instead be derived by following a directed path such as that represented by edges 210 and 212 from the first media item to an intermediate item, and finally to the second media item of interest, in this example item mp. Any number of intermediate items can be traversed in this manner, which we call a transitive technique. The list of similarity values M(i, i+1), M(i+1, i+2), . . . , M(i+k, j) between pairs of media items along this path through the graph are combined in a manner such that the resulting value satisfies a definition of similarity between media item i and media item j appropriate for the application. For example, the similarity M(i,j) might be computed as:

M(i,j)=min{M(i,i+1),M(i+1,i+2), . . . ,M(i+k,j)}
or
M(i,j)=M(i,i+1)*M(i+1,i+2)* . . . *M(i+k,j)


Other methods for computing a similarity value M(i,j) for the path between a first media item i and a second, non-adjacent media item j where the edges are labeled with the sequence of similarity values M(i, i+1), M(i+1, i+2), . . . , M(i+k, j) can be used. From the user standpoint, this corresponds to determining an association metric for a pair of items that do not appear within the same mediaset.
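The two path-combination rules shown above (the minimum rule and the product rule) can be written directly; the sketch below is illustrative only.

```java
// Hypothetical combination of edge similarities along a path i -> i+1 -> ... -> j.
public class PathSimilarity {

    // Minimum rule: the path is only as strong as its weakest edge.
    static double minRule(double[] edgeSimilarities) {
        double result = Double.MAX_VALUE;
        for (double s : edgeSimilarities) result = Math.min(result, s);
        return result;
    }

    // Product rule: similarities (in [0,1]) attenuate multiplicatively along the path.
    static double productRule(double[] edgeSimilarities) {
        double result = 1.0;
        for (double s : edgeSimilarities) result *= s;
        return result;
    }

    public static void main(String[] args) {
        double[] path = {0.7, 0.4, 0.9};          // M(i,i+1), M(i+1,i+2), M(i+2,j)
        System.out.println(minRule(path));        // 0.4
        System.out.println(productRule(path));    // 0.252
    }
}
```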


Using an Item Recommender to Generate a Recommendation Set

The items described in the following examples and implementations will be for musical or other media items. However, it should be understood that the implementations described herein are not item-specific and may operate with any other type of item used/shared by a community of users.


For musical or multimedia items (tracks, artists, albums, etc.), users may interact with the items by using them (listening, purchasing, etc.). The two profile sets in such embodiments contain musical items and will therefore be referred to as “musical sets”; specifically, the “Music Set” and the “Recommendations Set.”


The Music Set is the musical set formed by the items the user is listening to. User A's Music Set will be denoted herein as Ma.


The Recommendations Set is the musical set formed by the items the user is being recommended. User A's Recommendations Set will be denoted herein as Ra.


To compare two user profiles, the intersection between one or more of their respective sets may be analyzed. A variety of different metrics may also be applied to set intersections to provide useful data. Some such metrics will describe relations between users. For example, four elementary intersecting cases are:


Ma∩Mb, Ma∩Rb, Ra∩Mb, and Ra∩Rb. Analyzing these cases may lead to complex cases that may be labeled or classified as different relations. For example, in one implementation, four relevant relations may be extracted:


Peer: If Ma intersects Mb sufficiently, B is considered a “Peer” of A.


Guru: If Ma intersects Rb sufficiently, B is considered a “Guru” of A.


Peer-Guru: Peer condition plus Guru condition. B is considered a “Peer-Guru” of A.


Follower: If Ra intersects Mb sufficiently, B is considered a “Follower” of A.


Peer relation may be relevant because it gives the target user another user whose musical (or other item) library is similar in some way to the target user's musical library. Guru relation may be relevant because it gives the target user another user whose musical library contains music the target user may enjoy discovering. Peer-Guru may be relevant because it gives the target user both the affinity and the discovery experiences of the Peer and Guru relations, respectively, toward one or more recommended users. Follower relation may be relevant because it gives the user the chance to know which users may be influenced by him or her.


Illustrative concrete metrics will now be disclosed, from which the aforementioned relations, for example, can be deduced. A metric may be a function that takes as input two (or more) user profiles and produces a measurable result as an output. The metrics discussed below are unidirectional, meaning that the order of the parameters can change the result.


The “Affinity” metric answers the question, “How much does Ma intersect with Mb?” In other words, how much “affinity experience” does user A have towards user B?


The “Discovery” metric answers the question, “How much does Ra intersect with Mb?” In other words, how much “discovery experience” does user A have towards user B?


The “Guidance” metric answers the question, “How much does Ma intersect with Rb?” In other words, how much can user A guide user B?


With these metrics, Peer relations can be found by maximizing the Affinity metric, Guru relations can be found by maximizing the Discovery metric, Peer-Guru relations can be found by maximizing both the Affinity and the Discovery metric, and Follower relations can be found by maximizing the Guidance metric. A total relevance of one user toward another user can be computed, for example, by defining a function that operates with each (or greater than one) of the metrics. For a target user, all the other users in the community can be located as points in a three-dimensional space (“ADG Space”) where X=Affinity, Y=Discovery, and Z=Guidance. Defining the metrics so that they return a number in [0,1], all the users in the community can be enclosed within a cube of 1×1×1 in that space. FIG. 2 illustrates the position of a sample target user “X” in ADG Space.


To implement a user recommender following the conceptual model explained above, an underlying system may be built. One such system may be configured such that:


1. There is a user community and an item recommender from which a user profile for each user in the community can be extracted. This information may be fed by one or more data sources.


2. There is an implementation of the user recommender that builds the data and the operations of the model, and collects the data from the data sources.


A basic architecture schema for a user recommender according to one implementation is shown in FIG. 3. As shown in the figure, data sources 310 provide data to (and receive data from) a user community 320. Data sources 310 may also provide a data feed to a user recommender 330. User recommender 330 interacts with the user community 320. In particular, requests for user recommendations may be received from the user community 320 and recommended users may, in turn, be provided to the user community 320. Of course, it is contemplated that some implementations may rely upon receiving requests from users to generate recommended user sets and other implementations may generate such sets and offer them to users in the community without first receiving user requests.


It may also be desirable to provide a scalable architecture solution. Given a request, it may not be feasible to compare the target user to all of the users in the community (the response time may grow linearly with the number of users). A number of solutions to this problem may be implemented. For example:


1. The user data may be clustered and the target user compared only to the appropriate cluster.


2. A fixed number or subset of users may be selected from the user community. This subset of users may be referred to as “Recommendable Users” and the target user(s) may be compared to that fixed-size set. The Recommendable Users may be selected by some procedure that allows the system to recommend the most interesting users in the community.


A musical set entity can be modeled as a sparse vector of an N-dimensional space where N is the total number of musical items in our universe. Each dimension refers to a different item, whereas each concrete value refers to the relevance of that item. Adding a relevance value for each item allows the underlying system or implementation to be aware of the most relevant items for a certain user.


In some implementations, the items can be music tracks. However, in such embodiments, intersections between items in two users' sets may be less probable (due to a sparsity problem). In addition, such intersections may be computationally expensive.


These issues may be addressed in some embodiments by instead working on the artist level. The probability of intersection of artists instead of tracks is higher. On the other hand, the relevance value may depend on the data source from which the items are extracted. A normalization process may therefore be used so that all the relevance values finally belong to a known value scale, such as a qualitative value scale.


For example, FIG. 4 depicts a qualitative scale indicative of the relevance of particular items, such as artists, to a particular user. Artists (or other items) with a relevance value of less than C1 for a given user will be considered “Low,” those with a relevance value between C1 and C2 will be considered “Medium,” those with a relevance value between C2 and C3 will be considered “Medium/High,” and those with a relevance value greater than C3 will be considered “High.”
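The cut-off mapping of FIG. 4 could be expressed as a simple threshold function. The concrete values of C1, C2, and C3 below are placeholders, since the patent does not fix them.

```java
// Hypothetical mapping of a normalized relevance value to the qualitative scale of FIG. 4.
public class QualitativeScale {

    // Placeholder cut-off points; the patent leaves the concrete values unspecified.
    static final double C1 = 0.25, C2 = 0.50, C3 = 0.75;

    static String label(double relevance) {
        if (relevance < C1) return "Low";
        if (relevance < C2) return "Medium";
        if (relevance < C3) return "Medium/High";
        return "High";
    }

    public static void main(String[] args) {
        System.out.println(label(0.1));  // Low
        System.out.println(label(0.8));  // High
    }
}
```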


Details and examples of an illustrative normalization process are discussed later, along with other approaches for finding the relevance of a certain item for a certain user.


A user entity can be modeled as an entity with a unique ID plus two musical set entities, so that we have all the data needed to compute intersections according to the conceptual model discussed herein.


Some operations that may be implemented in some embodiments of the invention will now be discussed. The primitive operations are those that are needed to compare two user entities, and that involve intersections between musical sets. For example:


1. The size of a musical set can be represented as:









|Mu| = Σ (k = 1 to N) Muk

Where Muk is the relevance value of the item k in the set Mu.


2. The size of an intersection can be represented as:










|Mu ∩ Mu′| = Σ (k = 1 to M) min(Muk, Mu′k)







Where the sum is taken over the M items that Mu and Mu′ have in common.


3. The Affinity, Discovery, and Guidance metrics can be represented as follows.


One approach to the Affinity metric consists of calculating the size of the intersection of Mu with Mu′ and normalizing it by the size of Mu, as follows:







Affinity(U, U′) = |Mu ∩ Mu′| / |Mu|








As another possibility, if we consider that the intersection Ru ∩ Ru′ is somehow an affinity measure, then we can add this factor to the whole formula, weighting it by a factor K and thereby normalizing the measure:







Affinity(U, U′) = ( |Mu ∩ Mu′| / |Mu| + |Ru ∩ Ru′| / (K · |Ru|) ) / (1 + 1/K)







Note that a high Affinity of U to U′ does not necessarily mean a high Affinity of U′ to U.


Corresponding formulas for Discovery and Guidance are as follows:







Discovery(U, U′) = |Ru ∩ Mu′| / |Ru|

Guidance(U, U′) = |Mu ∩ Ru′| / |Ru′|









Note that it is always true that Discovery(U, U′)=Guidance(U′, U).
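Putting the preceding formulas together, the set-size, intersection-size, and Affinity/Discovery/Guidance computations might be sketched over sparse relevance vectors as follows. The method names and the use of the basic (non-K-weighted) form of Affinity are assumptions for illustration.

```java
import java.util.Map;

// Hypothetical computation of the Affinity, Discovery, and Guidance metrics from
// sparse relevance vectors (item id -> relevance), following the formulas above.
public class AdgMetrics {

    // |Mu| = sum of the relevance values in the set.
    static double size(Map<String, Double> set) {
        return set.values().stream().mapToDouble(Double::doubleValue).sum();
    }

    // |Mu ∩ Mu'| = sum over common items of min(Muk, Mu'k).
    static double intersectionSize(Map<String, Double> a, Map<String, Double> b) {
        return a.entrySet().stream()
                .filter(e -> b.containsKey(e.getKey()))
                .mapToDouble(e -> Math.min(e.getValue(), b.get(e.getKey())))
                .sum();
    }

    // Affinity(U, U') = |Mu ∩ Mu'| / |Mu|   (basic form, without the K-weighted term)
    static double affinity(Map<String, Double> mU, Map<String, Double> mUPrime) {
        return intersectionSize(mU, mUPrime) / size(mU);
    }

    // Discovery(U, U') = |Ru ∩ Mu'| / |Ru|
    static double discovery(Map<String, Double> rU, Map<String, Double> mUPrime) {
        return intersectionSize(rU, mUPrime) / size(rU);
    }

    // Guidance(U, U') = |Mu ∩ Ru'| / |Ru'|
    static double guidance(Map<String, Double> mU, Map<String, Double> rUPrime) {
        return intersectionSize(mU, rUPrime) / size(rUPrime);
    }
}
```

A production implementation would also guard against empty sets, which would otherwise make the denominators zero.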


The following are illustrative global model operations that may be implemented in the user recommender and that, by using the primitive operations, allow it to compute the desired results.


getBestUsers(User, Requirement):


By computing a certain set of metrics, a set of recommended users may be returned for the target user. The Requirement may specify what kind of users are to be recommended and what metrics are to be considered. A general algorithm for this function may be as follows:


1. Let TU be the Target User, RUS the Recommendable Users Set and REQ the Requirement of the request.


2. For each User U in RUS, compute the necessary Metrics (TU,U) according to REQ and store the result, together with the compared user U of RUS.


3. Sort RUS by the result of the comparison so that at the beginning of the list we have the best users according to REQ.


4. Return a sublist of RUS, starting at the beginning.
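A direct transcription of the four steps above might look like the following sketch, in which the Requirement is reduced to a single scoring function over a pair of users; a full implementation would select and combine metrics according to the Requirement.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.ToDoubleBiFunction;
import java.util.stream.Collectors;

// Hypothetical getBestUsers: score every recommendable user against the target user
// with a requirement-supplied metric, sort descending, and return the top of the list.
public class BestUsers {

    static <U> List<U> getBestUsers(U targetUser,
                                    List<U> recommendableUsers,
                                    ToDoubleBiFunction<U, U> requirementMetric,
                                    int maxResults) {
        return recommendableUsers.stream()
            .sorted(Comparator.comparingDouble(
                (U u) -> requirementMetric.applyAsDouble(targetUser, u)).reversed())
            .limit(maxResults)
            .collect(Collectors.toList());
    }
}
```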


getRelevance(User1, User2):


By computing all the metrics of User1 toward User2, a floating-point number may be returned by a function that performs some calculation with all the metric values, which answers the question: How relevant is User2 for User1? This function may, for example, calculate the length of the vector in the ADG Space.
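One plausible reading of “the length of the vector in the ADG Space” is the Euclidean norm of the (Affinity, Discovery, Guidance) triple, as sketched below; this is an illustrative assumption rather than a prescribed implementation.

```java
// Hypothetical getRelevance: treat (Affinity, Discovery, Guidance) as a point in ADG
// Space and return the length of that vector as the total relevance of User2 for User1.
public class AdgRelevance {

    static double getRelevance(double affinity, double discovery, double guidance) {
        return Math.sqrt(affinity * affinity + discovery * discovery + guidance * guidance);
    }

    public static void main(String[] args) {
        // With metrics normalized to [0,1], the result lies within [0, sqrt(3)].
        System.out.println(getRelevance(0.6, 0.3, 0.1)); // ~0.678
    }
}
```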


The user recommender may be implemented as a Java Web Module. This module may be deployed, for example, as a webapp in a Tomcat environment. In one implementation, the Data Sources for such an implementation may be as follows:


1. “Reach” API: Returns playcount data for each user. Some implementations may be able to deduce a musical set from this data.


2. “UMA” Recommender: Returns recommended items for a set of items. Some implementations may be able to deduce a musical set from this data using as input, for example, the Music Set of the user profile, and consequently obtaining the Recommendations Set of the user profile explained above.


3. “KillBill” API: Returns some extra information about a user, for example, the user's alias.


One scalable solution is to obtain the best N users of the community through the Reach API and make them Recommendable Users. In some implementations, a CLUTO clustering program may be used. CLUTO programs are open source software, and are available for download at: <http://glaros.dtc.umn.edu/gkhome/cluto/cluto/download>. In other implementations, a WEKA clustering program may be used. WEKA is also an open source program, and is available for download at: <http://www.cs.waikato.ac.nz/ml/weka/>. A file-cache system may also be built and used in some implementations.


Therefore, only the best users may be recommended, even though recommendations may be provided to all the users. If a user is not in the best N set, then the user recommender may ask the data sources in real time for that user's profile.


A Java implementation may consist of a set of classes the logic of which can be partitioned as follows:


Servlet View: Where all the servlet request/response logic is enclosed.


Recommender View: Where the main operations are performed by the singleton Recommender, often delegating them to a Manager.


Manager View: Where all the operations are actually performed, by using the Core classes.


Core View: Where the foundations of the model are established by a set of Java Classes.


The aforementioned “Views” will now be described in greater detail. The user recommender may be implemented as a set of HTTP Servlets. The recommender may be implemented as a Singleton Instance, which may be used by three different servlets, as shown in FIG. 5.


1. The Debug Servlet 500 attends debug commands. For instance, action=stats may produce a plot of the internal memory statistics of the recommender 530.


2. The Recommender Servlet 510 attends the recommendation requests according to the model operations explained previously.


3. The Update Servlet 520 performs the update process on the user data.


The Recommender may be a singleton class mainly formed by a manager and a request cache. Concurrent accesses to the cache may also be controlled by semaphores to avoid inconsistent information. For example, if the Debug Servlet sends a flush-cache command while a recommendation request is waiting to get a result from the cache, that request can receive a null response if it is processed after the flush has been performed.


In some implementations, the general algorithm for accessing the request cache is as follows:


1. Close the semaphore if it is opened; otherwise wait.


2. Is the result in the cache?


3. If so, take the result out from the cache.


4. Open the cache semaphore.


The Cache may be implemented in a Hash Map, the keys of which are string request hashes and the values of which are a request result. The string request hashes may be calculated to obtain a unique string for each set of parameters that may produce a different result.
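One way to realize the described cache, a hash map keyed by request-hash strings and guarded by a semaphore so that flushes and lookups do not interleave, is sketched below; class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Semaphore;

// Hypothetical request cache: a hash map keyed by a request-hash string, guarded by a
// binary semaphore so that flushes and lookups cannot interleave inconsistently.
public class RequestCache<R> {

    private final Map<String, R> cache = new HashMap<>();
    private final Semaphore semaphore = new Semaphore(1);

    public R get(String requestHash) throws InterruptedException {
        semaphore.acquire();               // 1. close the semaphore, or wait if already closed
        try {
            return cache.get(requestHash); // 2-3. take the result out of the cache, if present
        } finally {
            semaphore.release();           // 4. open the cache semaphore again
        }
    }

    public void put(String requestHash, R result) throws InterruptedException {
        semaphore.acquire();
        try {
            cache.put(requestHash, result);
        } finally {
            semaphore.release();
        }
    }

    public void flush() throws InterruptedException {
        semaphore.acquire();
        try {
            cache.clear();                 // e.g. triggered by the Debug Servlet
        } finally {
            semaphore.release();
        }
    }
}
```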



FIG. 6 shows an example of a user to user recommender 600 and its basic interactions with other elements of the system. Recommender 600 has a Manager 610 and a Request Cache 620. Manager 610 has one or more external connections, shown at 612, and a data feed, shown at 614.


The Manager may be implemented as a singleton class with two layers—the service layer and the dataspace layer, as shown at 700 in FIG. 7.


The service layer 710 may contain singleton instances of services 712. Each service 712 may be used to perform a set of tasks of a particular type. For example, an “Update Service,” containing all the logic for performing the update on the data, may be provided. A “Comparator Service,” containing all the logic for performing comparisons between the data, may also be provided. These services may communicate directly with the dataspace layer 720.


The dataspace layer 720 may contain all the connections to external services (such as UMA 722, KillBill 724, and Reach 726) as well as the main memory structures 728 where the recommendable users are permanently stored.


The basic classes for some implementations are: User 810, MeasuredUser 820, UserSimilarity 830, and Requirement 840, as shown in FIG. 8. In such implementations, a Musical Set 850 for a user 810 may be implemented as a set of pairs (Item 854 and Relevance 856). Certain implementations of Item and Relevance are ArtistItem (containing Artist ID's) and SimpleRelevance (a magnitude between 1 and 3). There may also be an ItemFactory and a RelevanceFactory, the tasks of which are to create Item and Relevance objects, taking the name of the implementation as input. In such embodiments, the implementation of Item and Relevance can easily be changed at any stage of the project without affecting the other core classes.


When a user is compared to another user, we have a MeasuredUser 820, which is the compared user together with UserSimilarity 830 instances. A UserSimilarity 830 specifies, for each Metric 860, how strongly the compared user is correlated with the target User. The client may also be able to specify a Requirement to maximize/minimize different metrics. The Affinity, Discovery, and Guidance Metrics are shown in FIG. 8 at 862, 864, and 866, respectively.


As described previously, the Affinity, Discovery and Guidance metrics may be implemented, but other metrics can also be implemented by extending the Metric interface.


The Metric interface specifies that each metric has to perform an intersection between two users, the result of which is parameterizable. The interface may also specify that each metric has to return a computation value measuring the relevance between two users according to the metric. This value may also be parameterizable, so that each metric is double-parameterized. The computation value may also be normalized between 0 and 1. As an example, a new metric “Age Affinity” could be implemented. This Metric might return a signed Integer comprising the difference between two user ages as the intersection and/or a String representing the qualitative age difference of one user to the other (“younger”, “much younger,” etc.). The normalized computation might be calculated so that 1 means the two users have the same age and 0 means the two users are too far apart in age to be considered related for purposes of the system.
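The double-parameterized metric interface and the hypothetical “Age Affinity” metric described above might be sketched as follows; all type names and the age cut-off are assumptions.

```java
// Hypothetical double-parameterized metric interface: I is the type of the raw
// intersection result, and the computation value is normalized to [0, 1].
interface Metric<I> {
    I intersect(User a, User b);
    double computation(User a, User b);  // normalized relevance of b for a
}

class User {
    final int age;
    User(int age) { this.age = age; }
}

// Illustrative "Age Affinity" metric: the intersection is the signed age difference,
// and the computation maps that difference into [0, 1] (1 = same age, 0 = too far apart).
class AgeAffinity implements Metric<Integer> {
    private static final double MAX_RELEVANT_DIFFERENCE = 30.0; // placeholder cut-off

    @Override
    public Integer intersect(User a, User b) {
        return b.age - a.age;  // negative means b is younger than a
    }

    @Override
    public double computation(User a, User b) {
        double diff = Math.abs(a.age - b.age);
        return Math.max(0.0, 1.0 - diff / MAX_RELEVANT_DIFFERENCE);
    }
}
```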


In one example, a webapp was deployed in two production machines controlled by a load balancer. The number of recommendable users was about 1000 and the system was able to respond to more than 500 requests per day. A stress test was made with Apache JMeter, a Java desktop application available for download at: <http://jakarta.apache.org/site/downloads/downloads_jmeter.cgi>. In this test, several requests were sent to the server, and the response time increased linearly with the number of simultaneous requests. The test result numbers were as follows:




















Requests | Requests per second | Average Processing Time | Average Response Time
100      | 1                   | 60 ms                   | 290 ms
200      | 2                   | 110 ms                  | 370 ms
1000     | 10                  | 120 ms                  | 420 ms
10000    | 100                 | 500 ms                  | 1800 ms










In some implementations, a viewable graph can be plotted by using a GraphPlotter tool. FIG. 9 provides an example of the information that can be extracted from a GraphPlotter tool. As shown in FIG. 9, Discovery and/or Affinity values between various users in the user community are represented in the graph.


Additional details of particular implementations will now be described in greater detail. Let's suppose we have two users, A and B, with Music and Recommendation sets Ma, Ra and Mb, Rb, respectively. If R is the output of an item recommender generated from input M, for some item recommenders it is always true that

M∩R=Ø


As set forth previously, the four possible intersections are Ma∩Mb, Ma∩Rb, Ra∩Mb, and Ra∩Rb. The total number of cases is 12:


Peer relation: Ma∩Mb. A and B have common musical tastes.


Peer-Brother relation: Ma∩Mb+Ra∩Rb. A and B have common musical tastes and may also have common musical tastes in the future.


Guru-follower relation: Ma∩Rb. B can learn from A (A is a guru to B and B is a follower of A).


Hidden-peer relation: Ra∩Rb. A and B may evolve to common musical tastes.


Peer-guru/Peer-follower relation: Ma∩Mb+Ma∩Rb. B can learn from A, but B has already learned something from A. This case may be treated as a special case of Peer or as a special case of Guru-follower. If treated as the first, then we can say that this is a “stronger” Peer (the second condition assures that the next “state” of user B's taste is also a Peer state between A and B), whereas if treated as the second, then it may be considered a “weaker” Guru-Follower relation (the follower will see some of his music in the Guru's music).


Peer-Brother-Guru/Peer-Brother-Follower relation: Ma∩Mb+Ma∩Rb+Ra∩Rb. The same as above, but with intersection in recommendations.


Static Guru-Follower relation: Ma∩Rb+Ra∩Rb. B can learn from A and B will still learn from A if A moves towards the next state. It is a stronger case of Guru-Follower.


Crossing-trains relation: Ma∩Rb+Ra∩Mb. B learns from A and A learns from B. However, these users' next states are not going to intersect, so this is a strange case of Guru-Follower (because of being bidirectional).


Taxi Relation: Ma∩Rb+Ra∩Mb+Ma∩Mb. The same as above but with intersection in music.


Meeting-trains relation: Ma∩Rb+Ra∩Mb+Ra∩Rb. B learns from A, A learns from B, and their next state is going to intersect. If A or B moves to the next state, the other can still learn from him. If both move, then they are going to be Peers. This may be the strongest case of bidirectional Guru-Follower.


Perfect Connection Relation: Ma∩Rb+Ra∩Mb+Ma∩Mb+Ra∩Rb. Everything intersects.


There may also be ways for determining/calculating how relevant an artist is for a particular user. For example, if the system has playcounts of the artist for the user, the data may be normalized by setting absolute cut-off points such that certain numbers of playcounts can be considered “Low,” certain other numbers of playcounts can be considered “Medium,” and so on.


Alternatively, if the system has a set of playlists for the user, the number of times the artist appears in the playlists may be counted. The methodology may then proceed as described above (i.e., with cut-off points).


As another alternative, if the system has a recommended set, the relevance of the artist based on the position it occupies in the recommended list may be calculated and used in the analysis. Of course, this assumes that the recommender provides a ranked list of recommended artists.


In some implementations, users may further be classified by how frequently they listen to a given artist, how many songs the user has in his or her profile from the artist, and/or otherwise how familiar the user is with a given artist. For example, for each artist a user listens to, we have:


1. F: Frequency of listening; and


2. K: Knowledge (how many songs from this artist the user knows).


The values for F and K can be classified as High or Low. Listeners for a particular artist can therefore be classified as:

















Listener | Frequency | Knowledge
A        | Low       | Low
B        | Low       | High
C        | High      | Low
D        | High      | High










In general, only listeners of the same type will match. However, if we imagine these classifications as points on a perfect square (where 0 is Low and 1 is High), A is distance 1 from B and C, and distance √2 from D. Likewise, B is distance 1 from A and D, and distance √2 from C, and so on.


However, it may be the case that the frequency of listening is not as relevant as the knowledge. So one dimension can be weighted more heavily than the other, which stretches the square along the K dimension.


With this approach, A is High close to C, Medium close to B, and Low close to D. These relationships are represented graphically in FIG. 10. The Relevance of an artist A for a given user U may therefore be provided as:

Rel(U,A) = (1 + K(U,A))^2 + F(U,A)

where K(U,A) ∈ [0, 1] is a function that measures the Knowledge a User U has about Artist A, and F(U,A) ∈ [0, 1] is a function returning the relative frequency of User U listening to this Artist.


In some embodiments, K can be deduced from n/N, where n is the number of songs from a certain artist that a user knows and N is the total number of songs by this artist. F can likewise be deduced from p/P, where p is the number of playcounts the user has for this artist and P is the total number of playcounts for this user. F may be computed through the Reach API (described above) in some implementations.
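Under the reconstruction of the relevance formula given above, the computation of K, F, and Rel(U,A) might be sketched as follows; the example numbers are illustrative only.

```java
// Hypothetical artist-relevance computation following the formula above:
// Rel(U, A) = (1 + K(U, A))^2 + F(U, A), with K and F both in [0, 1].
public class ArtistRelevance {

    // K(U, A): fraction of the artist's N songs that the user knows (n / N).
    static double knowledge(int songsKnown, int totalSongsByArtist) {
        return totalSongsByArtist == 0 ? 0.0 : (double) songsKnown / totalSongsByArtist;
    }

    // F(U, A): fraction of the user's total playcounts P that belong to this artist (p / P).
    static double frequency(int artistPlaycounts, int totalPlaycounts) {
        return totalPlaycounts == 0 ? 0.0 : (double) artistPlaycounts / totalPlaycounts;
    }

    static double relevance(double k, double f) {
        return Math.pow(1 + k, 2) + f;  // knowledge is weighted more heavily than frequency
    }

    public static void main(String[] args) {
        double k = knowledge(8, 20);    // user knows 8 of the artist's 20 songs
        double f = frequency(50, 1000); // 50 of the user's 1000 playcounts are this artist
        System.out.println(relevance(k, f)); // (1.4)^2 + 0.05 = 2.01
    }
}
```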


The above description fully discloses the invention including preferred embodiments thereof. Without further elaboration, it is believed that one skilled in the art can use the preceding description to utilize the invention to its fullest extent. Therefore the examples and embodiments disclosed herein are to be construed as merely illustrative and not a limitation of the scope of the present invention in any way.


It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.


The scope of the present invention should, therefore, be determined only by the following claims.

Claims
  • 1. A method for recommending users in a user community to a target user, the method comprising: selecting a target user within the user community; selecting a user set within the user community excluding the target user; comparing a user profile for the target user with user profiles for each of the users in the user set to assess similarity of the target user profile to each of the other users' profiles as a proxy for compatibility of the corresponding users; and generating a recommended set of compatible users for the target user responsive to the comparing step, wherein the recommended set of compatible users comprises at least one user within the selected user set; and further wherein the target user's profile comprises a target user media item set [IS] and a target user recommended media item set [RS]; the user profiles for each of the users in the user set comprises a corresponding user media item set and a corresponding user recommended media item set; and each recommended media item set is generated by providing the corresponding user's media item set as input to a computer-implemented recommender system, wherein the computer-implemented recommender system that generates the recommended media item set is driven by a predetermined knowledge base stored in a memory that contains correlation metrics among a collection of media data items.
  • 2. A method for recommending users according to claim 1, wherein: the correlation metrics stored in the knowledge base driving the computer-implemented recommender system to generate the recommended media item set of each user's profile reflect subjective judgments of a universe of other users outside of the said user community.
  • 3. A method for recommending users according to claim 1, wherein: the stored correlation metrics are not based on objective characterizing attributes of the selected media items.
  • 4. A method for recommending users according to claim 1, wherein: the correlation metrics stored in the knowledge base driving the computer-implemented recommender system to generate the recommended media item set of each user's profile reflect the numbers of co-occurrences of pairs of media items together within the personal playlists of a universe of other users outside of the said user community, so that the correlation metrics are a function of the outside users' subjective decisions to include a pair of selected media items together within a data set or playlist, without regard to characterizing attributes of the selected media items.
  • 5. A method for recommending users according to claim 1, wherein: said comparing the user profiles includes determining an intersection of the target user media item set with the respective recommended media item sets of each of the users in the selected user set.
  • 6. A method for recommending users according to claim 1, wherein: said comparing the user profiles includes determining an intersection of the target user recommended media item set with the respective recommended media item sets for each of the users in the selected user set.
  • 7. A method for recommending users according to claim 1, wherein: said comparing the user profiles includes determining an intersection of the target user recommended media item set with the respective media item sets for each of the users in the selected user set.
  • 8. A computer-readable medium having stored thereon computer executable instructions for performing a method for recommending users in a user community, the method comprising: selecting a first user within the user community; selecting a user set within the user community; comparing a user profile for the first user with user profiles for each of the users in the user set; and generating a recommended user set for the first user responsive to the comparing step, wherein the recommended user set comprises at least one user within the user community; and further wherein the selected first user's profile comprises a first user media item set and a first user recommended media item set; the user profiles for each of the users in the selected user set comprises a corresponding user media item set and a corresponding user recommended media item set; each recommended media item set is generated by providing the corresponding user's media item set as input to a computer-implemented recommender system; and the computer-implemented recommender system that generates the recommended media item set is driven by a predetermined knowledge base stored in a memory that contains correlation metrics among a collection of media data items.
  • 9. The computer-readable medium of claim 8, wherein the comparing a user profile for the first user with user profiles for each of the users in the user set includes computing a total relevance of the target user to each of the users in the user set, wherein the total relevance is a function of at least one of an affinity metric, a discovery metric and a guidance metric; wherein the affinity metric determines an intersection Ma∩Mb of the target user's media set with the other users' respective media sets; the discovery metric determines an intersection Ra∩Mb of the other users' respective media sets with the target user's recommended media set; and the guidance metric determines an intersection Ma∩Rb of the target user's media set with the other users' respective recommended media sets.
  • 10. The computer-readable medium of claim 8, wherein the correlation metrics stored in the knowledge base driving the computer-implemented recommender system to generate the recommended media item set of each user's profile reflect subjective judgments of a universe of other users outside of the said user community.
  • 11. A method for recommending users according to claim 8, wherein: the stored correlation metrics are not based on objective characterizing attributes of the selected media items.
  • 12. A method for recommending users according to claim 8, wherein: the correlation metrics stored in the knowledge base driving the computer-implemented recommender system to generate the recommended media item set of each user's profile reflect the numbers of co-occurrences of pairs of media items together within the personal playlists of a universe of other users outside of the said user community, so that the correlation metrics are a function of the outside users' subjective decisions to include a pair of selected media items together within a data set or playlist, without regard to characterizing attributes of the selected media items.
  • 13. A method for recommending users according to claim 8, wherein: said comparing the user profiles includes determining an intersection of the target user media item set with the respective recommended media item sets of each of the users in the selected user set.
  • 14. A method for recommending users according to claim 8, wherein: said comparing the user profiles includes determining an intersection of the target user recommended media item set with the respective recommended media item sets for each of the users in the selected user set.
  • 15. A method for recommending users according to claim 8, wherein: said comparing the user profiles includes determining an intersection of the target user recommended media item set with the respective media item sets for each of the users in the selected user set.
RELATED APPLICATIONS

This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 60/752,102 filed Dec. 19, 2005, and titled “User to User Recommender,” which is incorporated herein by specific reference.

US Referenced Citations (203)
Number Name Date Kind
5349339 Kind Sep 1994 A
5355302 Martin Oct 1994 A
5375235 Berry Dec 1994 A
5464946 Lewis Nov 1995 A
5483278 Strubbe Jan 1996 A
5583763 Atcheson et al. Dec 1996 A
5754939 Herz May 1998 A
5758257 Herz May 1998 A
5890152 Rapaport Mar 1999 A
5918014 Robinson Jun 1999 A
5950176 Keiser Sep 1999 A
6000044 Chrysos Dec 1999 A
6047311 Ueno Apr 2000 A
6112186 Bergh et al. Aug 2000 A
6134532 Lazarus Oct 2000 A
6345288 Reed Feb 2002 B1
6346951 Mastronardi Feb 2002 B1
6347313 Ma Feb 2002 B1
6381575 Martin Apr 2002 B1
6430539 Lazarus Aug 2002 B1
6434621 Pezzillo Aug 2002 B1
6438579 Hosken Aug 2002 B1
6487539 Aggarwal Nov 2002 B1
6526411 Ward Feb 2003 B1
6532469 Feldman Mar 2003 B1
6577716 Minter Jun 2003 B1
6587127 Leeke Jul 2003 B1
6615208 Behrens Sep 2003 B1
6647371 Shinohara Nov 2003 B2
6687696 Hofmann et al. Feb 2004 B2
6690918 Evans et al. Feb 2004 B2
6704576 Brachman Mar 2004 B1
6748395 Picker et al. Jun 2004 B1
6751574 Shinohara Jun 2004 B2
6785688 Abajian Aug 2004 B2
6842761 Diamond Jan 2005 B2
6850252 Hoffberg Feb 2005 B1
6914891 Ha Jul 2005 B2
6931454 Deshpande Aug 2005 B2
6941324 Plastina Sep 2005 B2
6947922 Glance Sep 2005 B1
6950804 Strietzel Sep 2005 B2
6987221 Platt Jan 2006 B2
6990497 O'Rourke Jan 2006 B2
6993532 Platt Jan 2006 B1
7020637 Bratton Mar 2006 B2
7021836 Anderson Apr 2006 B2
7051352 Schaffer May 2006 B1
7072846 Robinson Jul 2006 B1
7082407 Bezos Jul 2006 B1
7096234 Plastina Aug 2006 B2
7111240 Crow Sep 2006 B2
7113917 Jacobi Sep 2006 B2
7113999 Pestoni Sep 2006 B2
7120619 Drucker Oct 2006 B2
7127143 Elkins Oct 2006 B2
7136866 Springer, Jr. Nov 2006 B2
7139723 Conkwright Nov 2006 B2
7174126 McElhatten Feb 2007 B2
7180473 Horie Feb 2007 B2
7194421 Conkwright Mar 2007 B2
7197472 Conkwright Mar 2007 B2
7224282 Terauchi May 2007 B2
7236941 Conkwright Jun 2007 B2
7256341 Plastina Aug 2007 B2
7302419 Conkwright Nov 2007 B2
7302468 Wijeratne Nov 2007 B2
7358434 Plastina Apr 2008 B2
7363314 Picker Apr 2008 B2
7392212 Hancock Jun 2008 B2
7403769 Kopra Jul 2008 B2
7415181 Greenwood Aug 2008 B2
7434247 Dudkiewicz Oct 2008 B2
7457862 Hepworth Nov 2008 B2
7478323 Dowdy Jan 2009 B2
7493572 Card Feb 2009 B2
7499630 Koch Mar 2009 B2
7505959 Kaiser Mar 2009 B2
7524521 Takahashi Apr 2009 B2
7546254 Bednarek Jun 2009 B2
7568213 Carhart Jul 2009 B2
7574422 Guan Aug 2009 B2
7574513 Dunning Aug 2009 B2
7580932 Plastina Aug 2009 B2
7599950 Walther Oct 2009 B2
7644077 Picker Jan 2010 B2
7657224 Goldberg Feb 2010 B2
7734569 Martin Jun 2010 B2
20010056434 Kaplan Dec 2001 A1
20020002899 Gjerdingen Jan 2002 A1
20020042912 Iijima Apr 2002 A1
20020059094 Hosea May 2002 A1
20020082901 Dunning Jun 2002 A1
20020152117 Cristofalo Oct 2002 A1
20020178223 Bushkin Nov 2002 A1
20020178276 McCartney Nov 2002 A1
20020194215 Cantrell Dec 2002 A1
20030022953 Zampini Jan 2003 A1
20030033321 Schrempp Feb 2003 A1
20030055689 Block Mar 2003 A1
20030097379 Ireton May 2003 A1
20030120630 Tunkelang Jun 2003 A1
20030212710 Guy Nov 2003 A1
20030229537 Dunning Dec 2003 A1
20040002993 Toussaint Jan 2004 A1
20040003392 Trajkovic Jan 2004 A1
20040068552 Kotz Apr 2004 A1
20040073924 Pendakur Apr 2004 A1
20040128286 Yasushi Jul 2004 A1
20040139064 Chevallier Jul 2004 A1
20040148424 Berkson Jul 2004 A1
20040158860 Crow Aug 2004 A1
20040162738 Sanders Aug 2004 A1
20040194128 McIntyre Sep 2004 A1
20040247715 Kuo Dec 2004 A1
20050019114 Sung Jan 2005 A1
20050021470 Martin Jan 2005 A1
20050060350 Baum Mar 2005 A1
20050075908 Stevens Apr 2005 A1
20050102610 Jie May 2005 A1
20050114357 Chengalvarayan May 2005 A1
20050141709 Bratton Jun 2005 A1
20050154608 Paulson Jul 2005 A1
20050160458 Baumgartner Jul 2005 A1
20050193014 Prince Sep 2005 A1
20050193054 Wilson et al. Sep 2005 A1
20050195696 Rekimoto Sep 2005 A1
20050203807 Bezos et al. Sep 2005 A1
20050210101 Janik Sep 2005 A1
20050216855 Kopra Sep 2005 A1
20050222989 Haveliwala Oct 2005 A1
20050223039 Kim Oct 2005 A1
20050234891 Walther Oct 2005 A1
20050235811 Dukane Oct 2005 A1
20050256867 Walther Nov 2005 A1
20050276570 Reed et al. Dec 2005 A1
20060015904 Marcus Jan 2006 A1
20060018208 Nathan Jan 2006 A1
20060018209 Drakoulis Jan 2006 A1
20060020062 Bloom Jan 2006 A1
20060026263 Raghavan Feb 2006 A1
20060053077 Mourad Mar 2006 A1
20060062094 Nathan Mar 2006 A1
20060067296 Bershad Mar 2006 A1
20060074750 Clark Apr 2006 A1
20060080356 Burges Apr 2006 A1
20060091203 Bakker May 2006 A1
20060100978 Heller May 2006 A1
20060112098 Renshaw May 2006 A1
20060123052 Robbins Jun 2006 A1
20060136344 Jones Jun 2006 A1
20060143236 Wu Jun 2006 A1
20060165571 Seon Jul 2006 A1
20060168616 Candelore Jul 2006 A1
20060173910 McLaughlin Aug 2006 A1
20060173916 Verbeck Aug 2006 A1
20060195462 Rogers Aug 2006 A1
20060195513 Rogers Aug 2006 A1
20060195514 Rogers Aug 2006 A1
20060195515 Beaupre Aug 2006 A1
20060195516 Beaupre Aug 2006 A1
20060195521 New Aug 2006 A1
20060195789 Rogers Aug 2006 A1
20060195790 Beaupre Aug 2006 A1
20060253874 Stark Nov 2006 A1
20060277098 Chung Dec 2006 A1
20060282311 Jiang Dec 2006 A1
20060288044 Kashiwagi Dec 2006 A1
20060288367 Swix Dec 2006 A1
20070016507 Tzara Jan 2007 A1
20070043829 Dua Feb 2007 A1
20070100690 Hopkins May 2007 A1
20070118546 Acharya May 2007 A1
20070136264 Tran Jun 2007 A1
20070156677 Szabo Jul 2007 A1
20070161402 Ng Jul 2007 A1
20070244880 Martin Oct 2007 A1
20070250429 Walser Oct 2007 A1
20070250761 Bradley Oct 2007 A1
20070422880 Martin Oct 2007
20070271286 Purang Nov 2007 A1
20070294096 Randall Dec 2007 A1
20080004948 Flake Jan 2008 A1
20080004990 Flake Jan 2008 A1
20080027881 Bisse Jan 2008 A1
20080046317 Christianson Feb 2008 A1
20080077264 Irvin Mar 2008 A1
20080082467 Meijer Apr 2008 A1
20080133601 Cervera Jun 2008 A1
20080155057 Khedouri Jun 2008 A1
20080155588 Roberts Jun 2008 A1
20080220855 Chen Sep 2008 A1
20080270221 Clemens Oct 2008 A1
20090024504 Lerman Jan 2009 A1
20090024510 Chen Jan 2009 A1
20090073174 Berg Mar 2009 A1
20090076939 Berg Mar 2009 A1
20090076974 Berg Mar 2009 A1
20090083307 Cervera Mar 2009 A1
20090089222 Ferreira Apr 2009 A1
20090106085 Raimbeault Apr 2009 A1
20090210415 Martin Aug 2009 A1
20090276368 Martin Nov 2009 A1
Foreign Referenced Citations (23)
Number Date Country
1 050 833 Aug 2000 EP
1 231 788 Aug 2002 EP
1420388 May 2004 EP
1 548 741 Jun 2005 EP
11-052965 Feb 1999 JP
2002-108351 Apr 2002 JP
2002-320203 Oct 2002 JP
2003-255958 Sep 2003 JP
2004221999 Aug 2004 JP
2005027337 Jan 2005 JP
2002025579 Apr 2002 KR
WO 2003036541 May 2003 WO
03051051 Jun 2003 WO
WO2004070538 Aug 2004 WO
20051013114 Feb 2005 WO
WO 2005115107 Dec 2005 WO
WO2006052837 May 2006 WO
20061075032 Jul 2006 WO
20061114451 Nov 2006 WO
WO 2007038806 Apr 2007 WO
WO2007134193 May 2007 WO
WO2007092053 Aug 2007 WO
200910149046 Dec 2009 WO
Related Publications (1)
Number Date Country
20070203790 A1 Aug 2007 US
Provisional Applications (1)
Number Date Country
60752102 Dec 2005 US