The present invention relates to ranking nodes and labels in a hyperlinked database.
Page (U.S. Pat. No. 6,285,999 issued Sep. 4, 2001 to Lawrence Page, and entitled "Method for node ranking in a linked database") originally proposed a ranking measure referred to as PageRank. This ranking measure involves a node-to-node weight propagation scheme. The PageRank, or relative importance, of a page (node) corresponds to a stationary distribution of probabilities: the distribution generated by a random walker surfing pages by randomly following out-links, or occasionally jumping to a random page. Page's method provides a static ranking of the popularity of pages, independent of the query. In a typical application such as a search engine, however, there is a need to model the relevance of the query to a page while returning the ranked list of result pages.
The problem of ranking documents for a given query is also well studied, and several methods that consider document structure have been proposed. The HITS algorithm proposed by Kleinberg (U.S. Pat. No. 6,112,202 issued Aug. 29, 2000 to Jon Michael Kleinberg and entitled "Method and system for identifying authoritative information resources in an environment with content-based links between information resources") requires access to a search engine, which returns a subset of results for a query. The method can then be used to select and rank relevant pages from these results. The main drawback of Kleinberg's approach relates to performance: retrieving the results, extracting the links and performing the link analysis for each and every query is comparatively slow due to the associated computational demands.
Accordingly, a need exists, in view of these and other observations, for an improved manner of ranking nodes and labels in a hyperlinked database.
Scalable and dynamic ranking of pages for online queries is described herein. The World Wide Web (WWW) can be modelled as a labelled directed graph G(V,E,L), in which V is the set of nodes, E is the set of edges, and L is a label function that maps edges to labels. This model, when applied to the WWW, indicates that V is a set of hypertext pages, E is a set of hyperlinks connecting the hypertext pages, and the label function L represents the anchor-text corresponding to the hyperlinks on the hypertext pages. One can determine probabilistic measures relating to the directed graph, and one application of such measures is to generate rankings. Examples include rankings of the nodes for any given label, rankings of the labels for any given node, and overall rankings of labels and of pages. Other measures and rankings are of course possible.
Computations for the labelled directed graph G(V,E,L) are performed iteratively. At each time step, every node of set V accepts flow from its incoming edges of set E and disperses its "old" value to its neighbours through its outgoing edges from the set E. A node that has no incoming edges never receives any flow. A node that has no outgoing edges does not propagate its old value, which is "lost" from the system; that is, the node simply discards the old value and accepts the new value. Further, a flow is associated with each label l in the system. At each time step, for each edge bearing the label l (the particular label for which one computes the flow), a unit of flow is added to the new value of the node to which this labelled edge points.
Typically, nodes are initialised with zero values and these labelled edges introduce values into the network. This flow network evolves over time: the value of the total flow in the network increases with time, and in some cases the flow values at all the nodes saturate, while in other cases the flow value in the system increases without bound. In both cases, several useful quantities can be determined by observing the flow values for a label at a node, and a variety of ranking measures (denoted Pr(n|l), Pr(l|n), Pr(l) and Pr(n), as described herein) can be determined.
The original graph structure need not be modified, unlike with Lawrence Page's PageRank described above. Also, the original graph structure need not be dynamically created from a search engine, as is the case with Kleinberg's HITS algorithm, also described above.
The techniques described herein can be used for mining the topics of a WWW page, ranking WWW pages based on their authority on topics, detecting new topics and ranking them, and determining the overall popularity of WWW pages. These rankings are particularly useful in enhancing the precision of search engine results, and also for market intelligence applications such as ranking of market "buzz". Though the techniques described herein concerning labelled directed graphs have immediate application to the WWW or a hyperlinked database, other "connected" systems can benefit from the computational techniques described herein.
A labelled directed graph G(V,E,L) is used as the model for the computations described herein. The directed graph G(V,E,L) is a mathematical object in which V is the set of nodes, E is the set of edges, and L is a label function that maps edges to labels. Probabilistic rankings concerning nodes and labels are determined based upon the described computations that allow the determination of probabilistic measures, which are outlined below.
Probabilistic Measures
Four probabilistic measures are determined from the flow values: (i) Pr(n|l), the importance of node n for a given label l; (ii) Pr(l|n), the importance of label l for a given node n; (iii) Pr(l), the overall importance of label l; and (iv) Pr(n), the overall importance of node n. Sparse matrix computations can be used to assist in computing these measures/rankings. This approach involves three sparse matrices: (i) the initial sparse adjacency matrix, (ii) the final resultant matrix, and (iii) a temporary matrix. Any of the above measures can be computed using a final sparse matrix-vector multiplication on the resultant matrix and a vector representing the initial distribution of labels.
The measures Pr(n) and Pr(l) enumerated above can be computed for any arbitrary graph. The measures Pr(n|l) and Pr(l|n), however, can be determined only when they exist (conditions for which are described below). The above measures can be computed and applied in areas such as search engines for the internet and intranets, opinion tracking in social networks, and importance ranking of customers in relational tables of a Customer Relationship Management (CRM) application, amongst other applications.
Iterative Computation
The flow model described herein can be implemented using an iterative scheme. Define a vector νl as in Equation [1] below.
νl[i] = Σ(j,i)∈E′ 1, where E′ ⊂ E and (j,i) ∈ E′ iff L((j,i)) = l [1]

That is, νl[i] counts the incoming edges of node i that bear the label l.
The vector yt+1 of flow values at the nodes at time t+1 is computed by the following iterative algorithm of Equation [2] below.

yt+1[i] = νl[i] + Σj wij yt[j] [2]
The following definition introduces the transition matrix WG associated with a given G.
Definition 1
WG = [wij]n×n [3]

Equation [3] above defines WG as the n×n transition matrix for G. Transitions out of out-degree zero nodes in G are partially defined (corresponding column-sums are less than 1). Transitions out of all other nodes are fully defined (corresponding column-sums are equal to 1). Equation [4] below applies for all columns j.
Σi wij = 1 if outdegree(j) ≥ 1
0 ≤ Σi wij < 1 if outdegree(j) = 0 [4]
One can set wij = 1/outdegree(j) if (j,i) ∈ G and wij = 0 otherwise. If one wishes to add random jumps (that is, transitions from node j to node i when (j,i) ∉ G), then wij = α/N + (1−α)(1/outdegree(j)) if (j,i) ∈ G and wij = α/N otherwise. Here α ∈ [0,1] is the probability of a random jump (for example, α = 0.1) and N is the number of nodes. The definition above is one possible way of setting up an appropriate WG for a given G. The iteration above can be succinctly expressed in Equation [5] below.

yt+1 = νl + WGyt [5]
All nodes are initialised with zero values, WG is an appropriately defined n×n transition matrix (referring to Definition 1) and vector yt+1 corresponds to the flow values at the nodes after t+1 iterations.
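To make the iteration concrete, the following minimal Python sketch (using numpy) builds WG per Definition 1 for a small hypothetical labelled graph and runs the iteration of Equation [5] until the flow values saturate; the graph, label and tolerance are illustrative assumptions, not part of the invention.

    import numpy as np

    # Hypothetical labelled directed graph: edges are (source, target, label).
    N = 4
    edges = [(0, 1, "java"), (0, 2, "java"), (1, 2, "python"), (3, 2, "java")]

    # W_G per Definition 1 (no random jumps): w_ij = 1/outdegree(j) if (j,i) in G.
    outdeg = np.zeros(N)
    for j, i, _ in edges:
        outdeg[j] += 1
    W = np.zeros((N, N))
    for j, i, _ in edges:
        W[i, j] = 1.0 / outdeg[j]          # columns sum to at most 1

    # v_l per Equation [1]: count the incoming edges of each node bearing label l.
    label = "java"
    v = np.zeros(N)
    for j, i, lab in edges:
        if lab == label:
            v[i] += 1.0

    # Equation [5]: y_{t+1} = v_l + W_G y_t, starting from zero values.
    y = np.zeros(N)
    for t in range(100):
        y_next = v + W @ y
        if np.linalg.norm(y_next - y, 1) < 1e-9:   # flow values have saturated
            break
        y = y_next
    print("steady-state flow values:", y_next)

Every node in this toy graph can reach the out-degree zero node 2, so by Theorem 1 (below) the loop terminates with saturated flow values.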
Definition 2
There is a relationship between the model described herein and Markov Processes. Consider a Markov Process whose (n+1)×(n+1) transition matrix Ŵ is given in Equation [6] below.

Ŵ = | WG   0n×1 |
    | R     1   | [6]
The WG submatrix of Ŵ is the same as the n×n WG matrix in Definition 1 above. R is a 1×n matrix suitably chosen so that Ŵ is column-stochastic. The last column of Ŵ has zeros in rows 1 through n and a one in row n+1.
An absorbing state is added to the system (that is, a state which once entered cannot be exited). Note that if WG were column-stochastic to begin with, the system never enters the newly added absorbing state. However, if WG has nodes such that transitions out of these nodes are partially defined, then all such partially defined states (nodes) can reach the newly added absorbing state. Moreover, if all the nodes in WG can reach one or more of these partially defined states, then the system eventually ends up in the newly added absorbing state. In case all the nodes in WG can reach the newly added absorbing state, Markov theory establishes that the system is eventually absorbed with probability 1, and that the probability of finding the system in any transient state (node) tends to zero.
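The construction of Ŵ in Definition 2 can be checked numerically. The sketch below (numpy assumed; the two-node graph is a hypothetical example) appends the absorbing state to a partially defined WG and verifies that the transient block of Ŵp, which is WGp, vanishes.

    import numpy as np

    # W_G with a partially defined column: node 1 has out-degree zero.
    W = np.array([[0.0, 0.0],
                  [1.0, 0.0]])             # node 0 -> node 1; column 1 sums to 0

    n = W.shape[0]
    R = 1.0 - W.sum(axis=0)                # leftover mass goes to the absorbing state
    W_hat = np.zeros((n + 1, n + 1))
    W_hat[:n, :n] = W                      # the W_G submatrix of Definition 2
    W_hat[n, :n] = R                       # the 1 x n matrix R
    W_hat[n, n] = 1.0                      # absorbing state: once entered, never exited
    assert np.allclose(W_hat.sum(axis=0), 1.0)   # W_hat is column-stochastic

    P = np.linalg.matrix_power(W_hat, 10)
    print(P[:n, :n])                       # transient block W_G^10: the zero matrix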
Theorem 1

Theorem 1 establishes a sufficient condition for the flow values at the nodes in the network to saturate: the flow values in network G saturate if every node in G can reach an out-degree zero node in G using transitions defined in WG.
Proof. Define matrix Ŵ as in Definition 2; the matrices R and WG have the same definition as in Definition 2. Now, if all the nodes in G can reach an out-degree zero node in G using transitions in WG, then all states (nodes) in WG are transient and can reach the newly added absorbing state in Ŵ. If all the transient states (nodes) in WG can reach the newly added absorbing state, Markov theory indicates that ∃ p0 such that ∀p ≥ p0 the relation of Equation [7] below holds.

Ŵp = | 0n×n  0n×1 |
     | 11×n   1   | [7]
The result of Equation [8] below can also easily be verified, where Q is some 1×n matrix.

Ŵp = | WGp  0n×1 |
     | Q     1   | [8]
Comparing Equations [7] and [8], WGp is the zero matrix for any p ≥ p0. Therefore, from Equation [5], one can establish that after p iterations, ∀p ≥ p0, yp = yp0. That is, the amount of flow at any node remains constant from one iteration to the next.
Note that in the theorem above, even if some nodes in G are unable to reach an out-degree zero node, adding random jumps to WG allows all nodes in G to reach an out-degree zero node in G. So all that is required for convergence (provided random jumps are added) is at least one out-degree zero node in G. Random jumps can thus be used to enable the algorithm to converge on a larger class of graphs.
At steady state, the amount of flow leaving the out-degree zero nodes in G is given by Equation [9] below, and is equal to the amount of flow entering the system, given by ∥νl∥1.

Σj:outdegree(j)=0 ys[j] = ∥νl∥1 [9]
PageRank
PageRank corresponds to the stationary distribution of a Markov process with a transition matrix WG as defined in Definition 1. The PageRank of a page is the probability of finding a random walker with transition probabilities as defined in WG at that page. Markov theory indicates that if the transition matrix WG for a Markov process is regular (that is, some power of WG has only positive entries), then there exists a unique steady state distribution for the process. For regular WG, the Markov process reaches this unique steady state distribution irrespective of the initial probability distribution. PageRank essentially relies on these properties of regular Markov systems to obtain the steady state distribution for the nodes in G. When the transition matrix WG is regular, this steady state distribution also corresponds to the principal eigenvector of WG. Definition 3 below codifies these facts.
Definition 3
G is PageRank-able, that is, PageRank is well defined for G, if for the corresponding WG ∃ p0 such that ∀p ≥ p0 Equation [10] below holds.

WGp = WGp0 = [c, c, . . . , c] ≠ 0n×n [10]

The vector c is a principal eigenvector of WG with an associated eigenvalue of 1, where ∥c∥1 = 1.
Lemma 1
If G is PageRank-able, the total value in the system increases without bound; that is, the flow values in network G do not saturate.
Proof. From Definition 3, if G is PageRank-able then ∃ p0 such that ∀p ≥ p0, WGp = WGp0 ≠ 0n×n. Therefore the vector of flow values continues to increase from one iteration to the next, and the flow values in network G do not saturate.
The change in flow at node i from iteration t to iteration t+1 is y[i]t+1−y[i]t, where yt and yt+1 are the vectors of flow values at the nodes after t and t+1 iterations respectively. The lemma below shows that when G is PageRank-able, the change in flow at any node can be used to find PageRank.
Lemma 2
For PageRank-able G, the quantity (y[i]t+1 − y[i]t)/∥νl∥1 equals PageRank(i), for sufficiently large t.
Proof: The change in flow values at the nodes is given by yt+1 − yt = WGt+1νl. For PageRank-able G, from Definition 3, ∃ p0 such that ∀p ≥ p0, WGp = WGp0 = [c, c, . . . , c]. So for sufficiently large t, WGt+1 = WGp0. For such a t, WGt+1νl = WGp0νl = ∥νl∥1·c, where c is the principal eigenvector of WG. Therefore, y[i]t+1 − y[i]t = (WGt+1νl)[i] = (WGp0νl)[i] = ∥νl∥1·ci, where ci, the ith entry of the principal eigenvector of WG, is the PageRank of i.
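Lemma 2 lends itself to a direct numerical check: run the flow iteration on a PageRank-able graph and compare the scaled change in flow against an ordinary power iteration on WG. The sketch below assumes numpy, a hypothetical strongly connected graph, and α = 0.1 random jumps so that WG is regular.

    import numpy as np

    N, alpha = 3, 0.1
    edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
    outdeg = np.zeros(N)
    for j, i in edges:
        outdeg[j] += 1
    W = np.full((N, N), alpha / N)          # random-jump term alpha/N everywhere
    for j, i in edges:
        W[i, j] += (1 - alpha) / outdeg[j]  # W is regular, so G is PageRank-able

    v = np.array([1.0, 0.0, 2.0])           # some initial label distribution v_l

    # Flows grow without bound (Lemma 1), but the per-iteration change converges.
    y = np.zeros(N)
    for t in range(200):
        y_next = v + W @ y
        delta = y_next - y                   # y[i]_{t+1} - y[i]_t
        y = y_next
    flow_rank = delta / np.abs(v).sum()      # divide by ||v_l||_1, per Lemma 2

    # Reference: conventional power iteration for the stationary distribution.
    p = np.full(N, 1.0 / N)
    for t in range(200):
        p = W @ p
    print(flow_rank, p)                      # the two PageRank vectors agree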
G is not PageRank-able when G has one or more out-degree zero nodes, since the corresponding WG is not regular. Conversely, when G is PageRank-able the flow values do not saturate, and the model described herein can only find PageRank. Experiments have shown, however, that the WWW graph has a bowtie structure; a graph with such a structure is not PageRank-able due to the presence of a large number of out-degree zero nodes, but it does satisfy the sufficient condition for convergence outlined in Theorem 1. As a result, the techniques described herein can be applied to the WWW graph per se. When G satisfies the sufficient condition in Theorem 1, different choices of l lead to different results.
An iterative scheme can be used to find the flow values at the nodes for a fixed label. Define a vector νl as per Equation [11] below.

νl[i] = Σ(j,i)∈E′ 1, where E′ ⊂ E and (j,i) ∈ E′ iff L((j,i)) = l [11]
The vector y of label-biased node ranks is then computed by the following iterative algorithm of Equation [12] below.

yt+1 = νl + βWGyt [12]
In Equation [12] above, β ∈ [0,1] and WG is a suitably defined n×n matrix as in Definition 1. Choosing β < 1 ensures convergence irrespective of the choice of WG and νl. The iterative algorithm seeks to find the steady state vector ys for this iteration, as expanded in Equation [13] below.

ys = νl + βWGys, that is, ys = (I − βWG)−1νl [13]
In practice, the iterative algorithm declares a solution once ∥yt+1 − yt∥2 ≤ ε or when the ranks of the top few nodes remain unchanged from one iteration to the next.
The condition for the expected values at each node to stabilise is that the quantity (I − βWG)−1 must exist. The quantity (I − βWG)−1 is guaranteed to exist, and the iterative algorithm is guaranteed to converge, if ∥βWG∥ < 1, where ∥βWG∥ = β maxj Σi |wij|. Since β < 1 and Σi |wij| ≤ 1 for all j, ∥βWG∥ < 1 and the calculations converge. The β term acts like a damping factor and its value can be set as close to 1 as required so that ∥(1−β)WG∥ ≤ ε (for instance, choose β ≥ 1 − ε/N for the Frobenius norm).
The β premultiplier is not required to ensure convergence if WG satisfies the convergence criterion of Theorem 1. Choosing a small β, however, speeds up convergence.
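A brief sketch of the damped iteration of Equation [12], checked against the closed form of Equation [13], follows; numpy is assumed and the graph, β and ε values are illustrative.

    import numpy as np

    W = np.array([[0.0, 0.5, 0.0],          # a small hypothetical W_G
                  [1.0, 0.0, 0.0],          # (columns sum to at most 1)
                  [0.0, 0.5, 0.0]])
    v = np.array([0.0, 1.0, 1.0])           # initial label distribution v_l
    beta, eps = 0.85, 1e-10

    # Equation [12]: y_{t+1} = v_l + beta W_G y_t, from zero initial values.
    y = np.zeros(3)
    while True:
        y_next = v + beta * (W @ y)
        if np.linalg.norm(y_next - y) <= eps:    # the stopping rule from the text
            break
        y = y_next

    # Equation [13]: the same steady state via y_s = (I - beta W_G)^{-1} v_l.
    y_direct = np.linalg.solve(np.eye(3) - beta * W, v)
    print(y_next, y_direct)                      # the two results agree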
Calculating the Reachability Matrix B
The preceding section describes computing the steady state flows at all the nodes for a fixed label. In practice, the flows across all the nodes must be computed for a large number of labels. Therefore, a matrix B is computed, which indicates the "reachability" between pairs of nodes in the graph through all possible paths.
The (i,j) entry in a reachability matrix encodes the total flow that node i observes given that a unit of flow is sent out from node j; this entry is a measure of the effect node j has on node i. If the maximum length of influence paths that are considered is fixed as tmax, a good approximation to the reachability matrix can be found efficiently provided tmax is small. Since N is very large and tmax is a small fixed number, one can ignore the effect of random jumps (namely, transitions from node j to node i when (j,i) ∉ G). Once such an approximate reachability measure is precomputed, a suitable vector νl can be found for any given label and the node ranks for that label can be computed.
The matrix B denotes such an approximate reachability measure between every pair of nodes in G. The main challenge in computing the B matrix is one of scale: some applications may have close to 1 billion nodes and 25 billion edges. Say |V| = n and |E| = m. Clearly, storing G (ignoring L) alone requires O(m+n) storage space. Further, the sparsity of B dictates a lower bound on the space required to compute and store B. The challenge is to compute and store B using space and time close to this sparsity-induced lower bound. Define a matrix W according to Equation [14] below.

wij = 1/outdegree(j) if (j,i) ∈ G, and wij = 0 otherwise [14]
Define B to be a reachability measure according to Equation [15] below, where Bij represents the total influence that node j exerts on node i.

B = I + W + W2 + W3 + . . . + Wtmax [15]
Note that I represents influences through paths of length zero, W represents influences through paths of length one and so on.
Matrix B is calculated using an iterative algorithm; the number of iterations corresponds to the maximum length of influence paths one wishes to consider. This calculation proceeds as indicated in Equation [16] below.

B(0) = W
B(t+1)ij = Wij + Σk Wik B(t)kj [16]

where the sum runs over nodes k such that (k,i) ∈ G and B(t)kj is non-zero.
After the final iteration, I is added to B. To compute B(t+1), both B(t) and W are stored. The following lemma establishes the equivalence of the matrix B in Equation [15] and the matrix B computed by the iterative algorithm above in Equation [16].
Lemma 3
At the end of iteration t, any Bij entry corresponds to the total influence of node j on node i through all paths of length 1 through t from node j to node i. This scheme appears to be best realised by storing B and W as sparse indices, where for each node j one can query and obtain a list of the nodes pointed to by node j. The underlying mechanism responsible for storing such a forward index might also prune some out-edges from time to time to meet storage constraints.
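As an illustration of the iterative algorithm of Equation [16], the sketch below computes the approximate reachability matrix with scipy.sparse; the graph, tmax and the optional pruning threshold are hypothetical, and the pruning stands in for the storage-constrained forward index mentioned above.

    import numpy as np
    from scipy import sparse

    def reachability(W, t_max, prune=None):
        """Approximate B = I + W + W^2 + ... + W^t_max (Equations [15]-[16])."""
        B = W.copy()                           # B(0) = W
        for _ in range(t_max - 1):
            B = W + W @ B                      # B(t+1)_ij = W_ij + sum_k W_ik B(t)_kj
            if prune is not None:              # optionally drop tiny entries to
                B.data[np.abs(B.data) < prune] = 0.0   # meet storage constraints
                B.eliminate_zeros()
        return B + sparse.identity(W.shape[0], format="csr")  # add I at the end

    # Hypothetical 4-node graph; W per Equation [14], stored sparsely.
    W = sparse.csr_matrix(np.array([[0.0, 0.0, 0.0, 0.0],
                                    [0.5, 0.0, 0.0, 0.0],
                                    [0.5, 1.0, 0.0, 1.0],
                                    [0.0, 0.0, 0.0, 0.0]]))
    print(reachability(W, t_max=3).toarray())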
The probabilities (ranking measures Pr(n|l), Pr(l|n), Pr(l) and Pr(n)) can be computed by first computing the vectors yl, each of which stores the steady state flows at every node for a label l, and later computing the probabilities from the set of yl vectors. The vector yl for each label l is computed from the initial label distribution νl using Equation [17] below, and from Equation [17] the result of Equation [18] below can be discerned.
yl(n×1) = B(n×n) νl(n×1) [17]
Now, if L denotes the set of all labels in the system, then Equation [18] below follows.

Pr(n|l) = yl[n] / Σn′∈V yl[n′]
Pr(l|n) = yl[n] / Σl′∈L yl′[n]
Pr(l) = Σn∈V yl[n] / Σl′∈L Σn∈V yl′[n]
Pr(n) = Σl∈L yl[n] / Σl′∈L Σn′∈V yl′[n′] [18]
Note that, for the sake of ranking, sorting the numerators in Equation [18] is sufficient.
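Putting Equations [17] and [18] together, the measures can be sketched as follows; B, the label set and the per-label vectors νl are hypothetical stand-ins, and numpy is assumed.

    import numpy as np

    B = np.array([[1.0, 0.0, 0.0],            # hypothetical reachability matrix
                  [0.5, 1.0, 0.0],
                  [0.5, 1.0, 1.0]])
    v_by_label = {"java": np.array([1.0, 1.0, 0.0]),
                  "python": np.array([0.0, 2.0, 0.0])}

    # Equation [17]: steady-state flows per label, y_l = B v_l.
    y = {l: B @ v for l, v in v_by_label.items()}
    total = sum(vec.sum() for vec in y.values())

    # Equation [18]: normalise the flows into the four ranking measures.
    def pr_node_given_label(n, l):            # Pr(n | l)
        return y[l][n] / y[l].sum()

    def pr_label_given_node(l, n):            # Pr(l | n)
        return y[l][n] / sum(v[n] for v in y.values())

    def pr_label(l):                          # Pr(l)
        return y[l].sum() / total

    def pr_node(n):                           # Pr(n)
        return sum(v[n] for v in y.values()) / total

    # For ranking nodes under "java", sorting the numerators y["java"] suffices.
    print(np.argsort(-y["java"]))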
Search Engine Application
Preferably, the labels are stemmed using Porter's stemmer and stop-words are removed, though no synonyms are used or thesauri consulted. The indices can be saved as "flat file" indices. Given a Boolean query q 110 that has labels and Boolean operands, as a first step, the label index for the vector νl is determined for each of the labels in q, and a vector νq is further formed from these label vectors according to rules determined by the Boolean operands in q.
The second step is a sparse matrix multiplication of the vector νq and the matrix B. A straightforward implementation can be used, in which each entry νq[i] is multiplied with the ith row of the B matrix, and the results are added in memory to form the vector y that contains the ranking of the documents for the query q 110. For the sake of performance, the entries in B are sorted on their magnitude, so that the top k results can be computed quickly. In theory, a sparse matrix-vector multiplication is O(n2) in the worst case. In practice, however, given that only the top k ranks are of interest, the number of entries in matrix B can be limited to the top m entries when computing the ranks.
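A rough sketch of these two query steps follows (scipy and numpy assumed). The rules for combining the per-label vectors into νq are not reproduced in full above, so simple summation is used here as an assumed stand-in; the top-k selection mirrors the magnitude-based truncation described in the text.

    import numpy as np
    from scipy import sparse

    def rank_query(B, label_vectors, query_labels, k=10):
        # Step 1: form v_q from the per-label vectors v_l (summation is an
        # assumption standing in for the Boolean combination rules).
        v_q = sum(label_vectors[l] for l in query_labels if l in label_vectors)
        # Step 2: sparse matrix multiplication y = B v_q, as in Equation [17].
        y = B @ v_q
        top_k = np.argsort(-y)[:k]            # only the top k ranks are of interest
        return top_k, y[top_k]

    # Hypothetical flat-file index: three documents, two labels.
    B = sparse.csr_matrix(np.array([[1.0, 0.0, 0.0],
                                    [0.5, 1.0, 0.0],
                                    [0.5, 1.0, 1.0]]))
    label_vectors = {"java": np.array([1.0, 0.0, 1.0]),
                     "python": np.array([0.0, 1.0, 0.0])}
    print(rank_query(B, label_vectors, ["java", "python"], k=2))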
Computer Hardware
The components of the computer system 200 include a computer 220, a keyboard 210 and mouse 215, and a video display 290. The computer 220 includes a processor 240, a memory 250, input/output (I/O) interfaces 260, 265, a video interface 245, and a storage device 255.
The processor 240 is a central processing unit (CPU) that executes the operating system and the computer software executing under the operating system. The memory 250 includes random access memory (RAM) and read-only memory (ROM), and is used under direction of the processor 240.
The video interface 245 is connected to video display 290 and provides video signals for display on the video display 290. User input to operate the computer 220 is provided from the keyboard 210 and mouse 215. The storage device 255 can include a disk drive or any other suitable storage medium.
Each of the components of the computer 220 is connected to an internal bus 230 that includes data, address, and control buses, to allow components of the computer 220 to communicate with each other via the bus 230.
The computer system 200 can be connected to one or more other similar computers via an input/output (I/O) interface 265 using a communication channel 285 to a network, represented as the Internet 280.
The computer software may be recorded on a portable storage medium, in which case, the computer software program is accessed by the computer system 200 from the storage device 255. Alternatively, the computer software can be accessed directly from the Internet 280 by the computer 220. In either case, a user can interact with the computer system 200 using the keyboard 210 and mouse 215 to operate the programmed computer software executing on the computer 220.
Other configurations or types of computer systems can be equally well used to execute computer software that assists in implementing the techniques described herein.
Various alterations and modifications can be made to the techniques and arrangements described herein, as would be apparent to one skilled in the relevant art.
Number | Name | Date | Kind |
---|---|---|---|
6112202 | Kleinberg | Aug 2000 | A |
6233571 | Egger et al. | May 2001 | B1 |
6285999 | Page | Sep 2001 | B1 |
6549896 | Candan et al. | Apr 2003 | B1 |
7076483 | Preda et al. | Jul 2006 | B2 |
20020129014 | Kim et al. | Sep 2002 | A1 |
20020130907 | Chi et al. | Sep 2002 | A1 |
20030018636 | Chi et al. | Jan 2003 | A1 |
20030204502 | Tomlin et al. | Oct 2003 | A1 |
20050086260 | Canright et al. | Apr 2005 | A1 |
20060074903 | Meyerzon et al. | Apr 2006 | A1 |
20070067317 | Stevenson | Mar 2007 | A1 |
Number | Date | Country | |
---|---|---|---|
20060136098 A1 | Jun 2006 | US |