The growing amount of information available to users creates an increasing need to organize this information into manageable and useful forms. One particular concern is ordering information according to a specific criterion, for example, to identify the better, more useful, or more popular items. A variety of ranking systems have been developed that can order a list of items according to such criteria. However, these ranking systems often have limitations. For example, the underlying basis for the ordering of items in a list may change or evolve, so that an item may currently deserve a higher or lower rank than previous events indicate. Determining when such changes have occurred and when a ranking is outdated may be complicated. Also, the relevant items to be included in a ranking may change, and determining where a new item fits in a ranking that includes older items can also be complicated. For example, an older item having a greater number of purchases, favorable reviews, or other events relevant to a ranking may or may not deserve to be ranked higher than a newer item having few relevant events on which a ranking can be based.
Use of the same reference symbols in different figures indicates similar or identical items.
Items in an arbitrarily long list that may change can be ordered and reordered in an ongoing manner based on measurements of the performance of the items during a series of time windows. The ordering can be based on a rating system that indicates the relative strength of each item. For example, items that have performed better in the past may achieve a higher rating than items that have received less attention. During each time window, respective measurements of performance for the items can be obtained, and the measured performance of each item can be sequentially compared to the performance measurements of the other items. Each comparison of an item with a competing item indicates whether the item: won, i.e., performed better than the competing item during the time window; lost, i.e., performed worse than the competing item during the time window; or drew, i.e., performed the same as the competing item. Each win, loss, and draw that an item receives during a time window may change the rating of the item, and each change can depend on the current rating of the item, the current rating of the competing item, and whether the item won, lost, or drew against the competing item. In general, a win against a higher-rated item may cause a larger rating increase, while a loss against a lower-rated item causes a larger rating decrease, which may allow ratings to more quickly rise or fall to appropriate values for current conditions. Each item will generally reach and maintain an appropriate rating as long as the underlying basis for the ratings remains constant. Further, a new item added to the list may not be subject to a disadvantage relative to items with long histories of performance measurements because the new item may reach its correct rating relatively quickly.
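As a minimal illustrative sketch of the pairwise comparison just described, the following Python function returns a win-loss value using the 1/0/0.5 convention adopted by the rating system described below; the function name and signature are assumptions for illustration, not part of the disclosure.

    def compare(p_item, p_rival):
        # Win-loss value of an item against a competing item for one
        # time window: 1.0 for a win, 0.0 for a loss, 0.5 for a draw.
        if p_item > p_rival:
            return 1.0
        if p_item < p_rival:
            return 0.0
        return 0.5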
In an exemplary implementation, device 110 is a server system and network 130 is a wide area network such as the Internet. User devices 120 may be a mixture of different types of devices such as desktop computers, portable computers, tablets, and smart phones that may employ browsers or other applications to communicate with device 110. As will be understood by those of skill in the art, the configuration of devices illustrated in FIG. 1 is merely one example, and other configurations are possible.
A processor 112 in server 110 can execute a service 150 that employs or involves a list 170 of items 160 stored in memory 114. Service 150 may perform a variety of functions for which ordering of items 160 in list 170 is desired. Service 150 may, for example, involve presenting information to users employing user devices 120 that are connected to server 110 through network 130, and an ordering of the items may be desired for creation of an efficient presentation of some or all of items 160. A ranking process can define an order of items 160 in list 170 and can be based on any desired criterion for distinguishing items 160. For example, ranking of items may be based on how much attention users of devices 120 pay to items 160. Alternatively, ranking of the items 160 in list 170 may be determined based on criteria that are independent of the users. For example, stocks can be ranked based on daily price gains.
Each item 160 in list 170 may represent almost anything; for example, the items may represent links, documents, products for sale, or investments such as stocks. Similarly, the criteria for rating and ranking items 160 may be based on any type of performance associated with items 160. The term performance is used generally herein to refer to a measurable quantity associated with a criterion on which items 160 will be rated and ranked. The possible combinations of types for items 160 and performance measurements used to rate or rank items 160 are unlimited, but some examples are described herein for illustration. For example, if items 160 correspond to links to respective information, one performance measurement is the number of clicks a link receives during a specific time window, and service 150 may rank and display the links/items 160 in order according to which links/items 160 are selected most or least. If items 160 correspond to documents, one performance measure for a document may be the total time that users spend viewing the document, and service 150 may order documents/items 160 according to which documents appear to be of the most or least interest. If the items correspond to stocks listed on an exchange, one measurable characteristic of a stock is the percentage price change each day, and service 150 may rank and display the stocks/items 160 according to which stocks have registered the best price performance over a number of days.
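As a hypothetical sketch of the click-count measurement mentioned above (the click_log structure, the half-open window convention, and the function name are assumptions for illustration):

    from collections import Counter

    def click_counts(click_log, t1, t2):
        # Count the clicks each link/item receives during the time
        # window [t1, t2); click_log is an iterable of
        # (timestamp, item_id) pairs.
        return Counter(item for ts, item in click_log if t1 <= ts < t2)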
Service 150 in the specific implementation of FIG. 1 can employ a rating and ranking process 200 as described below.
Block 230 selects a current time window [T1,T2], which may be an interval of time that is just beginning. However, the rating process could employ historical measurements of the performance of items 160, so that the current time window [T1,T2] does not need to be related to the current time.
Block 240 determines respective measurements P of the performance of items 160 during the current time window [T1,T2]. The process for measuring performance will generally depend on the criteria associated with the rating and ranking of items 160. For example, in system 100 of FIG. 1, server 110 may measure performance by counting how many times users of devices 120 select or view each item 160 during the time window.
Rating block 250 uses the performance scores P_A from block 240 and the current ratings S_A(T1) for all items A=1 to N to generate adjusted ratings S_A(T2) for all items A=1 to N.
Block 255 can then update or adjust the rating for item A, i.e., determine a new rating S_A(T2). In particular, the win-loss values of item A and the ratings for all items at time T1 permit updating the rating of item A using a system similar or identical to the Elo system, which was developed to rate chess players. The new rating S_A(T2) for item A can be determined using Equations 1 and 2, in which the win-loss values WL_AB are for the current time window [T1,T2]. Value E_AB can be thought of as an estimated performance of item A relative to item B based on their prior ratings S_A(T1) and S_B(T1). Factor K affects how quickly a rating converges on a merit-based rating and can be a constant that is selected according to the desired magnitude of adjustments per time window. Factor K could alternatively be a function, for example, one that decreases with the number of time windows that process 200 has performed while item A was included in the list. Exponent denominator F in Equation 2 can be a constant or a function selected according to the importance of the divergence between ratings S_A and S_B. A large value of F means that the rating changes more slowly than when F is small. Equation 2 provides one formula for an estimated win-loss value E_AB of a higher-ranked item A against a lower-ranked item B in one implementation of a rating system in which wins and losses respectively count as 1 and 0 and draws count as 0.5. For this specific rating system, the win-loss value WL_BA of item B against item A is (1 - WL_AB), and the estimated win-loss value E_BA for item B is (1 - E_AB). As a result, the change in rating S_B for losing/winning is the negative of the change in rating S_A for winning/losing. This characteristic of the rating system maintains the average rating of the items.
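Equations 1 and 2 are not reproduced in this text. Based on the surrounding description, they plausibly take the standard Elo form below (a reconstruction, not the patent's verbatim equations):

    S_A(T_2) = S_A(T_1) + K \sum_{B \neq A} \left( WL_{AB} - E_{AB} \right)    (1)

    E_{AB} = \frac{1}{1 + 10^{\,(S_B(T_1) - S_A(T_1))/F}}    (2)

With this form, E_BA = 1 - E_AB, consistent with the zero-sum property noted above.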
Equations 1 and 2 provide one implementation of very specific formulae for updating ratings. More generally, for each time window, the rating of an item may be altered by the addition or subtraction of multiple adjustments. Each of the adjustments may be associated with a competing item and have a value that depends on the rating of the item, the rating of the competing item, and whether the performance score of the item is higher than the performance score of the competing item. Also, although Equations 1 and 2 illustrate an example where higher ratings indicate better performance, either higher or lower ratings could indicate better performance. A variety of alternative formulae, conventions, and rules could be employed.
Block 256 can determine whether new ratings S_A(T2) have been determined for all items A. If not, the process loops back to block 251 for selection of the next item A. Rating process 250 is complete once new ratings S_A(T2) have been determined for all items A. Block 260 can then rank items 160 in list 170 according to their ratings, e.g., in descending or ascending order of rating.
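The following Python sketch illustrates rating process 250 and ranking block 260 under the reconstructed Elo-style update above. It reuses compare() from the earlier sketch, and the values of K and F are illustrative constants (the text allows either to be a function instead); all names here are assumptions for illustration.

    K = 32.0   # illustrative adjustment magnitude per time window
    F = 400.0  # illustrative exponent denominator of Equation 2

    def expected(s_a, s_b):
        # Equation 2 (reconstructed): estimated win-loss value of an
        # item rated s_a against a competing item rated s_b.
        return 1.0 / (1.0 + 10.0 ** ((s_b - s_a) / F))

    def update_ratings(ratings, performance):
        # Blocks 251-256: ratings maps each item to its rating S(T1);
        # performance maps each item to its measurement P for the
        # window [T1,T2]. Returns the new ratings S(T2) after every
        # pairwise comparison has been applied.
        new_ratings = {}
        for a, s_a in ratings.items():
            total = 0.0
            for b, s_b in ratings.items():
                if a == b:
                    continue
                wl = compare(performance[a], performance[b])  # 1, 0, or 0.5
                total += wl - expected(s_a, s_b)
            new_ratings[a] = s_a + K * total  # Equation 1 (reconstructed)
        return new_ratings

    def rank(ratings):
        # Block 260: order items by rating, highest rated first.
        return sorted(ratings, key=ratings.get, reverse=True)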
Rating process 250 can maintain or alter the ratings of items 160 in list 170. In particular, when a much stronger item A wins over a weaker item B during a time window, WL_AB is 1, and the expected or estimated win-loss value E_AB is also near 1. As a result, the rating of the much stronger item A increases only slightly. Similarly, a loss when an item is much weaker than the competing item causes only a slight decrease in the rating of the item. This reflects that the prior ratings appear to still be appropriate. When the ratings of items A and B are nearly the same, the adjustments to the ratings of both items A and B will be moderate regardless of which item wins, reflecting that the win or loss may not be statistically significant; but if a trend develops in which one item consistently wins, the rating of the winning item will increase or the rating of the losing item will decrease to create separation between their ratings. If a weaker item wins against a much stronger item, the increase in the rating of the weaker item and the decrease in the rating of the stronger item are relatively large. This reflects that the weaker item is not expected to win, so a win suggests that the underlying basis for the prior ratings may have changed. The large change in rating allows the ratings of the items to adjust relatively quickly to changes in the underlying basis of the rating. As mentioned above, items may be added to or removed from the list, and when an item is added, a provisional rating may be assigned. Even if the originally assigned rating is greater or less than the deserved rating for the new item, the adjustments to the rating over several time windows can cause the rating to converge on a deserved rating. The rating and ranking processes described above can thus rapidly deal with changes in the content of the list being evaluated and changes in the underlying basis for the rating or ranking.
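As a concrete illustration using the reconstructed equations with illustrative values K = 32 and F = 400: if S_A(T1) = 2000 and S_B(T1) = 1600, then E_AB = 1/(1 + 10^((1600-2000)/400)) = 1/1.1 ≈ 0.909. If the stronger item A wins (WL_AB = 1), its rating rises by only 32 × (1 − 0.909) ≈ 2.9; if instead the weaker item B wins (WL_BA = 1, E_BA ≈ 0.091), the rating of B rises by 32 × (1 − 0.091) ≈ 29.1, and the rating of A falls by the same amount.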
Some implementations of the systems and processes described above can be embodied in computer-readable media, e.g., a non-transient medium such as an optical or magnetic disk, a memory card, or other solid-state storage containing instructions that a computing device can execute to perform the specific processes described herein. Such media may further be or be contained in a server or other device connected to a network such as the Internet that provides for the downloading, streaming, or other use of data and executable instructions.
Although particular implementations have been disclosed, these implementations are only examples and should not be taken as limitations. Various adaptations and combinations of features of the implementations disclosed are within the scope of the following claims.
This patent document is related to PCT application PCT/US2011/039037, entitled “Rating Items,” filed Jun. 3, 2011, which is hereby incorporated by reference in its entirety.