Prior to making purchases, consumers and others often conduct research, read reviews and search for best prices for products and services. Information about products and services can be found at a variety of types of Internet-accessible Web sites including community sites. Such information is abundant. Product developers, vendors, users and reviewers, among others, submit information to a variety of such sites. Some sites allow users to post opinions about products and services. Some sites also allow users to interact with each other by posting questions and receiving answers to their questions from other users.
Ordinary search services yield thousands and even millions of results for any given product or service. A search of a community site often yields far too many hits with little filtering. Results of a search of a community site are typically presented one at a time and in reverse chronological order merely based on the presence of search terms.
A search of typical question and answer community sites typically results in a listing of questions. For example, a search for a product such as a “Mokia L99” cellular telephone could yield hundreds of results. Only a few results would be viewed by a typical user from such a search. Each entry on a user interface to a search result could be made up of part or all of a question, all or part of an answer to the corresponding question and other miscellaneous information such as a user name of each user who submitted each respective question or answer. Other information presented would include when the question was presented and how many answers were received for a particular question. Each entry listed as a result of a search could be presented as a link so that a user could access a full set of information about a particular question or answer matching a search query. A user would have to follow each hyperlink to view the entire entry to attempt to find useful information.
Such searching of products and services is time-consuming and is often not productive because search queries yield either too much information, not enough information, or just too much random information. Such searching also typically fails to lead a user to the most useful entries on community and other sites because there is little or no automatic parsing or filtering of the information—just a dump of entries matching one or more of desired search terms. Users would have to click through page after page and link after link with the result of spending excessive amounts of time looking for the most useful information responsive to a relatively simple inquiry. To further compound the problem, product and service information is spread over a myriad of sites and is presented in many different formats.
Some community sites offer a means for voting or recommending certain content. In particular, on certain community sites, users can vote for or recommend certain questions and corresponding answers that may contain information of interest to users. However, due to such large volumes of submitted questions, many questions (and corresponding answers) do not receive enough hits, and thus any voting associated with such questions does not adequately reflect their likely interest to the users of the community site. Voting or recommending can also be skewed by when a particular question or answer is submitted. For example, timing may be important, such as what time of day or night the question is submitted or what day of the week the question is submitted. Further, some sites do not offer the ability for users to vote for or recommend questions and answers. These and other conditions of voting or recommending of content on community sites present challenges for users to find content which is most valuable or useful.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Information from question-answer community sites and other Internet-accessible Web sites is crawled, and information such as questions and answers is extracted from these sites.
A plurality of questions is identified from the information such that each question in a subset of the questions has an indication of preference, such as a vote or an indication of "interestingness." Instance pairs are identified for each user, with emphasis on instance pairs from the majority of users whose input reflects question interestingness for all users. Training data from a minority of users is screened out to avoid the use of input that does not reflect question interestingness for all users.
Then, a user weight for each user is determined. The closer a user's indications of "interestingness" match those of the majority, the more weight is given to that particular user's questions for training purposes. A statistical model is trained by emphasizing training data from instance pairs from the majority of users whose input reflects question interestingness for all users. The training uses the user weights. The questions are then sorted by a value reflective of "interestingness."
The Detailed Description is set forth and the teachings are described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
This disclosure is directed to predicting or estimating "interestingness" of content (such as questions) on community Internet sites. Herein, reference may be made to a product, but a service, information, data or something else may just as easily be the subject of the features described herein. For the sake of brevity and clarity, not limitation, reference is made to a question about a product.
Further, community sites as understood herein include community-based question submission and question answering sites, and various forum sites, among others. Community sites as used herein include at least community question and answer (community QnA) sites, blogs, forums, email threads of conversation and the like. In short, the techniques described herein can be applied to any set of information connected to members of a group or community. Thus, the techniques can be generally applied to information chunks selected from community sites or other sources.
One problem associated with community sites is that what is considered interesting or useful to one user is not necessarily interesting to another user. Yet another problem is that newly submitted information may not get enough exposure for user interaction and thus information that would have been considered very interesting by many users is not identified at the time a particular community user seeks information.
As described herein, in a particular illustrative implementation, instead of a conventional search result, a user receives an enhanced and aggregated search result upon entering a query. The result 100 of such illustrative query is shown in
Exemplary User Interface and Search Results
With reference to
In one implementation, a product feature summary 104 is also provided to a user. This product feature summary 104 includes, by way of example, an overall summary of questions from community sites, some of which are flagged or tagged by users as “interesting” 106 and questions grouped according to product feature 108. For example, in
Product features 108 may be generated by users, automatically generated by a computer process, or identified by some other method or means. These product features 108 may be presented as links to respective product feature Web pages which each contain listing of questions addressed to a single feature or group of related features. For example, in
Product feature Web pages preferably list questions marked as "interesting" ahead of, or differently from, other questions addressing the same product feature. A user would then be directed in a hierarchical fashion to specific product features and then to questions or answers or both questions and answers that have been marked by community site users as "interesting" or programmatically identified as likely to be "interesting." Another designation other than "interesting" may be used and correlated or combined with those items flagged as "interesting."
In the lower left portion of
With reference to
In one implementation, a summary of information about each question is presented in the questions listing section 160. For example, such a question summary includes a user rating 130 for a particular question, a bolding of a search term in the question 132 or in an answer 134 to a question. A user rating 130 may take the form of a number of stars (as shown in
The site from which the question appears 136 is also shown. A short summary of each answer and links or other navigation to see other answers 138 to a particular question are also provided. In
In summary as to the user interface 100, a user is simultaneously presented with a variety of features with which to check product details, compare prices provided by a plurality of sites, and gain access to opinions from many other users from one or more sites having questions or from users who have provided answers to questions about a particular product.
Illustrative Network Topology
An exemplary implementation of a process to generate the user interface shown in
With reference to
Using a taxonomy of product names 310, questions (and answers) are grouped by product names 328. Metadata is prepared for each question (and answer) 330 from the extracted information. A metadata extractor 350 prepares such metadata through several functions. The metadata extractor 350 identifies comparative questions 312, predicts question “interestingness” 314 (as explained more fully below), predicts question popularity 316, extracts topics within questions 318, and labels questions by product feature 320.
Metadata is then indexed by question ID 322 and answers are indexed by question ID 324. Using the metadata, questions are grouped by product names 332 and questions are ranked by lexical relevance and using metadata 334.
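By way of illustration only, the following Python sketch shows one way the grouping and indexing steps described above could be organized. The function and field names (for example, build_indexes and extract_metadata) are assumptions introduced for this sketch and are not part of the described implementation.

```python
# A rough sketch of grouping questions by product name and indexing metadata
# and answers by question ID; the data layout is an illustrative assumption.
from collections import defaultdict

def build_indexes(questions, taxonomy, extract_metadata):
    """questions:        iterable of dicts with 'id', 'text', 'answers', 'product'
    taxonomy:         set of canonical product names (e.g. from a product taxonomy)
    extract_metadata: callable producing the metadata record for one question"""
    by_product = defaultdict(list)   # product name -> list of question IDs
    metadata_index = {}              # question ID -> metadata record
    answer_index = {}                # question ID -> answers

    for q in questions:
        product = q["product"] if q["product"] in taxonomy else "unknown"
        by_product[product].append(q["id"])
        metadata_index[q["id"]] = extract_metadata(q)
        answer_index[q["id"]] = q["answers"]

    return by_product, metadata_index, answer_index
```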
Predicting question interestingness 314 includes flagging a question or other information as “interesting” when it has not been tagged as “interesting” or with some other user-generated label. Indexing also comprises labeling questions by feature 308 such as by product feature. While question or questions are referenced, the process described herein equally applies to answers to questions and to all varieties of information.
When a search for information about a product or service is desired, a query is submitted 338 through a user device 204. For example, a user submits a query for a “Mokia L99” in search of information about a particular cellular telephone. In response, the server 210 ranks questions, answers and other information by lexical relevance and by using metadata 334 and then generates search results 336 which are then delivered to the user device 204 or other destination. In one implementation, questions are sorted by a relevance score. A user can then interact 340 with the search results which may involve a re-ranking of questions 334.
With reference to
Next, a query may be entered by a user or may be received programmatically from any source. Based on the query, questions and other information are ranked by lexical relevance or interestingness, or relevance and interestingness 416. Then, questions, answers and other information are provided in a sorted or parsed format. In a preferred implementation, such information is provided sorted by relevance or a combined score 418.
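As an illustration of the combined scoring mentioned above, the following sketch sorts questions by a weighted combination of lexical relevance and predicted "interestingness." The linear weighting and the parameter alpha are assumptions made for this example, not the specific combination used by the described implementation.

```python
# Minimal sketch of ranking by a combined relevance/interestingness score.
def rank_questions(question_ids, relevance, interestingness, alpha=0.7):
    """question_ids:   list of question IDs matching the query
    relevance:       dict question ID -> lexical relevance score
    interestingness: dict question ID -> predicted interestingness score
    alpha:           assumed weight on lexical relevance"""
    def combined(qid):
        return (alpha * relevance.get(qid, 0.0)
                + (1 - alpha) * interestingness.get(qid, 0.0))
    return sorted(question_ids, key=combined, reverse=True)
```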
In one implementation, through a user interface, after indexing and ranking are completed, a user is able to browse relevant questions, answers and other information addressing a particular product or service sorted by feature. Questions can also be browsed by topic since questions that address the same or similar topic are grouped together so as to provide a user-friendly and user-accessible interface. Further, search results from question and answer community sites and other types of sites are sorted and grouped by similar comparative questions. Product search is enhanced by providing an improved search of questions, answers and other information from community sites. The new search can save effort by users in browsing or searching community sites when users conduct a survey on certain products.
An improved search of questions and answers helps users not only to make decisions when users want to purchase a product or service but also to get instructions after users have already purchased a product or service. Further implementation details for one embodiment are now presented.
Product or Service Features
Each type of product or service is associated with a respective set of features. For example, for digital cameras, product features are zoom, picture quality, size, and price. Other features can be added at any time (or dynamically) and the indexing and other processing can then be re-performed so as to incorporate any newly added feature. Features can be generated by one or more users, user community, or programmatically through one or more computer algorithms and processing.
In one implementation, a feature indexing algorithm is implemented as part of a server operating crawling and indexing of community sites. The feature indexing algorithm uses an algorithm similar to an opinion indexing algorithm. This feature indexing algorithm is used to identify the features for each product or type of product from gathered data and metadata. Features are identified by using probability and identifying nouns and other parts of speech used in questions and answers submitted to community sites and, through probability, identifying the relationships between these parts of speech and the corresponding products or services.
In particular, when provided with sentences from community sites, the feature identification system identifies possible sequences of parts of speech of the sentence that are commonly used to express a feature and the probability that each sequence is the correct sequence for the sentence. For each sequence, the feature identification system then retrieves a probability, derived from training data, that the sequence contains a word that expresses a feature. The feature identification system then retrieves a probability from the training data that the feature words of the sentence are used to express a feature. The feature identification system then combines the probabilities to generate an overall probability that a particular sentence with that sequence expresses a feature. Potential features are then identified. Potential features across a plurality of products of a given category of product are then gathered and compared. A set of features is then identified and used. A restricted set of features may be selected by ranking based on a probability score.
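The probability combination described above can be sketched as follows; the probability tables are assumed to come from training data, and the multiplicative combination is an illustrative choice rather than the specific combination used by the feature indexing algorithm.

```python
# Sketch: combine (1) the probability of a POS sequence, (2) the probability
# that the sequence contains a feature word, and (3) the probability that the
# feature words express a feature, into an overall score for a sentence.
def feature_probability(pos_sequences, p_contains_feature, p_expresses_feature):
    """pos_sequences:       list of (sequence, p_sequence) pairs for one sentence
    p_contains_feature:  dict sequence -> P(sequence contains a feature word)
    p_expresses_feature: dict sequence -> P(feature words express a feature)"""
    best = 0.0
    for seq, p_seq in pos_sequences:
        p = p_seq * p_contains_feature.get(seq, 0.0) * p_expresses_feature.get(seq, 0.0)
        best = max(best, p)
    return best
```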
In another embodiment, product or service features are determined using two kinds of evidence within the gathered data and metadata. One is "surface string" evidence, and the other is "contextual evidence." An edit distance can be used to compare the similarity between the surface strings of two product feature mentions in the text of questions and answers. Contextual similarity is used to reflect the semantic similarity between two identifiable product features. Surface string evidence or contextual evidence is used to determine the equivalence of a product or service feature expressed in different forms (e.g. battery life and power).
When using contextual similarity, all questions and answers are split into sentences. For each mention of a product feature, the feature "mention," or term which may be a product feature, is taken as a query and used to search for all relevant sentences. Then, a vector is constructed for the product feature mention by taking each unique term in the relevant sentences as a dimension of the vector. The cosine similarity between two vectors of product feature mentions can then be used to measure the contextual similarity between the two feature mentions.
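The two similarity signals just described can be sketched as follows. The edit-distance routine is a standard Levenshtein distance, and the contextual similarity is the cosine of bag-of-words vectors built from relevant sentences; the thresholds for deciding equivalence are left out because they are not specified above.

```python
# Sketch of surface-string similarity (edit distance) and contextual similarity
# (cosine over context term vectors) for two product feature mentions.
from collections import Counter
from math import sqrt

def edit_distance(a, b):
    """Levenshtein distance between two surface strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def context_vector(mention, sentences):
    """Bag-of-words vector over all sentences that contain the feature mention."""
    relevant = [s for s in sentences if mention in s]
    return Counter(term for s in relevant for term in s.lower().split())

def contextual_similarity(v1, v2):
    """Cosine similarity between two context vectors."""
    dot = sum(v1[t] * v2[t] for t in v1 if t in v2)
    norm = sqrt(sum(c * c for c in v1.values())) * sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0
```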
Product or Service Topics
Usually, a topic around which users ask questions cannot be predicted and does not fall within a fixed set of topics for a product or service. While some user questions may be about features, most questions are not. For example, a user may submit "How do I add songs to my Zoon music player?" Thus, the process described herein provides users with a mechanism to browse questions around topics that are automatically extracted from a corpus of questions. To extract the topics automatically, questions are grouped around types of question, and then sequential pattern mining and part-of-speech (POS) tag-based filtering are applied to each group of questions.
POS tagging is also called grammatical tagging or word-category disambiguation. POS tagging is the process of marking up or finding words in a text as corresponding to a particular part of speech. The process is based on both its definition as well as its context—i.e., relationship with adjacent and related words in a phrase, sentence, or paragraph. A simplified form of POS tagging is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives and adverbs. Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, in accordance with a set of descriptive tags. Questions, answers and other information extracted from sites are treated in this manner.
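For illustration, POS tags of the kind described above can be obtained with an off-the-shelf tagger such as NLTK; this is only an example tool, not the tagger of the described implementation, and it assumes the NLTK tokenizer and tagger resources have been downloaded.

```python
# Example of POS tagging a question with NLTK (an off-the-shelf tagger).
import nltk

sentence = "How do I add songs to my Zoon music player?"
tokens = nltk.word_tokenize(sentence)   # requires the 'punkt' tokenizer resource
tagged = nltk.pos_tag(tokens)           # requires the perceptron tagger resource
print(tagged)
# e.g. [('How', 'WRB'), ('do', 'VBP'), ('I', 'PRP'), ('add', 'VB'), ('songs', 'NNS'), ...]
```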
Comparative Questions
Sometimes, users not only care about the product or service that they want to purchase, but also want to compare two or more products or services. As shown in
User Labeling
Some sites such as community sites allow users to label, tag, star or vote certain questions, answers or other information as “interesting.” Product search and product comparisons are merely examples of where a prediction of “interestingness” can be used.
In one particular implementation, "interestingness" is defined as a quadruple (u, x, v, t) such that a user u (an element of the set of all users U) provides a vote v (interesting or not) for a question x which is posted at a specific time t (t ∈ R+). It is noted that v ∈ {1, 0}, where 1 means that a user provides an "interesting" vote and 0 denotes no vote given. The set of questions with a positive "interestingness" label can be expressed as Q+ = {x : (u, x, v, t), v = 1}.
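The quadruple and the set Q+ can be written out concretely as follows; the data values are made up purely for illustration.

```python
# Illustrative (u, x, v, t) vote records and the positively labeled set Q+.
votes = [
    # (user, question_id, vote, posting_time)
    ("u1", "q42", 1, 1238450000.0),
    ("u2", "q42", 1, 1238460000.0),
    ("u1", "q43", 0, 1238470000.0),
]

# Q+ = {x : (u, x, v, t), v = 1}
Q_plus = {x for (u, x, v, t) in votes if v == 1}   # {"q42"}
```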
In the implementation described herein, such a designation of “interesting” is a user-dependent property such that different users may have different preferences as to whether a question is interesting. It is assumed that the identity of users is not available. It is also assumed for purposes of the described implementation that there is a commonality of “interestingness” over all users and this is referred to as “question interestingness,” an indication of whether a question is worthy for recommendation. This term “interestingness” is formally defined in this implementation as the likelihood that a question is considered “interesting” by most users. “Interestingness” is characterized by a measure called question popularity. The higher the popularity of a question, the more likely the question is recommended by most users. For any given question that is labeled as “interesting” by many users, it is probable that it is “interesting” for any individual user in U. A description follows of one implementation to estimate question popularity and then use question popularity to recommend questions.
Data Construction
It is somewhat difficult to make a judgment as to how likely a question is to be recommended. It is easier (computationally) to determine which is more likely to be recommended given a pair of questions. A preference relationship ≻ is defined between any two questions such that x^(1) ≻ x^(2) if and only if the popularity of question x^(1) is greater than that of x^(2). The preference relationship is defined on the basis of user ratings. Two definitions of ≻ are provided.
Definition 1: a preference order
x^(1) ≻_1 x^(2)   (1)
exists if and only if |{u : (u, x^(1), v, t) ∈ Q+}| − |{u : (u, x^(2), v, t) ∈ Q+}| ≥ Δv, where Δv ∈ N+ and where the operation |{·}| represents the size of a set. The more votes a question receives, the more popular or likely to be recommended it is. Thus, ≻_1 is defined on the basis of the number of votes that a question receives. The parameter Δv is introduced to control the margin of separation in terms of votes between x^(1) and x^(2).
The preference relationships derived according to ≻_1 can be reliable when Δv is set to a relatively large value (e.g. 5). Definition 1 is used to build a test set.
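As a small worked illustration of Definition 1, the following check compares vote counts against the margin Δv; the function name is an assumption for this sketch.

```python
# Definition 1 in code: x1 is preferred to x2 when it has at least delta_v
# more "interesting" votes than x2.
def preferred_by_votes(votes_x1, votes_x2, delta_v=5):
    """Return True if x1 >_1 x2 under Definition 1."""
    return votes_x1 - votes_x2 >= delta_v

# Example: 9 votes vs. 3 votes with delta_v = 5 -> preferred.
assert preferred_by_votes(9, 3)
```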
One disadvantage with ≻_1 is that it can only be used to judge the preference order between questions that already have votes by users. In an exemplary collection of data in an experimental use of Definition 1, not all of the questions were voted upon. For example, in a category of "travel," only 13% of questions were voted on or identified as "interesting." Thus, the use of such sparse data makes the training data less reliable and less desirable when used to learn a "question recommendation" model.
One method for addressing data sparsity is simply to include all questions without user ratings or votes into Definition 1 directly, which can be done by replacing Q+ with Q. The method implicitly assumes that all questions without user ratings are "not recommended." However, the questions without votes could be worth being recommended as well. As users are not obligated to rate questions in community sites or services, users may not rate a question even if the users feel the question is interesting or recommendable. Thus, to better use questions without votes, Definition 2 is introduced.
Definition 2: a preference order
x^(1) ≻_{u,2} x^(2)   (2)
exists if and only if
Questions at community sites are usually sorted by posting time when they are presented to users as a list of ranked items. That is, the latest posted question is ranked highest, and then older questions are presented in reverse chronological order. The result is that questions with close posting times tend to be viewed by a particular user within a single page, which means that they have about the same chance of being seen by the user and about the same chance of being labeled as "interesting" by the user. With the assumption that a user u sees x^(1) and x^(2) at about the same time within a single page, it can be the case that x^(1) is tagged as "interesting" and x^(2) is left as not "interesting" by the user. Therefore, it is relatively safe to accept that for any given user, x^(1) is more "interesting" or popular than x^(2).
By using Definition 2, more caution is used in identifying whether questions without user votes are "not recommended" or "not interesting." Particularly, only questions which do not have users' votes and which share similar user browsing contexts with questions having user votes are considered "not recommended."
According to Definition 2 (Equation 2), it is possible to build a set of ordered (question) instance pairs for any given user u as follows:

S_u = {(x_i^(1), x_i^(2), z_i)},  i = 1, …, l_u   (3)

where z_i equals +1 for x_i^(1) ≻_{u,2} x_i^(2) and −1 otherwise, and where l_u is the number of instance pairs given by a user u. The number of such sets is the size of the set of all users U (denoted |U|). S is the union ∪_u S_u.
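The construction of the per-user pair sets S_u can be sketched as follows. Because the formal condition of Definition 2 is not reproduced above, this sketch approximates it using the "same page / close posting time" reasoning described above: a question the user voted "interesting" is paired with unvoted questions posted close in time. The time window is an assumed parameter.

```python
# Sketch: build ordered instance pairs per user in the spirit of Definition 2.
def build_instance_pairs(user_votes, post_times, time_window=3600.0):
    """user_votes: dict user -> set of question IDs the user voted 'interesting'
    post_times: dict question ID -> posting time (all voted IDs must appear here)
    Returns dict user -> list of (x1, x2, z) with z = +1 meaning x1 is preferred."""
    pairs = {}
    for user, voted in user_votes.items():
        user_pairs = []
        for x1 in voted:
            for x2, t2 in post_times.items():
                if x2 == x1 or x2 in voted:
                    continue
                if abs(post_times[x1] - t2) <= time_window:   # "same page" proxy
                    user_pairs.append((x1, x2, +1))
        pairs[user] = user_pairs
    return pairs
```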
One assumption is that a majority of users share a common preference about “question interestingness.”
Problem Statement
It is assumed that a question x comes from an input space X which is a subset of R^n (x ∈ X ⊆ R^n), where n denotes the number of features. A set of ranking functions exists where each function f is an element of the set of all such functions F (f ∈ F). Each function f can determine the preference relations between instances as follows:
x_i ≻_2 x_j if and only if f(x_i) > f(x_j)   (4)
The best function f* is selected from F that respects the given set of ranked instances S. It is assumed that f is a linear function such that
f_w(x) = ⟨w, x⟩   (5)

where w denotes a vector of weights and ⟨·, ·⟩ denotes an inner product. Combining Equation 4 and Equation 5 yields
x_i ≻_2 x_j if and only if ⟨w, x_i − x_j⟩ > 0   (6)
Note that the relation x_i ≻_{u,2} x_j between instance pairs x_i and x_j is expressed by a new vector x_i − x_j. A new vector is created from any instance pair and the relationship between the elements of the instance pair. From the given training data set S, a new training data set S′ is created that contains l (= Σ_u l_u) labeled vectors:

S′ = {(x_i^(1) − x_i^(2), z_i)},  i = 1, …, l   (7)
Similarly, S′_u is created for each user u.
S′ is taken as classification data and a classification model is constructed that assigns either a positive label z = +1 or a negative label z = −1 to any vector x_i^(1) − x_i^(2).
A weight vector w* is learned by the classification model. The weight vector w* is used to form a scoring function fw* for evaluating “interestingness” or popularity of a question x. A popularity score determines the likelihood that the question is recommended by many users.
f_{w*}(x) = ⟨w*, x⟩   (8)
In one implementation, the Perceptron algorithm is adapted for the above presented learning problem by guiding the learned function by a majority of users. The Perceptron algorithm is a learning algorithm for linear classifiers. A particular variant of the Perceptron algorithm is used and is called the Perceptron algorithm with margins (PAM). The adaptation as disclosed herein is referred to as Perceptron algorithm for preference learning (PAPL). A pseudocode listing for PAPL is as follows.
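Listing 1 itself is not reproduced here. The following Python sketch shows one plausible form of a preference perceptron with a margin operating on the difference vectors of the instance pairs; the parameter values and update-rule details are assumptions and not the exact listing.

```python
# Minimal sketch of a PAPL-style preference perceptron (not the original Listing 1).
import numpy as np

def papl(pairs, epochs=10, margin=0.1, learning_rate=1.0):
    """pairs: list of (x1, x2, z) with x1, x2 numpy feature vectors and
    z = +1 if x1 is preferred to x2, -1 otherwise.
    Returns a weight vector w; questions are scored by np.dot(w, x)."""
    w = np.zeros(len(pairs[0][0]))
    for _ in range(epochs):
        for x1, x2, z in pairs:
            d = x1 - x2                      # transformed instance (difference vector)
            if z * np.dot(w, d) <= margin:   # margin violation -> perceptron update
                w += learning_rate * z * d
    return w
```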
In this implementation, PAPL makes at least two changes when compared to PAM. First, transformed instances (instead of raw instances) as given in Equation 7 are used as input. Second, an estimation of an intercept is no longer necessary (as in line 6 of Listing 1). The changes do not influence the convergence of the PAPL algorithm.
For each user u, Listing 1 can learn a model (denoted by weight vector w_u) on the basis of S′_u. However, none of these per-user models can be used alone for predicting question "interestingness" or popularity because such indications are personal to a particular user, not common to all users.
An alternative implementation is to use the model (denoted by w_0) learned on the basis of S′. The insufficiency of the model w_0 originates from an inability to avoid influences of a minority of users that diverges from the majority of users in terms of preferences about "interestingness," popularity or whether a question is recommended. This influence can be mitigated and w_0 can be enhanced or boosted as explained further below.
It is noted that different users might provide different preference labels for the same set of instance pairs. In one implementation, instance pairs from a majority of users are used and instance pairs from an identified minority of users are ignored as noise or weighted as less important. In such an implementation, this process is done automatically by distinguishing the majority from the minority.
One solution for mitigating the problem associated with the minority is to give a different weight to each instance pair, where a larger weight means the particular instance pair is more important. In this implementation, it is assumed that all instance pairs from a user u share the same weight α_u. The next step is to determine a weight for each user.
Every w obtained by PAPL (from Listing 1) is treated as a directional vector. Predicting a preference order between two questions x_i^(1) and x_i^(2) is achieved by projecting x_i^(1) and x_i^(2) onto the direction denoted by w and then sorting them on a line. Thus, the directional vector w_u denoting a user u agreeing with the majority should be close to the directional vector w_0 denoting the majority. Furthermore, the closer a user vector is to w_0, the more important that user's data is.
As one implementation, cosine similarity is used to measure how close two directional vectors are to each other. A set of user weights {α_u} is found as follows:

α_u = cos(w_u, w_0) = ⟨w_u, w_0⟩ / (∥w_u∥ ∥w_0∥)   (9)
This implementation is termed the majority-based perceptron algorithm (MBPA); it emphasizes training on the instance pairs from the majority of users, such as by using the weights of Equation 9. Listing 2 provides pseudocode for one implementation of this method.
In MBPA, at iteration 0 (t = 0), the condition at line 4 of Listing 2 prevents the minority from participating in the training process. Note that u(i) represents the user who is involved in generating the preference pair x_i^(1) and x_i^(2) (such as found in Definition 2). Further, at line 5 of Listing 2, training is emphasized over important instance pairs according to Equation 9. At iteration 1, w_0 is replaced with w_1 and the procedure is iterated, where it is expected that w_{t+1} represents the majority better than w_t.
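Listing 2 is likewise not reproduced here. The sketch below illustrates the majority-based idea just described: per-user weights are taken from the cosine similarity to the current global model, users whose similarity is not positive are skipped, and the procedure is iterated. It reuses the papl() sketch given earlier, and the iteration and epoch counts are assumptions.

```python
# Sketch of a majority-based perceptron (MBPA-style); assumes papl() from above.
import numpy as np

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(np.dot(a, b) / (na * nb)) if na and nb else 0.0

def mbpa(pairs_by_user, n_features, iterations=5, epochs=10, margin=0.1):
    """pairs_by_user: dict user -> list of (x1, x2, z) preference pairs."""
    all_pairs = [p for ps in pairs_by_user.values() for p in ps]
    w_global = papl(all_pairs)                                   # w_0 from pooled data
    w_users = {u: papl(ps) for u, ps in pairs_by_user.items() if ps}

    for _ in range(iterations):
        alpha = {u: cosine(w_u, w_global) for u, w_u in w_users.items()}
        w = np.zeros(n_features)
        for _ in range(epochs):
            for u, ps in pairs_by_user.items():
                if alpha.get(u, 0.0) <= 0.0:                     # skip diverging minority
                    continue
                for x1, x2, z in ps:
                    d = x1 - x2
                    if z * np.dot(w, d) <= margin:
                        w += alpha[u] * z * d                    # emphasize majority users
        w_global = w                                             # iterate with the new model
    return w_global
```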
As MBPA is an iterative algorithm, it is helpful to discuss its convergence. Theorem 1 guarantees the convergence of MBPA. First, Definition 3 is given:

The margin γ(w, S′) of a score function f_w is the minimal real-valued output on the training set S′. Specifically,

γ(w, S′) = min_i z_i ⟨w, x_i^(1) − x_i^(2)⟩   (10)
Theorem 1: Let S′ be a set of training examples, and let r := max_i ∥x_i^(1) − x_i^(2)∥. Suppose there exists w_opt ∈ R^n such that ∥w_opt∥ = 1 and

γ(w_opt, S′) ≥ Γ   (11)

Then, if ⟨w_opt, w_0⟩ > 0, the number of updates made by the algorithm MBPA on S′ is bounded. Theorem 1 is an extension of Novikoff's theorem.
Learning Features
At community sites, a question is usually associated with three kinds of entities: (a) an asker who posts the question; (b) answerers who provide answers to the question; and (c) the answers to the question. Using the exemplary method described above, popularity is predicted not only for questions with answers, but also for questions without answers. Thus, when modeling question popularity, two aspects of features are explored: features about questions and features about askers of the questions. Table 1 provides a list of features about questions (QU) and Table 2 provides a list of features about askers of questions (AS).
Features about questions (as shown in Table 1) come only from metadata of questions. In one implementation, a question comprises a title, description and posting time. In one implementation, a "bag-of-words" feature in reference to a question title and question description is not used. Features about askers are extracted from historical behaviors of askers. An asker's historical information can indicate whether the asker is a skilled question asker or has a history of asking "interesting" questions.
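For illustration, asker (AS) features of the kind named in the experiments below (ratio of starred questions, stars per question, total stars received) could be computed roughly as follows; the record layout is an assumption, and the full feature lists of Tables 1 and 2 are not reproduced here.

```python
# Rough sketch of computing a few asker-history features.
def asker_features(asker_id, questions):
    """questions: list of dicts with keys 'asker' and 'stars' (votes received)."""
    asked = [q for q in questions if q["asker"] == asker_id]
    total_stars = sum(q["stars"] for q in asked)
    starred = sum(1 for q in asked if q["stars"] > 0)
    n = len(asked)
    return {
        "total_stars_received": total_stars,
        "stars_per_question": total_stars / n if n else 0.0,
        "ratio_of_starred": starred / n if n else 0.0,
    }
```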
Experimental Results
Using the above-described technology, experimental results were obtained. As to the dataset, 297,919 questions were crawled from the Yahoo! Answers website under the top-level category of "travel." The questions were posted within nine months between Aug. 1, 2007 and Apr. 30, 2008. Each question comprised two fields, title and description. Each question was also identified by the asker of the question. Users of Yahoo! Answers rate or recommend questions by the label of "interesting."
The following procedure was used to build training sets, a development set, and a test set.
1—Randomly separated all questions into two sets denoted Set-A and Set-B.
2—With Set-A, built two training sets:
TR-1—Extracted from Set-A all questions voted as “interesting” by more than four users and then applied Definition 1 onto the extracted questions (Δv=5). The resulting preference pairs comprise TR-1.
TR-2—Applied Definition 2 to all questions in Set-A which resulted in a data set as Equation 3. The data set is denoted by TR-2.
3—With Set-B, extracted all questions voted as "interesting" by more than four users and then applied Definition 1 to the extracted questions (Δv=5). The result was then split into two subsets: a development set DEV and a test set TST.
Among the crawled questions, only about 13% of questions were voted by users as “interesting.” TR-1 was then considered sparse.
The number of preference pairs in the resulting data sets is as follows: TR-1 (188,638 pairs), TR-2 (1,090,694 pairs), DEV (49,766 pairs), and TST (49,148 pairs). TR-2 was larger than TR-1. TR-1 was obtained by setting Δv=5.
Three of the most “interesting” or “popular” questions in the data set which according to users' votes were: “Where in the world would you love to visit?” “Any suggestions for preventing seasickness?” and “Often do hotels have the comforters and pillows washed?”
Error rates of preference pairs were determined using a formula of the form ER = |mistakenly predicted preference pairs| / |all preference pairs in TST|. The use of different features of questions (and answers) as to "interestingness" or "popularity" was evaluated in two ways: (a) calculating the information gain of each feature; and (b) evaluating the contribution of each feature in terms of predictive capability.
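Written as code, the error-rate formula above is simply the fraction of mispredicted preference pairs; the data layout here is an assumption.

```python
# ER = |mistakenly predicted preference pairs| / |all preference pairs in TST|
def error_rate(predicted, gold):
    """predicted, gold: dicts mapping pair ID -> preference label (+1 or -1)."""
    mistakes = sum(1 for pair_id, z in gold.items() if predicted.get(pair_id) != z)
    return mistakes / len(gold)
```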
Table 3 shows the information gain (IG) for each of a list of learning features sorted or ranked by IG as calculated on the training set TR-2. Features about asker (AS) play a major role in predicting question “interestingness” or “popularity.” From the data, the history of an asker posting starred questions is the most important (AS: Ratio of Starred, AS: Stars per Question, and AS: Total Stars Received). In comparison, the WH-words features are weak features in terms of predicting “interestingness” or “popularity.”
Table 3 also shows the error rates for a series of models trained with PAPL and used on the data set DEV. The error rates do not decrease monotonically, meaning that the features are not independent from each other. The error rates also show that WH-word features do not help (much) in terms of "error rate of preference pairs."
Effectiveness
The following shows the evaluation according to two aspects: (a) how does the training set TR-2 help boost performance? and (b) how well does the method MBPA perform when compared with PAPL? In the experiments, all features of Table 1 were used. The parameters for PAPL and MBPA were tuned with the development set DEV.
Table 4 shows the results of the evaluation of effectiveness. With reference to Table 4, the training set TR-1 was obtained by setting Δv to 5 which is the same as that in TST. From Table 4, the algorithm MBPA trained with the training set TR-2 outperformed both the PAPL trained with TR-1 and the PAPL trained with TR-2 significantly (e.g. sign-test, p-value<0.01). This result shows that (1) taking into consideration questions without user ratings (or votes) incorporates more evidence than the training set given by Definition 1 (by noting that PAPL trained with TR-2 performs better than the PAPL trained with TR-1); and (2) the majority-based perceptron algorithm (MBPA) is effective in filtering noisy training data.
It is noted that the size of TR-2 is much larger than the size of TR-1. It could be argued that the size of TR-1 could be increased by setting Δv smaller (e.g. <5) to achieve a possibly better performance. Table 5 shows the results of setting Δv smaller than 5. The test set is TST and the model is PAPL. With reference to Table 5, the size of TR-1 becomes larger but the error rate of the corresponding PAPL increases as Δv gets smaller. When Δv=1, the size of TR-1 is even comparable with TR-2, but the model learned with TR-1 still performs significantly worse than that learned with TR-2. This further confirms the use of TR-2 built with the data construction method.
Prediction is easier when finer categories of questions are considered. Users tend to converge in their preferences about "interestingness" or "popularity" when topics of questions are constrained within a sub-category. For example, it is relatively easy for users to share the same preference when only topics within Asia as a travel area are considered. Table 6 shows the results of predicting "interestingness" or "popularity" for the "Asia Pacific" and "Europe" sub-categories of travel questions.
There were 46,541 questions under "Asia Pacific" and 23,080 questions under "Europe." By comparing Table 6 with Table 4, it can be seen that question "popularity" is predicted more accurately when predictions are constrained within categories of question topics.
Insights
There is a relationship between a learned preference and users' preferences (each represented by a weight vector).
Users also tend to have larger cosine similarities with ŵ than with w_0. In one implementation, for ŵ, the algorithm only uses data from users whose similarities are larger than 0 (line 4 of Listing 2 ensures this).
The subject matter described above can be implemented in hardware, or software, or in both hardware and software. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed subject matter. For example, the methodological acts need not be performed in the order or combinations described herein, and may be performed in any combination of one or more acts.
This application is a continuation-in-part of and claims priority to U.S. patent application Ser. No. 12/403,560, filed Mar. 13, 2009 titled “Question and Answer Search,” the entirety of which is incorporated herein by reference.
Related U.S. Application Data:

         Number     Date       Country
Parent   12403560   Mar. 2009  US
Child    12569553              US