The present invention relates to the field of computing. More specifically, the present invention relates to the field of ranking users of an online service buyer/provider website.
The Internet has continued to grow from an information sharing tool for the government and universities to a commercially viable E-commerce marketplace used by a significant portion of society. Not only are people able to purchase products online, but they are also able to purchase services. Thus, a service can be obtained by engaging an independent service provider without requiring the addition of an employee. For example, if a company wants to engage a programmer to design a web site for them, the company does not simply hold office interviews and hire someone. Instead, certain web sites provide users with the ability to post their expertise and companies with the ability to post jobs that need to be performed. One leading example of such a website is elance.com. Depending on the site, users and companies form an agreement where one provides a service and the other pays for and receives the result of that service (e.g. a completed web site). ELANCE is a registered trademark of Elance, Inc. of Mountain View, Calif.
As with most web sites, this site and competing sites that provide service buyers and service providers with a meeting ground contain a significant amount of data. The ability to organize the data properly can impact user satisfaction. For example, a site that lists service providers alphabetically by name would present a significant disadvantage for someone with a name towards the end of the alphabet, such as Zeb Zimmerman. If the site only had 5 service providers, the problem would be minimal. However, for sites which have many hundreds or thousands of service providers, a better method of listing the service providers is needed.
Some methods have been implemented to sort the data and provide it in a more useful manner to users. U.S. Pat. No. 6,871,181 to Kansal teaches a method for assessing, scoring, ranking and rating technology vendors for the purpose of comparing vendor bids on a project. A score or ranking is developed for each vendor based upon the vendor's historical reliability, and the vendor's ranking is normalized with respect to the other vendors for the purpose of determining the appropriate vendor. U.S. Patent Application Publication No. 2006/0212359 to Hudgeon teaches ratings based on performance attributes such as service quality, timeliness and cost. Sites such as rentacoder.com and guru.com also use ratings to organize service providers. Rentacoder.com uses an equation which sums the cost of each job times the adjusted rating of that job, minus a value for each missed status report. Guru.com ranks users by category and then based on feedback and money earned.
While the ranking schemes above are helpful in organizing data related to service providers on web sites, there are shortcomings which need to be addressed.
A method of and system for ranking users by reputation enables better searching for a service provider. Service providers are also motivated to conform to reputation requirements since they are published. The reputation requirements include reviews, earnings, duration on a site, recent visits and other components that are able to establish a user's reputation. The components are also weighted so that more important factors count more towards a user's reputation.
In one aspect, a method of searching on a computing device for a service provider to provide a service is disclosed. The method includes searching for one or more skills. The one or more skills are matched with text within service provider profiles, and the matching operation produces search scores. Reputation scores are calculated based on reputation data within the service provider profiles. Typically, a final contribution value is associated with each of the reputation data; the final contribution is dependent on a component weight and a category weight. A list of service providers is generated which is based on the search scores and the reputation scores.
In another aspect, a system for presenting a list of service providers to perform a task is disclosed. The system includes a processor and an application executed by the processor. The application is for searching for one or more skills, matching the one or more skills with text within service provider profiles to produce search scores, calculating reputation scores based on reputation data within the service provider profiles, and generating a list of service providers based on the search scores and the reputation scores. Typically, a final contribution value is associated with each of the reputation data; the final contribution is dependent on a component weight and a category weight. The application is executed online.
The search scores are based on matches between the one or more skills and the text. The matches can be weighted. The matches can be weighted more when a skill of the one or more skills is within a predetermined preferred section. The preferred sections can be one of a tagline section, a skills section, a keyword tag section, an experience section and a credentials section. The matches can be weighted less when a skill of the one or more skills is within a predetermined non-preferred section. The non-preferred sections can be one of a description section, a summary section and an "about us" section. The preferred and non-preferred sections can be determined by the website administrator or selected by the users. The search scores and the reputation scores are weighted equally or differently. The reputation data comprise components including feedback data, review data, earnings data, duration data, visitation data and project completion data. A portion of the components are correlated in determining the reputation scores. Reputation components are weighted equally or differently. In some embodiments, the reputation components are grouped into categories. The categories are weighted equally or differently. Each reputation component and at least a portion of each service provider profile are displayed within the list of service providers. Each reputation component is viewable by the service providers, buyers, and other users. The list of service providers is ordered, for example in descending order with the service provider having the highest combination of the search scores and the reputation scores atop the list. The list of service providers is viewable by the service providers. The method further comprises refining the list of service providers.
Yet, in another aspect, a method of providing a list of service providers on a computing device is disclosed. The method includes accessing a database of service provider profiles and determining a reputation score for each service provider. A first user interface is provided to allow entry of one or more skills. The one or more skills are matched to the database of service provider profiles to produce a search score for each service provider. Reputation scores are calculated based on reputation data. Typically, a final contribution value is associated with each of the reputation data; the final contribution is dependent on a component weight and a category weight. A list of service providers is generated based on the search scores and the reputation scores. The search scores are based on weighted matches of the one or more skills with the text within the service provider profiles. The reputation scores are based on the reputation data within the service provider profiles. The list of service providers is displayed. A second user interface is provided to allow refining of the list of service providers.
Yet, in another aspect, a method of calculating a reputation score on a computing device is disclosed. The method includes categorizing feedback components. A category weight is assigned for each category. For each component within a category, a component contribution score is determined by multiplying together the category weight, a component weight and a relative ranking value. For each category, a category contribution score is calculated by adding together the component contribution scores within that category. The method further includes adding together the category contribution scores to determine the reputation score.
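As an illustrative aid only (not part of the original disclosure), the calculation described in this aspect can be sketched in Python as follows; the function name and data layout are assumptions, and weights and rankings are expressed as fractions between 0 and 1:

```python
def reputation_score(categories):
    """Sketch of the reputation calculation: for each component, multiply the
    category weight, the component weight and the provider's relative ranking;
    sum the results within each category, then across categories.

    `categories` maps a category name to a (category_weight, components) pair,
    where `components` maps a component name to a
    (component_weight, relative_ranking) pair.
    """
    score = 0.0
    for category_weight, components in categories.values():
        # Category contribution: sum of the component contributions inside it.
        category_contribution = sum(
            category_weight * component_weight * relative_ranking
            for component_weight, relative_ranking in components.values()
        )
        score += category_contribution
    return score
```

Because the component weights within each category and the category weights themselves each account for 100%, the returned score is itself a fraction between 0 and 1.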
A method of and system for ranking users as a function of reputation encourages users to maintain a high reputation, which, in turn, forces users to perform appropriately to achieve the high reputation. The reputation is able to be based upon a number of factors or components including, but not limited to, content of reviews (e.g. good service, bad service or more specific details), quantity of reviews, earnings, duration on a site, visitation quantity and so on. The reputation of the service provider is also able to be weighted so that more important factors have more weight in determining a user's reputation. Once user reputations are established, the users are able to be ordered based on their reputation. In some embodiments, reputation is utilized in conjunction with search results for determining an order for the user listings.
For example, if the user wants to search through service providers, the user selects “providers” in the drop-down menu. The drop-down menu is able to include any set of profiles to search through, for example, service providers and projects. If the user then wants to search for service providers that have experience with the Ajax programming language, the user types “ajax” in the text box. Then, the user presses “enter” or clicks on the command button to begin the search.
The search operation functions by searching for matches through text in service provider profiles.
The search results include the name of the service provider, service provider profile data 106 and other pertinent data. The profile data 106 includes tagline data and company category as well as links to further data. The links are preferably "clickable" links. In addition to the search results displayed in the list of service providers, reputation data 108 is also displayed. The reputation data 108 includes, but is not limited to, feedback data, review data, earnings data and other data as desired.
The order in which the search results are displayed depends on a total score which is a combination of a search score for the search results of the profile and a reputation score for the reputation data 108. The combined score can be formed using a simple mathematical summation, a weighted summation or any predetermined means for forming a composite score. In some embodiments, the search results of the profile account for half of the total score and the reputation data 108 account for the other half of the total score for determining the order of the results. In alternative embodiments, other point schemes are used for determining the order of the results.
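For illustration only, a weighted summation such as the one described above might look like the following sketch; the function name and the equal 50/50 default weights are assumptions mirroring one embodiment, not a required implementation:

```python
def total_score(search_score, reputation_score,
                search_weight=0.5, reputation_weight=0.5):
    """Combine a profile's search score and reputation score into the total
    score used to order the search results; the default weights mirror the
    embodiment in which each score accounts for half of the total."""
    return search_weight * search_score + reputation_weight * reputation_score
```

Other point schemes simply correspond to other choices of the two weights or to a different combining function.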
The search score for the service provider profile depends on where the searched-for text is found. As shown in FIG. 2, a service provider profile includes a number of sections, and matches found in some sections are weighted more heavily than matches found in other sections.
For example, the user searches for service providers using the text “Ajax.” A first service provider includes Ajax in its “tagline,” “skills” and “work experience” sections, with 5 years of work experience with Ajax, and would thus receive a search score accordingly. A second service provider only includes Ajax in its “about us” section and would thus receive a lower search score than the first service provider. Assuming their reputation scores are equal, the first service provider would be listed first or at the top of the list and the second service provider would be listed below.
In some embodiments, a reputation score for reputation data 108 is computed similarly to the search score. As described above, the reputation data 108 is able to include components including, but not limited to, feedback data, review data, earnings data, frequency of visits, most recent visit, number of projects completed and other data. In some embodiments, more or less data is included in the reputation data 108. The components of the reputation data 108 are weighted so that some of the components are more important while others are less important, which results in different reputation scores. For example, a feedback rating of 100% positive can be weighted heavier than the frequency of visits to the site. Some components are correlated in determining the reputation score for the reputation data 108. For example, a feedback rating and the quantity of reviews are related such that a feedback rating of 100% is more valuable when there have been 20 reviews versus 1 review.
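The description does not specify how correlated components are combined; purely as a hedged illustration, one simple way to make a feedback rating count for more as the number of reviews grows is to scale it by a review-count factor, as in the sketch below (the saturation constant and the names are assumptions):

```python
def adjusted_feedback(feedback_rating, review_count, saturation=20):
    """Illustrative correlation of two reputation components: a feedback
    rating between 0.0 and 1.0 is scaled by how many reviews support it, so
    a 100% rating backed by 20 reviews outweighs the same rating backed by 1.
    The saturation constant of 20 reviews is an assumption, not from the text."""
    confidence = min(review_count, saturation) / saturation
    return feedback_rating * confidence
```

With these assumptions, adjusted_feedback(1.0, 1) is 0.05 while adjusted_feedback(1.0, 20) is 1.0, reflecting the example of 20 reviews versus 1 review.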
In other embodiments, each component is weighted twice: once based on a category weight and once based on a component weight. Specifically, similar components are grouped into the same categories. Each category has a category weight, or percentage, that contributes to the reputation score. The weight assigned for each category is variable and adjustable. Preferably, the more important the category is, the higher the category weight. Alternatively, all categories are weighted equally. More or fewer categories that contribute to the reputation score are possible. In all embodiments, all category weights together account for 100% of the reputation score.
Each component within a category has a component weight that contributes to the category score. The weight assigned for each component within the category is variable and adjustable. Similarly, the more important the component is, the higher the component weight. Alternatively, all components within the category are weighted equally. More or fewer components that contribute to the category score are possible. In all embodiments, all component weights within the category together account for 100% of the category score.
As illustrated in a table 300 of FIG. 3, a first category, "customer satisfaction," has a category weight of 25% and includes four components: an "Average Six Months Feedback" component 335 with a component weight of 50%, an "Average All Time Feedback" component 340 with a component weight of 30%, a "Percent Repeat" component 345 with a component weight of 10% and a "Number of Feedbacks" component 350 with a component weight of 10%.
Since the four components under the "customer satisfaction" category together account for 25% of the reputation score, each component accounts for a maximum possible percentage of the reputation score. The maximum possible percentage for each component is shown in a "Maximum Possible" column 320. The maximum possible percentage for a component is derived by multiplying the "customer satisfaction" category weight (25%) by the component weight. As such, a maximum possible percentage for the "Average Six Months Feedback" component 335 is 12.5% (25%×50%). A maximum possible percentage for the "Average All Time Feedback" component 340 is 7.5% (25%×30%). A maximum possible percentage for the "Percent Repeat" component 345 is 2.5% (25%×10%). A maximum possible percentage for the "Number of Feedbacks" component 350 is 2.5% (25%×10%).
For each component under the "customer satisfaction" category, a relative ranking is shown in a "Relative Ranking" column 315, and a final contribution is shown in a "Final Contribution" column 325. The final contribution for a component represents a percentage of the reputation score contributed by the service provider, and is derived by multiplying the relative ranking by the maximum possible percentage. For example, a relative ranking for the "Average Six Months Feedback" component 335 is 99%, which means that the service provider ranks higher than 99% of all service providers in terms of feedback received within the last six months. A final contribution for the "Average Six Months Feedback" component 335 is 12.4% (12.5%×99%). A relative ranking for the "Average All Time Feedback" component 340 is 99%, which means that the service provider ranks higher than 99% of all service providers in terms of feedback received over all time. A final contribution for the "Average All Time Feedback" component 340 is 7.4% (7.5%×99%). A relative ranking for the "Percent Repeat" component 345 is 17%, which means that 17% of the service provider's customers are repeat customers. A final contribution for the "Percent Repeat" component 345 is 0.4% (2.5%×17%). A relative ranking for the "Number of Feedbacks" component 350 is 30%, which means that 30% of the service buyers whom the service provider had worked for in the past gave the service provider feedback. A final contribution for the "Number of Feedbacks" component 350 is 0.8% (2.5%×30%). Accordingly, a total final contribution 330 from the first category towards the reputation score, which is derived by adding together the final contribution for each component under the "customer satisfaction" category, is 21%. If the service provider had a perfect relative ranking (i.e. 100%) for each customer satisfaction component, then the total contribution 330 would have been 25%, which is the maximum contribution from the first category towards the reputation score.
As illustrated in a table 400 of FIG. 4, a second category, "earnings," has a category weight of 65% and includes two components: a "Six Months Earnings" component 435 with a component weight of 60% and an "All Time Earnings" component 440 with a component weight of 40%.
Since the two components under the "earnings" category together account for 65% of the reputation score, each component accounts for a maximum possible percentage of the reputation score. The maximum possible percentage for each component is shown in the "Maximum Possible" column 420. The maximum possible percentage for a component is derived by multiplying the "earnings" category weight (65%) by the component weight. As such, a maximum possible percentage for the "Six Months Earnings" component 435 is 39% (65%×60%). A maximum possible percentage for the "All Time Earnings" component 440 is 26% (65%×40%).
For each component under the "earnings" category, a relative percentile is shown in a "Relative Percentile" column 415, and a final contribution is shown in the "Final Contribution" column 425. The final contribution for a component represents a percentage of the reputation score contributed by the service provider, and is derived by multiplying the relative percentile by the maximum possible percentage. For example, a relative percentile for the "Six Months Earnings" component 435 is 100%, which means that the service provider is the top earner among all service providers for the past six months. A final contribution for the "Six Months Earnings" component 435 is 39% (39%×100%). A relative percentile for the "All Time Earnings" component 440 is 100%, which means that the service provider is the top earner among all service providers over all time. A final contribution for the "All Time Earnings" component 440 is 26% (26%×100%). Accordingly, a total final contribution 430 from the second category towards the reputation score, which is derived by adding together the final contribution for each component under the "earnings" category, is 65%. If the service provider did not have a perfect relative percentile (i.e. 100%) for each earnings component, then the total contribution 430 would have been less than 65%, which is the maximum contribution from the second category towards the reputation score.
As illustrated in a table 500 of FIG. 5, a third category, "participation," has a category weight of 10% and includes three components: a "Number of Violations Verified" component 535, a "Number of Credentials Verified" component 540 and a "Number of Skills Tested Positively" component 545, each with a component weight of 33%.
Since the three components under the "participation" category together make up 10% of the reputation score, each component accounts for a maximum possible percentage of the reputation score. The maximum possible percentage for each component is shown in a "Maximum Possible" column 520. The maximum possible percentage for a component is derived by multiplying the "participation" category weight (10%) by the component weight. As such, a maximum possible percentage for the "Number of Violations Verified" component 535 is 3.3% (10%×33%). A maximum possible percentage for the "Number of Credentials Verified" component 540 is 3.3% (10%×33%). A maximum possible percentage for the "Number of Skills Tested Positively" component 545 is 3.3% (10%×33%).
For each component under the "participation" category, a relative position is shown in a "Relative Position" column 515, and a final contribution is shown in a "Final Contribution" column 525. The final contribution for a component represents a percentage of the reputation score contributed by the service provider, and is derived by multiplying the service provider's relative position by the maximum possible percentage. For example, a relative position for the "Number of Violations Verified" component 535 is 0%, which means that the service provider has no verified violations within the services exchange medium. A final contribution for the "Number of Violations Verified" component 535 is 0% (3.3%×0%). A relative position for the "Number of Credentials Verified" component 540 is 100%, which means that the service provider's credentials are all verified. A final contribution for the "Number of Credentials Verified" component 540 is 3.3% (3.3%×100%). A relative position for the "Number of Skills Tested Positively" component 545 is 0%, which means that the service provider has no skills that are tested positively within the services exchange medium. A final contribution for the "Number of Skills Tested Positively" component 545 is 0% (3.3%×0%). Accordingly, a total final contribution 530 from the third category towards the reputation score, which is derived by adding together the final contribution for each component under the "participation" category, is 3.3%. If the service provider had a perfect relative position (i.e. 100%) for each participation component, then the total contribution 530 would be 10%, which is the maximum contribution from the third category towards the reputation score.
The reputation score is determined by adding together the first category's total final contribution 330, the second category's total final contribution 430, and the third category's total final contribution 530. As such, the service provider's reputation score is 89.3% (21%+65%+3.3%). The reputation score is a quantitative metric which represents the service provider's reputation. It should be apparent to those of ordinary skill in the art that the categories, the components and the associated weights as illustrated in FIGS. 3-5 are exemplary, and that other categories, components and weights are able to be used.
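As a worked illustration, plugging the example weights and relative values from the three tables into the reputation_score sketch given earlier reproduces the 89.3% figure; the data structure below is an assumption made for the sketch, not the actual stored format:

```python
provider = {
    "customer satisfaction": (0.25, {
        "Average Six Months Feedback":        (0.50, 0.99),
        "Average All Time Feedback":          (0.30, 0.99),
        "Percent Repeat":                     (0.10, 0.17),
        "Number of Feedbacks":                (0.10, 0.30),
    }),
    "earnings": (0.65, {
        "Six Months Earnings": (0.60, 1.00),
        "All Time Earnings":   (0.40, 1.00),
    }),
    "participation": (0.10, {
        "Number of Violations Verified":      (1 / 3, 0.00),
        "Number of Credentials Verified":     (1 / 3, 1.00),
        "Number of Skills Tested Positively": (1 / 3, 0.00),
    }),
}

# Prints 0.893, i.e. 89.3%, matching the sum 21% + 65% + 3.3% above.
print(round(reputation_score(provider), 3))
```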
By combining the search score and the reputation score, a total score is determined which is used to generate a list of results ordered based on the total score. Since reputation is an important attribute when choosing a service provider, the list is ordered in a significant manner useful to users when the reputation score is combined with the search score. As such, the list provides the users with matches that are most capable of providing high quality service, descending to those that are not as capable according to reputation. The users are then able to scroll down the list of results to pick the person or company that best suits their needs.
The list of results is also able to be refined further using the refine results components 104. The refine results components 104 are able to include links, scroll bars, text boxes and/or any other user interface element. For example, the refine results components 104 include links to limit the results that are only within a certain category. Scroll bars limit the results to only those with feedback above a certain percentage, the number of reviews above a certain amount, and an hourly rate up to a certain price. A location text box limits the results to those within a certain geographical area. As each refine results component 104 is used, the list of results changes to only include those that also meet the refine results criteria.
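As a sketch only, the refinement step can be thought of as successive filters over the ordered list; the field names and criteria below are assumptions chosen to mirror the example refine results components 104:

```python
def refine_results(results, category=None, min_feedback=None,
                   min_reviews=None, max_hourly_rate=None, location=None):
    """Illustrative refinement of an ordered result list: each criterion,
    when supplied, removes providers that do not meet it, mirroring the
    links, scroll bars and text boxes described above. The field names on
    the provider records are assumptions for this sketch."""
    refined = []
    for provider in results:
        if category is not None and provider["category"] != category:
            continue
        if min_feedback is not None and provider["feedback"] < min_feedback:
            continue
        if min_reviews is not None and provider["reviews"] < min_reviews:
            continue
        if max_hourly_rate is not None and provider["hourly_rate"] > max_hourly_rate:
            continue
        if location is not None and location.lower() not in provider["location"].lower():
            continue
        refined.append(provider)
    return refined
```

Each time a criterion is supplied or changed, re-running the filter over the original ordered list yields the updated display.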
For example, the site operator designates text found in the "tagline" section 202 and the "keyword" section 208 as the most important. Text found in the "skills" section 210 is next on the importance list, followed by the "experience" section 206. Text found in the other sections is considered the least important. Furthermore, in some embodiments, some text is dependent on other data, such as text found in the "skills" section 210, whose value is modified by the number of years of experience for that skill. Thus, a skill of Ajax with 1 year of experience is rated as less valuable than the same skill with 3 years of experience.
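For illustration, a section-weighted text match along these lines might be sketched as follows; the particular weight values, field names and the way years of experience scale a skills match are assumptions, not values taken from the description:

```python
# Hypothetical per-section weights: tagline and keyword matches count the
# most, then skills, then experience; all other sections get a default weight.
SECTION_WEIGHTS = {"tagline": 4, "keyword": 4, "skills": 3, "experience": 2}
DEFAULT_WEIGHT = 1

def search_score(profile, term):
    """Score a profile for one search term based on where the term is found."""
    term = term.lower()
    score = 0
    for section, text in profile["sections"].items():
        if term in text.lower():
            weight = SECTION_WEIGHTS.get(section, DEFAULT_WEIGHT)
            if section == "skills":
                # A skills-section match has its value modified by the years
                # of experience recorded for that skill (at least 1).
                years = profile.get("experience_years", {}).get(term, 1)
                weight *= max(years, 1)
            score += weight
    return score
```

Under these assumptions, a provider listing Ajax in the tagline, skills and work experience sections with 5 years of experience scores far higher for "ajax" than one mentioning Ajax only in an "about us" section, consistent with the earlier example.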
Examples of suitable computing devices include a personal computer, laptop computer, computer workstation, a server, mainframe computer, mini-computer, handheld computer, personal digital assistant, cellular/mobile telephone, smart appliance, gaming console or any other suitable computing device.
There are multiple aspects of utilizing the method and system for ranking users. From a service buyer's perspective, the service buyer is able to search for a service provider to perform a desired task. The service buyer enters search text, for example, a programming skill such as Javascript, and based on the search results and reputation results, service providers are ranked and listed accordingly. The service buyer is then able to select the most appropriate service provider for the service buyer's needs. From a service provider's perspective, the service provider is aware that his or her reputation is a factor in determining where he or she is ranked and listed when a service buyer searches. Where the service provider appears on a list is able to have an effect on how often the service provider is selected to provide a service, which ultimately determines profitability for the service provider. Thus, service providers will likely perform the actions necessary to maintain a high reputation, so that they are highly ranked when searched for. This forces service providers to act appropriately, for example, by providing quality service and responding to service buyers' questions, in addition to other actions.
In operation, the method and system for ranking users enable service buyers to easily search for a best matching service provider. The method and system also enable behavioral guidance of service providers, since the service providers are aware that their reputation determines their ranking. The method and system implement a search which ranks service providers based on weighted textual search results and weighted reputation results. The weighted textual search results depend on where the text is found, how many times it is found and other factors used in determining a score for the search results. The weighted reputation results depend on many factors such as reviews, earnings, duration on the site and additional factors. Since service providers are aware that their reputation affects where they are ranked, the reputation influences their behavior on the site. This ranking impact essentially motivates the service providers to perform well or to accept a poor ranking.
The following is an example of utilizing the method and system for ranking users. Service Provider X generates his profile, which includes skills of Javascript, Java and web site design as well as job experience of Javascript for 3 years, Java for 2 years and web site design for 3 years. Initially, when service buyers search for text such as Java, Service Provider X will be listed below others who have an established positive reputation, but above those who have a negative reputation. Since the search results are taken into account too, others with more years of experience in Java will also likely appear ahead of Service Provider X, assuming their reputation is not too tarnished. As Service Provider X is hired for more and more work, assuming he performs well and does what is required as a service provider, his reputation will increase. Furthermore, since Service Provider X is aware that performing well increases his reputation, which results in him being ranked higher, Service Provider X will ensure he performs well and meets the expectations of any service buyers who purchase his services. Thus, the method and system for ranking users influence the behavior of the service providers and ensure that service providers provide quality service. Unlike sites which list service providers solely based on earnings, which simply reward service providers who are able to do a lot of work or are hired for expensive work, a method and system for ranking users based, in part, on reputation truly encourages quality service.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of principles of construction and operation of the invention. Such reference herein to specific embodiments and details thereof is not intended to limit the scope of the claims appended hereto. It will be readily apparent to one skilled in the art that other various modifications may be made in the embodiment chosen for illustration without departing from the spirit and scope of the invention as defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
4703325 | Chamberlin et al. | Oct 1987 | A |
4799156 | Shavit et al. | Jan 1989 | A |
5008853 | Bly et al. | Apr 1991 | A |
5548506 | Srinivasan | Aug 1996 | A |
5557515 | Abbruzzese et al. | Sep 1996 | A |
5592620 | Chen et al. | Jan 1997 | A |
5664115 | Fraser | Sep 1997 | A |
5715402 | Popolo | Feb 1998 | A |
5732400 | Mandler et al. | Mar 1998 | A |
5794207 | Walker et al. | Aug 1998 | A |
5835896 | Fisher et al. | Nov 1998 | A |
5842178 | Giovannoli | Nov 1998 | A |
5862223 | Walker et al. | Jan 1999 | A |
5905975 | Ausubel | May 1999 | A |
5924082 | Silverman et al. | Jul 1999 | A |
5949976 | Chappelle | Sep 1999 | A |
5956715 | Glasser et al. | Sep 1999 | A |
5966130 | Benman, Jr. | Oct 1999 | A |
5987498 | Athing et al. | Nov 1999 | A |
6009154 | Rieken et al. | Dec 1999 | A |
6041307 | Ahuja et al. | Mar 2000 | A |
6049777 | Sheena et al. | Apr 2000 | A |
6061665 | Bahreman | May 2000 | A |
6064980 | Jacobi et al. | May 2000 | A |
6078906 | Huberman | Jun 2000 | A |
6092049 | Chislenko et al. | Jul 2000 | A |
6101482 | DiAngelo et al. | Aug 2000 | A |
6119101 | Peckover | Sep 2000 | A |
6128624 | Papierniak et al. | Oct 2000 | A |
6141653 | Conklin et al. | Oct 2000 | A |
6154731 | Monks et al. | Nov 2000 | A |
6161099 | Harrington et al. | Dec 2000 | A |
6223177 | Tatham et al. | Apr 2001 | B1 |
6226031 | Barraclough et al. | May 2001 | B1 |
6233600 | Salas et al. | May 2001 | B1 |
6311178 | Bi et al. | Oct 2001 | B1 |
6336105 | Conklin et al. | Jan 2002 | B1 |
6374292 | Srivastava et al. | Apr 2002 | B1 |
6415270 | Rackson et al. | Jul 2002 | B1 |
6442528 | Notani et al. | Aug 2002 | B1 |
6484153 | Walker et al. | Nov 2002 | B1 |
6557035 | McKnight | Apr 2003 | B1 |
6564246 | Varma et al. | May 2003 | B1 |
6567784 | Bukow | May 2003 | B2 |
6598026 | Ojha et al. | Jul 2003 | B1 |
6832176 | Hartigan et al. | Dec 2004 | B2 |
6859523 | Jilk et al. | Feb 2005 | B1 |
6871181 | Kansal | Mar 2005 | B2 |
7069242 | Sheth et al. | Jun 2006 | B1 |
7096193 | Beaudoin et al. | Aug 2006 | B1 |
7310415 | Short | Dec 2007 | B1 |
7406443 | Fink et al. | Jul 2008 | B1 |
1287997 | Diller et al. | Oct 2008 | A1 |
7437327 | Lam et al. | Oct 2008 | B2 |
7466810 | Quon et al. | Dec 2008 | B1 |
7587336 | Wallgren et al. | Sep 2009 | B1 |
8024225 | Sirota et al. | Sep 2011 | B1 |
20010011222 | McLauchlin et al. | Aug 2001 | A1 |
20010032170 | Sheth | Oct 2001 | A1 |
20010034688 | Annunziata | Oct 2001 | A1 |
20010041988 | Lin | Nov 2001 | A1 |
20020010685 | Ashby | Jan 2002 | A1 |
20020023046 | Callahan et al. | Feb 2002 | A1 |
20020026398 | Sheth | Feb 2002 | A1 |
20020032576 | Abbott et al. | Mar 2002 | A1 |
20020120522 | Yang | Aug 2002 | A1 |
20020120554 | Vega | Aug 2002 | A1 |
20020129139 | Ramesh | Sep 2002 | A1 |
20020133365 | Grey et al. | Sep 2002 | A1 |
20020194077 | Dutta | Dec 2002 | A1 |
20030046155 | Himmel et al. | Mar 2003 | A1 |
20030055780 | Hansen et al. | Mar 2003 | A1 |
20030101126 | Cheung et al. | May 2003 | A1 |
20030191684 | Lumsden et al. | Oct 2003 | A1 |
20040063463 | Boivin | Apr 2004 | A1 |
20040122926 | Moore et al. | Jun 2004 | A1 |
20040128224 | Dabney et al. | Jul 2004 | A1 |
20050043998 | Bross et al. | Feb 2005 | A1 |
20050222907 | Pupo | Oct 2005 | A1 |
20060031177 | Rule | Feb 2006 | A1 |
20060095366 | Sheth et al. | May 2006 | A1 |
20060122850 | Ward et al. | Jun 2006 | A1 |
20060136324 | Barry et al. | Jun 2006 | A1 |
20060155609 | Caiafa | Jul 2006 | A1 |
20060212359 | Hudgeon | Sep 2006 | A1 |
20070027746 | Grabowich | Feb 2007 | A1 |
20070027792 | Smith | Feb 2007 | A1 |
20070067196 | Usui | Mar 2007 | A1 |
20070078699 | Scott et al. | Apr 2007 | A1 |
20070162379 | Skinner | Jul 2007 | A1 |
20070174180 | Shin | Jul 2007 | A1 |
20070192130 | Sandhu | Aug 2007 | A1 |
20070233510 | Howes | Oct 2007 | A1 |
20080059523 | Schmidt et al. | Mar 2008 | A1 |
20080065444 | Stroman et al. | Mar 2008 | A1 |
20080082662 | Dandliker et al. | Apr 2008 | A1 |
20080109491 | Gupta | May 2008 | A1 |
20080154783 | Rule et al. | Jun 2008 | A1 |
20080187114 | Altberg et al. | Aug 2008 | A1 |
20080294631 | Malhas et al. | Nov 2008 | A1 |
20090011395 | Schmidt et al. | Jan 2009 | A1 |
20090017788 | Doyle et al. | Jan 2009 | A1 |
20090055404 | Heiden et al. | Feb 2009 | A1 |
20090177691 | Manfredi et al. | Jul 2009 | A1 |
20090210282 | Elenbaas et al. | Aug 2009 | A1 |
20090234706 | Adams et al. | Sep 2009 | A1 |
20090287592 | Brooks et al. | Nov 2009 | A1 |
20100017253 | Butler et al. | Jan 2010 | A1 |
20100115040 | Sargent et al. | May 2010 | A1 |
20110238505 | Chiang et al. | Sep 2011 | A1 |
Number | Date | Country |
---|---|---|
0 952 536 | Oct 1999 | EP |
WO 0115050 | Mar 2001 | WO |
WO 0173645 | Oct 2001 | WO |
WO 02061531 | Aug 2002 | WO |
Entry |
---|
Majithia et al., "Reputation-based Semantic Service Discovery," 2004, pp. 1-6.
Xu et al., "Reputation-Enhanced QoS-based Web Services Discovery," 2007, pp. 1-8.
Paolucci, Massimo et al., "Semantic Matching of Web Services Capabilities," Springer-Verlag Berlin Heidelberg, 2002, pp. 1-15.
Davenport, Thomas H. and Keri Pearlson, “Two Cheers for the Virtual Office”, summer 1998, abstract, retrieved from the Internet: <URL: http://www.pubservice.com/MSStore?ProductDetails.aspx?CPC=3944>. |
PCT International Search Report and Written Opinion, PCT/US06/22734, Jun. 3, 2008, 5 pages. |
U.S. Appl. No. 60/206,203, filed May 22, 2000, Anumolu et al. |
U.S. Appl. No. 60/999,147, filed Oct. 15, 2007, Diller et al. |
U.S. Appl. No. 61/131,920, filed Jun. 12, 2008, Diller et al. |
U.S. Appl. No. 09/644,665, filed Aug. 24, 2000, Sheth et al. |
ants.com web pages [online]. Ants.com [retrieved on Aug. 22, 2008]. Retrieved from the Internet: <URL: http://www.ants.com/ants/>. |
bizbuyer.com web pages [online]. BizBuyer.com, Inc. [retrieved Aug. 18-21, 2000]. Retrieved from the Internet: <URL: http://www.bizbuyer.com/>. |
BullhornPro web pages [online]. Bullhorn, Inc. [retrieved on Jan. 4, 2001]. Retrieved from the Internet: <URL: http://www.bullhornpro.com/>. |
Cassidy, M., “Going for Broke,” San Jose Mercury News, Monday, Aug. 16, 1999, pp. 1E and 4E, published in San Jose, CA. |
efrenzy.com web pages [online]. eFrenzy, Inc. [retrieved on Aug. 22, 2000]. Retrieved from the Internet: <URL: http://www.efrenzy.com/index.isp>. |
Eisenberg, D., “We're for Hire, Just Click,” Time Magazine, Aug. 16, 1999, vol. 154, No. 7 [online] [retrieved on Aug. 19, 1999]. Retrieved from the Internet: <URL: http://www.pathfinder.com/time/magazine/articles/0,3266,29393,00.html>. |
eworkexchange.com web pages [online]. eWork Exchange, Inc. [retrieved on Aug. 18-22, 2000]. Retrieved from the Internet: <URL: http://www.eworks.com/>. |
eWork Exchange web pages [online]. eWork Exchange, Inc. [retrieved on Jan. 5, 2001]. Retrieved from the Internet: <URL: http://www.eworks.com/>. |
eWork ProSource web pages [online]. eWork Exchange, Inc. [retrieved on Jan. 3, 2001]. Retrieved from the Internet: <URL: http://www.ework.com/>. |
FeeBid.com web pages [online]. FeeBid.com [retrieved on Dec. 18, 2000]. Retrieved from the Internet: <URL: http://www.feebid.com>. |
freeagent.com web pages [online]. FreeAgent.com [retrieved Aug. 18-22, 2000]. Retrieved from the Internet: <URL: http://www.freeagent.com/>. |
guru.com web pages [online]. Guru.com, Inc. [retrieved Aug. 18, 2000]. Retrieved from the Internet: <URL: http://www.guru.com/>.
Herhold, S., “Expert Advice is Collectible for Start-up,” San Jose Mercury News, Monday, Aug. 16, 1999, pp. 1E and 6E, San Jose, CA. |
hotdispatch.com web pages [online]. HotDispatch, Inc. [retrieved on Aug. 22, 2000]. Retrieved from the Internet: <URL: http://www.hotdispatch.com/>. |
Humphreys, Paul et al., “A Just-in-Time Evaluation Strategy for International Procurement,” MCB UP Limited, 1998, pp. 1-11. |
“IBNL Forges Into the Future of Buying and Selling with Source Interactive Software,” PR Newswire, Jan. 10, 1996. [replacement copy retrieved on May 4, 2009]. Retrieved from Internet: <URL: http://www.highbeam.com>. |
imandi.com web pages [online]. Imandi Corporation [retrieved on Aug. 22, 2000]. Retrieved from the Internet: <URL: http://www.imandi.com/>. |
Malone, Thomas W. et al., “The Dawn of the E-Lance Economy,” Harvard Business Review, Sep.-Oct. 1998, pp. 145-152. |
“Netscape Selects Netopia as the Exclusive ‘Virtual Office’ Offering on the New Netscape Small Business Source Service,” PR Newswire, May 11, 1998, Mountain View and Alameda, California. |
onvia.com web pages [online]. Onvia.com [retrieved Aug. 22, 2000]. Retrieved from the Internet: <URL: http://www.onvia.com/usa/home/index.cfm>. |
Opus360 web pages [online]. Opus360 Corporation [retrieved on Jan. 3, 2001] Retrieved from the Internet: <URL: http://www.opus360com/>. |
smarterwork.com web pages [online]. smarterwork.com, Inc. [retrieved on Aug. 22, 2000]. Retrieved from the Internet: <URL: http://www.smarterwork.com/>.
workexchange.com web pages [online]. WorkExchange, Inc. [retrieved Aug. 22, 2000]. Retrieved from the Internet: <URL: http://www.workexchange.com/unique/workexchange/index1.cfm>. |
Non-Final Office Action dated Nov. 8, 2010, U.S. Appl. No. 12/476,039, filed Jun. 1, 2009, Ved Ranjan Sinha et al. |
madbid.com. <http://web.archive.org/web/20080829025830/http://uk.madbid.com/faq/>.
Non-Final Office Action dated Feb. 13, 2012, U.S. Appl. No. 12/755,304, filed Apr. 6, 2010, Jonathan Paul Diller et al. |
morebusiness.com, “How to Write Winning Business Proposals: Writing Strategies,” Office Action dated Oct. 6, 2011, <http://www.morebusiness.com/running—your—business/management/v1n11.brc>, published Aug. 1, 1998. |
Office Action dated Jul. 11, 2012, U.S. Appl. No. 12/755,304, filed Apr. 6, 2010, 22 pages. |
Office Action dated Sep. 19, 2012, U.S. Appl. No. 12/476,039, filed Jun. 1, 2009, Ved Ranjan Sinha et al. |