This application is related to U.S. patent application Ser. No. 10/378,814, entitled “METHOD AND SYSTEM FOR CUSTOMIZED CONFIGURATION OF AN APPEARANCE OF A WEBSITE FOR A USER” to Evan KIRSHENBAUM, et al.; U.S. patent application Ser. No. 10/378,857, entitled “SYSTEM, METHOD AND APPARATUS USING BIOMETRICS TO COMMUNICATE CUSTOMER DISSATISFACTION VIA STRESS LEVEL” to Carol McKENNAN, et al.; U.S. patent application Ser. No. 10/379,212, entitled “APPARATUS AND METHOD FOR THEOREM CERTIFICATION WITHOUT DISCLOSING DOCUMENTS THAT LEAD TO THE THEOREM” to Mathias SALLE; U.S. patent application Ser. No. 10/378,813, entitled “METHOD AND SYSTEM FOR SELLING AN ITEM OVER A COMPUTER NETWORK” to Evan KIRSHENBAUM, et al.; U.S. patent application Ser. No. 10/378,592, entitled “METHOD AND SYSTEM ENABLING THE TRADING OF A RIGHT TO PURCHASE GOODS OR SERVICES” to Robert C. VACANTE, et al.; U.S. patent application Ser. No. 10/378,823, entitled “METHOD AND SYSTEM FOR PROCESSING USER FEEDBACK RECEIVED FROM A USER OF A WEBSITE” to Mathias SALLE, et al., and U.S. patent application Ser. No. 10/378,835, entitled “A METHOD AND SYSTEM ENABLING THE TRADING OF A FUTURES CONTRACT FOR THE PURCHASE OF GOODS OR SERVICES” to Robert C. VACANTE, et al., all of which are concurrently herewith being filed under separate covers, the subject matters of which are herein incorporated by reference.
The technical field relates generally to networked computer systems. More particularly, the technical field relates to a software method of evaluating performance of a website using an agent to interact with the website according to a behavior model for a customer segment.
In the field of network computing, it is desirable for users, or customers, of websites to know how a website performs relative to similar websites, before the customer uses the website. For example, a customer wanting to purchase a particular good or category of goods via a network, such as the Internet, often has numerous choices of websites that sell the desired product. A buyer of a book can access various websites that sell books. Given the wide availability of similar websites, it is desirable for customers to have some means of distinguishing the websites when deciding which website to use. Some websites may be slower than others, less reliable than others, or more complicated to use than other websites. A customer may want to avoid these lesser quality websites and instead immediately access a better website. Likewise, it is desirable for operators of good websites to have a means of advertising that their websites are “better” than other, similar websites, as judged by an objective rating system.
Existing systems do not provide an efficient, effective way of rating websites based on performance. One existing method for evaluating websites is to receive feedback from customers of the website and to display this feedback on the website or on a related website that aggregates customer feedback for multiple websites. One problem with this method is that it requires input from actual customers, which takes time to gather and process, particularly to obtain a statistically meaningful sample. Another problem is that the feedback comes from self-selected customers: only those customers who take the time to provide feedback are considered, and their opinions will likely be skewed one way or another. Also, customer feedback is inherently subjective, relying on individual responses based on limited experience with one website. One particular customer's experience with the website at one instance, for one transaction, might not hold true for other customers. What is needed is a more reliable method of evaluating websites.
A method is disclosed for evaluating the performance of a website. An agent accesses the website and interacts with it in a manner that simulates an example session of interaction between a customer and the website. The agent interacts with the website according to a behavior model and gathers website performance data related to the interaction. The performance data is compared to a utility function for the behavior model. A rating is assigned to the website based on the comparison, and the rating is made available to potential website customers seeking information related to the website's performance. In one embodiment, different behavior models and utility functions are associated with different segments of website customers.
A tangible, computer-readable medium having computer-executable instructions is also disclosed for performing a method of evaluating performance of a website. A session is initiated between the website and a customer segment agent, and the customer segment agent interacts with the website according to a behavior model that is associated with a customer segment. Performance data is gathered for the website based on the interaction with the agent. The performance data is compared to a utility function for the customer segment, and a rating is assigned to the website based on the comparison.
A computer system is also disclosed for evaluating performance of a website. The system includes a data mining system that accesses customer data for customers of the website. The data mining system identifies customer segments and creates a behavior model and a utility function for each of the customer segments. The computer system also includes a customer segment agent that receives the behavior models and the utility functions from the data mining system. The customer segment agent interacts with the website according to the behavior models. While interacting with the website, the customer segment agent collects performance data for the website for each of the behavior models and compares the performance data to the utility functions to create a rating for the website for each of the customer segments.
The detailed description will refer to the following drawings, wherein like numerals refer to like elements, and wherein:
In the example of
The data mining system 16 also derives a utility function 52 for each customer segment. The utility function 52 is a benchmark for website performance for the behavior model 56. When the data mining system 16 creates the behavior model 56 to test the website 30, the data mining system 16 also creates the utility function 52 so that the customer segment agent 20 has a way of measuring the website's performance in response to requests following the behavior model 56. In one embodiment, the utility function 52 is a collection of website performance data that models an ideal performance of the behavior model 56. The agent 20 interacts with the website 30 according to the behavior model 56, collects website performance data, and compares the website performance data to the utility function 52 to create a rating for the website 30.
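The comparison described above can be sketched in code. The following is a minimal, hypothetical illustration of scoring collected performance data against a utility-function benchmark to produce a rating; the metric names, the capped-ratio weighting scheme, and the 0-100 scale are all assumptions for illustration and are not specified by this disclosure.

```python
def rate_website(performance, utility, weights):
    """Score observed metrics against ideal (benchmark) values.

    performance: observed metric values, e.g. {"response_ms": 500}
    utility:     ideal (benchmark) values for the same metrics
    weights:     relative importance of each metric (sums to 1.0)
    """
    score = 0.0
    for metric, ideal in utility.items():
        observed = performance.get(metric, 0.0)
        # Ratio of ideal to observed, capped at 1.0 so that exceeding
        # the benchmark cannot inflate the score.
        ratio = min(ideal / observed, 1.0) if observed else 0.0
        score += weights[metric] * ratio
    return round(100 * score)

# A website that is twice as slow as the benchmark and needs extra
# pages to reach checkout earns a correspondingly reduced rating.
rating = rate_website(
    {"response_ms": 500, "pages_to_checkout": 6},
    {"response_ms": 250, "pages_to_checkout": 4},
    {"response_ms": 0.6, "pages_to_checkout": 0.4},
)
```

A website whose observed data matches the benchmark exactly would score 100 under this sketch.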
A behavior model 56 is a parameterization of a program that decides on a next action based on previous actions, used to simulate an example session between a website customer (e.g., 34) and the website 30. In one example, a behavior model 56 applies statistical models to particular customer transactions with the website. For example, with an electronic commerce website, different behavior models are used for different segments of customers. In the case of a website (e.g., 30) that sells goods and services, various customer segments may be created, for example, for customers (e.g., 34) who are likely to carefully research products before purchasing, customers (e.g., 34) who are on a budget, customers (e.g., 34) who more often purchase based on impulse, customers (e.g., 34) who have more extensive knowledge of the products for sale, etc.
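The notion of "a parameterization of a program that decides on a next action based on previous actions" can be illustrated as a simple probabilistic state machine. The action names, transition probabilities, and the "budget shopper" segment below are invented for illustration; they are not taken from this disclosure.

```python
import random

class BehaviorModel:
    """Illustrative behavior model: the next action is drawn from a
    probability distribution conditioned on the previous action."""

    def __init__(self, transitions, seed=None):
        # transitions: {current_action: [(next_action, probability), ...]}
        self.transitions = transitions
        self.rng = random.Random(seed)

    def next_action(self, previous):
        choices = self.transitions.get(previous, [("end", 1.0)])
        actions, probs = zip(*choices)
        return self.rng.choices(actions, weights=probs, k=1)[0]

    def session(self, max_steps=20):
        # Simulate one example session, starting from the home page.
        action, trace = "home", []
        for _ in range(max_steps):
            action = self.next_action(action)
            trace.append(action)
            if action == "end":
                break
        return trace

# Hypothetical parameterization for a budget-conscious segment.
budget_shopper = BehaviorModel(
    {
        "home": [("search", 0.7), ("browse", 0.3)],
        "search": [("view_item", 0.8), ("end", 0.2)],
        "browse": [("view_item", 0.6), ("end", 0.4)],
        "view_item": [("add_to_cart", 0.3), ("search", 0.5), ("end", 0.2)],
        "add_to_cart": [("checkout", 0.5), ("search", 0.5)],
        "checkout": [("end", 1.0)],
    },
    seed=42,
)
trace = budget_shopper.session()
```

A different customer segment would be represented by the same program with a different transition table, which is what makes the model a parameterization.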
The data mining system 16 groups customers using customer data and segmentation rules 54. Segmentation rules 54 are sent to a website rating service 60 and are used by the rating service 60 to provide website rating information to the website customers (e.g., 34). The agent 20 applies various types of behavior models (e.g., 56) to the website 30 to determine how the website 30 performs for different segments of customers. In one embodiment, the utility function 52 is derived by determining customer preferences according to the method and system described in U.S. patent application Ser. No. 09/554,751, entitled “Modeling Decision-Maker Preferences Using Evolution Based on Sampled Preferences,” filed on Apr. 7, 2000, by Evan Kirshenbaum, which is hereby incorporated by reference.
The data mining system 16 retrieves customer data from a database 14 and groups website customers into customer segments based on the customer data. The data mining system 16 creates customer segmentation rules 54 that are sent to a website rating service 60 and used to rate the website 30. The data mining system 16 also creates a behavior model 56 for each customer segment. The behavior model 56 is sent to the exerciser 24 of the customer segment agent 20. The exerciser 24 initiates a session with the website 30 and uses the behavior model 56 to interact with the website 30. The data mining system 16 also sends a utility function 52 to the agent 20. The utility function 52 is used by the assessor of the agent 20 to evaluate the website 30.
The behavior model 56 causes the agent 20 to perform one or more typical transactions that might be performed by customers (e.g., 34) within a particular customer segment. By way of example, the system 10 may be implemented to evaluate a website 30 that sells books. A customer segment may be identified from the customer data as being those customers who purchase 3-5 books during a single session of interaction with the website 30. The behavior model 56 may be created to access the website 30 and perform all of the steps at the website 30 necessary to purchase four books. In one embodiment, the books are actually purchased and delivered to an address so that the functionality of the website 30 may be fully tested along with the related shipping and delivery functions.
In another embodiment, the behavior model 56 is a statistical model that performs different types of transactions. For example, the behavior model 56 might represent a customer segment that performs a particular type of transaction (e.g., purchase of a book) ten percent (10%) of the time, a different type of transaction (e.g., searching for multiple types of books) forty percent (40%) of the time, etc. In this example, the agent 20 interacts with the website 30 during multiple sessions at different times and performs the different transactions associated with the behavior model 56. For example, the agent 20 might perform the first type of transaction (e.g., purchase of a book) during 10% of the website sessions in the example above. The system 10 tests the website's performance of the behavior model 56. Performance data for the website 30 is gathered by the agent 20 and is stored in a database 40. By way of example, performance data includes data related to the complexity of the text displayed on the website 30, the formality of the language used by the website 30, the amount of animation, the number of items suggested, the use of “pop-up” menus or windows, etc.
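The statistical mix described above can be sketched as a weighted draw over transaction types, with session shares converging toward the segment's percentages over many sessions. The transaction names and the 10%/40%/50% split are illustrative assumptions; the disclosure specifies only the first two percentages.

```python
import random

# Assumed transaction mix for one customer segment: purchase a book in
# 10% of sessions, search across multiple types of books in 40%, and
# (as an invented remainder) only browse in the other 50%.
TRANSACTION_MIX = [
    ("purchase_book", 0.10),
    ("search_books", 0.40),
    ("browse_only", 0.50),
]

def pick_transaction(rng):
    kinds, weights = zip(*TRANSACTION_MIX)
    return rng.choices(kinds, weights=weights, k=1)[0]

# Simulate many sessions; the share of purchase sessions approaches 10%.
rng = random.Random(0)
sessions = [pick_transaction(rng) for _ in range(10_000)]
purchase_share = sessions.count("purchase_book") / len(sessions)
```

Each simulated session would then drive the agent through the chosen transaction type against the website under test.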
In the example of
A website rating service 60 provides a way for a customer (e.g., 34) to learn the rankings for various websites (e.g., 30) associated with the customer's segment. In response to a query from the customer 34, the rating service 60 uses the customer segmentation rules 54 and information provided by or known about the customer 34 to identify one or more customer segments that apply to the customer 34. Based on the query, the rating service 60 also identifies one or more websites (e.g., 30) and for each website (e.g., 30) retrieves from the rankings database 42 the ranking associated with that website (e.g., 30) and the customer segment(s) associated with the customer 34. The rating service 60 then arranges for these rankings to be displayed to the customer 34.
In an alternative embodiment, in response to a query from a customer 34 for a website (e.g., 30) for which rankings are not available in the rankings database 42, the rating service 60 arranges for the customer segment agent 20 to access the website (e.g., 30), acting according to the behavior model 56 associated with the customer segment associated with the customer 34 and evaluating the results according to the utility function 52 associated with that customer segment. The rating service 60 then arranges for the rankings thus learned to be communicated to the customer 34.
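The rating service's query path described above can be sketched as follows. The in-memory rankings store, the website names, the rule predicates, and the default segment are all hypothetical stand-ins for illustration.

```python
# Assumed stored rankings, keyed by (website, customer_segment).
RANKINGS_DB = {
    ("books.example", "B"): 74,
    ("books.example", "C"): 91,
    ("shoes.example", "B"): 82,
}

def segments_for(customer, rules, default="B"):
    """Apply segmentation rules; fall back to a default segment."""
    matched = [seg for seg, pred in rules if pred(customer)]
    return matched or [default]

def rankings_for_query(customer, websites, rules):
    """Return the stored ranking for each queried website, for each
    customer segment that applies to the querying customer."""
    results = {}
    for site in websites:
        for seg in segments_for(customer, rules):
            if (site, seg) in RANKINGS_DB:
                results[(site, seg)] = RANKINGS_DB[(site, seg)]
    return results

# A research-oriented customer is classified into segment C and sees
# only rankings computed for that segment.
rules = [("C", lambda c: c.get("researches_first", False))]
out = rankings_for_query(
    {"researches_first": True}, ["books.example", "shoes.example"], rules
)
```

A missing (website, segment) entry, as in the alternative embodiment above, would instead trigger the customer segment agent to evaluate that website on demand.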
Websites (e.g., 30) may be identified for testing in various ways. In the embodiment shown in
Also shown in
The data mining system 16 analyzes 105 customer data to identify groups of customers (e.g., 34), or “customer segments.” The data mining system 16 creates 106 customer segmentation rules 54. The data mining system 16 also creates 107 a behavior model 56 for each customer segment and derives 108 a utility function 52 for each customer segment. A website (e.g., 30) is identified 109 for analysis. As explained with respect to
The rating is displayed 160 on the website 30 so customers have access to the objective performance rating information. In one embodiment, the rating is provided by a website rating service 60 that is independent of the website 30. In another embodiment, the rating service 60 provides the rating to the customer 34 on a separate website that includes ratings for multiple websites (e.g., 30). In one embodiment, the rating is used in connection with a search engine or similar application in which a customer (e.g., 34) searches for websites (e.g., 30) that are similar. For example, the customer (e.g., 34) may want to find websites (e.g., 30) that sell shoes. The search engine or similar application returns websites (e.g., 30) that meet the customer's search criteria and ranks the websites (e.g., 30) according to the rankings stored in the rankings database 42 based on websites' compliance with the utility function 52 for the customer segment.
In one embodiment, the rating information may be used as a certification process, whereby the rating indicates whether the website 30 meets the certification criteria. In one example, the utility function 52 specifies a minimum performance threshold, for instance, to indicate whether the website 30 passes or fails a test for certification, and the rating information indicates whether the website 30 meets the minimum performance threshold. In one embodiment, the rating is displayed on the website 30 using a portal that allows the rating information to be provided and updated by a source independent of the website.
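The certification reading of the utility function can be sketched as a set of per-metric minimum thresholds with a pass/fail result. The metric names and threshold values below are illustrative assumptions.

```python
def certify(performance, thresholds):
    """Return (passed, failures): passed is True only when every
    observed metric meets its minimum threshold."""
    failures = [
        metric
        for metric, minimum in thresholds.items()
        if performance.get(metric, 0.0) < minimum
    ]
    return (not failures, failures)

# A site meeting its uptime minimum but missing its checkout-success
# minimum fails certification on that one metric.
passed, failures = certify(
    {"uptime_pct": 99.5, "checkout_success_rate": 0.92},
    {"uptime_pct": 99.0, "checkout_success_rate": 0.95},
)
```

The rating information displayed to customers would then report the pass/fail outcome rather than (or alongside) a numeric score.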
In one embodiment, each website (e.g., 30) is assigned a separate rating for each customer segment. When a customer (e.g., 34) accesses the website 30, the customer (e.g., 34) is classified as one of the customer segments. A default customer segment may be applied to new customers or if there is otherwise insufficient customer data to classify the customer within a segment. The rating of the website 30 displayed for the customer 34 is the website's rating for the customer segment assigned to the customer 34. For example, four customer segments—A, B, C, D—may be assigned to a website 30. A behavior model 56 and utility function 52 are created 107, 108 for each customer segment. The agent 20 interacts with the website according to each behavior model 56, and a separate rating is assigned 150 to the website 30 for each of the customer segments A, B, C, D. Thereafter, when a customer 34 accesses the website 30, the customer 34 is associated with one of the customer segments, for example, segment B. The website's rating for customer segment B is displayed 160 for the customer because segment B is associated with the customer. If the customer 34 was classified as being in customer segment C, then the website's rating for customer segment C would be displayed 160. In one example, the rating is provided to the customer 34 by an independent rating service 60. The rating service 60 uses customer segmentation rules 54 received from the data mining system 16 to associate the customer 34 with one of the customer segments. The rating service 60 then provides to the customer 34 the rating for the website so that it is associated with the customer's segment.
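The per-segment rating lookup with a default segment, as in the A, B, C, D example above, can be sketched as follows. The rating values, rule predicates, and choice of default segment are hypothetical.

```python
# Assumed per-segment ratings for one website (segments A-D as above).
RATINGS = {"A": 88, "B": 74, "C": 91, "D": 67}
DEFAULT_SEGMENT = "A"

def classify(customer, rules):
    """Apply segmentation rules in order; fall back to the default
    segment for new customers or insufficient customer data."""
    for segment, predicate in rules:
        if predicate(customer):
            return segment
    return DEFAULT_SEGMENT

rules = [
    ("B", lambda c: c.get("orders", 0) >= 5),
    ("C", lambda c: c.get("researches_first", False)),
]

# A repeat purchaser falls into segment B and sees that segment's rating;
# a brand-new customer falls back to the default segment.
rating_for_regular = RATINGS[classify({"orders": 7}, rules)]
rating_for_newcomer = RATINGS[classify({}, rules)]
```

In the embodiment above, this classification is performed by the independent rating service 60 using the segmentation rules 54 received from the data mining system 16.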
In an alternative embodiment, the rating service 60 receives a query from a customer (e.g., 34), identifies one or more customer segments, and identifies a set of websites relevant to the query. In this embodiment, if the website rankings database 42 does not contain a ranking for one or more of the identified websites (e.g., 30) as pertains to one or more of the identified customer segments, then the rating service 60 causes the customer segment agent 20 to interact with the websites (e.g., 30) to retrieve performance data and to create rankings for the websites (e.g., 30). After the customer segment agent 20 has obtained rankings for the missing websites, the rating service 60 retrieves the rankings from the rankings database 42 (or receives them directly from the agent 20) and provides them to the customer 34.
Although the present invention has been described with respect to particular embodiments thereof, variations are possible. The present invention may be embodied in specific forms without departing from the essential spirit or attributes thereof. In addition, although aspects of an implementation consistent with the present invention are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on or read from other types of computer program products or computer-readable media, such as secondary storage devices, including hard disks, floppy disks, or CD-ROM; a carrier wave from the Internet or other network; or other forms of RAM or read-only memory (ROM). It is desired that the embodiments described herein be considered in all respects illustrative and not restrictive and that reference be made to the appended claims and their equivalents for determining the scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
5537618 | Boulton et al. | Jul 1996 | A |
5734890 | Case et al. | Mar 1998 | A |
6275811 | Ginn | Aug 2001 | B1 |
6289353 | Hazlehurst et al. | Sep 2001 | B1 |
6289502 | Garland et al. | Sep 2001 | B1 |
6314420 | Lang et al. | Nov 2001 | B1 |
6408293 | Aggarwal et al. | Jun 2002 | B1 |
6449632 | David et al. | Sep 2002 | B1 |
6466686 | Senior | Oct 2002 | B2 |
6606581 | Nickerson et al. | Aug 2003 | B1 |
6850988 | Reed | Feb 2005 | B1 |
20020087679 | Pulley et al. | Jul 2002 | A1 |
20020107719 | Tsang et al. | Aug 2002 | A1 |
20020107741 | Stern et al. | Aug 2002 | A1 |
20020123957 | Notarius et al. | Sep 2002 | A1 |
20030167195 | Fernandes et al. | Sep 2003 | A1 |
20040015415 | Cofino et al. | Jan 2004 | A1 |
20040019518 | Abraham et al. | Jan 2004 | A1 |
20040153358 | Lienhart | Aug 2004 | A1 |
Number | Date | Country |
---|---|---|
1087618 | Mar 2001 | EP |
2375624 | Nov 2002 | GB |
WO9939273 | Aug 1999 | WO |
WO 0051051 | Aug 2000 | WO |
WO 0137137 | May 2001 | WO |
Number | Date | Country
---|---|---
20040176992 A1 | Sep 2004 | US