1. Field of the Invention
The present invention relates to data processing methods for automatically detecting and handling likely failure events experienced by users of a web site or other interactive service.
2. Description of the Related Art
Web sites or other interactive services commonly provide mechanisms for users to provide feedback regarding problems they encounter. Typical problems that are reported include errors on the web site, such as pages that do not resolve or functionality that is broken. The operator of a web site may use such feedback to correct errors and make general improvements to the web site. In some cases, the operator may also provide personalized responses to the feedback messages received from users.
For complex web sites that support large numbers of users (e.g., millions of customers), this method of obtaining user feedback has significant limitations. For example, large numbers of users may provide feedback on the same type of problem, even though feedback from a small number of users may be sufficient to correct the problem. In addition, a significant portion of the feedback messages collected from users may provide suggestions that are of little or no value to the service provider. Thus, a heavy burden is often placed on those responsible for reviewing and responding to feedback messages, especially if an attempt is made to respond to each message.
In many cases, a user's inability to perform a particular task may be the result of an error on the part of the user. For example, in attempting to locate a particular item in an online catalog or directory, the user may search for the item in the wrong category or may inaccurately describe the item in a search query. This type of error is often unreported by the user and therefore goes unnoticed by the service provider. Because it receives no feedback on these types of problems, the operator of the web site or other interactive service provides a substandard experience to the user that may result in lost business.
3. Summary of the Invention
The present invention provides a web site system, or other multi-user server system providing an interactive service, that monitors user activity data reflective of the activities of its users. A failure analysis component analyzes the user activity data on a user-by-user or aggregate basis, according to a set of rules, to automatically detect likely failure scenarios or events. The failure events preferably include patterns of user activity that, when viewed as a whole, indicate that a user or group of users has failed to achieve a desired objective or has had difficulty achieving a desired objective. For example, a failure event may be detected in which a user conducts multiple, related searches that are deemed to be unsuccessful (e.g., because no search results were returned or because no search result items were selected for viewing).
A failure event filtering component intelligently selects the specific failure events for which to request feedback from the affected user or users, preferably taking into consideration information about the failure event itself and information about the user. The decision of whether to request user feedback about a failure event preferably takes into consideration some or all of the following: (a) the type of failure event detected, (b) the transaction history of the user, (c) the frequency with which the user has previously responded to feedback requests, (d) the usefulness of any feedback previously provided by the user, as determined by a rating of previous user feedback messages, (e) the quantity of feedback already collected regarding this type of failure event, (f) the number of feedback messages currently queued for review and response, and (g) the quantity of resources currently available to respond to feedback messages from users.
Requests for user feedback, and responses to the user feedback messages, are preferably presented on a response page that may also display other types of personalized information. To respond to a feedback request, the user accesses and completes an online feedback form that corresponds to the type of failure detected. One feature of the invention is thus the use of a personal log to request feedback from users on specific failure events; this feature may be used regardless of how the failure events are detected, and regardless of whether feedback requests are sent to users selectively. Alternatively, feedback may be requested via email, an instant message, a pop-up window, or another communications method.
Neither this summary nor the following detailed description purports to define the invention. The invention is defined by the claims.
4. Detailed Description of Preferred Embodiments
A specific embodiment of the invention will now be described with reference to the drawings. This embodiment is intended to illustrate, and not limit, the present invention. The scope of the invention is defined by the claims.
The user activity data collected over a period of time, such as over a single browsing session or a sequence of browsing sessions, is analyzed by a failure analysis agent 30. The failure analysis agent 30 applies a set of rules to the activity data to evaluate whether a failure event has occurred during the time period to which the activity data corresponds. These rules define the various types of failures that are detectable. Examples of types of failure events that may be detected include the following: (a) numerous re-submissions of the same or similar search queries over time; (b) multiple search query submissions that result in a null query result; (c) a likely spelling error in a search query, followed by termination by the user of the search process; (d) the user's failure to purchase items added to an online shopping cart, particularly if the user is new to the web site; and (e) the recurring display of an error message to the user. In some cases, the failure analysis agent 30 may also take into consideration the user's account information, such as the user's overall transaction history, when determining whether a failure event has occurred.
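The specification does not reproduce these rules in code; the following is a minimal sketch of how one session-level rule might be applied, assuming activity events are available as dictionaries with `type`, `query`, and `result_count` fields (all assumed names) and using an illustrative repetition threshold:

```python
from collections import Counter

def detect_search_failures(session_events, min_repeats=3):
    """Flag likely failure events in one browsing session: repeated
    submissions of the same (normalized) query, or several queries
    that returned no results at all."""
    null_results = 0
    query_counts = Counter()
    for event in session_events:          # chronological activity data
        if event["type"] != "search":
            continue
        # Normalize word order so "red widget" and "widget red" match.
        query = " ".join(sorted(event["query"].lower().split()))
        query_counts[query] += 1
        if event["result_count"] == 0:
            null_results += 1
    failures = [("repeated_query", q)
                for q, n in query_counts.items() if n >= min_repeats]
    if null_results >= min_repeats:
        failures.append(("null_results", null_results))
    return failures
```

A real failure analysis agent would evaluate many such rules, one per detectable failure type.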
As indicated by the foregoing examples, the “failures” need not be technical in nature, but may include scenarios in which the user has potentially failed to accomplish an intended objective. In addition, at least some of the failures are detected by analyzing a pattern of activity (e.g., a sequence of page requests), as opposed to a single event. The types of failures that are detected by the failure analysis agent 30 will generally depend upon the purpose or function of the particular system (e.g., Internet search, product sales, financial transactions, etc.).
If the failure analysis agent 30 determines that a particular user action or pattern of actions represents a likely failure, the action or pattern of actions is treated as a failure event. As depicted by event 2 in
The feedback request (
In the example shown in
If the user responds to the feedback request (event 5 in
The response sent to the user may take a variety of forms depending on the failure event experienced by the user. The response may, for example, provide suggestions to the user on how to overcome a particular type of problem. For example, the response may notify the user of a particular product that meets the characteristics specified by the user, or may assist the user in locating an item. An example of a type of response 80 that may be provided is shown in
As illustrated in
The filtering component 32 preferably takes into consideration a variety of different criteria in determining whether to request feedback from the user, such as some or all of the factors (a) through (g) enumerated above.
All combinations of the foregoing criteria are included within the scope of this disclosure. Other criteria may additionally be taken into consideration.
The process of selecting specific failure events for which to solicit feedback advances three important goals. One goal is to obtain feedback about those problems that are the most likely to adversely affect the business or reputation of the web site. This goal is preferably achieved in part by taking into consideration the transaction history of the particular user, and/or the type of transaction that was being performed at the time of failure. For instance, the filtering algorithm may favor (tend to request feedback for) failure events of users with significant purchase histories, and failure events that occurred during attempts to conduct relatively high-priced transactions.
A second goal of the filtering process is to seek feedback from those who are the most likely to provide valuable feedback. This goal is preferably achieved in part by taking into consideration the quality of feedback previously provided by the user, the reputation of the user as determined from other sources of information, and/or the frequency with which the user has responded to feedback requests. The extent to which the user has used the web site system (e.g., frequent user versus occasional or new user), as reflected within the user's activity and transaction histories, may also be considered.
A third goal of the filtering process is to control the workload of the systems (either human or computer-implemented) that respond to the feedback requests, such that timely responses may be provided to all or substantially all of the users that provide feedback. This goal may be achieved by taking into consideration (a) the amount of feedback that has already been obtained on a particular type of problem, and/or (b) the number of user feedback messages that are currently queued for preparation of a response. To reduce redundant work by the same or different human operators, the failure events are preferably automatically sorted and “bucketized,” so that an operator can easily identify failure events and user feedback that refer to the same type of problem or to the same area of the web site.
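The specification does not show the bucketizing step in code; a minimal sketch, assuming each failure event record carries `failure_type` and `site_area` fields (assumed names), might group events as follows:

```python
from collections import defaultdict

def bucketize(failure_events):
    """Group failure events by (problem type, site area) so that an
    operator can review all reports of the same problem together."""
    buckets = defaultdict(list)
    for event in failure_events:
        buckets[(event["failure_type"], event["site_area"])].append(event)
    # Largest buckets first: the most widely reported problems.
    return dict(sorted(buckets.items(),
                       key=lambda kv: len(kv[1]), reverse=True))
```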
For purposes of illustration, it will be assumed throughout the following description that the user applications 46 provide user functionality for interacting with (such as by browsing and conducting searches within) an electronic catalog of items (such as physical products), and for purchasing items selected from the electronic catalog. In addition, it will be assumed that the applications 46 provide functionality for users to post product reviews and/or other types of content on the web site, and for other users to vote on or otherwise rate the quality of such postings. For example, in one embodiment, a user can post a product review on a detail page for a product, and visitors to the detail page can vote on whether the product review is helpful. As is known in the art, the votes or rating submissions collected through this process may be used to assign reputation levels to the authors of such content. These reputation levels may in turn be published on the web site, or otherwise used, to assist users in locating product reviews or other postings from reputable authors. The reputation levels of the users may additionally or alternatively be based on other criteria. For example, if the web site supports user-to-user sales, the reputation level of a seller may be dependent upon the number of sales made by the seller, the associated feedback from buyers, and/or other sales-related data associated with the seller.
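The specification does not define how helpfulness votes map to reputation levels; the thresholds and level names in the sketch below are purely illustrative assumptions:

```python
def reputation_level(helpful_votes, total_votes, min_votes=10):
    """Assign a coarse reputation level to a content author from
    helpfulness votes; authors with too few votes remain unranked."""
    if total_votes < min_votes:
        return "unranked"          # not enough data to rank the author
    ratio = helpful_votes / total_votes
    if ratio >= 0.8:
        return "top reviewer"
    if ratio >= 0.5:
        return "established"
    return "basic"
```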
As depicted in
Data regarding the browsing activities of each user of the system is collected over time in a user activity database 56. The user activity data may be stored in any of a variety of repository types, such as a server log file or “web log,” a database of event data mined from a web log, or a history server that stores event data in real time in association with corresponding users. One example of a history server architecture that may be used to store user activity data is disclosed in U.S. patent application Ser. No. 10/612,395, filed Jul. 2, 2003, the disclosure of which is hereby incorporated by reference. Another possible approach is to collect the activity data on the associated user computers 44 using a special client-side software component, such as a browser plug-in.
The user activity data may, for example, include information about every selection event or “click” performed by each user, including information about dynamic page content served to the user. In some embodiments, the activity data may also include the HTML or other coding of some or all of the pages served to the user, such that an operator may view the actual sequence of web pages displayed to the user during the course of the failure event. A user's activity data may, in some embodiments, also reflect actions performed by the user on other web sites. For example, activity data may be collected from partner web sites, or may be obtained by having users install a browser “toolbar” plug-in that reports accesses to other web sites.
As illustrated in
The detected failure events may be recorded within a failure events database 60. The data stored in this database 60 for a given failure event may include some or all of the following: the type of failure detected (e.g., “repetitive search queries that returned no search results”); the associated sequence of user actions or events, possibly including copies of the HTML pages actually viewed by the user; an ID of the user; an indication of whether a request for feedback has been sent to the user for the failure event; the feedback provided by the user, if any; and the current resolution status of the failure event, if applicable.
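As one possible concretization of such a record (the field names are assumptions, not taken from the specification), a failure event row might look like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailureEvent:
    """One record of the failure events database 60 (names assumed)."""
    failure_type: str          # e.g. "repetitive searches, no results"
    user_id: str
    event_sequence: list       # user actions, possibly with page HTML
    feedback_requested: bool = False
    user_feedback: Optional[str] = None
    resolution_status: str = "open"
```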
Table 1 illustrates one example of a code sequence that may be used by the failure analysis agent 30 to detect failure events. In this example, the searches conducted by each customer over the last 30 days are analyzed to detect likely failures by users to locate desired products. If, on more than one day, a given user has entered search terms that have located the same product or products in the catalog, the search results are generalized to identify the browse node(s) located by these searches. (Each product falls within at least one browse node in the preferred embodiment.) If the same browse node appears on at least two days, and the customer has not purchased anything within that browse node recently, a “missed search” failure event is generated using an object called CloseTheLoopMgr. This failure event may, for example, result in the generation of a feedback request that asks the user for information about what he or she was searching for within this particular browse node.
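Table 1 itself is not reproduced here; the following sketch implements the logic just described under stated assumptions: `searches` lists `(day, matched_product_ids)` pairs for one customer over the last 30 days, `product_to_nodes` maps each product to its browse node(s), and `recent_purchase_nodes` is the set of browse nodes the customer recently bought from (all hypothetical names; the CloseTheLoopMgr object mentioned above is represented by a plain list).

```python
from collections import defaultdict

def detect_missed_searches(searches, recent_purchase_nodes, product_to_nodes):
    """Generate a "missed search" failure event for each browse node
    that the customer's searches located on two or more distinct days
    without a recent purchase within that node."""
    days_per_node = defaultdict(set)
    for day, product_ids in searches:
        for pid in product_ids:
            for node in product_to_nodes.get(pid, ()):
                days_per_node[node].add(day)
    failures = []
    for node, days in days_per_node.items():
        if len(days) >= 2 and node not in recent_purchase_nodes:
            # In the described embodiment, CloseTheLoopMgr would record
            # this event and may trigger a feedback request.
            failures.append(("missed search", node))
    return failures
```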
Failure event data is read from the database 60 and processed by the filtering component 32, as described above. As each failure event is analyzed, the reputation level, transaction history, and/or other account data of the associated user may be retrieved and analyzed. The filtering algorithm may assign a score to each failure event, such that some or all of the above-mentioned factors influence the score. For example, a component score may be generated for each of the foregoing factors, and these component scores may be weighted and summed to generate the score for the failure event. The decision of whether to solicit feedback may then be made based on whether this score exceeds a particular threshold, which may be statically or dynamically set.
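The specification does not fix the component scores or weights; a minimal sketch of the weighted-sum scoring with a static threshold (all numbers illustrative) might be:

```python
def failure_score(component_scores, weights, threshold=0.6):
    """Weighted sum of per-factor component scores in [0, 1]; feedback
    is solicited only if the total exceeds the threshold."""
    total = sum(w * component_scores.get(factor, 0.0)
                for factor, w in weights.items())
    return total, total > threshold

# Illustrative weights (not from the specification); note that the
# queue-load component score should be high when response capacity
# is available and low when the feedback queue is already full.
weights = {
    "failure_type": 0.3,
    "transaction_history": 0.25,
    "response_frequency": 0.15,
    "feedback_quality": 0.15,
    "queue_load": 0.15,
}
```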
The weights applied or the factors considered may depend on the type of failure detected. For example, for an event in which an item has been left in a shopping cart for an extended period of time, a feedback request message may be sent only if the user has never completed a shopping cart transaction on the site. For search-related failures, the user's purchase history may be accorded lesser weight. As mentioned above, one objective of the filtering process is to adaptively regulate the quantity of feedback requests sent to the user population as a whole such that the response systems can provide responses to all or substantially all users who provide feedback. Another objective is to reduce feedback regarding problems that are already known to the web site operator.
If a decision is made to request feedback from the user, a feedback request message is sent to a user messaging component 64 (
As depicted in
The operator UI 38 (
When a given feedback message is selected by an operator for viewing, the operator UI 38 may also display the associated user activity data that gave rise to the failure event, and may display selected account information of the user. To respond to the feedback, the operator may type in a personalized message, and/or select from various boilerplate textual passages that are related to the reported problem. Operator responses to the feedback messages (and/or responses selected by an automated agent) are posted to the user messaging component 64 via the workflow engine 68, and may also be recorded in the failure events database 60.
The operator UI 38 may also include controls for the operator to grade or rate the usefulness of the user's feedback. As described above, these ratings may be recorded in association with the user for subsequent use. In addition, the operator UI 38 may provide access to various failure and feedback statistics generated by the workflow engine 68 or some other component. These statistics may reveal meaningful trends and problem areas that are helpful to the web site operator in improving the site. For example, in the context of the feedback form shown in
The various functional components shown in
As will be apparent, some of these functional components, such as those responsible for detecting failure events and for selectively soliciting feedback, may be implemented as a web service, which need not be uniquely associated with any particular web site. The web service may, for example, receive and analyze the clickstream data associated with multiple, independent web sites to detect failure events. The web service could also handle such tasks as deciding whether to solicit feedback, sending feedback request messages to users, and processing the feedback messages from users.
Although this invention has been described in terms of certain preferred embodiments and applications, other embodiments and applications that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this invention. Accordingly, the scope of the present invention is defined only by the appended claims.
This application is a continuation-in-part of U.S. application Ser. No. 10/854,030, filed May 26, 2004, now abandoned.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10854030 | May 2004 | US |
| Child | 10959239 | | US |