A multitude of services are available to consumers. Such services may include telephone services, power services, email services, and the like. Multiple service providers exist within each category of service. It is difficult, however, for consumers to know which service provider is better than another within a given service category.
For a detailed description of exemplary embodiments of the invention, reference will now be made to the accompanying drawings.
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple” or “couples” is intended to mean either an indirect, direct, optical or wireless electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, or through a wireless electrical connection.
The term “service consumer” is used herein. A service consumer may refer to one or more human beings that use one or more services. While a service consumer literally may be a human being, the term “service consumer” is generally used herein to refer to the computer system owned, operated, and/or used by such human beings as they use various services.
The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
In various embodiments, the reputation service 10 is implemented as software that executes on one or more processors of one or more computers.
Referring again to the figure, the service consumer 50 may include a variety of hardware and software for consuming the services 52.
Referring still to the figure, the service quality assurance logic 54 may include an automated monitoring infrastructure 56, a survey collector 58, and a feedback provider 60. The automated monitoring infrastructure 56 may include software agents embedded in and around the service 52. Each such agent monitors one or more specific metrics and does so without human involvement. Some such agents may be programmable in terms of the sorts of metrics they are to monitor. Personnel may also complete survey forms periodically and submit such survey responses on-line via the survey collector 58. The feedback provider 60 comprises a component that allows users to provide feedback about their experiences with a service provider in an ad hoc manner. Via the feedback provider 60, a user can, for example, lodge a complaint in a free-form text field on a web page.
The collector agent 62 collects all such monitored/feedback information and generates one or more shared service reports 66, which contain the monitored/feedback information and which are transmitted to the service report collector 12 of the reputation service 10. Such service reports 66 may be requested by the reputation service 10 or may be provided automatically at predetermined intervals by the collector agent 62.
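Purely by way of illustration, the following Python sketch shows one way the monitoring agents and the collector agent 62 might cooperate to assemble a shared service report 66. All class and function names, and the report layout, are hypothetical; the disclosure does not prescribe any particular implementation.

```python
# Illustrative sketch only (hypothetical names): monitoring agents feeding a
# collector agent 62, which assembles a shared service report 66.
import json
import time


class MonitoringAgent:
    """An automated agent that samples one specific metric without human involvement."""

    def __init__(self, metric_name, probe):
        self.metric_name = metric_name
        self.probe = probe  # callable that measures the metric

    def sample(self):
        return self.metric_name, self.probe()


class CollectorAgent:
    """Gathers agent samples plus ad hoc feedback entries into a service report."""

    def __init__(self, consumer_id, agents):
        self.consumer_id = consumer_id
        self.agents = agents
        self.feedback = []  # free-form feedback, e.g., from the feedback provider 60

    def add_feedback(self, text):
        self.feedback.append(text)

    def build_report(self):
        metrics = dict(agent.sample() for agent in self.agents)
        return {
            "consumer": self.consumer_id,
            "timestamp": time.time(),
            "metrics": metrics,
            "feedback": list(self.feedback),
        }


# Usage: one agent probing availability, plus a user complaint.
collector = CollectorAgent(
    "consumer-50",
    [MonitoringAgent("availability", lambda: 99.945)],
)
collector.add_feedback("Ticket took two days to resolve.")
print(json.dumps(collector.build_report(), indent=2))
```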
The service report collector 12 retrieves service reports from one, multiple, or all service consumers 50 that subscribe to the reputation service 10. The collected service reports are then stored in a report database 14.
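As a further illustration, the service report collector 12 might persist incoming reports into the report database 14 along the following lines. The use of SQLite and the schema shown are assumptions for the sketch only; the disclosure does not require any particular database.

```python
# Illustrative sketch only (hypothetical schema): the service report
# collector 12 persisting reports into a report database 14, here SQLite.
import json
import sqlite3

db = sqlite3.connect(":memory:")  # database 14; a file path in practice
db.execute("CREATE TABLE reports (consumer TEXT, provider TEXT, body TEXT)")

def store_report(consumer_id, provider_id, report):
    db.execute("INSERT INTO reports VALUES (?, ?, ?)",
               (consumer_id, provider_id, json.dumps(report)))

store_report("consumer-50", "SP1", {"availability": 99.945})
for row in db.execute("SELECT consumer, provider, body FROM reports"):
    print(row)
```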
From the database 14, the reputation service 10 retrieves one or more of the service reports and, if desired, processes the reports through a reputation calculation engine 16. The reputation calculation engine 16 applies one or more calculation rules 18 to compute a score for each service provider based on the contents of the service reports provided by the various service consumers 50. In some embodiments, the computed score is a weighted average of the various reported metrics, and the calculation rules 18 specify that a weighted average is to be computed as well as the applicable weights. Numerous other mathematical algorithms for computing a score can be applied as well. An overall score can be, but need not be, computed.
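The following Python sketch illustrates one way such a weighted-average calculation rule might be applied across collected reports. The function names, the report layout, and the assumption that metrics arrive pre-normalized to a 0-100 scale are illustrative only and are not part of the disclosure.

```python
# Illustrative sketch only: a calculation rule 18 computing a weighted
# average of reported metric scores for each service provider.
def weighted_average_score(reports, weights):
    """reports: list of {"provider": ..., "metrics": {...}} dicts from consumers.
    weights: metric name -> weight; metrics assumed pre-normalized to 0..100."""
    totals, counts = {}, {}
    for report in reports:
        provider = report["provider"]
        score = sum(weights[m] * v for m, v in report["metrics"].items())
        score /= sum(weights[m] for m in report["metrics"])
        totals[provider] = totals.get(provider, 0.0) + score
        counts[provider] = counts.get(provider, 0) + 1
    # Average the per-report scores for each provider.
    return {p: totals[p] / counts[p] for p in totals}


reports = [
    {"provider": "SP1", "metrics": {"availability": 99.9, "resolution_rate": 82.0}},
    {"provider": "SP4", "metrics": {"availability": 92.4, "resolution_rate": 58.0}},
]
print(weighted_average_score(reports, {"availability": 0.6, "resolution_rate": 0.4}))
```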
The reputation service 10 also generates a reputation summary 20 for each service category. For the example of a technical help service, the following table represents an illustrative reputation summary.
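             Availability   Resolution Rate   Avg. Resolution Time   Support Quality
SP1          99.945%        82%               16.2 hours             4 stars
SP2          ...            ...               ...                    ...
SP3          ...            ...               ...                    ...
SP4          92.374%        58%               24.2 hours             1 star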
In the above example, there are four service providers of technical help services, SP1-SP4. For each service provider, four metrics are provided: availability, resolution rate, average resolution time, and support quality. As can be seen, SP1 was deemed to be available 99.945% of the time, had a resolution rate of 82%, had an average resolution time of 16.2 hours, and was rated by service consumers on average as four stars. By contrast, SP4 was available only 92.374% of the time, had a resolution rate of only 58%, had an average resolution time of 24.2 hours, and was rated on average as only a single star by its consumers. The reputation viewer 22 includes a graphical or textual interface to permit a user to view the summarized results.
Although not shown in the table above, an overall score could be calculated as well for each service provider and provided to the reputation viewer 22. In one example, the reputation calculation engine 16 computes a weighted average of the various metrics to calculate a numerical overall score. A metric that is considered more important may be assigned a higher weight. A user (e.g., a person, a department, an organization, etc.) of the reputation service 10 specifies the calculation rule (e.g., average) to be applied as well as the weights via a graphical user interface implemented by software 106.
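For instance, under purely illustrative weights of 0.5 for availability and 0.5 for resolution rate (treating each metric as a score from 0 to 100), SP1's overall score would be 0.5 × 99.945 + 0.5 × 82 ≈ 90.97, versus 0.5 × 92.374 + 0.5 × 58 ≈ 75.19 for SP4.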
Based on such summaries, service consumers can see how the service providers whose services they use compare to other service providers of similar services. Further still, a consumer in the market for a service in a particular category can consult such summaries when deciding which service to purchase. One consumer might want the highest quality service (SP1 in the example above), while another might tolerate a lesser quality of service given the price.
Referring still to the figure, the collector agent 62 receives the set of reporting metrics 26 from the reputation service 10 and applies a sharing policy 64 to filter the monitored/feedback information from the service quality assurance logic 54. Each service consumer 50 may have its own sharing policy 64, which may be configurable via a graphical user interface accessible to a user of the service consumer 50. The sharing policy 64 for a given service consumer 50 may define those metrics that the particular service consumer 50 may report back to the reputation service 10. Any metric not listed in such a sharing policy 64 is not permitted to be reported back to the reputation service 10. For example, if the reporting metrics 26 include the four metrics availability, resolution rate, average resolution time, and support quality, but only the three metrics availability, average resolution time, and support quality are included in a service consumer's sharing policy 64, then the fourth metric (resolution rate) may be monitored by the service consumer 50 but not included in the shared service report 66, and thus not reported back to the reputation service 10 by that particular service consumer 50. In other embodiments, the sharing policy 64 may instead list those metrics that are not permitted to be provided to the reputation service 10, and thus any metric not listed in the sharing policy 64 can be included in the shared service report 66. The sharing policies 64 thus permit the service consumers 50 some degree of control over what metric information is provided to the reputation service 10. The collector agent 62 for a given service consumer 50 compares the metrics being monitored by the service quality assurance logic 54 to the sharing policy 64 and thereby produces a subset of the monitored metrics to be included in a service report 66. The collector agent 62 of each service consumer 50 operates in a similar fashion; each service consumer thus produces its own service report 66 based on its own sharing policy 64.
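By way of illustration, an allow-list or deny-list sharing policy of the kind just described might be applied as in the following Python sketch. The policy representation and function names are assumptions for the sketch only.

```python
# Illustrative sketch only: a sharing policy 64 filtering monitored metrics
# before they are placed in a shared service report 66.
def apply_sharing_policy(monitored, policy):
    """monitored: metric name -> value, from the quality assurance logic 54.
    policy: {"mode": "allow" or "deny", "metrics": set of metric names}."""
    if policy["mode"] == "allow":      # only listed metrics may be shared
        return {m: v for m, v in monitored.items() if m in policy["metrics"]}
    else:                              # listed metrics must NOT be shared
        return {m: v for m, v in monitored.items() if m not in policy["metrics"]}


monitored = {
    "availability": 99.945,
    "resolution_rate": 82.0,
    "avg_resolution_time": 16.2,
    "support_quality": 4,
}
# Allow-list policy omitting resolution_rate, as in the example above.
policy = {"mode": "allow",
          "metrics": {"availability", "avg_resolution_time", "support_quality"}}
print(apply_sharing_policy(monitored, policy))  # resolution_rate is withheld
```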
Referring to the figure, at 158, the service consumer 50 monitors the service for the various metrics and, at 160, filters the collected metrics in light of the sharing policy 64. At 162, the service consumer 50 provides the filtered metrics to the reputation service 10, which at 164 compiles a summary of the filtered metrics. The reputation service 10 also may compute (166) an overall score from the filtered metrics, as explained previously.
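Tying these steps together, the following Python sketch mirrors the flow of 158-166 in miniature. All names are hypothetical, and the unweighted overall score at the end stands in for whichever calculation rule 18 is configured.

```python
# Illustrative end-to-end sketch of the flow at 158-166 (hypothetical names).
def monitor():                          # 158: automated metric collection
    return {"availability": 99.945, "resolution_rate": 82.0}

def apply_policy(metrics, allowed):     # 160: filter via the sharing policy 64
    return {m: v for m, v in metrics.items() if m in allowed}

collected = []                          # stands in for the report database 14

def submit(report):                     # 162: provide filtered metrics
    collected.append(report)

def compile_summary():                  # 164: per-metric averages across reports
    summary = {}
    for report in collected:
        for metric, value in report.items():
            summary.setdefault(metric, []).append(value)
    return {m: sum(vs) / len(vs) for m, vs in summary.items()}

submit(apply_policy(monitor(), {"availability", "resolution_rate"}))
summary = compile_summary()
overall = sum(summary.values()) / len(summary)  # 166: unweighted overall score
print(summary, overall)
```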
The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.