The present invention relates generally to information handling, and more particularly to methods and systems for evaluating the performance of information handling in a network environment.
Various approaches have been proposed for monitoring, simulating, or testing web sites. However, some of these approaches address substantially different problems (e.g. problems of simulation and hypothetical phenomena), and thus are significantly different from the present invention. Other examples include services available from vendors such as Atesto Technologies Inc., Keynote Systems, and Mercury Interactive Corporation. These services may involve a script that runs on a probe computer. However, the approaches mentioned above do not necessarily allow useful comparisons, such as comparisons among a plurality of applications.
It is very useful to measure the performance of applications such as web sites, web services, or other applications accessible to a number of users via a network. Concerning two or more such applications, it is very useful to compare numerical measures. Accurate evaluation or comparison may allow proactive management and reduce mean time to repair problems, for example. However, accurate evaluation or comparison may be hampered by inconsistent calculation and communication of measures. Inconsistent, variable, or heavily customized techniques are common. There are no generally-accepted techniques to be used on applications that have been deployed in a production environment. Inconsistent techniques for calculating and communicating measurements result in problems such as unreliable performance data, and increased costs for administration, training and creating reports. Thus there is a need for systems and methods that solve problems related to inconsistent calculation and communication of measurements.
An example of a solution to problems mentioned above comprises: (a) collecting data from a production environment, utilizing a plurality of probes; (b) performing calculations, regarding availability or response time or both, with at least part of the data; (c) outputting statistics, resulting from the calculations; and (d) performing (a)-(c) above for a plurality of applications, whereby the applications may be compared.
Another example of a solution comprises receiving data for a plurality of transaction steps, from a plurality of probes; calculating statistics based on the data; mapping the statistics to at least one threshold value; and outputting a representation of the mapping.
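For illustration only, the following minimal Python sketch shows one way operations (a)-(d) of the first solution might be realized; all application names, probe names, and data values here are hypothetical assumptions, not part of the original disclosure.

    # Illustrative sketch only; sample data and names are hypothetical.
    from statistics import mean

    # (a) Data collected from a production environment by a plurality of
    # probes: each sample records whether the application responded, and
    # the response time in seconds (None when the attempt failed).
    samples = {
        "order-entry": [
            {"probe": "local-221",  "ok": True,  "seconds": 1.2},
            {"probe": "remote-235", "ok": True,  "seconds": 3.4},
            {"probe": "remote-235", "ok": False, "seconds": None},
        ],
        "catalog-search": [
            {"probe": "local-221",  "ok": True,  "seconds": 0.8},
            {"probe": "remote-235", "ok": True,  "seconds": 2.1},
        ],
    }

    # (b) Calculations regarding availability and response time;
    # (c) output of the resulting statistics;
    # (d) repeated for a plurality of applications, so they may be compared.
    for app, rows in samples.items():
        availability = sum(r["ok"] for r in rows) / len(rows)
        times = [r["seconds"] for r in rows if r["ok"]]
        print(f"{app}: availability {availability:.1%}, "
              f"mean response {mean(times):.2f} s over {len(rows)} attempts")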
A better understanding of the present invention can be obtained when the following detailed description is considered in conjunction with the following drawings. The use of the same reference symbols in different drawings indicates similar or identical items.
The examples that follow involve the use of one or more computers and may involve the use of one or more communications networks. The present invention is not limited as to the type of computer on which it runs, and not limited as to the type of network used. The present invention is not limited as to the type of medium or format used for output. Means for providing graphical output may include sketching diagrams by hand on paper, printing images or numbers on paper, displaying images or numbers on a screen, or some combination of these, for example. A model of a solution might be provided on paper, and later the model could be the basis for a design implemented via computer, for example.
The following are definitions of terms used in the description of the present invention and in the claims:
“About,” with respect to numbers, includes variation due to measurement method, human error, statistical variance, rounding principles, and significant digits.
“Application” means any specific use for computer technology, or any software that allows a specific use for computer technology.
“Availability” means ability to be accessed or used.
“Business process” means any process involving use of a computer by any enterprise, group, or organization; the process may involve providing goods or services of any kind.
“Client-server application” means any application involving a client that utilizes a service, and a server that provides a service. Examples of such a service include but are not limited to: information services, transactional services, access to databases, and access to audio or video content.
“Comparing” means bringing together for the purpose of finding any likeness or difference, including a qualitative or quantitative likeness or difference. “Comparing” may involve answering questions including but not limited to: “Is a measured response time greater than a threshold response time?” Or “Is a response time measured by a remote probe significantly greater than a response time measured by a local probe?”
“Component” means any element or part, and may include elements consisting of hardware or software or both.
“Computer-usable medium” means any carrier wave, signal or transmission facility for communication with computers, and any kind of computer memory, such as floppy disks, hard disks, Random Access Memory (RAM), Read Only Memory (ROM), CD-ROM, flash ROM, non-volatile ROM, and non-volatile memory.
“Mapping” means associating, matching or correlating.
“Measuring” means evaluating or quantifying.
“Output” or “Outputting” means producing, transmitting, or turning out in some manner, including but not limited to printing on paper, displaying on a screen, writing to a disk, or using an audio device.
“Performance” means execution or doing; for example, “performance” may refer to any aspect of an application's operation, including availability, response time, time to complete batch processing or other aspects.
“Probe” means any computer used in evaluating, investigating, or quantifying the functioning of a component or the performance of an application; for example a “probe” may be a personal computer executing a script, acting as a client, and requesting services from a server.
“Production environment” means any set of actual working conditions, where daily work or transactions take place.
“Response time” means elapsed time in responding to a request or signal.
“Script” means any program used in evaluating, investigating, or quantifying performance; for example a script may cause a computer to send requests or signals according to a transaction scenario. A script may be written in a scripting language such as Perl or some other programming language.
“Service level agreement” (or “SLA”) means any oral or written agreement between provider and user. For example, “service level agreement” includes but is not limited to an agreement between vendor and customer, and an agreement between an information technology department and an end user. For example, a “service level agreement” might involve one or more client-server applications, and might include specifications regarding availability, response times or problem-solving.
“Statistic” means any numerical measure calculated from a sample.
“Storing” data or information, using a computer, means placing the data or information, for any length of time, in any kind of computer memory, such as floppy disks, hard disks, Random Access Memory (RAM), Read Only Memory (ROM), CD-ROM, flash ROM, non-volatile ROM, and non-volatile memory.
“Threshold value” means any value used as a borderline, standard, or target; for example, a “threshold value” may be derived from customer requirements, corporate objectives, a service level agreement, industry norms, or other sources.
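Before turning to the drawings, a brief sketch makes several of the defined terms concrete (“statistic,” “threshold value,” “mapping,” “outputting”); the numbers and the function name below are invented for illustration only.

    # Sketch only: mapping a statistic to a threshold value, in the sense
    # defined above (associating, matching or correlating), and outputting
    # a representation of the mapping.
    def map_to_threshold(statistic: float, threshold: float) -> str:
        return "meets threshold" if statistic <= threshold else "exceeds threshold"

    # Hypothetical values: a measured response time, and a target derived
    # from a service level agreement.
    measured_response_time = 3.4   # seconds
    threshold_value = 5.0          # seconds
    print(map_to_threshold(measured_response_time, threshold_value))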
While the computer system described in FIG. 1 is capable of executing the processes described herein, it is simply one example of a computer system; those skilled in the art will appreciate that many other computer system designs are capable of performing the processes of the present invention. Turning to FIG. 2, the example shown there includes means for performing the operations of the solutions described above.
In other words, probes shown at 221 and 235, report generators shown at 231 and 232, and communication links among them (symbolized by arrows) may comprise means for receiving data from a plurality of probes; means for calculating statistics based on the data; and means for mapping the statistics to at least one threshold value. Report generators at 231 and 232, and reports 241 and 242, may comprise means for outputting a representation of the mapping. Note that in an alternative example, report generator 232 might obtain data from databases at 251 and at 222, then generate reports 241 and 242.
Turning now to some details of FIG. 2, an application is deployed in a production environment at data center 211; local probe 221 is deployed within data center 211, and remote probes 235 are deployed at locations remote from the data center.
The example in FIG. 2 involves collecting data from the production environment, utilizing the plurality of probes, and storing the data (e.g. in databases at 251 and 222) for use in reporting.
Continuing with some details of FIG. 2, management console 205 may receive events from the probes, allowing an operations department to respond to problems.
Turning now to some details of receiving data from a plurality of probes: Component Probes measure availability, utilization, and performance of infrastructure components, including servers, LANs, and services. Local component probes (LCPs) may be deployed locally in hosting sites, service delivery centers or data centers (e.g. at 211). Network Probes measure network infrastructure response time and availability. Remote Network Probes (RNPs) may be deployed in a local hosting site or data center (e.g. at 211) if measuring the intranet, or at Internet Service Provider (ISP) sites if measuring the Internet.
Application Probes measure availability and performance of applications and business processes.
Local Application Probe (LAP): Application probes deployed in a local hosting site or data center (e.g. at 211) are termed Local Application Probes.
Remote Application Probe (RAP): An application probe deployed from a remote location is termed a Remote Application Probe.
The concept of “probe” is a logical one. Thus for example, implementing a local application probe could actually consist of implementing multiple physical probes.
Providing a script for a probe would comprise defining a set of transactions that are frequently performed by end users. Employing a plurality of probes would comprise placing at least one remote probe (shown at 235 in FIG. 2) at a location that is remote from the hosting site, and placing at least one local probe (shown at 221 in FIG. 2) at the hosting site (e.g. data center 211).
This example in FIG. 3 is a performance report for a web site, produced from data collected by probes; details of the report follow.
The broken line AA shows where the report is divided into two sheets. The wavy lines just above row 330 show where rows are omitted from this example, to make the length manageable. Columns 303-312 display response time data in seconds. Each of the columns 303-311 represents a transaction step. Column 312 represents the total of the response times for all the transaction steps. A description of each transaction step is shown in the column headings in row 321. Column 313 displays availability information, using a color code. In this example, a special color is shown by darker shading, seen in the cells of column 313. For example, the cell in column 313 is green if all the transaction steps are completed; otherwise the cell is red, representing a failed attempt to execute all the transaction steps. Thus column 313 may provide a measure of end-to-end availability from a probe location, since a business process could cross multiple applications deployed in multiple hosting centers. Column 302 shows probe location and Internet service provider information. Column 301 shows time of script execution. Each row from row 323 downward to row 330 represents one iteration of the script; each of these rows represents how one end user's execution of a business process would be handled by the web site.
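The color-coding rule just described reduces to a simple predicate. The following sketch (hypothetical step data; the function name is invented) illustrates it.

    # Sketch of the color code described above: the availability cell for a
    # script iteration is green only if every transaction step completed.
    # A failed step is recorded as None (no response time).
    def availability_color(step_times):
        return "green" if all(t is not None for t in step_times) else "red"

    print(availability_color([1.2, 0.8, 2.3]))    # all steps completed -> green
    print(availability_color([1.2, None, 2.3]))   # one failed step     -> red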
Turning now to some details of FIG. 4, the report continues with summary rows in which statistics calculated from the probe measurements are aligned with corresponding threshold values, as follows.
In each of cells 331-369, a statistic is aligned with a corresponding threshold value in row 422. Cells 331-369 reflect calculating, mapping, and outputting, for statistics. In row 330, cells 331-339 display average performance values. In row 340, cells 341-349 display standard performance values. A transaction step's availability proportion is expressed as a ratio of successful executions to attempts, in row 350, cells 351-359. The proportion is expressed as a percentage of successful executions in row 360, cells 361-369. Finally, this example in FIG. 4 reflects receiving data for a plurality of transaction steps, calculating statistics based on the data, mapping the statistics to threshold values, and outputting a representation of the mapping.
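As an illustration of the statistics rows just described, here is a hypothetical sketch; the data are invented, and the “standard” (rolling-average) rows are treated separately with FIG. 5 below.

    # Sketch (hypothetical data): statistics for one transaction step's
    # column, across several script iterations; None marks a failed attempt.
    from statistics import mean

    step_column = [1.2, 1.4, None, 1.1, 1.3]

    attempts = len(step_column)
    successes = [t for t in step_column if t is not None]

    average_response = mean(successes)                    # average performance row
    availability_ratio = f"{len(successes)}/{attempts}"   # successes-to-attempts row
    availability_pct = 100.0 * len(successes) / attempts  # percentage row

    print(average_response, availability_ratio, f"{availability_pct:.0f}%")
    # -> 1.25 4/5 80%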
Turning now to some details of FIG. 5: column 503 displays a standard total availability, based on column 502's daily total availability (e.g. a 30-day rolling average). Here, standard total availability is calculated from the last 30-day period (rolling average, 24×30) and is represented as a percentage.
Column 504 displays a daily adjusted availability. It is calculated based on some threshold, such as a commitment to a customer to make an application available during defined business hours, for example. In other words, column 504's values are adjusted to measure availability against a commitment to a customer or a service level agreement, for example. Column 504 is one way of mapping measures to a threshold value. Column 504 reflects calculating, mapping, and outputting, for an adjusted availability value. In this example, daily adjusted availability is calculated from the daily filtered measurements captured during defined business hours, and is represented as a percentage. This value is used for assessing compliance with an availability threshold.
Column 505 displays a standard adjusted availability, based on column 504's daily adjusted availability (e.g. 30-day rolling average). In this example, standard adjusted availability is calculated from the daily filtered measurements captured during defined business hours, across the last 30-day period (rolling average, defined business hours×30). Column 505 may provide a cumulative view over a 30-day period, reflecting the degree of stability for an application or a business process. The change from 100% on Feb. 9 to 99.9% on Feb. 10, in column 505, shows the effect of the 96% value on Feb. 10, in columns 502 and 504. The 96% value on Feb. 10, in columns 502 and 504, indicates an availability failure equal to 1 hour.
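The availability arithmetic described for columns 502-505 can be sketched as follows. The hourly samples and the business-hours window are hypothetical, but the sketch reproduces the 96% daily value (one failed hour out of 24) and the resulting 99.9% thirty-day rolling value mentioned above.

    # Sketch (hypothetical data) of the availability columns described above.
    # Hourly samples for one day: 1 = available, 0 = not available.
    day_samples = [1] * 23 + [0]        # one failed hour in a 24-hour day

    # Daily total availability (column 502): 23/24, about 96%.
    daily_total = 100.0 * sum(day_samples) / len(day_samples)
    print(f"daily total availability: {daily_total:.0f}%")

    # Daily adjusted availability (column 504) counts only defined business
    # hours; here, hypothetically, hours 8 through 17. The failed hour (23)
    # falls outside business hours in this sketch, so this value stays 100%.
    business_hours = day_samples[8:18]
    daily_adjusted = 100.0 * sum(business_hours) / len(business_hours)
    print(f"daily adjusted availability: {daily_adjusted:.0f}%")

    # "Standard" values (columns 503 and 505) are 30-day rolling averages of
    # the daily values: 29 perfect days and one 96% day give about 99.9%.
    last_30_daily_totals = [100.0] * 29 + [daily_total]
    standard_total = sum(last_30_daily_totals) / len(last_30_daily_totals)
    print(f"standard total availability: {standard_total:.1f}%")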
Turning now to FIG. 6: blocks 602 and 610 are connected by an arrow, symbolizing that in the planning phase, customer requirements at 610 (e.g. targets for performance or availability) are understood and documented. Thus block 610 comprises setting threshold values, and documenting the threshold values. Work proceeds with developing the application at block 603. The documented threshold values may provide guidance and promote good design decisions in developing the application. Once developed, an application is evaluated against the threshold values. Thus the qualifying or testing phase at block 604, and block 610, are connected by an arrow, symbolizing measuring the application's performance against the threshold values at 610. This may lead to identifying an opportunity to improve the performance of an application, in the qualifying or testing phase at block 604.
As an application is deployed into a production environment, parameters are established to promote consistent measurement by probes. Thus the example in FIG. 6 comprises establishing measurement parameters as part of deployment, so that probes measure the application consistently in the production environment.
In the example in FIG. 7, the process begins with developing a script at block 701.
Using a script developed at block 701, local and remote application probes may measure the end-to-end user experience for repeatable transactions, either simple or complex. End-to-end measurements focus on measuring the business process (as defined by a repeatable sequence of events) from the end user's perspective. End-to-end measurements tend to cross multiple applications, services, and infrastructure. Examples would include: create an order, query an order, etc. Ways to implement a script that runs on a probe are well-known (see details of example implementations below). Vendors provide various services that involve a script that runs on a probe.
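By way of illustration, a probe script along these lines might look like the following sketch. The URLs and step names are hypothetical, and a production script would also handle authentication, validation of page content, and similar concerns.

    # Sketch of a probe script: execute a repeatable sequence of transaction
    # steps and record each step's response time. URLs are hypothetical.
    import time
    import urllib.request

    TRANSACTION_STEPS = [
        ("load home page", "http://www.example.com/"),
        ("query an order", "http://www.example.com/orders?id=123"),
    ]

    def run_script():
        results = []
        for name, url in TRANSACTION_STEPS:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=30) as response:
                    response.read()
                results.append((name, time.monotonic() - start))
            except OSError:
                results.append((name, None))  # failed step; counts against availability
        return results

    for name, seconds in run_script():
        print(name, "failed" if seconds is None else f"{seconds:.2f} s")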
Block 702 represents setting threshold values. Threshold values may be derived from a service level agreement (SLA), or from sources shown in FIG. 6, such as customer requirements (at 610).
Operations at 703 and 704 were covered in the description given above for FIG. 2: receiving data from a plurality of probes (at 703), and calculating statistics based on the data (at 704).
The example in FIG. 7 continues at block 705 with mapping the statistics to at least one threshold value, and outputting a representation of the mapping.
Operations at 703, 704, and 705 may be performed repeatedly (shown by the “No” branch being taken at decision 706 and the path looping back to block 703) until the process is terminated (shown by the “Yes” branch being taken at decision 706, and the process terminating at block 707). Operations in FIG. 7 may be performed for a plurality of applications, whereby the applications may be compared.
Turning now to FIG. 8, operations at blocks 801-803 may be performed repeatedly, as with the example in FIG. 7.
Operations at blocks 801-803 may be performed for a plurality of applications, whereby the applications may be compared.
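The control flow just described for FIGS. 7 and 8 (repeating the operations for a plurality of applications until the process is terminated) might be sketched as follows; every name and function body here is a hypothetical placeholder, not from the original disclosure.

    # Sketch of the repeat-until-terminated control flow (names hypothetical).
    import time

    APPLICATIONS = ["order-entry", "catalog-search"]

    def receive_data(app):
        return []                       # placeholder: data from a plurality of probes

    def calculate_and_map(app, data):
        return "no data" if not data else "mapped statistics"

    def output_representation(app, mapping):
        print(f"{app}: {mapping}")

    def run(terminated, interval_seconds=1200):   # e.g. a 20-minute interval
        while not terminated():                   # the "No" branch repeats the loop
            for app in APPLICATIONS:              # a plurality of applications
                data = receive_data(app)
                mapping = calculate_and_map(app, data)
                output_representation(app, mapping)
            time.sleep(interval_seconds)

    run(terminated=lambda: True)                  # "Yes" branch: terminate immediately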
This final section of the detailed description provides details of example implementations, mainly referring back to FIG. 2.
Continuing with some details of example implementations, we located application probes locally at hosting sites (e.g. local probe shown at 221, within data center 211) and remotely at relevant end-user sites (remote probes at 235). This not only exercised the application code and application hosting site infrastructure, but also probed the ability of the application and network to deliver data from the application hosting site to the remote end-user sites. While we measured availability and performance from a customer perspective (remote probes at 235), we also measured the availability and performance of the application at the location where it was deployed (local probe shown at 221, within data center 211). This provided baseline performance measurement data that could be used for analyzing the performance measurements from the remote probes (at 235).
In one example, local probe 221 was implemented with a personal computer, utilizing IBM's Enterprise Probe Platform technology, but other kinds of hardware and software could be used. Local probe 221 was placed on the IBM network just outside the firewall at the center where the web site was hosted, and was used to probe one specific site per probe; there could be multiple scripts per site. Local probe 221 executed the script every 20 minutes, in one example; intervals of other lengths also could be used. In one example, local application probe 221 automatically sent events to the management console 205 used by the operations department.
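The automatic event flow just mentioned might be sketched as follows; the console address and event format are hypothetical stand-ins, since the disclosure does not specify the console's interface.

    # Sketch (hypothetical): after a scripted run, send an event to a
    # management console whenever a transaction step has failed.
    def send_event(console_address, message):
        print(f"EVENT to {console_address}: {message}")  # stand-in for a real event API

    def check_run(results, console_address="console.example.com"):
        for step, seconds in results:
            if seconds is None:                          # None marks a failed step
                send_event(console_address, f"step failed: {step}")

    check_run([("load home page", 1.2), ("query an order", None)])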
In one example, local probe 221 sent the data produced by the measuring process to a database 251. Database 251 was implemented by using a software product sold under the trademark DB2 (by IBM), but other database management software could be used, such as software products sold under the trademarks ORACLE, INFORMIX, SYBASE, MYSQL, Microsoft Corporation's SQL SERVER, or similar software. For local probe data, an automated reporting tool (shown as report generator 231) ran continuously at set intervals, obtained data from database 251, and sent reports 241 via email to these IBM entities: the web site owner, the hosting center, and IBM's world wide command center. Reports 241 also could be posted on a web site at the set intervals. Report generator 231 was implemented by using the Perl scripting language and the AIX operating system. However, some other programming language could be used, and another operating system could be used, such as LINUX, or another form of UNIX, or some version of Microsoft Corporation's WINDOWS, or some other operating system.
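A reporting flow of this shape can be sketched briefly. The sketch below substitutes SQLite for DB2 and printing for email, so every name in it is an illustrative assumption rather than the implementation described.

    # Sketch of an automated reporting tool: read measurements from a
    # database and produce one summary line per application.
    import sqlite3

    def generate_report(db_path="measurements.db"):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS samples (app TEXT, ok INTEGER, seconds REAL)")
        rows = con.execute(
            "SELECT app, AVG(ok), AVG(seconds) FROM samples GROUP BY app"
        ).fetchall()
        con.close()
        # An empty database yields an empty report.
        return [f"{app}: availability {avail:.1%}, mean response {(secs or 0):.2f} s"
                for app, avail, secs in rows]

    for line in generate_report():
        print(line)   # the system described above emailed and posted such reports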
Continuing with details of example implementations, a standard policy for operations measurements (appropriate for measuring the performance of two or more applications) was developed. This measurement policy facilitated consistent assessment of IBM's portfolio of e-business initiatives. In a similar way, a measurement policy could be developed for other applications, utilized by some other organization, according to the teachings of the present invention. The measurement policy comprised measuring the performance of an application continuously, 7 days per week, 24 hours per day, including an application's scheduled and unscheduled down time. It also comprised measuring the performance of an application from probe locations (symbolized by probes at 235 in FIG. 2) that were relevant to the application's end users.
For measuring availability, the above-mentioned measurement policy comprised measuring availability of an application from at least two different probe locations. A preferred approach utilized at least two remote probes (symbolized by probes shown at 235), and utilized probe locations that were remote from an application's front end. A local probe and a remote probe (symbolized by probes shown at 221 and 235 in FIG. 2) could also be utilized together, so that measurements from the remote probe could be analyzed against baseline measurements from the local probe.
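The comparison of remote measurements against a local baseline can be sketched as a simple rule; the factor of two below is an invented example, not a value from the original disclosure.

    # Sketch (hypothetical numbers): analyze a remote probe's response time
    # against the local probe's baseline, per the comparison described above.
    def remote_vs_local(remote_seconds, local_seconds, factor=2.0):
        if remote_seconds > factor * local_seconds:
            return "remote path (e.g. the network) is the likely contributor"
        return "remote time is in line with the local baseline"

    print(remote_vs_local(remote_seconds=6.5, local_seconds=1.2))  # flagged
    print(remote_vs_local(remote_seconds=1.9, local_seconds=1.2))  # in line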
To conclude the implementation details: the probes, measurement policy, and automated reporting described above provided consistent calculation and communication of measurements for a plurality of applications, whereby the applications could be compared.
In conclusion, we have shown examples of solutions to problems related to inconsistent measurement, and in particular, solutions for consistently calculating and communicating measurements.
One of the possible implementations of the invention is an application, namely a set of instructions (program code) executed by a processor of a computer from a computer-usable medium such as a memory of a computer. Until required by the computer, the set of instructions may be stored in another computer memory, for example, in a hard disk drive, or in a removable memory such as an optical disk (for eventual use in a CD ROM) or floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or other computer network. Thus, the present invention may be implemented as a computer-usable medium having computer-executable instructions for use in a computer. In addition, although the various methods described are conveniently implemented in a general-purpose computer selectively activated or reconfigured by software, one of ordinary skill in the art would also recognize that such methods may be carried out in hardware, in firmware, or in more specialized apparatus constructed to perform the method.
While the invention has been shown and described with reference to particular embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention. The appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. For non-limiting example, as an aid to understanding, the appended claims may contain the introductory phrases “at least one” or “one or more” to introduce claim elements.
However, the use of such phrases should not be construed to imply that the introduction of a claim element by indefinite articles such as “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “at least one” or “one or more” and indefinite articles such as “a” or “an;” the same holds true for the use in the claims of definite articles.
The present patent application is related to co-pending patent applications: Method and System for Probing in a Network Environment, application Ser. No. 10/062,329, filed on Jan. 31, 2002, Method and System for Performance Reporting in a Network Environment, application Ser. No. 10/062,369, filed on Jan. 31, 2002, End to End Component Mapping and Problem-Solving in a Network Environment, application Ser. No. 10/122,001, filed on Apr. 11, 2002, Graphics for End to End Component Mapping and Problem-Solving in a Network Environment, application Ser. No. 10/125,619, filed on Apr. 18, 2002, and E-Business Operations Measurements, application Ser. No. 10/256,094, filed on Sep. 26, 2002. These co-pending patent applications are assigned to the assignee of the present application, and herein incorporated by reference. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Relation | Number | Date | Country
Parent | 10/383,853 | Mar 2003 | US
Child | 11/855,247 | Sep 2007 | US