This invention relates to networking, and more particularly to adjusting data storage rates in an application performance management (APM) system.
Application performance management (APM) uses monitoring and/or troubleshooting tools for observation of network traffic and for application and network optimization and maintenance. The current state of the art in most application performance management systems employs multi-threaded, pipelined collections of acquisition, real-time analysis and storage elements. These APM systems can only analyze data up to a finite data rate, past which point they fail to function or must fundamentally shift their operation (for example, curtailing analysis in favor of storage).
In high traffic networks, data volume can lead to oversubscription, the condition where the incoming data rate is too high for network monitoring systems to process. One way this problem manifests itself is in terms of analysis latency. There is software latency in all application-specific analyzers (for applications such as HTTP, Oracle, Citrix, TCP, etc.). When too much data is presented for analysis, the aggregate latency across the various discrete portions of a monitoring system puts enough collective drag on the overall system that it becomes difficult to keep up with processing and analyzing the incoming data. It is computationally impractical to perform full real-time analysis of every packet/flow/conversation and store all the corresponding low level metadata on a highly utilized computer network.
Another manifestation of this problem is output latency. In some cases while analysis systems can keep up with incoming traffic from an analysis point of view, due to the volume of data that is being written to disk (transactions, packets, statistics, etc), the disk writes take long enough that “back pressure” is exerted upstream onto analysis which eventually slows down analysis to the point where the analysis can no longer keep up with incoming traffic. In a multithreaded, decoupled system the “back pressure” is the competition for CPU bandwidth between, for example, a DBMS and APM analysis software. During periods of sustained DBMS writes, the DBMS engine necessarily uses more of the total CPU “budget”, thereby leaving less CPU time for analysis.
An object of the invention is to provide for dynamically adjusting data storage rate in an APM system, by monitoring data acquisition hardware and reducing the data storage rate when a determination is made that the data rate is too high for processing by downstream analysis processes.
Accordingly, it is another object of the present invention to provide an improved APM system that dynamically adjusts the data storage rate.
It is a further object of the present invention to provide an improved network monitoring system that adjusts data storage rates dynamically to avoid analysis errors from oversubscription.
It is yet another object of the present invention to provide improved methods of network monitoring and analysis that enable dynamic adjustment of data storage rates.
The subject matter of the present invention is particularly pointed out and distinctly claimed in the concluding portion of this specification. However, both the organization and method of operation, together with further advantages and objects thereof, may best be understood by reference to the following description taken in connection with accompanying drawings wherein like reference characters refer to like elements.
The system according to a preferred embodiment of the present invention comprises a monitoring system and method and an analysis system and method for dynamically adjusting data storage rates in an APM system.
The rate of storage of data describing observed network traffic is dynamically adjusted to prevent storage overhead from negatively impacting the ability of the system to continue analyzing the network traffic. This solves the problem of the disparity between computing performance and data storage performance negatively affecting the overall analysis throughput of an application performance monitoring system.
The invention monitors the incoming network traffic rates and the rate at which the traffic is being stored, and computes the amount of time that the current rate of storage can be maintained without dropping incoming packets, called the time to failure (TTF). If the TTF value drops below a certain threshold, the amount of data being stored is decreased. This process of computing the TTF value and reacting is repeated until the system reaches a stable state in which the current rate of storage can be maintained indefinitely without the system dropping incoming packets. Conversely, if the system detects that it is storing data under its maximum capacity and not all of the desired data is being stored, the system will increase the rate of storage and reassess the stability of the system.
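By way of illustration and not limitation, the TTF computation and feedback loop described above may be sketched as follows. The function names, the 30-second threshold, and the 10% adjustment step are illustrative assumptions, not part of the invention as claimed:

```python
# Sketch of the TTF-based storage-rate feedback loop described above.
# Names, the TTF threshold, and the step size are illustrative assumptions.

def compute_ttf(buffer_free_bytes, incoming_rate_bps, storage_rate_bps):
    """Seconds until the write backlog fills the available buffer at the
    current rates.  If storage keeps up with (or outpaces) arrivals, the
    backlog never grows and TTF is effectively infinite."""
    backlog_growth = incoming_rate_bps - storage_rate_bps
    if backlog_growth <= 0:
        return float("inf")
    return buffer_free_bytes / backlog_growth

def adjust_storage(storage_fraction, ttf, ttf_threshold=30.0, step=0.1):
    """Decrease the fraction of data stored when TTF drops below the
    threshold; cautiously increase it again when the system is stable
    but not all desired data is being stored."""
    if ttf < ttf_threshold:
        storage_fraction = max(0.0, storage_fraction - step)
    elif ttf == float("inf") and storage_fraction < 1.0:
        storage_fraction = min(1.0, storage_fraction + step)
    return storage_fraction
```

Repeating `compute_ttf` and `adjust_storage` each monitoring interval converges on the stable state described above: attenuation increases while TTF is finite and low, and relaxes once storage keeps pace with arrivals.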
Referring to
A network analysis device 16 is also connected to the network, and may include a user interface 18 that enables a user to interact with the network analysis device to operate the analysis device and obtain data therefrom, whether at the location of installation or remotely from the physical location of the analysis product network attachment.
The network analysis device comprises hardware and software, CPU, memory, interfaces and the like to operate to connect to and monitor traffic on the network, as well as performing various testing and measurement operations, transmitting and receiving data and the like. When remote, the network analysis device typically is operated by running on a computer or workstation interfaced with the network. One or more monitoring devices may be operating at various locations on the network, providing measurement data at the various locations, which may be forwarded and/or stored for analysis.
The analysis device comprises an analysis engine 22 which receives the packet network data and interfaces with data store 24.
In operation, the network test instrument is attached to the network, and observes transmissions on the network to collect data and analyze and produce statistics and metadata thereon. In a particular embodiment, the instrument monitors the storage buffer utilization, to determine whether or not storage processes are able to keep up with the rate at which data is scheduled to be written.
To scale back the amount of data that is stored, the backlog of data that is scheduled to be written to disk is tracked, and a decision is then made whether or not to actually write the data. Each individual object that can write data to storage and that is to be monitored has a buffer of data to be written. Each one of these objects keeps track of how full this buffer is at any point in time. This storage utilization information is aggregated by a performance manager, which then passes this information to a downstream software agent (the storage attenuator) that decides whether to write or exclude more data as appropriate. This decision is passed back to the individual data writer threads.
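The arrangement of data writers, performance manager, and storage attenuator described above may be sketched, by way of example only, as follows. Class names, the max-utilization aggregation, and the high-water threshold are illustrative assumptions:

```python
# Sketch of the backlog-monitoring arrangement described above: each data
# writer reports its buffer utilization, a performance manager aggregates
# the reports, and a storage attenuator turns the aggregate into a
# write/exclude decision.  Names and thresholds are illustrative assumptions.

class DataWriter:
    """An object that writes data to storage via a fixed-size buffer."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.backlog = 0  # bytes queued but not yet written to disk

    def utilization(self):
        return self.backlog / self.capacity

class PerformanceManager:
    """Aggregates per-writer utilization; here the most backlogged
    writer governs, since any single full buffer forces drops."""
    def __init__(self, writers):
        self.writers = writers

    def aggregate_utilization(self):
        return max(w.utilization() for w in self.writers)

class StorageAttenuator:
    """Decides whether the writer threads should keep storing new data."""
    def __init__(self, high_water=0.8):
        self.high_water = high_water

    def should_write(self, utilization):
        return utilization < self.high_water
```

In this sketch the attenuator's boolean decision is what would be passed back to the individual data writer threads; a real embodiment might instead return a graduated attenuation level.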
Referring to
In operation, to scale back the amount of data that is stored, the backlog of data that is scheduled to be written to disk is used to decide whether or not to actually write the data. The modulate storage decision 54 from the storage attenuator governs whether to write/exclude more data as appropriate.
In order to scale back the data that is stored, the incoming data is sampled at the “conversation” level, rather than the flow or packet level. A conversation means, for example, a series of data exchanges between two IP addresses with a given protocol type. Since some data is excluded from detailed storage when scaling takes place, in order to maintain meaning in the stored data for later analysis, flows/packets that are excluded from storage are accounted for by determining packet count/byte count characteristics of the particular metric that is of interest (for example, transactions) with respect to a given criterion (for example, application as defined by port, or IP addresses), using the flows that are stored and ultimately analyzed as the source of empirical observations. The desired metric is then inferred for the excluded traffic using its packet and byte counts. While this results in some limitations on the data analysis, such as reduced accuracy or limited flexibility of sorting criteria, this approach does allow determination of transient phenomena, such as spikes in traffic.
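The conversation keying and count-based inference described above may be illustrated as follows; the function names and the per-byte scaling model are illustrative assumptions, one simple way of inferring a metric for excluded traffic:

```python
# Sketch of conversation-level sampling and metric inference described
# above.  A "conversation" is keyed by the two IP addresses and the
# protocol type; excluded conversations are represented only by their
# byte counts, and a metric observed on the stored conversations (e.g.,
# transactions) is scaled up to estimate the total.  Names and the
# scaling model are illustrative assumptions.

def conversation_key(ip_a, ip_b, protocol):
    """Order the endpoints so both directions of the exchange map to
    the same conversation."""
    return (min(ip_a, ip_b), max(ip_a, ip_b), protocol)

def infer_total_metric(stored_metric, stored_bytes, excluded_bytes):
    """Estimate the metric over all traffic from the stored sample,
    assuming the excluded traffic behaves like the stored traffic."""
    if stored_bytes == 0:
        return 0.0
    rate = stored_metric / stored_bytes  # e.g., transactions per byte
    return stored_metric + rate * excluded_bytes
```

The assumption that excluded traffic resembles stored traffic is the source of the reduced accuracy noted above, while the retained counts are what make transient phenomena such as traffic spikes still detectable.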
The modulation of storage may be accomplished by reference to attenuation schedules, multiple such schedules being possible. In a particular embodiment, a general attenuation schedule is provided for normal operation and an aggressive attenuation schedule is provided for situations where the hardware monitoring determines that the general attenuation schedule is not sufficient to resolve the storage backlog. The schedules provide a percentage value of conversations that are to be attenuated, whereby the conversations that are attenuated are not passed on for storage.
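By way of example and not limitation, schedule-driven attenuation may be sketched as follows. The percentage tables and the hash-based selection of conversations are illustrative assumptions; the embodiment requires only that a schedule yield the fraction of conversations to exclude from storage:

```python
# Sketch of schedule-driven attenuation as described above.  The schedule
# values and the hash-based conversation selection are illustrative
# assumptions, not the schedules of the original specification.
import zlib

# Each schedule maps a buffer-utilization floor to the percentage of
# conversations to attenuate (exclude from storage), highest floor first.
GENERAL_SCHEDULE = [(0.9, 50), (0.7, 25), (0.5, 10), (0.0, 0)]
AGGRESSIVE_SCHEDULE = [(0.9, 90), (0.7, 75), (0.5, 50), (0.0, 25)]

def attenuation_percent(schedule, utilization):
    """Return the percentage of conversations to attenuate at the
    given buffer utilization."""
    for floor, percent in schedule:
        if utilization >= floor:
            return percent
    return 0

def keep_conversation(key, percent):
    """Hash the conversation key into one of 100 buckets so that a
    given conversation is consistently kept or excluded while a given
    attenuation percentage is in effect."""
    bucket = zlib.crc32(repr(key).encode()) % 100
    return bucket >= percent
```

Hashing the conversation key, rather than sampling packets at random, keeps every stored conversation whole, which preserves its usefulness for the later analysis and inference described above.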
Example attenuation schedules are:
The invention accordingly provides dynamic adjustment of data storage rates in an APM system, to avoid oversubscription, while still providing data storage for downstream analysis and inference based on discarded data. The system, method and apparatus dynamically adjust the rate of network data storage when the data rates present exceed the capacity of the system storage elements, preventing an excessive network data storage backlog from overwhelming an application performance monitoring system.
While a preferred embodiment of the present invention has been shown and described, it will be apparent to those skilled in the art that many changes and modifications may be made without departing from the invention in its broader aspects. The appended claims are therefore intended to cover all such changes and modifications as fall within the true spirit and scope of the invention.