The present invention relates generally to communication networks and, more particularly, to a method for monitoring data streams in packet networks such as Internet Protocol (IP) networks.
The Internet has emerged as a critical communication infrastructure, carrying traffic for a wide range of important applications. Internet services such as Voice over Internet Protocol (VoIP) are becoming ubiquitous, and more and more businesses and consumers are relying on these IP services to meet their voice and data service needs. In turn, service providers must maintain a level of service that meets the expectations of their customers.
As such, service providers of communication networks may deploy one or more network monitoring devices to monitor data streams for purposes such as performance monitoring, anomaly detection, security monitoring and the like. Unfortunately, the enormous amount of data that traverses such networks requires a substantial amount of computational resources to monitor a never ending (e.g., online) stream of data. Thus, network monitoring devices must adopt data stream management methods that are efficient and capable of processing a large amount of data in the least amount of time while minimizing space usage, e.g., memory or storage space usage.
Therefore, there is a need for a method and apparatus for performing data stream monitoring that reduces computational time and space usage.
In one embodiment, the present invention discloses a method and apparatus for computing quantiles. For example, the present invention reads a plurality of items from a data stream and inserts each of the plurality of items that was read from the data stream into a data structure. Periodically, the data structure is compressed to reduce the number of stored items in the data structure. In turn, the compressed data structure can be used to output a biased or targeted quantile.
The teaching of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
The present invention broadly discloses a method and apparatus for data stream monitoring of IP traffic. More specifically, the present invention discloses an efficient method for computing biased quantiles over data streams.
Skew is prevalent in many data sources such as IP traffic streams. Distributions with skew typically have long tails which are of great interest. For example, in network management, it is important to understand what performance users experience. One measure of performance perceived by the users is the round trip time (RTT) (which in turn affects dynamics of the network through mechanisms such as Transmission Control Protocol (TCP) flow control). RTTs display a large amount of skew: the tails of the distribution of round trip times can become very stretched. Hence, to gauge the performance of the network in detail and its effect on all users (not just those experiencing the average performance), it is important to know not only the median RTT but also the 90%, 95% and 99% quantiles of TCP round trip times to each destination. In developing data stream management systems that interact with IP traffic data, there exists the facility for posing such queries. However, the challenge is to develop approaches to answer such queries efficiently and accurately given that there may be many destinations to track. In such settings, the data rate is typically very high and resources are limited in comparison to the amount of data that is observed. Hence it is often necessary to adopt the data stream methodology: analyze IP packet headers in one pass over the data with storage space and total processing time that is significantly sublinear in the size of the input.
In one embodiment, IP traffic streams and other streams are summarized using quantiles: these are order statistics such as the minimum, maximum and median values. In a data set of size n, the φ-quantile is the item with rank ┌φn┐. The minimum and maximum are easy to calculate precisely in one pass, but exact computation of certain quantiles can require space linear in n. So the notion of ε-approximate quantiles relaxes the requirement to finding an item with rank between (φ−ε)n and (φ+ε)n. Much attention has been given to the case of finding a set of uniform quantiles: given 0<φ<1, return the approximate φ, 2φ, 3φ, . . . , └1/φ┘φ quantiles of a stream of values. Note that the error in the rank of each returned value is bounded by the same amount, εn; we call this the uniform error case.
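For concreteness, the rank definition above can be checked against a naive offline computation. The following sketch (the function name is illustrative, not part of the disclosed method) sorts the data and returns the item of rank ┌φn┐:

```python
import math

def exact_quantile(data, phi):
    """Naive exact phi-quantile: sort the data and return the item
    whose rank is ceil(phi * n), per the definition above."""
    A = sorted(data)
    return A[math.ceil(phi * len(A)) - 1]
```

This takes space linear in n, which is precisely what the streaming methods described below avoid.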
However, summarizing distributions which have high skew using uniform quantiles is not always informative because it does not describe the interesting tail region adequately. In contrast, the present invention discloses the method of high-biased quantiles: to find the 1−φ, 1−φ^2, 1−φ^3, . . . , 1−φ^k quantiles of the distribution. In order to give accurate and meaningful answers to these queries, the present method also scales the approximation factor ε so that the more biased the quantile, the more accurate the approximation should be. The approximate high-biased quantiles should now have rank in the range (1−(1±ε)φ^j)n: instead of additive error in the rank ±εn, we now require relative error of factor (1±ε).
Finding high- (or low-) biased quantiles can be seen as a special case of a more general problem of finding targeted quantiles. Rather than requesting the same ε for all quantiles (the uniform case) or ε scaled by φ (the biased case), one might specify in advance an arbitrary set of quantiles and the desired error ε for each in the form (φj, εj). For example, input to the targeted quantiles problem might be {(0.5, 0.1), (0.2, 0.05), (0.9, 0.01)}, meaning that the median should be returned with 10% error, the 20th percentile with 5% error, and the 90th percentile with 1% error.
Both the biased and targeted quantiles problems could be solved trivially by running a uniform solution with ε=minjεj. But this is wasteful of resources, since there is no need for all of the quantiles with such fine accuracy. In other words, the present method seeks solutions which are more efficient than this naive approach, both in terms of memory used and running time, thereby adapting to the precise quantile and error requirements of the problem.
To better understand the present invention, the present method begins by formally defining the problem of biased quantiles. To simplify the notation, the present disclosure is presented in terms of low-biased quantiles; high-biased quantiles can be obtained via symmetry, by reversing the ordering relation.
Definition 1: Let a be a sequence of n items, and let A be the sorted version of a. Let φ be a parameter in the range 0<φ<1. The low-biased quantiles of a are the set of values A[┌φ^j n┐] for j=1, . . . , log_{1/φ} n.
Sometimes the full set of biased quantiles may not be required, and only the first k are sought. The present algorithms will take k as a parameter.
It is well known that computing quantiles exactly requires space linear in n. In contrast, the present method seeks solutions that are significantly sublinear in n, preferably depending on log n or small polynomials in this quantity. Therefore, the present method will allow approximation of the quantiles, by giving a small range of tolerance around the answer.
Definition 2: Let ε be a parameter in the range 0<ε<1 supplied in advance. The approximate low-biased quantiles of a sequence of n items, a, are a set of k items q1, . . . , qk which satisfy
A[└(1−ε)φ^j n┘]≦qj≦A[┌(1+ε)φ^j n┐].
In fact, one can solve a slightly more general problem: after processing the input, for any supplied value φ′≦φ^k, one will be able to return an ε-approximate quantile q′ that satisfies
A[└(1−ε)φ′n┘]≦q′≦A[┌(1+ε)φ′n┐]
Any such solution clearly can be used to compute a set of approximate low-biased quantiles.
The present method keeps information about particular items from the input, and also stores some additional tracking information. The intuition for this method is as follows: suppose we have kept enough information so that the median can be estimated with an absolute error of εn in rank. Now suppose that there are so many insertions of items above the median that this item is now the first quartile (the item which occurs ¼ through the sorted order). For this to happen, the current number of items must be at least 2n. Hence, if the same absolute uncertainty of εn is maintained, then this corresponds to a relative error of at most 0.5ε. This shows that we will be able to support greater accuracy for the high-biased quantiles provided we manage the data structure correctly.
The term “item” may encompass various types of data. For example, each item could be related to a tuple, where each tuple could be related to a round trip time of a packet in an IP data stream. However, this is only an exemplary illustration and should not be interpreted as a limitation of the present invention.
The data structure at time n, S(n), consists of a sequence of s tuples ⟨ti=(vi, gi, Δi)⟩, where each vi is a sampled item from the data stream and two additional values are kept: (1) gi is the difference between the lowest possible rank of item i and the lowest possible rank of item i−1; and (2) Δi is the difference between the greatest possible rank of item i and the lowest possible rank of item i. The total space used is therefore O(s). For each entry vi, let ri=Σ_{j=1}^{i−1} gj. Hence, the true rank of vi is bounded below by ri+gi and above by ri+gi+Δi. ri can be thought of as an overly conservative bound on the rank of the item vi: it is deliberately made tight in order to support the accuracy guarantees proved later.
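As an illustrative sketch (the list representation and function name are assumptions of the sketch, not the literal embodiment), each tuple ti=(vi, gi, Δi) can be held as a small list [v, g, delta], with S kept sorted by v; the rank bounds then follow directly from the definition of ri:

```python
def rank_bounds(S, i):
    """Lowest and highest possible rank of the item S[i][0].

    Each tuple t_i = (v_i, g_i, Delta_i) is a list [v, g, delta].
    With r_i = sum of g_j for j < i, the true rank of v_i lies in
    [r_i + g_i, r_i + g_i + delta_i].
    """
    r_i = sum(t[1] for t in S[:i])
    lo = r_i + S[i][1]
    return lo, lo + S[i][2]
```

For example, a summary [[1.0, 1, 0], [3.0, 2, 1]] bounds the rank of 3.0 between 3 and 4.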
Depending on the problem being solved (uniform, biased, or targeted quantiles), the present method will maintain an appropriate restriction on gi+Δi. We will denote this with a function f(ri, n), which for the current values of ri and n gives an upper bound on the permitted value of gi+Δi. For biased quantiles, this invariant is:
Definition 3: (Biased Quantiles Invariant) We set f(ri, n)=max{└2εri┘, 1}. Hence, we ensure that gi+Δi≦f(ri, n) for all i.
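The invariant of Definition 3 translates directly into a small helper; the following sketch (names are illustrative) is the form used by the insert and compress steps of the method:

```python
import math

def f_biased(r_i, n, eps):
    """Biased quantiles invariant (Definition 3):
    f(r_i, n) = max(floor(2 * eps * r_i), 1)."""
    return max(math.floor(2 * eps * r_i), 1)
```

The max with 1 ensures the permitted slack never drops to zero near the lowest ranks.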
As each item is read, an entry is created in the data structure for it. Periodically, the data structure is "pruned" of unnecessary entries to limit its size. We ensure that the invariant is maintained at all times, which is necessary to show that the present method operates correctly. The operations are defined in
In step 210, method 200 reads an item v, e.g., an item from a data stream, into an entry of a data structure.
In step 220, method 200 inserts the newly read item into the data structure. Specifically, to insert a new item, v, we find i such that vi<v≦vi+1, we compute ri and insert the tuple (v, g=1, Δ=f(ri, n)−1). This gives the correct settings to g and Δ since the rank of v must be at least 1 more than the rank of vi, and (assuming the invariant holds before the insertion) the uncertainty in the rank of v is at most one less than the uncertainty of vi (=Δi), which is itself bounded by f(ri, n) (since Δi is always an integer). We also ensure that min and max are kept exactly, so when v<v1, we insert the tuple (v, g=1, Δ=0) before v1. Similarly, when v>vs, we insert (v, g=1, Δ=0) after vs. To simplify presentation of the algorithms, we add sentinel values (v0=−∞, g=0, Δ=0) and (vs+1=+∞, g=0, Δ=0).
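A minimal sketch of this insert step follows, assuming tuples are plain lists [v, g, delta] kept sorted by value and the invariant f(ri, n) is supplied as a callable (both are assumptions of the sketch, not requirements of the embodiment):

```python
import bisect

def insert(S, v, n, f):
    """Insert item v into the sorted summary S (lists [v, g, delta]).

    Per the text: find the position of v, compute r_i (here taken as
    the sum of g over all tuples preceding v), and insert the tuple
    (v, 1, f(r_i, n) - 1). A new minimum or maximum is stored with
    delta = 0, so that min and max are always kept exactly.
    """
    if not S or v < S[0][0]:
        S.insert(0, [v, 1, 0])      # new minimum: rank known exactly
    elif v > S[-1][0]:
        S.append([v, 1, 0])         # new maximum: rank known exactly
    else:
        i = bisect.bisect_left([t[0] for t in S], v)
        r_i = sum(t[1] for t in S[:i])
        S.insert(i, [v, 1, f(r_i, n) - 1])
```

A production version would maintain the prefix sums ri incrementally rather than recomputing them on every insertion.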
Once the item is inserted into the data structure, method 200 determines whether a compress operation is to be performed. If the query is negatively answered, then method 200 returns to step 210 and reads the next item. If the query is positively answered, then method 200 proceeds to step 225. It should be noted that the present method performs a compress function on the growing data structure periodically, in accordance with a predefined period. This predefined time period is configurable in accordance with the requirements of a particular implementation.
In step 225, method 200 compresses the data structure. Specifically, the present method periodically scans the data structure and merges adjacent nodes or entries in the data structure when this compress function does not violate the invariant. That is, remove nodes (vi, gi, Δi) and (vi+1, gi+1, Δi+1) and replace them with (vi+1, gi+gi+1, Δi+1), provided that (gi+gi+1+Δi+1)≦f(ri, n). This also maintains the semantics of g and Δ being the difference in rank between vi and vi−1, and the difference between the highest and lowest possible ranks of vi, respectively. Once the compress function is finished, method 200 returns to step 210.
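This compress step can be sketched as a single right-to-left pass (again assuming tuples are plain lists [v, g, delta] and f is a callable; both are assumptions of the sketch). Merging tuple i into tuple i+1 leaves ri unchanged for all smaller indices, so one linear scan over precomputed prefix sums suffices:

```python
def compress(S, n, f):
    """Merge adjacent tuples when the invariant allows it: replace
    (v_i, g_i, d_i), (v_{i+1}, g_{i+1}, d_{i+1}) by
    (v_{i+1}, g_i + g_{i+1}, d_{i+1})
    whenever g_i + g_{i+1} + d_{i+1} <= f(r_i, n).
    """
    # prefix[i] = r_i = sum of g_j over j < i (unaffected by later merges)
    prefix = [0]
    for t in S:
        prefix.append(prefix[-1] + t[1])
    for i in range(len(S) - 2, 0, -1):   # index 0 (the min) is never merged away
        g_merged = S[i][1] + S[i + 1][1]
        if g_merged + S[i + 1][2] <= f(prefix[i], n):
            S[i + 1][1] = g_merged       # fold tuple i into its successor
            del S[i]
```

The scan runs in time O(s), matching the cost discussed later for the Compress routine.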
Since the data structure is constantly being updated, one can compute a quantile from the data structure by inputting a φ. Namely, given a value 0≦φ≦1, let i be the smallest index so that ri+gi+Δi>φn+f(φn, n)/2. Output vi−1 as the approximated quantile.
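The output rule can be sketched as a simple walk over the summary (same assumed list representation as the other sketches; the function name is illustrative):

```python
def output(S, phi, n, f):
    """Approximate phi-quantile: return the predecessor of the first
    tuple whose maximum possible rank exceeds phi*n + f(phi*n, n)/2."""
    target = phi * n + f(phi * n, n) / 2.0
    r = 0
    for i, (v, g, d) in enumerate(S):
        if r + g + d > target:
            return S[max(i - 1, 0)][0]
        r += g
    return S[-1][0]
```

On an exact summary (all Δi=0, all gi=1) this simply returns the item nearest the requested rank.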
The above routines are the same for the different problems we consider, being parametrized by the setting of the invariant function f.
The method of
Next, we demonstrate that any algorithm which maintains the biased quantiles invariant guarantees that the output function will correctly approximate biased quantiles. Because i is the smallest index so that ri+gi+Δi>φn+f(φn, n)/2=φn+εφn, then ri−1+gi−1+Δi−1≦(1+ε)φn. Using the invariant (gi+Δi≦2εri), we have (1+2ε)ri≧ri+gi+Δi>(1+ε)φn, and consequently ri>(1−ε)φn. Since ri=ri−1+gi−1, it follows that (1−ε)φn<ri−1+gi−1≦ri−1+gi−1+Δi−1≦(1+ε)φn. Recall that the true rank of any vi is between ri+gi and ri+gi+Δi; so the derived inequality means that vi−1 is within the necessary error bounds for biased quantiles.
This gives an error bound of ±εφn for every value of φ. In some cases we have a lower bound on how precisely we need to know the biased quantiles: this is when we only require the first k biased quantiles. It corresponds to a lower bound on the allowed error of εφ^k n. Clearly we could use the above algorithm, which gives stronger error bounds for some items, but this may be inefficient in terms of space. Instead, we modify the invariant as follows to avoid this slackness and so reduce the space needed. The algorithm is identical to before, but we modify the invariant to be f(ri, n)=2ε max{ri, φ^k n, 1/(2ε)}. This invariant is preserved by the Insert and Compress steps. The Output function can be proved to correctly compute biased quantiles with this lower bound on the approximation error using a straightforward modification of the above proof.
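The modified invariant can be sketched in the same style (a hypothetical helper; note that the 1/(2ε) term plays the role of the max{·, 1} floor in Definition 3, since 2ε·1/(2ε)=1):

```python
def f_biased_k(r_i, n, eps, phi, k):
    """Invariant limited to the first k biased quantiles:
    f(r_i, n) = 2 * eps * max(r_i, phi**k * n, 1/(2*eps))."""
    return 2 * eps * max(r_i, phi ** k * n, 1 / (2 * eps))
```

Below rank φ^k·n the permitted slack stays flat at 2εφ^k n, which is what allows the space savings over the pure biased invariant.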
The worst case space requirement for finding biased quantiles is O((k/ε) log(1/φ) log(εn)), as the following argument shows.
Consider the space used by the algorithm to maintain the biased quantiles for the values whose rank is between n/2 and n. Here we maintain a synopsis where the error is bounded below by εn. So the space required to maintain this region of ranks is bounded by O(1/ε log εn). Similarly, for the range of ranks n/4 to n/2, items are maintained to an error no less than εn/2, but we are maintaining a range of at most half as many ranks. Thus the space for this is bounded by the same amount, O(1/ε log εn). This argument can be repeated until we reach n/2^x=φ^k n, where the same amount of space suffices to maintain information about ranks up to φ^k n with error εφ^k n. Since there are x=k log(1/φ) such ranges, the total amount of space is no more than O((k/ε) log(1/φ) log(εn)).
If φ is not specified a priori, then this bound can be easily rewritten in terms of k and ε. Also, we never need k log(1/φ) to be greater than log(εn), which corresponds to an absolute error of less than 1, so the bound is equivalent to O((1/ε) log²(εn)).
We also note the following lower bound for any method that finds the biased quantiles.
Theorem 2: Any algorithm that guarantees to find biased quantiles φ with error at most φεn in rank must store Ω((k/ε) log(1/φ)) items.
Proof: We show that if we query all possible values of φ, there must be at least this many different answers produced. Assume without loss of generality that every item in the input stream is distinct. Consider each item stored by the algorithm. Let the true rank of this item be R. This is a good approximate answer for items whose rank is between R/(1+ε) and R/(1−ε). The largest stored item must cover the greatest item from the input, which has rank n, meaning that the lowest rank input item covered by the same stored item has rank no lower than n(1−ε)/(1+ε). We can iterate this argument to show that the l-th largest stored item covers input items of rank no less than n((1−ε)/(1+ε))^l. This continues until we reach an input item of rank at most m=nφ^k. Below this point, we need only guarantee an error of εnφ^k. By the same covering argument, this requires at least p=(nφ^k)/(εnφ^k)=1/ε items. Thus we can bound the space for this algorithm as p+l, when n((1−ε)/(1+ε))^l≦m. Then, since (1−ε)/(1+ε)≦(1−ε), we have ln(m/n)≧l ln(1−ε). Since ln(1−ε)≦−ε, we find l≧(1/ε) ln(n/m)=(1/ε) ln(n/(nφ^k))=(k/ε) ln(1/φ). This bounds p+l=Ω((k/ε) log(1/φ))
and gives the stated space bounds.
Note that it is not meaningful to set k to be too large, since then the error in rank becomes less than 1, which corresponds to knowing the exact rank of the smallest items. That is, we never need to have εnφ^k<1; this bounds k log(1/φ)≦log(εn), and so the space lower bound translates to Ω((1/ε) log(εn)).
The targeted quantiles problem considers the case that we are concerned with an arbitrary set of quantile values with associated error bounds that are supplied in advance. Formally, the problem is as follows:
Definition 4 (Targeted Quantiles Problem) The input is a set of tuples T={(φj, εj)}. Following a stream of input values, the goal is to return a set of |T| values vj such that
A[└(φj−εj)n┘]≦vj≦A[┌(φj+εj)n┐].
As in the biased quantiles case, we will maintain a set of items drawn from the input as a data structure, S(n). We will keep tuples <ti=(vi, gi, Δi)> as before, but will keep a different constraint on the values of gi and Δi.
Definition 5 (Targeted Quantiles Invariant) We define the invariant function f(ri, n) as:
fj(ri, n)=2εjri/φj, for φjn≦ri≦n; (i)
fj(ri, n)=2εj(n−ri)/(1−φj), for 0≦ri≦φjn; (ii)
and take f(ri,n)=max{minj└fj(ri,n)┘,1}. As before we ensure that for all i, gi+Δi≦f(ri,n).
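Definition 5 can likewise be sketched directly in code (T is the set of (φj, εj) pairs supplied in advance; the names are illustrative):

```python
import math

def f_targeted(r_i, n, T):
    """Targeted quantiles invariant (Definition 5): take the minimum
    of the per-target bounds f_j, floored, but never less than 1."""
    vals = []
    for phi_j, eps_j in T:
        if r_i >= phi_j * n:
            f_j = 2 * eps_j * r_i / phi_j               # case (i)
        else:
            f_j = 2 * eps_j * (n - r_i) / (1 - phi_j)   # case (ii)
        vals.append(math.floor(f_j))
    return max(min(vals), 1)
```

Each target (φj, εj) contributes a tolerance that is tightest at rank φj·n and grows away from it; the minimum over all targets governs which merges are permitted.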
An example invariant f is shown in
For example, setting f(ri, n)=2εn captures the uniform error approximate quantiles problem. Similarly, setting f(ri, n)=2εri captures the biased quantiles problem.
The present invention presents a few alternatives used to gain an understanding of which factors are important for achieving good performance over a data stream. The three alternatives presented below exhibit standard data structure trade-offs, but this list is by no means exhaustive.
The running time of the algorithm to process each new update v depends on (i) the data structures used to implement the sorted list of tuples, S, and (ii) the frequency with which Compress is run. The time for each Insert operation is that to find the position of the new data item v in the sorted list. With a sensible implementation (e.g., a balanced tree structure), this is O(log s), and with augmentation we can efficiently maintain ri of each tuple in the same time bounds.
The periodic reduction in size of the quantile summary done by Compress is based on the invariant function f, which determines the tuples eligible for deletion (that is, merging the tuple into its adjacent tuple). Note that this invariant function can change dynamically as the ranks change; hence, it is not possible to efficiently maintain candidates for compression incrementally. As a consequence, Compress is implemented as a simple linear pass over the sorted elements, in time O(s). However, instead of periodically performing a full scan, it can be prudent to amortize the time cost and the space used by the algorithm, and thus perform partial scans at higher frequency. This is governed by the function Compress_Condition(), which can be implemented in a variety of ways: it could always return true, or return true every 1/ε tuples, or with some other frequency. Note that the frequency of compressing does not affect correctness, only the aggressiveness with which the data structure is pruned.
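Compress_Condition() is deliberately left open; one possible policy (an assumption for illustration, not a prescribed choice) is to trigger a compression once every 1/(2ε) items:

```python
def compress_condition(items_seen, eps):
    """One possible policy: return True once every 1/(2*eps) items.
    Any frequency preserves correctness; it only affects how
    aggressively the summary is pruned."""
    period = max(round(1 / (2 * eps)), 1)
    return items_seen % period == 0
```

A higher trigger frequency keeps the summary smaller at the cost of more scanning work per item.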
Three alternatives for maintaining the quantile summary tuples ordered on vi-values in the presence of insertions and deletions are now disclosed.
Batch: This method maintains the tuples of S(n) in a linked list. Incoming items are buffered into blocks of size 1/(2ε), sorted, and then batch-merged into S(n). Insertions and deletions can be performed in constant time. However, the periodic buffer sort, occurring every 1/(2ε) items, costs O((1/ε) log(1/ε)).
Cursor: This method also maintains the tuples of S(n) in a linked list. Incoming items are buffered in sorted order and are inserted using an insertion cursor which, like the compress cursor, sequentially scans a fraction of the tuples and inserts a buffered item whenever the cursor is at the appropriate position. Maintaining the buffer in sorted order costs O(log(1/ε)) per item.
Tree: This method maintains S(n) using a balanced binary tree. Hence, insertions and deletions cost O(log s). In the worst case, all εs tuples considered for compression can be deleted, so the cost per item is O(εs log s).
It should be noted that the present invention can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a general-purpose computer or any other hardware equivalents. In one embodiment, the present module or process 505 for computing quantiles can be loaded into memory 504 and executed by processor 502 to implement the functions as discussed above. As such, the present method 505 for computing quantiles (including associated data structures) of the present invention can be stored on a computer readable medium or carrier, e.g., RAM memory, magnetic or optical drive or diskette and the like.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application claims the benefit of U.S. Provisional Application No. 60/632,656 filed on Dec. 2, 2004, which is herein incorporated by reference.