The present disclosure relates to a systematic framework for application protocol field extraction.
In the past, most network devices were content-unaware; such devices managed network traffic and implemented network security by extracting only transport-layer information contained in Layer 3 (L3) and Layer 4 (L4) headers, such as the source IP address and destination port number, rather than Layer 7 (L7) packet payload content. The main reason for using content-unaware networking devices is that it is much cheaper and easier to extract L3 and L4 packet header information than it is to extract L7 packet payload content.
However, modern network management now requires networking devices that can extract specific content from within packet payloads. Typical applications require these content-aware devices to extract particular L7 fields. For example, data loss prevention (DLP) tools often extract HTTP fields to detect covert data channels. Intrusion detection systems rely on L7 field extraction as a primitive operation. Load balancing devices may extract method names and parameters from flows carrying SOAP and XML-RPC traffic and then route each request to the server that is best able to respond to it. Finally, existing network monitoring tools such as Snort and Bro extract L7 fields for behavioral analysis.
The problem of online L7 field extraction that occurs within content-aware networking devices is addressed. To do this well, support is needed for automatic translation from grammar representations to automata implementations and automated optimization of the resulting automata implementations. Unfortunately, such automated translation and optimization is difficult because network protocols include features that are not easily represented using standard parsing models such as context-free grammars (CFGs) or regular expressions (REs). For example, the HTTP header field "Content-Length" specifies the length of the HTTP body. Unaugmented, a CFG would require a new rule for each legitimate field length, which makes such grammars impractical for L7 parsing.
Online L7 field extraction in a content-aware networking device is fundamentally different from end host protocol parsing because content-aware network devices must handle millions of concurrent multiplexed network flows. This difference has several technical implications. First, buffering a flow before parsing should be avoided; thus parsing and field extraction should occur incrementally. Second, online L7 field extraction must support efficient context switching; this requires the parsing state of flows to be minimized. Third, online L7 field extraction must occur at line speed.
Prior online L7 field extraction solutions suffer from one of two drawbacks. They are either hand-optimized for better performance, or they are derived from an unoptimizable parsing model: recursive descent parsing with code execution. Hand-optimized solutions suffer from a high production cost and are prone to errors. The recursive descent solutions offer an excessively rich parsing model that cannot be automatically optimized.
Thus, there is no existing solution for online L7 field extraction that supports automated translation from a grammar-based extraction specification to an automata implementation with automated optimization. To illustrate one dimension in which previous solutions struggle, consider the conflict between automated translation and optimization on the one hand and line-speed extraction on the other. One technique that can be exploited to achieve line-speed extraction is to ignore (not parse) unnecessary data, referred to as selective parsing. Previous selective parsing work does achieve high throughput, but these solutions rely on hand pruning rather than automated translation and optimization.
This section provides background information related to the present disclosure which is not necessarily prior art.
A computer-implemented system is provided for implementing application protocol field extraction. The system includes: an automata generator configured to receive an extraction specification that specifies data elements to be extracted from data packets and to generate a counting automaton; and a field extractor configured to receive a data flow and extract data elements from the data packets in accordance with the counting automaton. The extraction specification is expressed in terms of a context-free grammar, where the grammar defines grammatical structures of data packets transmitted in accordance with an application protocol and includes counters used to chronicle the parsing history of the production rules comprising the grammar.
In another aspect, a particular method is provided for transforming the extraction specification to an equivalent regular grammar. The method includes: identifying the nonterminals of production rules from the extraction specification as either normal or non-normal; replacing each of the production rules identified as normal with a regular rule using decomposition methods; approximating a regular rule for each of the production rules identified as non-normal; and concatenating the regular rules to form a regular grammar. The method may further include eliminating any of the regular rules without terminal symbols from the regular grammar.
This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features. Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure. Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
The FlowSifter framework 10 comprises three modules: a grammar optimizer 13, an automata generator 16 and a field extractor 18. As used herein, the term module may refer to, be part of, or include an application specific integrated circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor that executes computer-executable instructions; or other suitable components that provide the described functionality.
In operation, the grammar optimizer 13 receives the extraction specification 12 and outputs an optimized extraction grammar 15. When the extraction specification is incomplete, the grammar optimizer 13 interfaces with a corresponding complete protocol grammar residing in the protocol library 14 and operates to translate the extraction specification to a complete extraction specification using the complete protocol grammar.
The automata generator 16 receives the optimized extraction grammar and generates a counting automaton 17 which is a special type of finite automaton that is augmented with counters as further described below. The field extractor 18 in turn uses the counting automaton to extract relevant fields from data flows. In essence, the counting automaton serves as an L7 protocol configuration for the field extractor module.
The FlowSifter framework 10 recognizes that the automated translation and optimization of an online L7 field extractor requires grammar and automata models that are weaker than standard automata augmented with inline code but richer than finite state automata. Counting regular grammars (CRGs) and counting automata (CA) satisfy this requirement and address the other technical challenges. Because CA are state machines, they may be efficiently implemented in either software or hardware. Counting automata have a fixed number of counters rather than a stack, so the parsing state size of any extractor is small and bounded. CRG-based extractors are automatically derived from grammar specifications. By using counting context-free grammars (CCFGs) to define protocols and extractor specifications, the FlowSifter framework 10 automatically transforms grammar specifications into CRGs, which are executed as CA. For protocols that contain recursively nested fields, the FlowSifter framework 10 uses an approximation method to generate a CRG that navigates the recursive structures of the protocol to locate and extract the desired L7 fields.
Given the difficulty of optimizing in-lined code segments, a parsing model is defined that augments rules for context-free grammars and regular grammars with counters, guards, and actions. These new grammar models are more expressive than their common counterparts, but they are still amenable to automatic simplification and optimization.
The FlowSifter framework 10 produces an L7 field extractor from two inputs: a protocol specification and an extraction specification. Protocol specifications are CCFGs that precisely specify how to parse the network protocol. Protocol specifications are generic for any desired extraction and are reused by multiple extraction specifications. Extraction specifications are written as annotated partial CCFGs. An extraction specification simply refers to the protocol specification for parts of the grammar that need no special handling. It specifies in detail the grammar rules that are required to extract the desired L7 fields.
Formally, a counting context-free grammar (CCFG) is a five-tuple
Γ=(N,Σ,C,R,S)
where N,Σ,C, and R are finite sets of nonterminals, terminals, counters, and production rules, respectively, and S is the start nonterminal. The set of terminals includes an empty terminal, which we denote by ε. A counter is a variable with an integer value, initialized to zero. The counters can be used to remember some parsing history such as the value of length fields. In parsing an HTTP flow, a counter is used to store the value of the “Content-Length” field. The counters also provide a mechanism for eliminating unbounded stacks.
A production rule is written as
guard: nonterminal → body
The guard is a conjunction of unary predicates over the counters in C, i.e., expressions over a single counter that return true or false. An example guard is (c1>2; c2>2), which checks counters c1 and c2 and evaluates to true if both are greater than 2. If a counter is not included in a guard, then its value does not affect the evaluation of the guard. An empty guard that always evaluates as true is allowed. Guards are used to guide the parsing based on "history" not encoded in the current state.
The nonterminal following the guard is called the head of the rule. Following it, the body is an ordered sequence of terminals and nonterminals, any of which can have associated actions. An action is a set of unary update expressions, each updating the value of one counter, and is associated with a specific terminal or nonterminal in a rule. The action is run after parsing the associated terminal or nonterminal. An example action in CCFG is (c1:=c1*2; c2:=c2+1). If a counter is not included in an action, then the value of that counter is unchanged. An empty action which updates no counters is allowed.
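By way of a non-limiting illustration, the example guard and action above can be modeled directly as a predicate and an update over a counter array. The following OCaml sketch (OCaml being the language of the exemplary implementation described later) uses illustrative names and indexing; it is not part of the disclosed implementation.

```ocaml
(* Counters as a mutable array, indexed c1 = 0, c2 = 1 (an assumption). *)
type counters = int array

(* The example guard (c1>2; c2>2): a conjunction of unary predicates. *)
let example_guard (c : counters) : bool = c.(0) > 2 && c.(1) > 2

(* The example action (c1:=c1*2; c2:=c2+1): one unary update per counter. *)
let example_action (c : counters) : unit =
  c.(0) <- c.(0) * 2;
  c.(1) <- c.(1) + 1

let () =
  let c = [| 3; 5 |] in
  if example_guard c then example_action c;
  Printf.printf "c1=%d c2=%d\n" c.(0) c.(1)   (* prints c1=6 c2=6 *)
```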
Producing a language from a CCFG works in the same way as a leftmost derivation for a CFG. Start with the body of the start nonterminal. At each step, first remove any actions before the leftmost nonterminal by running them in order. Then expand the leftmost nonterminal using a production whose guard evaluates to true. Repeating this procedure results in a string of terminals from the language. This leftmost derivation procedure matches the parsing semantics used herein.
With reference to the example set forth below, an application protocol feature that CFGs cannot easily represent can be easily specified with the Varstring language. The Varstring language consists of strings with two fields separated by a space: a length field and a data field, where the value of the length field specifies the length of the variable-length data field. A Dyck language CCFG is also presented; the Dyck language is the set of strings of balanced square brackets '[' and ']'. The adopted convention is that the head of the first rule is the start nonterminal.
The six production rules in Varstring are explained. The first rule S→BV means that a message has two fields, the length field represented by nonterminal B and a variable-length data field represented by nonterminal V. The second rule B→'0'(c:=c*2) B means that if the character '0' is encountered when parsing the length field, double the value of counter c. Similarly, the third rule B→'1'(c:=c*2+1) B means that if '1' is encountered when parsing the length field, double the value of counter c first and then increase its value by 1. The fourth rule B→'␣' means that the parsing of the length field terminates when a space is encountered. These three rules fully specify how to parse the length field and store the length in c. For example, after parsing a length field with "10", the value of the counter c will be 2 (=((0*2)+1)*2). The fifth rule (c>0):V→Σ(c:=c−1) V means that when parsing the data field, decrement the counter c by one each time a character is parsed. The guard allows use of this rule as long as c>0. The sixth rule (c=0):V→ε means that when c=0, the parsing of the variable-length field is terminated.
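By way of a non-limiting illustration, the six Varstring rules can be simulated directly. The following OCaml sketch assumes a complete input string is available (unlike the incremental, flow-based operation of the framework), and the function name is illustrative.

```ocaml
(* Direct simulation of the Varstring rules; not the FlowSifter
   implementation. Assumes the whole message is in memory. *)
let parse_varstring (s : string) : string =
  let c = ref 0 and i = ref 0 in
  (* Rules 2-4: B -> '0'(c:=c*2) B | '1'(c:=c*2+1) B | ' ' *)
  let rec parse_b () =
    match s.[!i] with
    | '0' -> c := !c * 2; incr i; parse_b ()
    | '1' -> c := !c * 2 + 1; incr i; parse_b ()
    | ' ' -> incr i
    | ch -> failwith (Printf.sprintf "unexpected %c in length field" ch)
  in
  parse_b ();
  (* Rules 5-6: consume exactly c data characters, decrementing c *)
  String.sub s !i !c

let () = print_endline (parse_varstring "10 ab")   (* c = 2; prints "ab" *)
```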
The CFGs that obey a regularity property are called regular grammars. Counting regular grammars (CRGs) are those CCFGs that obey a similar regularity property. For CRGs, all rules in the grammar must be one of the following two forms:
guard:X→α[action]Y (1)
guard:X→α[action] (2)
where X and Y are nonterminals and α∈Σ. CRG rules that fit equation 1 are nonterminating rules, whereas those that fit equation 2 are terminating production rules, as derivations end when they are applied. CCFG rules that fit either equation are regular rules; other rules are non-regular rules.
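By way of a non-limiting illustration, the two rule forms may be captured in a data type such as the following OCaml sketch, in which terminals are simplified to single characters and all names are illustrative rather than FlowSifter's actual representation.

```ocaml
(* Sketch of the two CRG rule forms; the head nonterminal is carried
   alongside each rule. Terminals are simplified to single characters. *)
type guard  = int array -> bool    (* conjunction of unary predicates *)
type action = int array -> unit    (* set of unary counter updates *)

type crg_rule =
  (* form (1): guard: X -> a [action] Y, a nonterminating rule *)
  | Nonterminating of { head : string; guard : guard; term : char;
                        action : action; next : string }
  (* form (2): guard: X -> a [action], a terminating rule *)
  | Terminating of { head : string; guard : guard; term : char;
                     action : action }
```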
To build a CA field extractor, FlowSifter starts from a CRG, yet it does not force the user to supply a strict CRG. The user can express the extraction specification as a CCFG, and the FlowSifter framework will transform it into an equivalent CRG. Any grammar that can be converted into an equivalent CRG is normal. The user's extraction specification CCFG is expressed as
Γx=(Nx,Σ,Cx,Rx,Sx).
The FlowSifter framework also does not require Γx to be complete. That is, the FlowSifter framework allows Γx to refer to terminals and nonterminals specified in the L7 protocol's corresponding CCFG, which we refer to as
Γp=(Np,Σ,Cp,Rp,Sp).
However, Γx cannot modify Γp without changing the semantics of Γx. To deal with the restrictions on grammars in practice, the user will submit their grammar and revise it based on feedback from the automatic analysis until it is deemed normal.
The purpose of the FlowSifter framework is to call application processing functions on user-specified fields. Based on the extracted field values that they receive, these application processing functions will take application specific actions such as stopping the flow for security purposes or routing the flow to a particular server for load balancing purposes. FlowSifter allows calls to these functions in the actions of a rule. Application processing functions can also return a value back into the extractor to change how it continues parsing the rest of the flow. Since the application processing functions are part of the layer above the FlowSifter framework, their specification is beyond the scope of this disclosure. Further, a shorthand is provided for calling an application processing function ƒ on a piece of the grammar:
ƒ{body}
where body is a rule body that makes up the field to be extracted.
Next, two user-friendly extraction specifications, each an annotated partial CCFG, are described.
The first, Γxv, specifies the extraction of the variable-length field V for the Varstring CCFG in the example above. This field is passed to an application processing function vstr. For example, given the input stream "101Hello", the field "Hello" will be extracted. This example illustrates several features. First, it shows how the FlowSifter framework can handle variable-length field extractions. Second, it shows how the user can leverage the protocol library to simplify writing the extraction specification. While the Varstring protocol CCFG is not large, it is much easier to write a one-production, incomplete CCFG Γxv than a complete extraction grammar. The second extraction specification, Γxd, is associated with the Dyck CCFG in the example above and specifies the extraction of the contents of the first pair of square brackets; this field is passed to an application processing function named parameter. For example, given the input stream [ [ [ ] ]] [ [] [] ], the [ [ ] ] will be extracted. This example illustrates how the FlowSifter framework can extract specific fields within a recursive protocol by referring to the protocol grammar.
Operation of the FlowSifter framework is described in more detail. First, the FlowSifter framework 10 takes the potentially incomplete extraction CCFG Γx and the L7 grammar Γp and turns them into a complete extraction CRG
Γƒ=(Nƒ,Σ,Cƒ,Rƒ,Sƒ).
Recall that Nx and Np are disjoint and that Rx may include nonterminals from Np only in the body of a production rule. Furthermore, the nonterminals in Nx do not appear in any production rules in Rp. Nx is referred to as the extraction nonterminals and Np as the protocol nonterminals.
For a given CCFG Γ=(N,Σ,C,R,S), let Γ(X) for X∈N denote the grammar (N,Σ,C,R,X); that is, X is the start nonterminal, and say that X is normal if Γ(X) is normal. If all protocol nonterminals Y∈Np are treated as terminals in Γx, then Γx is assumed to be normal. It follows that for each X∈Nx, X is normal if all protocol nonterminals are treated as terminals. However, it is possible that some protocol nonterminal Y∈Np that is reachable from Sx is not normal. For example, Γp(Y) may define a feature, such as nesting of balanced opening and closing parentheses, that requires unlimited memory to parse precisely. Therefore, the FlowSifter framework combines Γx with Γp to produce a CRG.
Determining whether a context-free grammar describes a regular language is undecidable. Thus, normal nonterminals cannot be identified precisely. FlowSifter instead identifies nonterminals in Np that are guaranteed to be normal using a sufficient but not necessary condition.
Once FlowSifter has identified each nonterminal as normal or not, the normal nonterminals are normalized, whereas the non-normal nonterminals are approximated. Since the identification process is conservative rather than exact, a normal nonterminal may be misidentified as non-normal. Fortunately, the cost of such a mistake is relatively low: only one counter in memory and some unnecessary predicate checks.
Normalization replaces a normal nonterminal's rules with a collection of equivalent regular rules. The basic idea behind normalization is to use standard decomposition techniques to turn nonregular rules into a collection of equivalent regular rules. Consider an arbitrary nonregular rule
guard:X→body.
First express the body as Y1 … Yn, where each Yi, 1≤i≤n, is either a terminal (possibly with an action) or a nonterminal. Because this is a nonregular rule, either Y1 is a nonterminal or n>2 (or both). These cases are handled with standard decomposition; for example, when n>2, a fresh nonterminal is introduced to carry the remainder of the body, splitting the rule into regular pieces.
Counting approximation is used to produce regular rules for L7 protocol structures that are not normal. The basic idea is to parse only the start and end terminals for Γ(X), ignoring any other parsing information contained within this subgrammar. By using the counters to track nesting depth, we can approximate the parsing stack for nonterminals in the protocol grammar. We only apply this to nonterminals from Np, so we do not affect extraction on grammatical streams.
Given a CCFG Γƒ with a nonterminal X∈Nƒ that is not identified as normal, FlowSifter computes a counting approximation of Γƒ(X) using two sets of terminals, denoted start and stop. These are the terminals that mark the start and end of a string that can be produced by Γƒ(X). The remaining terminals are denoted other. For example, in the Dyck extraction grammar Γxd(S), start and stop are {'['} and {']'}, respectively, and other has no elements. FlowSifter replaces all rules with head X with the following rules, which use a new counter cnt:
(cnt=0):X→ε
(cnt≥0):X→start(cnt:=cnt+1)X
(cnt>0):X→stop(cnt:=cnt−1)X
(cnt>0):X→other X
The first rule allows exiting X when the recursion level is zero. The second and third rules increment and decrement the recursion level when matching start and stop terminals, respectively. The final production rule consumes the other terminals, approximating the grammar while cnt>0.
For example, if we apply counting approximation to the nonterminal S from the Dyck extraction grammar Γxd above, the resulting new production rules are as follows.
(cnt=0):S→ε
(cnt≥0):S→'['(cnt:=cnt+1)S
(cnt>0):S→']'(cnt:=cnt−1)S
Counting approximation can be applied to any subgrammar Γƒ(X) with unambiguous starting and stopping terminals. Ignoring all parsing information other than the nesting depth of start and end terminals in the flow leads to potentially faster flow processing and a fixed memory cost. Most importantly, the errors introduced do not interfere with field extraction because extraction specification nonterminals are never approximated.
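By way of a non-limiting illustration, the effect of these approximation rules on a Dyck-style input can be simulated as follows. The OCaml sketch assumes the input begins at a start terminal and is illustrative only.

```ocaml
(* Counting approximation for the Dyck case: track only nesting depth
   in the counter cnt, and exit when the depth returns to zero. *)
let skip_balanced (s : string) (start : int) : int =
  (* assumes s.[start] is the start terminal '[' *)
  let cnt = ref 0 and i = ref start in
  let finished = ref false in
  while not !finished do
    (match s.[!i] with
     | '[' -> incr cnt                  (* (cnt>=0): X -> '[' (cnt:=cnt+1) X *)
     | ']' -> decr cnt                  (* (cnt>0):  X -> ']' (cnt:=cnt-1) X *)
     | _   -> ());                      (* other terminals leave cnt alone   *)
    incr i;
    if !cnt = 0 then finished := true   (* (cnt=0): X -> epsilon             *)
  done;
  !i

let () = Printf.printf "%d\n" (skip_balanced "[[x]][]" 0)
(* prints 5, the offset just past the first balanced span "[[x]]" *)
```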
The final step in producing a CRG is to remove any rules without terminal symbols. This guarantees that every derivation consumes input. Such rules are called idle rules, and they have the form X→Y without any terminal α. Idle rules are eliminated by hoisting the contents of Y into X. We must also compose the actions and predicates. For a CRG with n counters, to compose a rule
(q1, . . . , qn):Y→α(act)Z(g1, . . . , gn)
into the idle rule
(p1, . . . , pn):X→Y(ƒ1, . . . , ƒn),
we create a new rule
(p1′, . . . , pn′):X→α(act)Z(ƒ1′, . . . , ƒn′)
where pi′=pi∧qi and ƒi′=ƒi∘gi for 1≤i≤n. That is, the actions associated with Y in X's rule are composed into Z's actions, and the per-counter predicates are merged by conjunction.
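By way of a non-limiting illustration, if each per-counter guard and action is represented as a function, the merge described above is a pointwise conjunction and a pointwise function composition, as in the following OCaml sketch (names illustrative):

```ocaml
type pred = int -> bool      (* unary predicate on one counter *)
type upd  = int -> int       (* unary update of one counter *)

(* Merge the idle rule's guards p and actions f with the hoisted rule's
   guards q and actions g: p'_i = p_i AND q_i, f'_i = f_i o g_i
   (g_i runs first, matching the derivation order). *)
let compose_rule (p : pred array) (f : upd array)
                 (q : pred array) (g : upd array) : pred array * upd array =
  let p' = Array.init (Array.length p) (fun i c -> p.(i) c && q.(i) c) in
  let f' = Array.init (Array.length f) (fun i c -> f.(i) (g.(i) c)) in
  (p', f')
```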
The automata generator module 16 takes an optimized extraction grammar as its input and generates an equivalent counting automaton, which serves as the data structure (or, one might say, the "configuration") of the field extractor module 18. Counting automata (CA) allow efficient use of CRGs in online field extraction by leveraging deterministic finite state automata (DFAs) for matching flow data. Much work has been done on efficient implementation of DFAs on network and security devices. This work is built upon by using regular expressions as the terminal symbols in our CCFGs and CRGs. This implies that each transition in the resulting CA uses its own DFA to process the flow payload, determine the next CA state, and update the CA counters.
First, define a DFA with labeled decisions. A Labeled DFA is a 5-tuple
DFA(Σ,D)=(Q,Σ,δ,q0,DF)
where Q is a set of states, Σ is an alphabet, q0 is the initial state, δ:Q×Σ→Q is the transition function, and DF:Q→D is a partial function assigning a subset of the states a decision from the decision set D. The notation DFA(Σ,D) denotes the set of DFAs over an alphabet Σ and a decision set D. A Counting Automaton (CA) is a 6-tuple
(Q,Σ,C,δ,q0,c0)
where Q is a set of states, Σ is an alphabet, C is a set of possible counter configurations, q0 is the initial state, and c0 is the initial counter configuration. The transition function is δ:Q×C→DFA(Σ,Q×(C→C)); that is, given the current state and counter configuration, δ yields a DFA whose decisions each specify the next CA state qj∈Q and an action function act that updates the counter configuration.
FlowSifter generates a CA (Q,Σ,C,δ,q0,c0) from a CRG Γ=(N,Σg,Cg,R,S) as follows. Some components of the grammar are directly inherited by the CA. The states of the CA are exactly the set of nonterminals of the CRG, and the initial state is the start nonterminal, so Q=N and q0=S. The CA also works over the same alphabet as the grammar, so Σ=Σg. For the set of possible counter configurations C, assume each counter from Cg has some maximum value, typically 2^sizeof(int)−1. The size of each counter could be reduced to reduce the final parsing state size of the CA. Formally, C is the set of tuples (c1, c2, . . . , c|Cg|) of counter values, one per counter in Cg.
To apply a CA to a flow, first identify the DFA dfa0=δ(q0,c0) and run this DFA on the flow until it returns a decision (q1,act0). If the DFA does not return any decision, the flow does not match the grammar, and processing is stopped. The action function act0 is applied to get the new counter configuration c1=act0(c0). Then the appropriate DFA dfa1=δ(q1,c1) is identified, and it resumes processing the flow. The CA continues in this fashion, alternating between CA states, where counters are updated and predicates computed, and DFA states, where flow input is consumed, until the entire flow is processed. The parsing state of the CA consists of a DFA state, a counter configuration, plus some flow state variables such as the flow offset at which the next DFA should start.
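By way of a non-limiting illustration, this alternation between DFA execution and counter updates can be sketched as the following OCaml loop. The types and names are illustrative simplifications; a real implementation would operate incrementally on packets rather than on a complete string.

```ocaml
type counters = int array
type state = int
type decision = state * (counters -> counters)   (* (next CA state, act) *)
type dfa = string -> int -> (decision * int) option
  (* runs on the flow from an offset; returns a decision and the new
     offset, or None if the flow does not match the grammar *)

let run_ca (delta : state -> counters -> dfa)
           (q0 : state) (c0 : counters) (flow : string) : unit =
  let q = ref q0 and c = ref c0 and off = ref 0 in
  let ok = ref true in
  while !ok && !off < String.length flow do
    (match delta !q !c flow !off with
     | None -> ok := false                   (* flow rejected; stop *)
     | Some ((q', act), off') ->
         q := q'; c := act !c; off := off')  (* update counters, next DFA *)
  done
```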
A CA reports extraction events by having its actions call application processing functions which are defined in the extraction specification. The CA waits for a return value from the called application processing function so it can complete updating the counters before it continues processing the input flow. In many cases, the application processing function never needs to return an actual value to the CA, so it can immediately return a null value so that the CA can immediately resume processing the input flow.
The description above has assumed that only one regular expression from the applicable rules will match the flow data at a time. However, multiple regular expressions may match the same flow data and have different actions. This is addressed by assigning priorities to the different rules and taking these priorities into account when constructing the DFA that corresponds to δ(X,c). For example, two overlapping rules are used as part of our protocol specification for processing HTTP headers. We give the first rule higher priority, which allows us to easily match the special case of TOKEN where the header name is "Content-Length" and perform a different action.
To call an application processing function, we need to give it the positions of the extracted field within the stream. Thus, we need a pos() function that returns the current position in the input flow, and the actions need access to such functions and flow state variables in the parsing state so that they can put the offset of the start of the field into a counter.
One optimization we have implemented is to allow actions to modify the flow offset in the parsing state. Specifically, if we are processing an HTTP flow and do not need to parse within the body, the actions increase the offset by the value in the HTTP Content-Length field rather than using the DFA to parse through the body byte by byte.
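By way of a non-limiting illustration, this optimization amounts to an action that adds the stored length to the flow offset, as in the following OCaml sketch with illustrative field names:

```ocaml
(* Parsing-state fields are hypothetical; content_length holds the value
   previously captured from the Content-Length header. *)
type parse_state = { mutable offset : int; mutable content_length : int }

(* Action run once the headers are parsed: jump past the body instead of
   running the DFA over it byte by byte. *)
let skip_http_body (st : parse_state) : unit =
  st.offset <- st.offset + st.content_length
```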
Another optimization we have implemented eliminates some DFA-to-CA transitions. Suppose the optimized CRG has a nonterminal X with a single rule with no actions, such as X→/rx/Y. We can save the switch from DFA to CA at the end of /rx/ and the switch back to DFA at the beginning of Y by inlining Y into X, as in idle rule elimination, except that we prepend /rx/ to the beginning of each of Y's terminal symbols. We also perform this optimization when Y has a single rule and all of X's rules that end in Y have no actions. This increases the complexity of the DFAs for each nonterminal but improves parsing speed.
Field extractor performance is evaluated in three areas: speed, memory, and extractor definition complexity. Speed is important for keeping up with incoming packets at the line rate of the interface. Because memory bandwidth is limited and saving and loading extractor state to DRAM is necessary when parsing a large number of simultaneous flows, memory use is also a critical aspect of field extraction. Lastly, the complexity of writing field extractors is an important consideration, as it determines the rate at which new protocol field extractors can be deployed.
Tests are performed using two types of traces: HTTP and SOAP. We use HTTP traffic in our comparative tests because the majority of non-P2P traffic on the Internet is HTTP and because HTTP field extraction is critical for L7 load balancing. We use a SOAP-like protocol to demonstrate FlowSifter's ability to perform field extraction on flows with recursive structure. SOAP is a very common protocol for RPC in business applications, and SOAP is the successor of XML-RPC. Parsing SOAP at the firewall is important for detecting overflows of a particular parameter.
Our trace data takes the form of interleaved packets from multiple flows. In contrast, previous work has used traces that consist of pre-assembled complete flows. We use the interleaved packet format because it is impractical for a network device to pre-assemble each flow before passing it to the parser. Specifically, the memory costs of this pre-assembly would be very large, and the resulting delays in flow transmission would be unacceptably long.
Our HTTP packet data comes from the MIT Lincoln Labs (LL) DARPA intrusion detection data sets. This LL data set has 12 total weeks of data from 1998 and 1999. We obtained the HTTP packet data by pre-filtering for traffic on port 80, eliminating TCP retransmissions, and delaying out-of-order packets. Each day's traffic became one test case. We eliminated the unusually small traces (<25 MB) from our test data sets to improve timing accuracy. This left 45 test traces, with between 0.16 and 2.5 Gbit of data and between 27K and 566K packets per trace.
We generated 17 traces of SOAP-like flows by encapsulating a constructed SOAP body in a fixed HTTP and SOAP header and footer. Each trace is parametrized by n, the number of levels of recursive structure guaranteed to be built. We varied n from 0 to 16. The SOAP body was composed of nested tags: on level l, a child node was inserted with probability 0.8^max(0,l−n). After inserting a child node, the generator inserted a sibling node with probability 0.6*0.8^max(0,l−n). This produced a wide variety of recursive structures.
For each value of n, we generated 10 traces. For each trace, we generated 10,000 flows. These 10,000 artificial flows were turned into a multiplexed stream of packets, or trace, as follows. During each unit of virtual time, one new flow was added to the set of active flows. Then each active flow sent a random amount of its contents, with equal chances of sending 0, rand(50), rand(200), rand(1000), and 1000+rand(500) bytes. If the transmission amount for a flow exceeded its remaining content, that flow sent all remaining data and was then removed from the set of active flows. The data being sent in one unit of virtual time was then shuffled and accumulated into a virtual packet flow. On our 10,000 generated flows, this produced around 18K packets for n=0 and 106K packets for n=16.
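By way of a non-limiting illustration, the multiplexing procedure can be sketched as follows. The OCaml code mirrors the chunk-size distribution described above but omits the per-time-unit shuffling step, and all names are illustrative.

```ocaml
(* Equal chances of 0, rand(50), rand(200), rand(1000), 1000+rand(500). *)
let chunk_size () =
  match Random.int 5 with
  | 0 -> 0
  | 1 -> Random.int 50
  | 2 -> Random.int 200
  | 3 -> Random.int 1000
  | _ -> 1000 + Random.int 500

(* Each virtual time step admits one new flow, then every active flow
   emits a random chunk; drained flows leave the active set. *)
let multiplex (flows : string list) : (int * string) list =
  let pending = ref (List.mapi (fun id f -> (id, f)) flows) in
  let active = ref [] and out = ref [] in
  while !pending <> [] || !active <> [] do
    (match !pending with
     | f :: rest -> active := f :: !active; pending := rest
     | [] -> ());
    active :=
      List.filter_map
        (fun (id, rest) ->
          let n = min (chunk_size ()) (String.length rest) in
          if n > 0 then out := (id, String.sub rest 0 n) :: !out;
          let rest' = String.sub rest n (String.length rest - n) in
          if rest' = "" then None else Some (id, rest'))
        !active
  done;
  List.rev !out
```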
An exemplary FlowSifter implementation was written in Objective Caml (excluding DFA generation) and runs on a desktop PC running Linux 2.6.35 on an AMD Phenom X4 945 with 4 GB RAM. It generates the CA from protocol and extraction grammars and simulates it on trace payloads.
We constructed HTTP field extractors using FlowSifter, BinPAC from version 1.5.1 of Bro, and UltraPAC from NetShield's SVN r1928. The basic method for field extractor construction with all three systems is identical. First, a base parser is constructed from an HTTP protocol grammar. Next, a field extractor is constructed by compiling an extraction specification with the base parser. Each system provides its own method for melding a base parser with an extraction specification to construct a field extractor. We used UltraPAC's default HTTP field extractor, which extracts the following HTTP fields: method, URI, header name, and header value. We modified BinPAC's default HTTP field extractor to extract these same fields by adding extraction actions. FlowSifter's base HTTP parser was written from the HTTP protocol spec. We then wrote an extraction specification to extract these same HTTP fields.
For SOAP traffic, we can only test FlowSifter. We again wrote a base SOAP parser using a simplified SOAP protocol spec. We then made an extraction specification to extract some specific SOAP fields and formed the SOAP field extractor by compiling the extraction specification with the base SOAP parser. We attempted to develop field extractors for BinPAC and UltraPAC, but they seem incapable of easily parsing XML-style recursive structures. BinPAC assumes it can buffer enough flow data to be able to generate a parse node all at once. UltraPAC's Parsing State Machine cannot represent the recursive structure of the stack, so it would require generating the counting approximation by hand.
For any trace, there are two basic metrics for measuring a field extractor's performance: parsing speed and memory used. We define a third metric, efficiency, as the parsing speed divided by log10 of the memory needed. High efficiency indicates higher speed with less memory needed.
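By way of a non-limiting illustration, the efficiency metric is computed as follows (the units, Gbps and bytes, are assumptions):

```ocaml
(* efficiency = parsing speed / log10(memory needed) *)
let efficiency ~(speed_gbps : float) ~(memory_bytes : int) : float =
  speed_gbps /. log10 (float_of_int memory_bytes)
```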
We use the term speedup to indicate the ratio of FlowSifter's parsing speed on a trace divided by another field extractor's parsing speed on the same trace. We use the term memory compression to indicate the ratio of another parser's memory use on a trace divided by FlowSifter's memory use on the same trace. The average speedup or average memory compression of FlowSifter for a set of traces is the average of the speedups or memory compressions for each trace. Parser complexity is measured by comparing the definitions of the base HTTP protocol parsers. We could only compare the HTTP protocol parsers since we failed to construct SOAP field extractors for either BinPAC or UltraPAC.
We measure parsing speed as the number of bits parsed divided by the time spent parsing. We use Linux process counters to measure the user plus system time needed to parse a trace.
We measure the memory taken by a field extractor on a trace by measuring the memory use of the extractor before and right at the end of processing the given trace and taking the difference. BinPAC and UltraPAC use manual memory management, so we measure memory use by using tcmalloc's generic.current_allocated_bytes parameter. This allows us to precisely identify the exact amount of memory allocated to the extractor and not yet freed. Since FlowSifter runs in a garbage collected environment, its environment provides an equivalent measure of live heap data.
Empirical cumulative distribution functions (CDFs) were computed for all three field extractors' memory usage, parsing speed, and efficiency on the 45 Lincoln Labs traces.
FlowSifter parses the input faster than either BinPAC or UltraPAC. On average, FlowSifter runs 4.1 times faster than BinPAC and 1.84 times faster than UltraPAC.
FlowSifter's optimal DFA parsing speed is 1.8 Gbps, which we determined by running a simple DFA on a simple input flow.
However, the CA introduces two factors that can lead to slower parsing: evaluating expressions and context switching. In our current implementation, both predicates and actions are interpreted. A more efficient implementation could compile these so they run at the speed of the processor. Each CA transition also potentially leads to a new DFA that will process the next piece of the flow. FlowSifter suffers a context-switching cost with each such DFA change.
To test FlowSifter's approximation performance, we made a SOAP field extractor that extracts a single field two levels deep and then ran it on our 10 traces for each value of n ranging from 0 to 16. As expected, FlowSifter's SOAP field extractor had a slower parsing speed than FlowSifter's HTTP field extractor. There are two main reasons for the slowdown. First, there are fewer opportunities for selective parsing. For example, FlowSifter cannot skip any fields the way it skips the HTTP body. Second, as the recursion level increases, the number of CA transitions per DFA transition increases. This causes FlowSifter to check and modify counters more often, slowing execution.
FlowSifter's memory usage is consistently 344 bytes per flow. This is due to FlowSifter's use of a fixed-size array of counters to store almost all of the parsing state. BinPAC and UltraPAC use much more memory respectively averaging 5.5 KB and 2.7 KB per flow. This is mainly due to their buffering requirements, as they must parse an entire record at once. For HTTP traffic, this means an entire line must be buffered before they parse it. When matching a regular expression against flow content, if there is not enough flow to finish, they buffer additional content before trying to match again.
The final point of comparison is less scientific than the others, but is relevant for practical use of parser generators. The complexity of writing a base protocol parser for each of these systems can be approximated by the size of the parser file. We exclude comments and blank lines for this comparison, but even doing this, the results should be taken as a very rough estimate of complexity.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
This application claims the benefit of U.S. Provisional Application No. 61/365,079, filed on Jul. 16, 2010. The entire disclosure of the above application is incorporated herein by reference.