Rule parser

Information

  • Patent Grant
  • Patent Number: 8,656,039
  • Date Filed: June 8, 2004
  • Date Issued: February 18, 2014
Abstract
In one embodiment of the present invention, a rule compiler can compress a plurality of rules to be parsed over a block of data into one state table tree structure. In one embodiment of the present invention, rule parsing over the block of data includes selecting a unit of the block of data and indexing into a state table of the state table tree using the selected unit. The state table indexed into can be used for determining whether a decision regarding the block of data can be reached based on the indexed entry, and for selecting a next state table indicated by the indexed entry if the decision regarding the block of data cannot be reached.
Description
FIELD OF THE INVENTION

The present invention relates to computer technology, and in particular, to a rule parser.


BACKGROUND

Computer networks and systems have become indispensable tools for modern business. Modern enterprises use such networks for communications and for storage. The information and data stored on the network of a business enterprise is often a highly valuable asset. Modern enterprises use numerous tools to keep outsiders, intruders, and unauthorized personnel from accessing valuable information stored on the network. These tools include firewalls, intrusion detection systems, and packet sniffer devices. However, once an intruder has gained access to sensitive content, there is no network device that can prevent the electronic transmission of the content from the network to outside the network. Similarly, there is no network device that can analyze the data leaving the network to monitor for policy violations and make it possible to track down information leaks. What is needed is a comprehensive system to capture, store, and analyze all data communicated using the enterprise's network.


SUMMARY OF THE INVENTION

In one embodiment of the present invention, a rule compiler can compress a plurality of rules to be parsed over a block of data into one state table tree structure. In one embodiment of the present invention, rule parsing over the block of data includes selecting a unit of the block of data and indexing into a state table of the state table tree using the selected unit. The state table indexed into can be used for determining whether a decision regarding the block of data can be reached based on the indexed entry, and for selecting a next state table indicated by the indexed entry if the decision regarding the block of data cannot be reached.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements and in which:



FIG. 1 is a block diagram illustrating a computer network connected to the Internet;



FIG. 2 is a block diagram illustrating one configuration of a capture system according to one embodiment of the present invention;



FIG. 3 is a block diagram illustrating the capture system according to one embodiment of the present invention;



FIG. 4 is a block diagram illustrating an object assembly module according to one embodiment of the present invention;



FIG. 5 is a block diagram illustrating an object store module according to one embodiment of the present invention;



FIG. 6 is a block diagram illustrating an example hardware architecture for a capture system according to one embodiment of the present invention;



FIG. 7 is a block diagram illustrating a rule compiler and a capture filter according to one embodiment of the present invention;



FIG. 8 illustrates an example of tags being parsed using rules according to one embodiment of the present invention;



FIG. 9 illustrates a simplified example of rule compiling according to one embodiment of the present invention;



FIG. 10 illustrates an example state table entry according to one embodiment of the present invention; and



FIG. 11 is a flow diagram illustrating a rule parsing method according to one embodiment of the present invention.





DETAILED DESCRIPTION

Although the present system will be discussed with reference to various illustrated examples, these examples should not be read to limit the broader spirit and scope of the present invention. Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computer science arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated.


It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, it will be appreciated that throughout the description of the present invention, use of terms such as “processing”, “computing”, “calculating”, “determining”, “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


As indicated above, one embodiment of the present invention is instantiated in computer software, that is, computer readable instructions, which, when executed by one or more computer processors/systems, instruct the processors/systems to perform the designated actions. Such computer software may be resident in one or more computer readable media, such as hard drives, CD-ROMs, DVD-ROMs, read-only memory, read-write memory and so on. Such software may be distributed on one or more of these media, or may be made available for download across one or more computer networks (e.g., the Internet). Regardless of the format, the computer programming, rendering and processing techniques discussed herein are simply examples of the types of programming, rendering and processing techniques that may be used to implement aspects of the present invention. These examples should in no way limit the present invention, which is best understood with reference to the claims that follow this description.


Networks



FIG. 1 illustrates a simple prior art configuration of a local area network (LAN) 10 connected to the Internet 12. Connected to the LAN 10 are various components, such as servers 14, clients 16, and switch 18. There are numerous other known networking components and computing devices that can be connected to the LAN 10. The LAN 10 can be implemented using various wireline or wireless technologies, such as Ethernet and 802.11b. The LAN 10 may be much more complex than the simplified diagram in FIG. 1, and may be connected to other LANs as well.


In FIG. 1, the LAN 10 is connected to the Internet 12 via a router 20. This router 20 can be used to implement a firewall, which is widely used to give users of the LAN 10 secure access to the Internet 12 as well as to separate a company's public Web server (which can be one of the servers 14) from its internal network, i.e., LAN 10. In one embodiment, any data leaving the LAN 10 towards the Internet 12 must pass through the router 20. However, the router 20 merely forwards packets to the Internet 12. The router 20 cannot capture, analyze, and searchably store the content contained in the forwarded packets.


One embodiment of the present invention is now illustrated with reference to FIG. 2. FIG. 2 shows the same simplified configuration of connecting the LAN 10 to the Internet 12 via the router 20. However, in FIG. 2, the router 20 is also connected to a capture system 22. In one embodiment, the router 20 splits the outgoing data stream, and forwards one copy to the Internet 12 and the other copy to the capture system 22.


There are various other possible configurations. For example, the router 20 can also forward a copy of all incoming data to the capture system 22 as well. Furthermore, the capture system 22 can be configured sequentially in front of, or behind, the router 20; however, this makes the capture system 22 a critical component in connecting to the Internet 12. In systems where a router 20 is not used at all, the capture system can be interposed directly between the LAN 10 and the Internet 12. In one embodiment, the capture system 22 has a user interface accessible from a LAN-attached device, such as a client 16.


In one embodiment, the capture system 22 intercepts all data leaving the network. In other embodiments, the capture system can also intercept all data being communicated inside the network 10. In one embodiment, the capture system 22 reconstructs the documents leaving the network 10, and stores them in a searchable fashion. The capture system 22 can then be used to search and sort through all documents that have left the network 10. There are many reasons such documents may be of interest, including network security reasons, intellectual property concerns, corporate governance regulations, and other corporate policy concerns.


Capture System


One embodiment of the present invention is now described with reference to FIG. 3. FIG. 3 shows one embodiment of the capture system 22 in more detail. The capture system 22 includes a network interface module 24 to receive the data from the network 10 or the router 20. In one embodiment, the network interface module 24 is implemented using one or more network interface cards (NIC), e.g., Ethernet cards. In one embodiment, the router 20 delivers all data leaving the network to the network interface module 24.


The captured raw data is then passed to a packet capture module 26. In one embodiment, the packet capture module 26 extracts data packets from the data stream received from the network interface module 24. In one embodiment, the packet capture module 26 reconstructs Ethernet packets from multiple sources to multiple destinations from the raw data stream.


In one embodiment, the packets are then provided to the object assembly module 28. The object assembly module 28 reconstructs the objects being transmitted by the packets. For example, when a document is transmitted, e.g., as an email attachment, it is broken down into packets according to various data transfer protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP) and Ethernet. The object assembly module 28 can reconstruct the document from the captured packets.


One embodiment of the object assembly module 28 is now described in more detail with reference to FIG. 4. When packets first enter the object assembly module, they are first provided to a reassembler 36. In one embodiment, the reassembler 36 groups—assembles—the packets into unique flows. For example, a flow can be defined as packets with identical Source IP and Destination IP addresses as well as identical TCP Source and Destination Ports. That is, the reassembler 36 can organize a packet stream by sender and recipient.


In one embodiment, the reassembler 36 begins a new flow upon the observation of a starting packet defined by the data transfer protocol. For a TCP/IP embodiment, the starting packet is generally referred to as the "SYN" packet. The flow can terminate upon observation of a finishing packet, e.g., a "Reset" or "FIN" packet in TCP/IP. If no finishing packet is observed by the reassembler 36 within some time constraint, it can terminate the flow via a timeout mechanism. In an embodiment using the TCP protocol, a TCP flow contains an ordered sequence of packets that can be assembled into a contiguous data stream by the reassembler 36. Thus, in one embodiment, a flow is an ordered data stream of a single communication between a source and a destination.
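As a rough sketch of how such flow reassembly might be organized—assuming a Python implementation with simplified packet records, and treating the field names and the 60-second timeout as illustrative choices rather than details from the patent—the grouping by source/destination addresses and ports could look like this:

from collections import namedtuple

# Hypothetical simplified packet record; real packets carry full TCP/IP headers.
Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port flags payload ts")

FLOW_TIMEOUT = 60.0  # illustrative idle timeout, in seconds

class Reassembler:
    """Groups packets into flows keyed by (source IP, destination IP, source port, destination port)."""

    def __init__(self):
        self.flows = {}        # flow key -> list of payload fragments
        self.last_seen = {}    # flow key -> timestamp of the most recent packet

    def add_packet(self, pkt):
        key = (pkt.src_ip, pkt.dst_ip, pkt.src_port, pkt.dst_port)
        if "SYN" in pkt.flags or key not in self.flows:
            self.flows[key] = []              # a starting packet begins a new flow
        self.flows[key].append(pkt.payload)
        self.last_seen[key] = pkt.ts
        if "FIN" in pkt.flags or "RST" in pkt.flags:
            return self._finish(key)          # a finishing packet terminates the flow
        return None

    def _finish(self, key):
        data = b"".join(self.flows.pop(key))
        self.last_seen.pop(key, None)
        return key, data                      # the flow as one contiguous data stream

    def expire(self, now):
        """Terminate flows with no finishing packet via a timeout mechanism."""
        stale = [k for k, t in self.last_seen.items() if now - t > FLOW_TIMEOUT]
        return [self._finish(k) for k in stale]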


The flows assembled by the reassembler 36 can then be provided to a protocol demultiplexer (demux) 38. In one embodiment, the protocol demux 38 sorts assembled flows using the TCP Ports. This can include performing a speculative classification of the flow contents based on the association of well-known port numbers with specified protocols. For example, Web Hyper Text Transfer Protocol (HTTP) packets—i.e., Web traffic—are typically associated with port 80, File Transfer Protocol (FTP) packets with port 20, Kerberos authentication packets with port 88, and so on. Thus, in one embodiment, the protocol demux 38 separates all the different protocols in one flow.
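As a small illustration of this speculative, port-based sorting (the port assignments below are the standard well-known ones, but the table and function names are invented for this sketch):

# Well-known TCP ports used for speculative protocol classification.
PORT_TO_PROTOCOL = {
    80: "HTTP",        # Web traffic
    20: "FTP",         # File Transfer Protocol (data)
    88: "KERBEROS",    # Kerberos authentication
    25: "SMTP",        # email
}

def speculative_classify(src_port, dst_port):
    """Guess a flow's protocol from its TCP ports; the guess may later be overridden."""
    for port in (dst_port, src_port):
        if port in PORT_TO_PROTOCOL:
            return PORT_TO_PROTOCOL[port]
    return "UNKNOWN"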


In one embodiment, a protocol classifier 40 also sorts the flows in addition to the protocol demux 38. In one embodiment, the protocol classifier 40—operating either in parallel or in sequence with the protocol demux 38—applies signature filters to the flows to attempt to identify the protocol based solely on the transported data. Furthermore, the protocol demux 38 can make a classification decision based on port number that is subsequently overridden by the protocol classifier 40. For example, if an individual or program attempted to masquerade an illicit communication (such as file sharing) using an apparently benign port such as port 80 (commonly used for HTTP Web browsing), the protocol classifier 40 would use protocol signatures, i.e., the characteristic data sequences of defined protocols, to verify the speculative classification performed by the protocol demux 38.


In one embodiment, the object assembly module 28 outputs each flow organized by protocol; these flows represent the underlying objects. Referring again to FIG. 3, these objects can then be handed over to the object classification module 30 (sometimes also referred to as the "content classifier") for classification based on content. A classified flow may still contain multiple content objects depending on the protocol used. For example, protocols such as HTTP (Internet Web surfing) may contain over 100 objects of any number of content types in a single flow. To deconstruct the flow, each object contained in the flow is individually extracted and decoded, if necessary, by the object classification module 30.


The object classification module 30 uses the inherent properties and signatures of various documents to determine the content type of each object. For example, a Word document has a signature that is distinct from a PowerPoint document or an Email document. The object classification module 30 can extract each individual object and sort objects by such content types. Such classification renders the present invention immune to cases where a malicious user has altered a file extension or other property in an attempt to avoid detection of illicit activity.
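One common way to implement this kind of signature-based classification is to match the leading "magic" bytes of the object itself, so that a renamed file is still classified by its true format. A minimal sketch under that assumption (the signature table is a tiny illustrative subset, not the classifier of the capture system):

# Leading-byte signatures for a few common formats (illustrative subset only).
MAGIC_SIGNATURES = [
    (b"%PDF-", "PDF"),
    (b"\x89PNG\r\n\x1a\n", "PNG"),
    (b"\xff\xd8\xff", "JPEG"),
    (b"GIF8", "GIF"),
    (b"PK\x03\x04", "ZIP"),
    (b"\xd0\xcf\x11\xe0", "MSOffice"),   # legacy Word/Excel/PowerPoint container
]

def classify_content(data):
    """Determine content type from the object's own bytes, not its file extension."""
    for magic, content_type in MAGIC_SIGNATURES:
        if data.startswith(magic):
            return content_type
    if all(32 <= b < 127 or b in (9, 10, 13) for b in data[:512]):
        return "Plaintext"
    return "Binary Unknown"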


In one embodiment, the object classification module 30 determines whether each object should be stored or discarded. In one embodiment, this determination is based on various capture rules. For example, a capture rule can indicate that Web traffic should be discarded. Another capture rule can indicate that all PowerPoint documents should be stored, except for ones originating from the CEO's IP address. Such capture rules can be implemented as regular expressions, or by other similar means.
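For illustration, the two example rules above might be written as simple predicates over object metadata, or as a regular expression over a serialized description of the object; the field names, rule syntax, and CEO address below are assumptions made only for this sketch:

import re
import ipaddress

CEO_IP = ipaddress.ip_address("10.0.0.5")   # hypothetical address, for illustration only

def discard_web_traffic(obj):
    """Capture rule: Web traffic should be discarded."""
    return "DISCARD" if obj["protocol"] == "HTTP" else None

def store_powerpoint_except_ceo(obj):
    """Capture rule: store all PowerPoint documents, except ones from the CEO's IP address."""
    if obj["content_type"] == "PowerPoint" and ipaddress.ip_address(obj["source_ip"]) != CEO_IP:
        return "STORE"
    return None

# A rule can also be expressed as a regular expression over a serialized "field=value;" record.
WEB_TRAFFIC_RULE = re.compile(r"protocol=HTTP;")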


In one embodiment, the capture rules are authored by users of the capture system 22. The capture system 22 is made accessible to any network-connected machine through the network interface module 24 and user interface 34. In one embodiment, the user interface 34 is a graphical user interface providing the user with friendly access to the various features of the capture system 22. For example, the user interface 34 can provide a capture rule authoring tool that allows users to write and implement any capture rule desired; such rules are then applied by the object classification module 30 when determining whether each object should be stored. The user interface 34 can also provide pre-configured capture rules that the user can select from, along with an explanation of the operation of such standard included capture rules. In one embodiment, the default capture rule implemented by the object classification module 30 captures all objects leaving the network 10.


If the capture of an object is mandated by the capture rules, the object classification module 30 can also determine where in the object store module 32 the captured object should be stored. With reference to FIG. 5, in one embodiment, the objects are stored in a content store 44 memory block. Within the content store 44 are files 46 divided up by content type. Thus, for example, if the object classification module determines that an object is a Word document that should be stored, it can store it in the file 46 reserved for Word documents. In one embodiment, the object store module 32 is integrally included in the capture system 22. In other embodiments, the object store module can be external—entirely or in part—using, for example, some network storage technique such as network attached storage (NAS) and storage area network (SAN).


Tag Data Structure


In one embodiment, the content store is a canonical storage location, simply a place to deposit the captured objects. The indexing of the objects stored in the content store 44 is accomplished using a tag database 42. In one embodiment, the tag database 42 is a database data structure in which each record is a “tag” that indexes an object in the content store 44 and contains relevant information about the stored object. An example of a tag record in the tag database 42 that indexes an object stored in the content store 44 is set forth in Table 1:










TABLE 1

Field Name         Definition

MAC Address        Ethernet controller MAC address unique to each capture system
Source IP          Source Ethernet IP Address of object
Destination IP     Destination Ethernet IP Address of object
Source Port        Source TCP/IP Port number of object
Destination Port   Destination TCP/IP Port number of the object
Protocol           IP Protocol that carried the object
Instance           Canonical count identifying object within a protocol capable of carrying multiple data within a single TCP/IP connection
Content            Content type of the object
Encoding           Encoding used by the protocol carrying object
Size               Size of object
Timestamp          Time that the object was captured
Owner              User requesting the capture of object (rule author)
Configuration      Capture rule directing the capture of object
Signature          Hash signature of object
Tag Signature      Hash signature of all preceding tag fields

There are various other possible tag fields, and some embodiments can omit numerous tag fields listed in Table 1. In other embodiments, the tag database 42 need not be implemented as a database, and a tag need not be a record. Any data structure capable of indexing an object by storing relational data over the object can be used as a tag data structure. Furthermore, the word "tag" is merely descriptive; other names, such as "index" or "relational data store," would be equally descriptive, as would any other designation performing similar functionality.


The mapping of tags to objects can, in one embodiment, be obtained by using unique combinations of tag fields to construct an object's name. For example, one such possible combination is an ordered list of the Source IP, Destination IP, Source Port, Destination Port, Instance and Timestamp. Many other such combinations including both shorter and longer names are possible. In another embodiment, the tag can contain a pointer to the storage location where the indexed object is stored.
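A sketch of how a tag record and this name-construction scheme might look in code (the field names mirror Table 1 above; the dataclass representation and the separator used in the constructed name are assumptions of this sketch):

from dataclasses import dataclass

@dataclass
class Tag:
    """One record of the tag database, indexing a captured object (fields per Table 1)."""
    mac_address: str
    source_ip: str
    destination_ip: str
    source_port: int
    destination_port: int
    protocol: str
    instance: int
    content: str
    encoding: str
    size: int
    timestamp: float
    owner: str
    configuration: str
    signature: str
    tag_signature: str

def object_name(tag):
    """Construct a unique object name from an ordered combination of tag fields."""
    return "_".join(str(f) for f in (
        tag.source_ip, tag.destination_ip,
        tag.source_port, tag.destination_port,
        tag.instance, tag.timestamp,
    ))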


The tag fields shown in Table 1 can be expressed more generally, to emphasize the underlying information indicated by the tag fields in various embodiments. Some of these possible generic tag fields are set forth in Table 2:










TABLE 2

Field Name            Definition

Device Identity       Identifier of capture device
Source Address        Origination Address of object
Destination Address   Destination Address of object
Source Port           Origination Port of object
Destination Port      Destination Port of the object
Protocol              Protocol that carried the object
Instance              Canonical count identifying object within a protocol capable of carrying multiple data within a single connection
Content               Content type of the object
Encoding              Encoding used by the protocol carrying object
Size                  Size of object
Timestamp             Time that the object was captured
Owner                 User requesting the capture of object (rule author)
Configuration         Capture rule directing the capture of object
Signature             Signature of object
Tag Signature         Signature of all preceding tag fields


For many of the above tag fields in Tables 1 and 2, the definition adequately describes the relational data contained by each field. For the content field, the types of content that the object can be labeled as are numerous. Some example choices for content types (as determined, in one embodiment, by the object classification module 30) are JPEG, GIF, BMP, TIFF, PNG (for objects containing images in these various formats); Skintone (for objects containing images exposing human skin); PDF, MSWord, Excel, PowerPoint, MSOffice (for objects in these popular application formats); HTML, WebMail, SMTP, FTP (for objects captured in these transmission formats); Telnet, Rlogin, Chat (for communication conducted using these methods); GZIP, ZIP, TAR (for archives or collections of other objects); C++ Source, C Source, FORTRAN Source, Verilog Source (for source or design code authored in these high-level programming languages); C Shell, K Shell, Bash Shell (for shell program scripts); Plaintext (for otherwise unclassified textual objects); Crypto (for objects that have been encrypted or that contain cryptographic elements); Binary Unknown, ASCII Unknown, and Unknown (as catchall categories).


The signature contained in the Signature and Tag Signature fields can be any digest or hash over the object, or some portion thereof. In one embodiment, a well-known hash, such as MD5 or SHA1, can be used. In one embodiment, the signature is a digital cryptographic signature. In one embodiment, a digital cryptographic signature is a hash signature that is signed with the private key of the capture system 22. Only the capture system 22 knows its own private key; thus, the integrity of the stored object can be verified by comparing a hash of the stored object to the signature decrypted with the public key of the capture system 22, the private and public keys being a public key cryptosystem key pair. Thus, if a stored object is modified from when it was originally captured, the modification will cause the comparison to fail.
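A minimal sketch of the hash-based integrity check, using Python's standard hashlib (the private-key signing step of the digital-signature embodiment is only noted in a comment, since it would require a full public-key cryptosystem):

import hashlib

def object_signature(data):
    """Hash signature over a captured object (SHA1 shown; MD5 could be used instead)."""
    return hashlib.sha1(data).hexdigest()

def verify_object(stored_data, recorded_signature):
    """Re-hash the stored object and compare it against the signature recorded in its tag.

    In the digital-signature embodiment, recorded_signature would instead be this hash
    signed with the capture system's private key, and verification would first decrypt
    it with the system's public key before the comparison.
    """
    return object_signature(stored_data) == recorded_signature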


Similarly, the signature over the tag stored in the Tag Signature field can also be a digital cryptographic signature. In such an embodiment, the integrity of the tag can also be verified. In one embodiment, verification of the object using the signature, and the tag using the tag signature is performed whenever an object is presented, e.g., displayed to a user. In one embodiment, if the object or the tag is found to have been compromised, an alarm is generated to alert the user that the object displayed may not be identical to the object originally captured.


Rule Parser


As described above, in one embodiment, the object classification module 30 determines whether each captured object/document should be stored. In one embodiment, this determination is based on capture rules provided by a user (or pre-configured into the system). In one embodiment, the capture rules can specify which captured objects should be stored based on the information collected in the tag associated with the object, such as content type, source IP, and so on. Thus, in one embodiment, the capture system 22 includes—e.g., in the object classification module 30, or as an independent module—a capture filter 60 configured to make a determination about what to do with each captured object.


One embodiment of the capture filter 60 is now described with reference to FIG. 7. In one embodiment, the capture filter 60 receives as input a tag—e.g., such as a tag described with reference to Tables 1 and 2—associated with a captured object. The tag is provided to a rule parser 62 that parses all the capture rules over the tag to see if it satisfies any of the capture rules.


The rule parser 62 provides the capture filter 60 with a decision based on which, if any, rules applied to the tag. The decision can be any decision supported by the system 22, such as store the object, discard the object, log the object, and so on. The capture filter 60 then directs further processing of the captured object and its tag based on the decision. For example, if a capture rule indicating that an object should be stored is hit by the tag, then the capture filter 60 will cause the object and its tag to be stored in the object store module 32.



FIG. 8 provides a simplified illustration of the operation of the rule parser 62. The rule parser 62 applies rules 70 to tags 68 associated with objects. For example, given the rules 70 in FIG. 8, the object described by tag 68(a) from Bob will be kept because it hits Rule 1, the object described by tag 68(b) from Bill will be kept because it hits Rule 2, and the object described by tag 68(c) from Bob will be kept or dropped depending on which rule has precedence. If either Rule 1 or 2 has precedence over Rule 3, then the object will be kept; otherwise, it will be dropped. The illustration in FIG. 8 demonstrates that, in one embodiment, rules have a precedence order, and rules are not orthogonal, i.e., one tag can hit more than one rule.
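The precedence behavior of FIG. 8 can be sketched as a list of rules evaluated in precedence order, where the first rule hit decides the outcome; the specific predicates below are invented for illustration, since the figure does not spell out the conditions of Rules 1-3:

# Rules in precedence order: the first rule a tag hits determines the decision.
RULES = [
    ("Rule 1", lambda tag: tag["sender"] == "Bob" and tag["content"] == "Word", "KEEP"),
    ("Rule 2", lambda tag: tag["sender"] == "Bill", "KEEP"),
    ("Rule 3", lambda tag: tag["content"] == "PowerPoint", "DROP"),
]

def decide(tag):
    """Return the name and decision of the highest-precedence rule hit by the tag."""
    for name, predicate, decision in RULES:
        if predicate(tag):
            return name, decision
    return None, "DEFAULT"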


Referring again to FIG. 7, in one embodiment the rule parser 62 parses all capture rules by traversing a state table tree one time. In one embodiment, the state table tree is generated by a rule compiler 64 that compiles the capture rules authored by the system users into the state table tree. In one embodiment, the state table tree is a data structure in the form of a tree whose nodes are state tables indicating the state of the parsing. Other appropriate data structures may be used.


A simplified illustration of how the rule compiler 64 can translate a rule into a state table chain and compress a plurality of rules into a state table tree is now provided with reference to FIG. 9. The tag 68 shown in FIG. 9 is a six-digit number, with digits ranging from 0-9. There are three rules, each defined by a pattern that is hit if the tag satisfies the pattern. While simplified, this approach is directly analogous to finding patterns in the tag fields of the capture system 22, or to any similar pattern-matching scheme.


Rule 1 is hit if the first three (from left to right) digits of the tag 68 are 123. Similarly, Rule 2 is hit if the second digit is 4, the third digit is between 5-9, and the fifth digit is 7. Each rule is expressed as a chain of state tables 72, referred to as a state table chain 74. The state tables are used by reading digits from left to right from the tag, and using the digits to index into the state tables. In one real world embodiment, the tag is read on a per byte basis, making each state table have 256 rows, each having an 8-bit index.
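To make the digit example concrete, here is a rough sketch of a single state table chain, built for Rule 1 (hit when the first three digits are 1, 2, 3); the 10-entry tables stand in for the 256-row, byte-indexed tables mentioned above, and the representation is an assumption of this sketch:

HIT, MISS = "HIT", "MISS"

def make_table(accepted_digit, next_state):
    """Build a 10-entry state table: the accepted digit leads to next_state, any other digit is a MISS."""
    return {d: (next_state if d == accepted_digit else MISS) for d in range(10)}

# Rule 1: hit when the first three digits of the tag are 1, 2, 3.
table_3 = make_table(3, HIT)        # reading a 3 here completes the rule
table_2 = make_table(2, table_3)    # a 2 here moves on to table_3
table_1 = make_table(1, table_2)    # a 1 here moves on to table_2

def parse_rule1(tag_digits):
    state = table_1
    for digit in tag_digits:
        state = state[digit]
        if state in (HIT, MISS):    # a DONE entry: the decision has been reached
            return state
    return MISS

For example, parse_rule1([1, 2, 3, 4, 5, 6]) returns HIT, while parse_rule1([1, 9, 3, 4, 5, 6]) returns MISS as soon as the second digit is read.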


In the example in FIG. 9, following the state table chain for each rule will always result in either a HIT or MISS determination for any possible tag. The entries of the state tables either indicate a DONE condition that shows whether the rule was hit or missed by the tag 68, or they indicate the next table to be used. An entry can also indicate whether the reading of the digits should skip ahead (or backwards) when indexing into the next table.


In one embodiment, the rule compiler 64 generates the state table tree 76 by compressing a plurality of state table chains 74, as illustrated in FIG. 9. Traversing the state table tree 76 in FIG. 9 parses the tag 68 for all three rules simultaneously. The demonstration in FIG. 9 is highly simplified. A real world tag may be much larger than the example tag 68. For example, a tag as shown in Table 1 will generally be between 64 and 264 bytes in size. Parsing such a tag on a per-byte basis would involve much larger state tables, longer state table chains, and a more complicated state table tree. However, the underlying concepts would be similar to those described with reference to FIG. 9.


Since the state table tree 76 shown in FIG. 9 is a simplified example, it can easily be collapsed into a single-branched tree. However, the state table tree may have more than one branch. In one embodiment, each branch is at most as long as the longest state table chain used to construct the tree. Since tree traversal speed is determined by branch length, such a tree can still traverse all rules within a predictable and fast time. How compressed the state table tree of a real-world embodiment is depends on a tradeoff between the memory available to store the tree and the speed required to edit the tree (e.g., when the user authors new rules and/or deletes old rules). The tradeoff is that the more compressed the state table tree is, the less memory it uses, but the more time it takes to edit.


In one embodiment, rules can be inserted, deleted, or edited at any time. This can be done by de-compiling the state table tree (or relevant portions of the state table tree), making the appropriate changes, and re-compiling the tree. For example, in one embodiment, if Rule 2 in FIG. 9 were to be edited, the state table chain 74 for Rule 2 is extracted out of the tree 76 and edited, and then the tree 76 is re-compiled.


Certain rules can be edited, inserted, or deleted without affecting the tree 76 to an extent that requires de-compiling and re-compiling. In the state table tree 76 shown in FIG. 9, state table 4:4 is a "leaf node," a node on the tree having no children. In a more complex real-world state table tree there may be many leaf nodes. Since leaf nodes have no children (i.e., do not affect further processing), if a new, edited, or deleted rule only affects a leaf node, then the edit can be implemented without de-compiling and re-compiling any parts of the tree. This results in a highly efficient method of inserting, deleting, and editing some rules.


One embodiment of an entry 78 for a state table 72 is now described with reference to FIG. 10. In one embodiment, the entry includes an index field 80. The index 80 is the value used to index into the state table 72. For example, if the tag were read byte by byte (8 bits), then the index 80 would be 8 bits long, ranging in decimal value from 0 to 255 (00000000 to 11111111 in binary).


In one embodiment, the entry 78 also includes a status indicator 82. The status indicator 82 provides information about the status of the rule matching. In one embodiment, there are three possible statuses being indicated: HIT, MISS, and NOT DONE. A HIT status indicates that a rule has been hit and the parsing is finished. A MISS status indicates that the tag cannot possibly hit any rules included in the state table tree, and the parsing is finished. A NOT DONE status indicates that no determination about HIT or MISS conditions can be made at the present time.


In one embodiment, the entry 78 includes a rule match indicator 84 that is accessed if the status indicator 82 shows a HIT condition. In one embodiment, the rule match indicator 84 identifies the rule that is hit (e.g., rule 3). In one embodiment, the rule is not identified by name, and the rule match indicator 84 contains the next point of program execution, which is determined by the rule hit.


Similarly, a MISS condition indicated by the status indicator 82 results in the accessing of the exit location indicator 86. In one embodiment, the exit location indicator 86 contains the next point of program execution, which is configured to take into account that none of the rules were hit. In another embodiment, program execution may continue from a single place after flags indicating the results of the parsing have been set.


In one embodiment, a NOT DONE condition indicates that the forward/reverse operator 88 should be accessed. The forward/reverse operator 88 indicates how many positions to go forwards or backwards before reading the next unit of the tag. The forward/reverse operator can be implemented as a number that can take positive or negative values indicating how many bytes (or other units of reading the tag) to skip and in what direction.


In one embodiment, the entry 78 also includes a next table location indicator 90 identifying the next state table of the state table tree to index into using the next byte of tag read.
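Collecting the fields of FIG. 10 together, a state table entry might be modeled roughly as follows (the field names mirror the description above; the concrete representation is an assumption of this sketch):

from dataclasses import dataclass
from typing import Optional

@dataclass
class StateTableEntry:
    index: int                            # value used to index into the table (e.g., one byte, 0-255)
    status: str                           # "HIT", "MISS", or "NOT DONE"
    rule_match: Optional[str] = None      # rule hit (or next point of execution) when status is HIT
    exit_location: Optional[str] = None   # point of execution to continue from on a MISS
    skip: int = 0                         # forward/reverse operator: units to skip before the next read
    next_table: Optional[int] = None      # identifier of the next state table when NOT DONE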


A simplified flow diagram for parsing a tag using the state table tree is now described with reference to FIG. 11. First, the initial (root) state table node of the tree is selected in block 1102, and the initial byte of the tag being parsed is selected in block 1104. Then, the selected state table is indexed into using the selected byte in block 1106, as described above.


In block 1108, a decision is made as to whether the indexed state table entry is indicating an exit. If yes, then, in block 1110, the decision reached is indicated. For example, a decision may be “Rule 2 hit,” or “Global Miss.” If an exit is not indicated, i.e., if rule parsing is not finished, then, in block 1112, the next state table node of the state table tree is selected, e.g., as indicated by the indexed entry.


In block 1114, the next byte of the tag is read. This could include performing a forward or backward skip, if one is indicated by the indexed entry, or it may include sequentially inputting the next byte if no jump is required. Then, the processing proceeds again from block 1106, using the newly selected state table and tag byte as inputs.
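Assuming the entry representation sketched above, the loop of FIG. 11 can be written almost directly from the blocks; this is a simplification for illustration, not the implementation of the capture system:

def parse_tag(tag_bytes, tables, root=0):
    """Traverse the state table tree over a tag, one byte at a time (FIG. 11)."""
    table_id = root                                      # block 1102: select the root state table
    position = 0                                         # block 1104: select the initial byte
    while True:
        entry = tables[table_id][tag_bytes[position]]    # block 1106: index into the table
        if entry.status in ("HIT", "MISS"):              # block 1108: exit indicated?
            return entry.status, entry.rule_match        # block 1110: report the decision reached
        table_id = entry.next_table                      # block 1112: select the next state table
        position += 1 + entry.skip                       # block 1114: read the next byte, applying any skip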


General Matters


In several embodiments, the capture system 22 has been described above as a stand-alone device. However, the capture system of the present invention can be implemented on any appliance capable of capturing and analyzing data from a network. For example, the capture system 22 described above could be implemented on one or more of the servers 14 or clients 16 shown in FIG. 1. The capture system 22 can interface with the network 10 in any number of ways, including wirelessly.


In one embodiment, the capture system 22 is an appliance constructed using commonly available computing equipment and storage systems capable of supporting the software requirements. In one embodiment, illustrated by FIG. 6, the hardware consists of a capture entity 46, a processing complex 48 made up of one or more processors, a memory complex 50 made up of one or more memory elements such as RAM and ROM, and a storage complex 52, such as a set of one or more hard drives or other digital or analog storage means. In another embodiment, the storage complex 52 is external to the capture system 22, as explained above. In one embodiment, the memory complex stores software consisting of an operating system for the capture system device 22, a capture program, a classification program, a database, a filestore, an analysis engine, and a graphical user interface.


Thus, a capture system, a rule parser, and a rule compiler have been described. The above-described rule parser and rule compiler can be implemented outside of a capture system, and can be used for any rule parsing or pattern recognition. The capture filter implementation described above is only one embodiment of the present invention.


In the foregoing description, various specific values were given names, such as "tag," and various specific modules, such as the "rule compiler" and "capture filter," have been described. However, these names are merely to describe and illustrate various aspects of the present invention, and in no way limit the scope of the present invention. Furthermore, various modules, such as the rule compiler 64 and the rule parser 62 in FIG. 7, can be implemented as software or hardware modules, or without dividing their functionalities into modules at all. The present invention is not limited to any modular architecture either in software or in hardware, whether described above or not.

Claims
  • 1. A method comprising: receiving a plurality of capture rules used to determine whether intercepted objects are to be stored; for each received rule, constructing a state table chain configured to parse a tag for the rule; generating a state table tree using the plurality of state table chains, the state table tree being configured to parse the tag for the plurality of capture rules; and intercepting packets being transmitted on a network, the packets associated with a document that includes the intercepted objects, wherein the document is captured based on a particular capture rule associated with the intercepted objects, and wherein the tag comprises a data structure containing meta-data associated with a particular intercepted object, and wherein the document is stored in response to traversing the state table tree to parse the tag and match the tag to the particular capture rule.
  • 2. The method of claim 1, wherein receiving the plurality of capture rules comprises a user inputting at least one of the plurality of capture rules via a user interface.
  • 3. The method of claim 1, wherein generating the state table tree comprises combining the plurality of state table chains to a configured tradeoff level, the tradeoff level indicating a tradeoff between memory usage and editing speed for the state table tree.
  • 4. The method of claim 1, further comprising receiving an edited version of one of the plurality of capture rules, and re-generating the state table tree in response to the received edited version.
  • 5. A method of rule parsing over a block of data comprising: selecting a unit of the block of data; indexing into a state table using the selected unit; determining whether a decision regarding the block of data can be reached based on the indexed entry; selecting a next state table indicated by the indexed entry if the decision regarding the block of data cannot be reached, wherein the block of data comprises a tag and each unit of the tag comprises a byte; and intercepting packets being transmitted on a network, the packets associated with a document that includes intercepted objects, wherein the document is captured based on a particular capture rule associated with the intercepted objects, and wherein the tag comprises a data structure containing meta-data associated with a particular intercepted object, and wherein the document is stored in response to traversing the state table to parse the tag and match the tag to the particular capture rule.
  • 6. The method of claim 5, further comprising selecting a next unit of the block of data if the decision regarding the block of data cannot be reached.
  • 7. The method of claim 6, further comprising iteratively repeating indexing into the next state table using the next unit, determining whether the decision regarding the block of data can be reached based on the indexed entry, and selecting another next state table indicated by the indexed entry if the decision regarding the block of data cannot be reached, until the decision regarding the block of data is reached.
  • 8. The method of claim 6, wherein selecting the next unit of the block of data is based on a forward/reverse operator indicated by the indexed entry.
  • 9. The method of claim 5, wherein the block of data comprises a tag and each unit of the tag comprises a byte.
  • 10. A capture device comprising: a user interface to enable a user to author a plurality of capture rules; a rule compiler to generate a state table tree, wherein a single traversal of the state table tree applies all of the plurality of capture rules to a tag containing meta-data over an intercepted object; and a rule parser to parse the capture rules by traversing the state table tree using the tag, the capture device being configured to intercept packets being transmitted on a network, the packets associated with a document that includes the intercepted objects, wherein the document is captured based on a particular capture rule associated with the intercepted objects, and wherein the tag comprises a data structure containing meta-data associated with a particular intercepted object, and wherein the document is stored in response to traversing the state table tree to parse the tag and match the tag to the particular capture rule.
  • 11. The capture device of claim 10, further comprising a capture filter to determine whether to store the intercepted object based on the decision, wherein the decision comprises a determination of one of the plurality of capture rules being hit by the tag.
  • 12. The capture device of claim 11, further comprising an object store module to store the intercepted object and the tag.
  • 13. The capture device of claim 12, wherein the object store module comprises a canonical content store to store the intercepted object and a tag database to store the tag.
  • 14. The capture device of claim 10, further comprising an object capture and classification module to populate the tag by reconstructing and classifying the captured object.
  • 15. The capture device of claim 10, wherein the user interface is configured to allow a user to edit the plurality of capture rules, delete one of the plurality of capture rules, and insert a new capture rule, and the rule compiler is configured to re-generate the state table tree in response to the user editing, deleting, or inserting a capture rule.
  • 16. The capture device of claim 10, wherein the rule compiler is configured to generate the state table tree by constructing a state table chain corresponding to each capture rule, and generating the state table tree by combining at least a part of each state table chain.
  • 17. A non-transitory machine-readable medium having stored thereon data representing instructions that, when executed by a processor, cause the processor to perform operations comprising: receiving a plurality of capture rules used to determine whether intercepted objects are to be stored; for each received rule, constructing a state table chain configured to parse a tag for the rule; generating a state table tree using the plurality of state table chains, the state table tree being configured to parse the tag for the plurality of capture rules; and intercepting packets being transmitted on a network, the packets associated with a document that includes the intercepted objects, wherein the document is captured based on a particular capture rule associated with the intercepted objects, and wherein the tag comprises a data structure containing meta-data associated with a particular intercepted object, and wherein the document is stored in response to traversing the state table tree to parse the tag and match the tag to the particular capture rule.
  • 18. The non-transitory machine-readable medium of claim 17, wherein receiving the plurality of capture rules comprises a user inputting at least one of the plurality of capture rules via a user interface.
  • 19. The non-transitory machine-readable medium of claim 17, wherein generating the state table tree comprises combining the plurality of state table chains to a configured tradeoff level, the tradeoff level indicating a tradeoff between memory usage and editing speed for the state table tree.
  • 20. The non-transitory machine-readable medium of claim 17, further comprising receiving an edited version of one of the plurality of capture rules, and re-generating the state table tree in response to the received edited version.
  • 21. A non-transitory machine-readable medium having stored thereon data representing instructions that, when executed by a processor, cause the processor to perform operations comprising: selecting a unit of a block of data; indexing into a state table using the selected unit; determining whether a decision regarding the block of data can be reached based on the indexed entry; selecting a next state table indicated by the indexed entry if the decision regarding the block of data cannot be reached, wherein the block of data comprises a tag and each unit of the tag comprises a byte; and intercepting packets being transmitted on a network, the packets associated with a document that includes intercepted objects, wherein the document is captured based on a particular capture rule associated with the intercepted objects, and wherein the tag comprises a data structure containing meta-data associated with a particular intercepted object, and wherein the document is stored in response to traversing the state table to parse the tag and match the tag to the particular capture rule.
  • 22. The non-transitory machine-readable medium of claim 21, wherein the instructions further cause the processor to select a next unit of the block of data if the decision regarding the block of data cannot be reached.
  • 23. The non-transitory machine-readable medium of claim 22, wherein the instructions further cause the processor to iteratively repeat indexing into the next state table using the next unit, determining whether the decision regarding the block of data can be reached based on the indexed entry, and selecting another next state table indicated by the indexed entry if the decision regarding the block of data cannot be reached, until the decision regarding the block of data is reached.
  • 24. The non-transitory machine-readable medium of claim 22, wherein selecting the next unit of the block of data is based on a forward/reverse operator indicated by the indexed entry.
PRIORITY AND RELATED APPLICATIONS

This patent application is related to, incorporates by reference, and claims the priority benefit of U.S. Provisional Application 60/528,642, entitled “METHOD AND APPARATUS FOR DYNAMIC RULE PARSER AND CAPTURE SYSTEM,” filed Dec. 10, 2003.

US Referenced Citations (389)
Number Name Date Kind
4286255 Siy Aug 1981 A
4710957 Bocci et al. Dec 1987 A
5249289 Thamm et al. Sep 1993 A
5465299 Matsumoto et al. Nov 1995 A
5479654 Squibb Dec 1995 A
5497489 Menne Mar 1996 A
5542090 Henderson et al. Jul 1996 A
5557747 Rogers et al. Sep 1996 A
5623652 Vora et al. Apr 1997 A
5768578 Kirk Jun 1998 A
5781629 Haber et al. Jul 1998 A
5787232 Greiner et al. Jul 1998 A
5794052 Harding Aug 1998 A
5813009 Johnson et al. Sep 1998 A
5873081 Harel Feb 1999 A
5924096 Draper et al. Jul 1999 A
5937422 Nelson et al. Aug 1999 A
5943670 Prager Aug 1999 A
5987610 Franczek et al. Nov 1999 A
5995111 Morioka et al. Nov 1999 A
6026411 Delp Feb 2000 A
6073142 Geiger et al. Jun 2000 A
6078953 Vaid et al. Jun 2000 A
6094531 Allison et al. Jul 2000 A
6108697 Raymond et al. Aug 2000 A
6122379 Barbir Sep 2000 A
6161102 Yanagihara et al. Dec 2000 A
6175867 Taghadoss Jan 2001 B1
6192472 Garay et al. Feb 2001 B1
6243091 Berstis Jun 2001 B1
6243720 Munter et al. Jun 2001 B1
6278992 Curtis et al. Aug 2001 B1
6292810 Richards Sep 2001 B1
6336186 Dyksterhouse et al. Jan 2002 B1
6343376 Saxe et al. Jan 2002 B1
6356885 Ross et al. Mar 2002 B2
6363488 Ginter et al. Mar 2002 B1
6389405 Oatman et al. May 2002 B1
6389419 Wong et al. May 2002 B1
6408294 Getchius et al. Jun 2002 B1
6408301 Patton et al. Jun 2002 B1
6411952 Bharat et al. Jun 2002 B1
6457017 Watkins et al. Sep 2002 B2
6460050 Pace et al. Oct 2002 B1
6493761 Baker et al. Dec 2002 B1
6499105 Yoshiura et al. Dec 2002 B1
6502091 Chundi et al. Dec 2002 B1
6515681 Knight Feb 2003 B1
6516320 Odom et al. Feb 2003 B1
6523026 Gillis Feb 2003 B1
6539024 Janoska et al. Mar 2003 B1
6556964 Haug et al. Apr 2003 B2
6556983 Altschuler et al. Apr 2003 B1
6571275 Dong et al. May 2003 B1
6584458 Millett et al. Jun 2003 B1
6598033 Ross et al. Jul 2003 B2
6629097 Keith Sep 2003 B1
6662176 Brunet et al. Dec 2003 B2
6665662 Kirkwood et al. Dec 2003 B1
6675159 Lin et al. Jan 2004 B1
6691209 O'Connell Feb 2004 B1
6754647 Tackett et al. Jun 2004 B1
6757646 Marchisio Jun 2004 B2
6771595 Gilbert et al. Aug 2004 B1
6772214 McClain et al. Aug 2004 B1
6785815 Serret-Avila et al. Aug 2004 B1
6804627 Marokhovsky et al. Oct 2004 B1
6820082 Cook et al. Nov 2004 B1
6857011 Reinke Feb 2005 B2
6937257 Dunlavey Aug 2005 B1
6950864 Tsuchiya Sep 2005 B1
6976053 Tripp et al. Dec 2005 B1
6978297 Piersol Dec 2005 B1
6978367 Hind et al. Dec 2005 B1
7007020 Chen et al. Feb 2006 B1
7020654 Najmi Mar 2006 B1
7020661 Cruanes et al. Mar 2006 B1
7062572 Hampton Jun 2006 B1
7062705 Kirkwood et al. Jun 2006 B1
7072967 Saulpaugh et al. Jul 2006 B1
7082443 Ashby Jul 2006 B1
7093288 Hydrie et al. Aug 2006 B1
7103607 Kirkwood et al. Sep 2006 B1
7130587 Hikokubo et al. Oct 2006 B2
7133400 Henderson et al. Nov 2006 B1
7139973 Kirkwood et al. Nov 2006 B1
7143109 Nagral et al. Nov 2006 B2
7158983 Willse et al. Jan 2007 B2
7185073 Gai et al. Feb 2007 B1
7185192 Kahn Feb 2007 B1
7194483 Mohan et al. Mar 2007 B1
7219134 Takeshima et al. May 2007 B2
7243120 Massey Jul 2007 B2
7246236 Stirbu Jul 2007 B2
7254562 Hsu et al. Aug 2007 B2
7254632 Zeira et al. Aug 2007 B2
7266845 Hypponen Sep 2007 B2
7272724 Tarbotton et al. Sep 2007 B2
7277957 Rowley et al. Oct 2007 B2
7290048 Barnett et al. Oct 2007 B1
7293067 Maki et al. Nov 2007 B1
7293238 Brook et al. Nov 2007 B1
7296011 Chaudhuri et al. Nov 2007 B2
7296070 Sweeney et al. Nov 2007 B2
7296088 Padmanabhan et al. Nov 2007 B1
7296232 Burdick et al. Nov 2007 B1
7299277 Moran et al. Nov 2007 B1
7373500 Ramelson et al. May 2008 B2
7424744 Wu et al. Sep 2008 B1
7426181 Feroz et al. Sep 2008 B1
7434058 Ahuja et al. Oct 2008 B2
7467202 Savchuk Dec 2008 B2
7477780 Boncyk et al. Jan 2009 B2
7483916 Lowe et al. Jan 2009 B2
7493659 Wu et al. Feb 2009 B1
7505463 Schuba et al. Mar 2009 B2
7506055 McClain et al. Mar 2009 B2
7506155 Stewart et al. Mar 2009 B1
7509677 Saurabh et al. Mar 2009 B2
7516492 Nisbet et al. Apr 2009 B1
7539683 Satoh et al. May 2009 B1
7551629 Chen et al. Jun 2009 B2
7577154 Yung et al. Aug 2009 B1
7581059 Gupta et al. Aug 2009 B2
7596571 Sifry Sep 2009 B2
7599844 King et al. Oct 2009 B2
7664083 Cermak et al. Feb 2010 B1
7685254 Pandya Mar 2010 B2
7730011 Deninger et al. Jun 2010 B1
7739080 Beck et al. Jun 2010 B1
7760730 Goldschmidt et al. Jul 2010 B2
7760769 Lovett et al. Jul 2010 B1
7774604 Lowe et al. Aug 2010 B2
7814327 Ahuja et al. Oct 2010 B2
7818326 Deninger et al. Oct 2010 B2
7844582 Arbilla et al. Nov 2010 B1
7849065 Kamani et al. Dec 2010 B2
7899828 de la Iglesia et al. Mar 2011 B2
7907608 Liu et al. Mar 2011 B2
7921072 Bohannon et al. Apr 2011 B2
7930540 Ahuja et al. Apr 2011 B2
7949849 Lowe et al. May 2011 B2
7958227 Ahuja et al. Jun 2011 B2
7962591 Deninger et al. Jun 2011 B2
7984175 de la Iglesia et al. Jul 2011 B2
7996373 Zoppas et al. Aug 2011 B1
8005863 de la Iglesia et al. Aug 2011 B2
8010689 Deninger et al. Aug 2011 B2
8055601 Pandya Nov 2011 B2
8166307 Ahuja et al. Apr 2012 B2
8176049 Deninger et al. May 2012 B2
8200026 Deninger et al. Jun 2012 B2
8205242 Liu et al. Jun 2012 B2
8271794 Lowe et al. Sep 2012 B2
8301635 de la Iglesia et al. Oct 2012 B2
8307007 de la Iglesia et al. Nov 2012 B2
8307206 Ahuja et al. Nov 2012 B2
8463800 Deninger et al. Jun 2013 B2
8473442 Deninger et al. Jun 2013 B1
8504537 de la Iglesia et al. Aug 2013 B2
20010013024 Takahashi et al. Aug 2001 A1
20010032310 Corella Oct 2001 A1
20010037324 Agrawal et al. Nov 2001 A1
20010046230 Rojas Nov 2001 A1
20020032677 Morgenthaler et al. Mar 2002 A1
20020032772 Olstad et al. Mar 2002 A1
20020046221 Wallace et al. Apr 2002 A1
20020052896 Streit et al. May 2002 A1
20020065956 Yagawa et al. May 2002 A1
20020078355 Samar Jun 2002 A1
20020091579 Yehia et al. Jul 2002 A1
20020103876 Chatani et al. Aug 2002 A1
20020107843 Biebesheimer et al. Aug 2002 A1
20020116124 Garin et al. Aug 2002 A1
20020126673 Dagli et al. Sep 2002 A1
20020128903 Kernahan Sep 2002 A1
20020129140 Peled et al. Sep 2002 A1
20020159447 Carey et al. Oct 2002 A1
20030009718 Wolfgang et al. Jan 2003 A1
20030028493 Tajima Feb 2003 A1
20030028774 Meka Feb 2003 A1
20030046369 Sim et al. Mar 2003 A1
20030053420 Duckett et al. Mar 2003 A1
20030055962 Freund et al. Mar 2003 A1
20030065571 Dutta Apr 2003 A1
20030084300 Koike May 2003 A1
20030084318 Schertz May 2003 A1
20030084326 Tarquini May 2003 A1
20030093678 Bowe et al. May 2003 A1
20030099243 Oh et al. May 2003 A1
20030105716 Sutton et al. Jun 2003 A1
20030105739 Essafi et al. Jun 2003 A1
20030105854 Thorsteinsson et al. Jun 2003 A1
20030131116 Jain et al. Jul 2003 A1
20030135612 Huntington Jul 2003 A1
20030167392 Fransdonk Sep 2003 A1
20030185220 Valenci Oct 2003 A1
20030196081 Savarda et al. Oct 2003 A1
20030204741 Schoen et al. Oct 2003 A1
20030221101 Micali Nov 2003 A1
20030225796 Matsubara Dec 2003 A1
20030225841 Song et al. Dec 2003 A1
20030231632 Haeberlen Dec 2003 A1
20030233411 Parry et al. Dec 2003 A1
20040001498 Chen et al. Jan 2004 A1
20040010484 Foulger et al. Jan 2004 A1
20040015579 Cooper et al. Jan 2004 A1
20040036716 Jordahl Feb 2004 A1
20040054779 Takeshima et al. Mar 2004 A1
20040059736 Willse et al. Mar 2004 A1
20040059920 Godwin Mar 2004 A1
20040071164 Baum Apr 2004 A1
20040111406 Udeshi et al. Jun 2004 A1
20040111678 Hara Jun 2004 A1
20040114518 McFaden et al. Jun 2004 A1
20040117414 Braun et al. Jun 2004 A1
20040120325 Ayres Jun 2004 A1
20040122863 Sidman Jun 2004 A1
20040122936 Mizelle et al. Jun 2004 A1
20040139120 Clark et al. Jul 2004 A1
20040181513 Henderson et al. Sep 2004 A1
20040181690 Rothermel et al. Sep 2004 A1
20040193594 Moore et al. Sep 2004 A1
20040194141 Sanders Sep 2004 A1
20040196970 Cole Oct 2004 A1
20040199595 Banister et al. Oct 2004 A1
20040205457 Bent et al. Oct 2004 A1
20040215612 Brody Oct 2004 A1
20040220944 Behrens et al. Nov 2004 A1
20040230572 Omoigui Nov 2004 A1
20040249781 Anderson Dec 2004 A1
20040267753 Hoche Dec 2004 A1
20050004911 Goldberg et al. Jan 2005 A1
20050021715 Dugatkin et al. Jan 2005 A1
20050021743 Fleig et al. Jan 2005 A1
20050022114 Shanahan et al. Jan 2005 A1
20050027881 Figueira et al. Feb 2005 A1
20050033726 Wu et al. Feb 2005 A1
20050033747 Wittkotter Feb 2005 A1
20050033803 Vleet et al. Feb 2005 A1
20050038788 Dettinger et al. Feb 2005 A1
20050038809 Abajian et al. Feb 2005 A1
20050044289 Hendel et al. Feb 2005 A1
20050050205 Gordy et al. Mar 2005 A1
20050055327 Agrawal et al. Mar 2005 A1
20050055399 Savchuk Mar 2005 A1
20050075103 Hikokubo et al. Apr 2005 A1
20050086252 Jones et al. Apr 2005 A1
20050091443 Hershkovich et al. Apr 2005 A1
20050091532 Moghe Apr 2005 A1
20050097441 Herbach et al. May 2005 A1
20050108244 Riise et al. May 2005 A1
20050114452 Prakash May 2005 A1
20050120006 Nye Jun 2005 A1
20050127171 Ahuja et al. Jun 2005 A1
20050128242 Suzuki Jun 2005 A1
20050131876 Ahuja et al. Jun 2005 A1
20050132046 de la Iglesia et al. Jun 2005 A1
20050132079 de la Iglesia et al. Jun 2005 A1
20050132197 Medlar Jun 2005 A1
20050132198 Ahuja et al. Jun 2005 A1
20050132297 Milic-Frayling et al. Jun 2005 A1
20050138110 Redlich et al. Jun 2005 A1
20050138242 Pope et al. Jun 2005 A1
20050138279 Somasundaram Jun 2005 A1
20050149494 Lindh et al. Jul 2005 A1
20050149504 Ratnaparkhi Jul 2005 A1
20050166066 Ahuja et al. Jul 2005 A1
20050177725 Lowe et al. Aug 2005 A1
20050180341 Nelson et al. Aug 2005 A1
20050182765 Liddy Aug 2005 A1
20050188218 Walmsley et al. Aug 2005 A1
20050203940 Farrar et al. Sep 2005 A1
20050204129 Sudia et al. Sep 2005 A1
20050228864 Robertson Oct 2005 A1
20050235153 Ikeda Oct 2005 A1
20050273614 Ahuja et al. Dec 2005 A1
20050289181 Deninger et al. Dec 2005 A1
20060005247 Zhang et al. Jan 2006 A1
20060021045 Cook Jan 2006 A1
20060021050 Cook et al. Jan 2006 A1
20060037072 Rao et al. Feb 2006 A1
20060041560 Forman et al. Feb 2006 A1
20060041570 Lowe et al. Feb 2006 A1
20060041760 Huang Feb 2006 A1
20060047675 Lowe et al. Mar 2006 A1
20060075228 Black et al. Apr 2006 A1
20060080130 Choksi Apr 2006 A1
20060083180 Baba et al. Apr 2006 A1
20060106793 Liang May 2006 A1
20060106866 Green et al. May 2006 A1
20060150249 Gassen et al. Jul 2006 A1
20060167896 Kapur et al. Jul 2006 A1
20060184532 Hamada et al. Aug 2006 A1
20060235811 Fairweather Oct 2006 A1
20060242126 Fitzhugh Oct 2006 A1
20060242313 Le et al. Oct 2006 A1
20060251109 Muller et al. Nov 2006 A1
20060253445 Huang et al. Nov 2006 A1
20060271506 Bohannon et al. Nov 2006 A1
20060272024 Huang et al. Nov 2006 A1
20060288216 Buhler et al. Dec 2006 A1
20070006293 Balakrishnan et al. Jan 2007 A1
20070011309 Brady et al. Jan 2007 A1
20070028039 Gupta et al. Feb 2007 A1
20070036156 Liu et al. Feb 2007 A1
20070039049 Kupferman et al. Feb 2007 A1
20070050334 Deninger et al. Mar 2007 A1
20070050381 Hu et al. Mar 2007 A1
20070050467 Borrett et al. Mar 2007 A1
20070081471 Talley et al. Apr 2007 A1
20070094394 Singh et al. Apr 2007 A1
20070106660 Stern et al. May 2007 A1
20070106685 Houh et al. May 2007 A1
20070106693 Houh et al. May 2007 A1
20070110089 Essafi et al. May 2007 A1
20070112837 Houh et al. May 2007 A1
20070112838 Bjarnestam et al. May 2007 A1
20070116366 Deninger et al. May 2007 A1
20070124384 Howell et al. May 2007 A1
20070136599 Suga Jun 2007 A1
20070139723 Beadle et al. Jun 2007 A1
20070140128 Klinker et al. Jun 2007 A1
20070143559 Yagawa Jun 2007 A1
20070162609 Pope et al. Jul 2007 A1
20070220607 Sprosts et al. Sep 2007 A1
20070226504 de la Iglesia et al. Sep 2007 A1
20070226510 de la Iglesia et al. Sep 2007 A1
20070248029 Merkey et al. Oct 2007 A1
20070271254 de la Iglesia et al. Nov 2007 A1
20070271371 Ahuja et al. Nov 2007 A1
20070271372 Deninger et al. Nov 2007 A1
20070280123 Atkins et al. Dec 2007 A1
20080027971 Statchuk Jan 2008 A1
20080028467 Kommareddy et al. Jan 2008 A1
20080030383 Cameron Feb 2008 A1
20080082497 Leblang et al. Apr 2008 A1
20080091408 Roulland et al. Apr 2008 A1
20080112411 Stafford et al. May 2008 A1
20080115125 Stafford et al. May 2008 A1
20080140657 Azvine et al. Jun 2008 A1
20080141117 King et al. Jun 2008 A1
20080159627 Sengamedu Jul 2008 A1
20080235163 Balasubramanian et al. Sep 2008 A1
20080263019 Harrison et al. Oct 2008 A1
20080270462 Thomsen Oct 2008 A1
20090070327 Loeser et al. Mar 2009 A1
20090070328 Loeser et al. Mar 2009 A1
20090070459 Cho et al. Mar 2009 A1
20090100055 Wang Apr 2009 A1
20090157659 Satoh et al. Jun 2009 A1
20090178110 Higuchi Jul 2009 A1
20090187568 Morin Jul 2009 A1
20090216752 Terui et al. Aug 2009 A1
20090222442 Houh et al. Sep 2009 A1
20090235150 Berry Sep 2009 A1
20090254532 Yang et al. Oct 2009 A1
20090288164 Adelstein et al. Nov 2009 A1
20090300709 Chen et al. Dec 2009 A1
20090326925 Crider et al. Dec 2009 A1
20100011016 Greene Jan 2010 A1
20100011410 Liu Jan 2010 A1
20100037324 Grant et al. Feb 2010 A1
20100088317 Bone et al. Apr 2010 A1
20100100551 Knauft et al. Apr 2010 A1
20100121853 de la Iglesia et al. May 2010 A1
20100174528 Oya et al. Jul 2010 A1
20100185622 Deninger et al. Jul 2010 A1
20100191732 Lowe et al. Jul 2010 A1
20100195909 Wasson et al. Aug 2010 A1
20100268959 Lowe et al. Oct 2010 A1
20100332502 Carmel et al. Dec 2010 A1
20110004599 Deninger et al. Jan 2011 A1
20110040552 Van Guilder et al. Feb 2011 A1
20110131199 Simon et al. Jun 2011 A1
20110149959 Liu et al. Jun 2011 A1
20110167212 Lowe et al. Jul 2011 A1
20110167265 Ahuja et al. Jul 2011 A1
20110196911 de la Iglesia et al. Aug 2011 A1
20110197284 Ahuja et al. Aug 2011 A1
20110208861 Deninger et al. Aug 2011 A1
20110219237 Ahuja et al. Sep 2011 A1
20110258197 de la Iglesia et al. Oct 2011 A1
20110276575 de la Iglesia et al. Nov 2011 A1
20110276709 Deninger et al. Nov 2011 A1
20120114119 Ahuja et al. May 2012 A1
20120179687 Liu Jul 2012 A1
20120180137 Liu Jul 2012 A1
20120191722 Deninger et al. Jul 2012 A1
Foreign Referenced Citations (3)
Number Date Country
2499806 Sep 2012 EP
WO 2004008310 Jan 2004 WO
WO 2012060892 May 2012 WO
Non-Patent Literature Citations (47)
Entry
Chapter 1. Introduction, "Computer Program Product for Analyzing Network Traffic," Ethereal, pp. 17-26, http://web.archive.org/web/20030315045117/www.ethereal.com/distribution/docs/user-guide, printed Mar. 12, 2009.
Microsoft Outlook, copyright 1995-2000, 2 pages.
Preneel, Bart, “Cryptographic Hash Functions”, Proceedings of the 3rd Symposium on State and Progress of Research in Cryptography, 1993, pp. 161-171.
U.S. Appl. No. 12/190,536, filed Aug. 12, 2008, entitled “Configuration Management for a Capture/Registration System,” Inventor(s) Jitendra B. Gaitonde et al.
U.S. Appl. No. 12/352,720, filed Jan. 13, 2009, entitled “System and Method for Concept Building,” Inventor(s) Ratinder Paul Singh Ahuja et al.
U.S. Appl. No. 12/354,688, filed Jan. 15, 2009, entitled “System and Method for Intelligent Term Grouping,” Inventor(s) Ratinder Paul Ahuja et al.
U.S. Appl. No. 12/358,399, filed Jan. 23, 2009, entitled “System and Method for Intelligent State Management,” Inventor(s) William Deninger et al.
U.S. Appl. No. 12/410,875, filed Mar. 25, 2009, entitled “System and Method for Data Mining and Security Policy Management,” Inventor(s) Ratinder Paul Singh Ahuja et al.
U.S. Appl. No. 12/410,905, filed Mar. 25, 2009, entitled “System and Method for Managing Data and Policies,” Inventor(s) Ratinder Paul Singh Ahuja et al.
U.S. Appl. No. 12/360,537, filed Jan. 27, 2009, entitled “Database for a Capture System,” Inventor(s) Rick Lowe et al.
U.S. Appl. No. 11/254,436, filed Oct. 19, 2005, entitled “Attributes of Captured Objects in a Capture System,” Inventor(s) William Deninger et al.
U.S. Appl. No. 12/472,150, filed May 26, 2009, entitled “Identifying Image Type in a Capture System,” Inventor(s) William Deninger et al.
U.S. Appl. No. 11/900,964, filed Sep. 14, 2007, entitled “System and Method for Indexing a Capture System,” Inventor(s) Ashok Doddapaneni et al.
U.S. Appl. No. 12/171,232, filed Jul. 10, 2008, entitled “System and Method for Data Mining and Security Policy Management,” Inventor(s) Weimin Liu et al.
U.S. Appl. No. 12/690,153, filed Jan. 20, 2010, entitled “Query Generation for a Capture System,” Inventor(s) Erik de la Iglesia, et al.
U.S. Appl. No. 12/751,876, filed Mar. 31, 2010, entitled “Attributes of Captured Objects in a Capture System,” Inventor(s) William Deninger, et al.
U.S. Appl. No. 12/829,220, filed Jul. 1, 2010, entitled “Verifying Captured Objects Before Presentation,” Inventor(s) Rick Lowe, et al.
U.S. Appl. No. 12/873,061, filed Aug. 31, 2010, entitled “Document Registration,” Inventor(s) Ratinder Paul Singh Ahuja, et al.
U.S. Appl. No. 12/873,860, filed Sep. 1, 2010, entitled “A System and Method for Word Indexing in a Capture System and Querying Thereof,” Inventor(s) William Deninger, et al.
U.S. Appl. No. 12/939,340, filed Nov. 3, 2010, entitled “System and Method for Protecting Specified Data Combinations,” Inventor(s) Ratinder Paul Singh Ahuja, et al.
U.S. Appl. No. 12/967,013, filed Dec. 13, 2010, entitled “Tag Data Structure for Maintaining Relational Data Over Captured Objects,” Inventor(s) Erik de la Iglesia, et al.
Han, Olap Mining: An Integration of OLAP with Data Mining, Oct. 1997, pp. 1-18.
Niemi, Constructing OLAP Cubes Based on Queries, Nov. 2001, pp. 1-7.
Schultz, Data Mining for Detection of New Malicious Executables, May 2001, pp. 1-13.
U.S. Appl. No. 13/024,923, filed Feb. 10, 2011, entitled “High Speed Packet Capture,” Inventor(s) Weimin Liu, et al.
U.S. Appl. No. 13/047,068, filed Mar. 14, 2011, entitled “Cryptographic Policy Enforcement,” Inventor(s) Ratinder Paul Singh Ahuja, et al.
U.S. Appl. No. 13/049,533, filed Mar. 16, 2011, entitled “File System for a Capture System,” Inventor(s) Rick Lowe, et al.
U.S. Appl. No. 13/089,158, filed Apr. 18, 2011, entitled “Attributes of Captured Objects in a Capture System,” Inventor(s) Ratinder Paul Singh Ahuja, et al.
U.S. Appl. No. 13/099,516, filed May 3, 2011, entitled “Object Classification in a Capture System,” Inventor(s) William Deninger, et al.
Mao et al. “MOT: Memory Online Tracing of Web Information System,” Proceedings of the Second International Conference on Web Information Systems Engineering (WISE '01); pp. 271-277, (IEEE-0-7695-1393-X/02) Aug. 7, 2002 (7 pages).
International Search Report and Written Opinion and Declaration of Non-Establishment of International Search Report for International Application No. PCT/US2011/024902 mailed Aug. 1, 2011 (8 pages).
U.S. Appl. No. 13/168,739, filed Jun. 24, 2011, entitled “Method and Apparatus for Data Capture and Analysis System,” Inventor(s) Erik de la Iglesia, et al.
U.S. Appl. No. 13/187,421, filed Jul. 20, 2011, entitled “Query Generation for a Capture System,” Inventor(s) Erik de la Iglesia, et al.
U.S. Appl. No. 13/188,441, filed Jul. 21, 2011, entitled “Locational Tagging in a Capture System,” Inventor(s) William Deninger et al.
Webopedia, definition of “filter”, 2002, p. 1.
Werth, T. et al., “Chapter 1—DAG Mining in Procedural Abstraction,” Programming Systems Group; Computer Science Department, University of Erlangen-Nuremberg, Germany.
U.S. Appl. No. 13/422,791, filed Mar. 16, 2012, entitled "System and Method for Data Mining and Security Policy Management," Inventor(s) Weimin Liu.
U.S. Appl. No. 13/424,249, filed Mar. 19, 2012, entitled "System and Method for Data Mining and Security Policy Management," Inventor(s) Weimin Liu.
U.S. Appl. No. 13/431,678, filed Mar. 27, 2012, entitled "Attributes of Captured Objects in a Capture System," Inventor(s) William Deninger et al.
U.S. Appl. No. 13/436,275, filed Mar. 30, 2012, entitled "System and Method for Intelligent State Management," Inventor(s) William Deninger et al.
U.S. Appl. No. 13/337,737, filed Dec. 27, 2011, entitled "System and Method for Providing Data Protection Workflows in a Network Environment," Inventor(s) Ratinder Paul Singh Ahuja et al.
U.S. Appl. No. 13/338,060, filed Dec. 27, 2011, entitled "System and Method for Providing Data Protection Workflows in a Network Environment," Inventor(s) Ratinder Paul Singh Ahuja et al.
U.S. Appl. No. 13/338,159, filed Dec. 27, 2011, entitled "System and Method for Providing Data Protection Workflows in a Network Environment," Inventor(s) Ratinder Paul Singh Ahuja et al.
U.S. Appl. No. 13/338,195, filed Dec. 27, 2011, entitled "System and Method for Providing Data Protection Workflows in a Network Environment," Inventor(s) Ratinder Paul Singh Ahuja et al.
Walter Allasia et al., Indexing and Retrieval of Multimedia Metadata on a Secure DHT, University of Torino, Italy, Department of Computer Science, Aug. 31, 2008, 16 pages.
International Preliminary Report on Patentability and Written Opinion of the International Searching Authority for International Application No. PCT/US2011/024902 dated May 7, 2013 (5 pages).
U.S. Appl. No. 13/896,210, filed May 16, 2013, entitled "System and Method for Data Mining and Security Policy Management," Inventor(s) Ratinder Paul Singh Ahuja et al.
Related Publications (1)
Number Date Country
20050132034 A1 Jun 2005 US
Provisional Applications (1)
Number Date Country
60528642 Dec 2003 US