1. Field of the Invention
The present invention relates generally to network security and, more particularly, to systems and methods for detecting and/or preventing the transmission of malicious packets, such as polymorphic worms and viruses.
2. Description of Related Art
Availability of low cost computers, high speed networking products, and readily available network connections has helped fuel the proliferation of the Internet. This proliferation has caused the Internet to become an essential tool for both the business community and private individuals. Dependence on the Internet arises, in part, because the Internet makes it possible for multitudes of users to access vast amounts of information and perform remote transactions expeditiously and efficiently. Along with the rapid growth of the Internet have come problems caused by malicious individuals or pranksters launching attacks from within the network. As the size of the Internet continues to grow, so does the threat posed by these individuals.
The ever-increasing number of computers, routers, and connections making up the Internet increases the number of vulnerable points from which these malicious individuals can launch attacks. These attacks can be focused on the Internet as a whole or on specific devices, such as hosts or computers, connected to the network. In fact, each router, switch, or computer connected to the Internet may be a potential entry point from which a malicious individual can launch an attack while remaining largely undetected. Attacks carried out on the Internet often consist of malicious packets being injected into the network. Malicious packets can be injected directly into the network by a computer, or a device attached to the network, such as a router or switch, can be compromised and configured to place malicious packets onto the network.
One particularly troublesome type of attack is a self-replicating network-transferred computer program, such as a virus or worm, that is designed to annoy network users, deny network service by overloading the network, or damage target computers (e.g., by deleting files). A virus is a program that infects a computer or device by attaching itself to another program and propagating itself when that program is executed, possibly destroying files or wiping out memory devices. A worm, on the other hand, is a program that can make copies of itself and spread itself through connected systems, using up resources in affected computers or causing other damage.
Various defenses, such as e-mail filters, anti-virus programs, and firewall mechanisms, have been employed against viruses and worms. Unfortunately, many viruses and worms are polymorphic. Polymorphic viruses and worms include viruses and worms that deliberately have a different set of bytes in each copy, as opposed to being substantially similar in each copy, to make them difficult to detect. Detection techniques based on byte sequence comparison, including older virus-detection techniques, may be generally ineffective in detecting polymorphic viruses and worms.
Accordingly, there is a need for new defenses to thwart the attack of polymorphic viruses and worms.
Systems and methods consistent with the present invention address these and other needs by providing a new defense that attacks malicious packets, such as polymorphic viruses and worms, at their most common denominator (i.e., the need to transfer a copy of their code over a network to multiple target systems).
In accordance with an aspect of the invention as embodied and broadly described herein, a method for detecting transmission of potentially malicious packets is provided. The method includes receiving packets; generating hash values based on variable-sized blocks of the received packets; comparing the generated hash values to hash values associated with prior packets; and determining that one of the received packets is a potentially malicious packet when one or more of the generated hash values associated with the received packet match one or more of the hash values associated with the prior packets.
In accordance with another aspect of the invention, a system for hampering transmission of potentially malicious packets is provided. The system includes means for observing packets, means for generating hash values based on variable-sized blocks of the observed packets, and means for comparing the generated hash values to hash values corresponding to prior packets. The system further includes means for identifying one of the observed packets as a potentially malicious packet when the generated hash values corresponding to the observed packet match the hash values corresponding to the prior packets, and means for hampering transmission of the observed packet when the observed packet is identified as a potentially malicious packet.
In accordance with yet another aspect of the invention, a device for detecting transmission of malicious packets is provided. The device includes a hash memory and a hash processor. The hash memory is configured to store information associated with hash values corresponding to prior packets. The hash processor is configured to observe a packet and generate one or more hash values based on variable-sized blocks of the packet. The hash processor is further configured to compare the one or more generated hash values to the hash values corresponding to the prior packets and identify the packet as a potentially malicious packet when a predetermined number of the one or more generated hash values match the hash values corresponding to the prior packets.
In accordance with a further aspect of the invention, a method for detecting transmission of a potentially malicious packet is provided. The method includes receiving a packet, selecting blocks of the received packet having random block sizes, and performing multiple different hash functions on each of the blocks to generate multiple hash values. The method further includes comparing the generated hash values to hash values associated with prior packets, and identifying the received packet as a potentially malicious packet when one or more of the generated hash values correspond to one or more of the hash values associated with the prior packets.
In accordance with another aspect of the invention, a method for detecting transmission of a potentially malicious packet is provided. The method includes receiving a packet, selecting multiple blocks of the received packet of different block sizes, and performing a different hash function on each of the blocks to generate multiple hash values. The method further includes comparing the generated hash values to hash values associated with prior packets, and identifying the received packet as a potentially malicious packet when one or more of the generated hash values correspond to one or more of the hash values associated with the prior packets.
In accordance with yet another aspect of the invention, a method for detecting files suspected of containing a virus or worm on a computer is provided. The method includes receiving one or more first hash values associated with the virus or worm, hashing one or more variable-sized portions of the files to generate second hash values, comparing the second hash values to the one or more first hash values, and identifying one of the files as a file suspected of containing the virus or worm when one or more of the second hash values correspond to at least one of the one or more first hash values.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate the invention and, together with the description, explain the invention.
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
Systems and methods consistent with the present invention provide mechanisms to detect and/or prevent the transmission of malicious packets. Malicious packets, as used herein, may include polymorphic viruses and worms, but may also apply to non-polymorphic viruses and worms and possibly other types of data with duplicated content, such as illegal mass e-mail (e.g., spam), that are repeatedly transmitted through a network.
Polymorphic viruses and worms are generally composed of two pieces: an obscured payload (which contains the majority of the virus/worm), and a decoding bootstrap that must be initially executable by the victim machine “as is,” and that turns the obscured payload into the executable remainder of the virus/worm. The design of polymorphic viruses and worms is such that the contents of the obscured payload are essentially undetectable (e.g., by strong encryption), leaving two basic ways to detect the virus/worm: (1) detect it after the decoding bootstrap has run, which is the technique employed by much of today's virus-detection software; and (2) detect the decoding bootstrap itself in a manner consistent with the principles of the invention.
While the decoding bootstrap must be executable by the target machine, it does not have to be the exact same code for every copy of the virus/worm. In other words, it can be made arbitrarily variable, as long as the effect of executing it results in the decoding of the obscured payload.
The most sophisticated polymorphic viruses/worms employ techniques, such as the interspersal of “no-ops” or other code that does not affect the decoding process, but adds to the variability of the byte string making up the decoder bootstrap. Another technique includes changing details of instructions in the actual decoder code, such as changing which registers are employed by the decoding code, or stringing small code fragments together with “branch” or “jump” instructions, allowing the execution sequence of the instructions to be relatively independent of the sequence of bytes making up the decoder bootstrap. “Dead” code, or gibberish bytes, can also be inserted between active code segments strung together this way.
Thus, detecting the decoder bootstrap of a polymorphic virus/worm is a very difficult task. It is most difficult when only one copy of the virus/worm is examined. When many potential copies of the virus/worm can be observed, however, certain similarities between various copies will eventually emerge, because there are only a finite set of transformations that the decoding bootstrap can be put through and still function properly. This opens up the opportunity to detect such viruses/worms in places where many copies can be observed over time, such as in the network nodes (and links) through which they propagate.
Another vulnerability to detection that some e-mail-based viruses/worms have is that they require user interaction with the message carrying the virus/worm in order to be executed. Thus, they are often accompanied by a text message in the body of the e-mail that is designed to entice the user into performing the necessary action to execute the virus/worm (usually opening a file attached to the e-mail message). A polymorphic virus/worm could relatively easily change the e-mail text used in minor ways, but to make substantial changes would likely render the message incoherent to the receiver and, thus, either make him suspicious or unlikely to perform the action needed for the virus/worm to execute. Systems and methods consistent with the principles of the invention can also detect the text of the e-mail message as possibly related to a virus/worm attack.
Systems and methods consistent with the principles of the invention hash incoming packets using a hash-block size that varies between a minimum and a maximum value. The hash-block size may be chosen randomly within this interval for each block, but other methods of varying the block size could also be used, as long as the method is not easily predictable by an attacker.
This serves two purposes. First, it reduces the need to hash multiple copies of non-polymorphic viruses/worms for pretraining, because each packet now has a finite chance of sharing a block with previous packets, rather than no chance when it does not share a prior copy's alignment within a packet. Second, it allows relatively short sequences of bytes to be hashed at times, greatly improving the chances of catching a fixed segment of a polymorphic virus/worm.
Systems and methods consistent with the present invention provide virus, worm, and unsolicited e-mail detection and/or prevention in e-mail servers. Placing these features in e-mail servers provides a number of new advantages, including the ability to align hash blocks to crucial boundaries found in e-mail messages and eliminate certain counter-measures by the attacker, such as using small Internet Protocol (IP) fragments to limit the detectable content in each packet. It also allows these features to relate e-mail header fields with the potentially-harmful segment of the message (usually an “attachment”), and decode common file-packing and encoding formats that might otherwise make a virus or worm undetectable by the packet-based technique (e.g., “.zip files”).
By placing these features within an e-mail server, the ability to detect replicated content in the network at points where large quantities of traffic are present is obtained. By relating many otherwise-independent messages and finding common factors, the e-mail server may detect unknown, as well as known, viruses and worms. These features may also be applied to detect potential unsolicited commercial e-mail (“spam”).
E-mail servers for major Internet Service Providers (ISPs) may process a million e-mail messages a day, or more, in a single server. When viruses and worms are active in the network, a substantial fraction of this e-mail may actually be traffic generated by the virus or worm. Thus, an e-mail server may have dozens to thousands of examples of a single e-mail-borne virus pass through it in a day, offering an excellent opportunity to determine the relationships between e-mail messages and detect replicated content (a feature that is indicative of virus/worm propagation) and spam, among other, more legitimate traffic (such as traffic from legitimate mailing lists).
Systems and methods consistent with the principles of the invention provide mechanisms to detect and stop e-mail-borne viruses and worms before the addressed user receives them, in an environment where the virus is still inert. Current e-mail servers do not normally execute any code in the e-mail being transported, so they are not usually subject to virus/worm infections from the content of the e-mails they process, though they may be subject to infection via other forms of attack.
Besides e-mail-borne viruses and worms, another common problem found in e-mail is mass-e-mailing of unsolicited commercial e-mail, colloquially referred to as “spam.” It is estimated that perhaps 25%-50% of all e-mail messages now received for delivery by major ISP e-mail servers is spam.
Users of network e-mail services are desirous of mechanisms to block e-mail containing viruses or worms from reaching their machines (where the virus or worm may easily do harm before the user realizes its presence). Users are also desirous of mechanisms to block unsolicited commercial e-mail that consumes their time and resources.
Many commercial e-mail services put a limit on the amount of each user's e-mail that may accumulate at the server before it is downloaded to the customer's machine. If too much e-mail arrives between times when the user reads his e-mail, additional e-mail is either “bounced” (i.e., returned to the sender's e-mail server) or even simply discarded, both of which events can seriously inconvenience the user. Because the user has no control over e-mail arriving due to e-mail-borne viruses/worms or spam, it is a relatively common occurrence that the user's e-mail quota overflows due to unwanted and potentially harmful messages. Similarly, the authors of e-mail-borne viruses, as well as senders of spam, have no reason to limit the size of their messages. As a result, these messages are often much larger than legitimate e-mail messages, thereby increasing the risk of such denial of service to the user by overflowing the per-user e-mail quota.
Users are not the only group inconvenienced by spam and e-mail-borne viruses and worms. Because these types of unwanted e-mail can form a substantial fraction, even a majority, of e-mail traffic in the Internet, for extended periods of time, ISPs typically must add extra resources to handle a peak e-mail load that would otherwise be about half as large. This ratio of unwanted-to-legitimate e-mail traffic appears to be growing daily. Systems and methods consistent with the principles of the invention provide mechanisms to detect and discard unwanted e-mail in network e-mail servers.
Public network 150 may include a collection of network devices, such as routers (R1-R5) or switches, that transfer data between autonomous systems, such as autonomous systems 110-140. In an implementation consistent with the present invention, public network 150 takes the form of the Internet, an intranet, a public telephone network, a wide area network (WAN), or the like.
An autonomous system is a network domain in which all network devices (e.g., routers) in the domain can exchange routing tables. Often, an autonomous system can take the form of a local area network (LAN), a WAN, a metropolitan area network (MAN), etc. An autonomous system may include computers or other types of communication devices (referred to as “hosts”) that connect to public network 150 via an intruder detection system (IDS), a firewall, one or more border routers, or a combination of these devices.
Autonomous system 110, for example, includes hosts (H) 111-113 connected in a LAN configuration. Hosts 111-113 connect to public network 150 via an intruder detection system (IDS) 114. Intruder detection system 114 may include a commercially-available device that uses rule-based algorithms to determine if a given pattern of network traffic is abnormal. The general premise used by an intruder detection system is that malicious network traffic will have a different pattern from normal, or legitimate, network traffic.
Using a rule set, intruder detection system 114 monitors inbound traffic to autonomous system 110. When a suspicious pattern or event is detected, intruder detection system 114 may take remedial action, or it can instruct a border router or firewall to modify operation to address the malicious traffic pattern. For example, remedial actions may include disabling the link carrying the malicious traffic, discarding packets coming from a particular source address, or discarding packets addressed to a particular destination.
Autonomous system 120 contains different devices from autonomous system 110. These devices aid autonomous system 120 in identifying and/or preventing the transmission of potentially malicious packets within autonomous system 120 and tracing the propagation of the potentially malicious packets through autonomous system 120 and, possibly, public network 150.
Autonomous system 120 includes hosts (H) 121-123, intruder detection system (IDS) 124, and security server (SS) 125 connected to public network 150 via a collection of devices, such as security routers (SR11-SR14) 126-129. Hosts 121-123 may include computers or other types of communication devices connected, for example, in a LAN configuration. Intruder detection system 124 may be configured similar to intruder detection system 114.
Security server 125 may include a device, such as a general-purpose computer or a server, that performs source path identification when a malicious packet is detected by intruder detection system 124 or a security router 126-129. While security server 125 and intruder detection system 124 are shown as separate devices, their functions may be combined into a single device.
Security server 125 may include a processor 102A, main memory 104A, read only memory (ROM) 106A, storage device 108A, bus 110A, display 112A, keyboard 114A, cursor control 116A, and communication interface 118A. Processor 102A may include any type of conventional processing device that interprets and executes instructions.
Main memory 104A may include a random access memory (RAM) or a similar type of dynamic storage device. Main memory 104A may store information and instructions to be executed by processor 102A. Main memory 104A may also be used for storing temporary variables or other intermediate information during execution of instructions by processor 102A. ROM 106A may store static information and instructions for use by processor 102A. It will be appreciated that ROM 106A may be replaced with some other type of static storage device. Storage device 108A, also referred to as a data storage device, may include any type of magnetic or optical media and their corresponding interfaces and operational hardware. Storage device 108A may store information and instructions for use by processor 102A.
Bus 110A may include a set of hardware lines (conductors, optical fibers, or the like) that allow for data transfer among the components of security server 125. Display device 112A may be a cathode ray tube (CRT), liquid crystal display (LCD) or the like, for displaying information in an operator or machine-readable form. Keyboard 114A and cursor control 116A may allow the operator to interact with security server 125. Cursor control 116A may include, for example, a mouse. In an alternative configuration, keyboard 114A and cursor control 116A can be replaced with a microphone and voice recognition mechanisms to enable an operator or machine to interact with security server 125.
Communication interface 118A enables security server 125 to communicate with other devices/systems via any communications medium. For example, communication interface 118A may include a modem, an Ethernet interface to a LAN, an interface to the Internet, a printer interface, etc. Alternatively, communication interface 118A can include any other type of interface that enables communication between security server 125 and other devices, systems, or networks. Communication interface 118A can be used in lieu of keyboard 114A and cursor control 116A to facilitate operator or machine remote control and communication with security server 125.
As will be described in detail below, security server 125 may perform source path identification and/or prevention measures for a malicious packet that entered autonomous system 120. Security server 125 may perform these functions in response to processor 102A executing sequences of instructions contained in, for example, memory 104A. Such instructions may be read into memory 104A from another computer-readable medium, such as storage device 108A, or from another device coupled to bus 110A or coupled via communication interface 118A.
Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement the functions of security server 125. For example, the functionality may be implemented in an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or the like, either alone or in combination with other devices.
Security routers 126-129 may include network devices, such as routers, that may detect and/or prevent the transmission of malicious packets and perform source path identification functions. Security routers 127-129 may include border routers for autonomous system 120 because these routers include connections to public network 150. As a result, security routers 127-129 may include routing tables for routers outside autonomous system 120.
Packet detection logic 200 may include hash processor 210 and hash memory 220. Hash processor 210 may include a conventional processor, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or some other type of device that generates one or more representations for each received packet and records the packet representations in hash memory 220.
A packet representation will likely not be a copy of the entire packet, but rather it may include a portion of the packet or some unique value representative of the packet. Because modern routers can pass gigabits of data per second, storing complete packets is not practical; the required memories would be prohibitively large. By contrast, storing a value representative of the contents of a packet uses memory in a much more efficient manner. By way of example, if incoming packets range in size from 256 bits to 1000 bits, a fixed-width number may be computed across blocks making up the content (or payload) of a packet in a manner that allows the entire packet to be identified.
To further illustrate the use of representations, a 32-bit hash value, or digest, may be computed across blocks of each packet. Then, the hash value may be stored in hash memory 220 or may be used as an index, or address, into hash memory 220. Using the hash value, or an index derived therefrom, results in efficient use of hash memory 220 while still allowing the content of each packet passing through packet detection logic 200 to be identified.
Systems and methods consistent with the present invention may use any storage scheme that records information about each packet in a space-efficient fashion, that can definitively determine if a packet has not been observed, and that can respond positively (i.e., in a predictable way) when a packet has been observed. Although systems and methods consistent with the present invention can use virtually any technique for deriving representations of packets, the remaining discussion will use hash values as exemplary representations of packets having passed through a participating router.
Hash processor 210 may determine one or more hash values over variable-sized blocks of bytes in the payload field (i.e., the contents) of an observed packet. When multiple hashes are employed, they may, but need not, be done on the same block of payload bytes. As described in more detail below, hash processor 210 may use the results of the hash operation to recognize duplicate occurrences of packet content and raise a warning if it detects packets with replicated content within a short period of time. Hash processor 210 may also use the hash results for tracing the path of a malicious packet through the network.
According to implementations consistent with the present invention, the content (or payload) of a packet may be hashed to detect the packet or trace the packet through a network. In other implementations, the header of a packet may be hashed. In yet other implementations, some combination of the content and the header of a packet may be hashed.
In one implementation consistent with the principles of the invention, hash processor 210 may perform three hashes covering each byte of the payload field. A hash block size may be chosen uniformly from a range of 4 to 128 bytes, in 4-byte increments (to accommodate a common data-path granularity in high-speed network devices). At the start of the packet payload, hash processor 210 may select a random block size from this range and hash the block with the three different hash functions, or hash processor 210 may select a different block size for each hash function. In the former case, a new block size may be chosen when the first block finishes, and all three hash functions may start at the same place on the new block. In the latter case, as each hash function completes its current block, it selects a random size for the next block it will hash.
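For illustration only, the block-selection and hashing scheme just described might be sketched as follows. This is a non-limiting sketch rather than an implementation required by the invention; the particular hash functions (CRC-32 plus truncated MD5 and SHA-1 digests) and the function names are assumptions chosen for clarity.

```python
import hashlib
import random
import zlib

MIN_BLOCK, MAX_BLOCK, STEP = 4, 128, 4   # 4 to 128 bytes, in 4-byte increments


def pick_block_size() -> int:
    """Choose a hash-block size uniformly from the allowed range."""
    return random.randrange(MIN_BLOCK, MAX_BLOCK + STEP, STEP)


def hash_block(block: bytes):
    """Apply three different hash functions to a single payload block."""
    h1 = zlib.crc32(block)                                          # CRC-style hash
    h2 = int.from_bytes(hashlib.md5(block).digest()[:4], "big")     # truncated MD5
    h3 = int.from_bytes(hashlib.sha1(block).digest()[:4], "big")    # third function
    return h1, h2, h3


def hash_payload(payload: bytes):
    """Hash a payload using a new random block size for each block
    (the case in which all three hash functions share the same blocks)."""
    values, offset = [], 0
    while offset < len(payload):
        size = pick_block_size()
        values.extend(hash_block(payload[offset:offset + size]))
        offset += size
    return values
```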
Each hash value may be determined by taking an input block of data and processing it to obtain a numerical value that represents the given input data. Suitable hash functions are readily known in the art and will not be discussed in detail herein. Examples of hash functions include the Cyclic Redundancy Check (CRC) and Message Digest 5 (MD5). The resulting hash value, also referred to as a message digest or hash digest, may include a fixed length value. The hash value may serve as a signature for the data over which it was computed. For example, incoming packets could have fixed hash value(s) computed over their content.
The hash value essentially acts as a fingerprint identifying the input block of data over which it was computed. Unlike fingerprints, however, there is a chance that two very different pieces of data will hash to the same value, resulting in a hash collision. An acceptable hash function should provide a good distribution of values over a variety of data inputs in order to minimize such collisions. Because collisions occur when different input blocks result in the same hash value, an ambiguity may arise when attempting to associate a result with a particular input.
Hash processor 210 may store a representation of each packet it observes in hash memory 220. Hash processor 210 may store the actual hash values as the packet representations or it may use other techniques for minimizing storage requirements associated with retaining hash values and other information associated therewith. A technique for minimizing storage requirements may use one or more bit arrays or Bloom filters.
Rather than storing the actual hash value, which can typically be on the order of 32 bits or more in length, hash processor 210 may use the hash value as an index for addressing a bit array within hash memory 220. In other words, when hash processor 210 generates a hash value for a block of a packet, the hash value serves as the address location into the bit array. At the address corresponding to the hash value, one or more bits may be set at the respective location, thus indicating that a particular hash value, and hence a particular data packet content, has been seen by hash processor 210. For example, using a 32-bit hash value provides on the order of 4.3 billion possible index values into the bit array. Storing one bit per block, rather than storing the block itself, which can be 512 bits long, produces a compression factor of 1:512. While bit arrays are described by way of example, it will be appreciated by those skilled in the relevant art that other storage techniques may be employed without departing from the spirit of the invention.
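A minimal sketch of such a bit array, in which a hash value addresses a single bit, is shown below. The array size and the modulo addressing are assumptions for illustration; a full 32-bit index space would require a 512-megabyte bit vector.

```python
class HashBitArray:
    """Bit array addressed by hash values (a sketch of the storage scheme above)."""

    def __init__(self, num_bits: int = 1 << 24):
        self.num_bits = num_bits
        self.bits = bytearray(num_bits // 8)

    def _index(self, hash_value: int) -> int:
        return hash_value % self.num_bits

    def record(self, hash_value: int) -> None:
        """Set the bit at the location addressed by the hash value."""
        i = self._index(hash_value)
        self.bits[i >> 3] |= 1 << (i & 7)

    def seen(self, hash_value: int) -> bool:
        """Return True if a block with this hash value has been observed."""
        i = self._index(hash_value)
        return bool(self.bits[i >> 3] & (1 << (i & 7)))
```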
Because shorter block sizes are more likely to be repeated in totally random traffic, another variation might include the use of different memories for different block sizes. Thus, a given count level for a shorter block size may be less reason for suspicion than the same count level found in a longer block size.
In an alternate implementation consistent with the principles of the invention, hash memory 220 may be preprogrammed to store hash values corresponding to known malicious packets, such as known viruses and worms. Hash memory 220 may store these hash values separately from the hash values of observed packets. In this case, hash processor 210 may compare a hash value for a received packet to not only the hash values of previously observed packets, but also to hash values of known malicious packets.
In yet another implementation consistent with the principles of the invention, hash memory 220 may be preprogrammed to store source addresses of known sources of legitimate duplicated content, such as packets from a multicast server, a popular page on a web server, an output from a mailing list “exploder” server, or the like. In this case, hash processor 210 may compare the source address for a received packet to the source addresses of known sources of legitimate duplicated content.
Over time, hash memory 220 may fill up and the possibility of overwriting an existing index value increases. The risk of overwriting an index value may be reduced if the bit array is periodically flushed to other storage media, such as a magnetic disk drive, optical media, solid state drive, or the like. Alternatively, the bit array may be slowly and incrementally erased. To facilitate this, a time-table may be established for flushing/erasing the bit array. If desired, the flushing/erasing cycle can be reduced by computing hash values only for a subset of the packets passing through the router. While this approach reduces the flushing/erasing cycle, it increases the possibility that a target packet may be missed (i.e., a hash value is not computed over a portion of it).
When hash memory 220 includes counter fields 322, non-zero storage locations may be decremented periodically rather than being erased. This may ensure that the “random noise” from normal packets would not remain in the bit array indefinitely. Replicated traffic (e.g., from a virus/worm propagating repeatedly across the network), however, would normally cause the relevant storage locations to stay substantially above the “background noise” level.
Indicator field 312 may store one or more bits that indicate whether a packet block with the corresponding hash value has been observed by hash processor 210. Counter field 322 may record the number of occurrences of packet blocks with the corresponding hash value. Counter field 322 may periodically decrement its count for flushing purposes.
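A counter-based variant of hash memory 220, including the periodic decrement described above, might be sketched as follows. The counter width, table size, and decay step are illustrative assumptions only.

```python
from array import array


class HashCounterMemory:
    """Sketch of a hash memory with a saturating counter per hash index
    (corresponding roughly to indicator field 312 / counter field 322)."""

    def __init__(self, num_entries: int = 1 << 24, max_count: int = 255):
        self.num_entries = num_entries
        self.max_count = max_count
        self.counts = array("B", bytes(num_entries))   # 8-bit counters, all zero

    def record(self, hash_value: int) -> int:
        """Increment the counter addressed by the hash value; return the new count."""
        i = hash_value % self.num_entries
        if self.counts[i] < self.max_count:
            self.counts[i] += 1
        return self.counts[i]

    def decay(self) -> None:
        """Periodically decrement non-zero counters so that background noise
        from normal traffic fades while heavily replicated content stays high."""
        for i, count in enumerate(self.counts):
            if count:
                self.counts[i] = count - 1
```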
Exemplary Processing for Malicious Packet Detection/Prevention
Processing may begin when packet detection logic 200 receives, or otherwise observes, a packet (act 405). Hash processor 210 may generate one or more hash values by hashing variable-sized blocks from the packet's payload field (act 410). Hash processor 210 may use one or more conventional techniques to perform the hashing operation.
In one implementation consistent with the principles of the invention, three hashes may be performed covering each byte of the payload field. A hash block size may be chosen uniformly from a range of 4 to 128 bytes, in 4-byte increments. At the start of the packet payload, a random block size may be selected from this range and the block may be hashed with the three different hash functions. A new block size may then be chosen when the first block finishes, and all three hash functions may start at the same place on the new block. Alternatively, a different block size may be selected for each hash function. In this case, as each hash function completes its current block, it selects a random size for the next block it will hash.
Hash processor 210 may optionally compare the generated hash value(s) to hash values of known viruses and/or worms within hash memory 220 (act 415). In this case, hash memory 220 may be preprogrammed to store hash values corresponding to known viruses and/or worms. If one or more of the generated hash values match one of the hash values of known viruses and/or worms, hash processor 210 may take remedial actions (acts 420 and 425). The remedial actions may include raising a warning for a human operator, delaying transmission of the packet, capturing a copy of the packet for human or automated analysis, dropping the packet and possibly other packets originating from the same Internet Protocol (IP) address as the packet, sending a Transmission Control Protocol (TCP) close message to the sender thereby preventing complete transmission of the packet, disconnecting the link on which the packet was received, and/or corrupting the packet content in a way likely to render any code contained therein inert (and likely to cause the receiver to drop the packet). Some of the remedial actions, such as dropping or corrupting the packet, may be performed probabilistically based, for example, on the count value in counter field 322.
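One way such a probabilistic remedial action could be realized is sketched below; the threshold and the mapping from count value to drop probability are assumptions chosen purely for illustration.

```python
import random


def should_drop(count: int, threshold: int = 4, max_count: int = 255) -> bool:
    """Decide probabilistically whether to drop (or corrupt) a packet, with the
    probability growing with the number of times its content has been seen, so
    that occasional false matches are mostly forwarded while heavily replicated
    content is increasingly suppressed."""
    if count <= threshold:
        return False
    probability = min(1.0, (count - threshold) / (max_count - threshold))
    return random.random() < probability
```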
If the generated hash value(s) do not match any of the hash values of known viruses and/or worms, or if such a comparison was not performed, hash processor 210 may optionally determine whether the packet's source address indicates that the packet was sent from a legitimate source of duplicated packet content (i.e., a legitimate “replicator”) (act 430). For example, hash processor 210 may maintain a list of legitimate replicators in hash memory 220 and check the source address of the packet with the addresses of legitimate replicators on the list. If the packet's source address matches the address of one of the legitimate replicators, then hash processor 210 may end processing of the packet. For example, processing may return to act 405 to await receipt of the next packet.
Otherwise, hash processor 210 may record the generated hash value(s) in hash memory 220 (act 435). For example, hash processor 210 may set the one or more bits stored in indicator field 312 at the address(es) corresponding to the generated hash value(s).
Hash processor 210 may then determine whether any prior packets with the same hash value(s) have been received (act 440). For example, hash processor 210 may use each of the generated hash value(s) as an address into hash memory 220. Hash processor 210 may then examine indicator field 312 at each address to determine whether the one or more bits stored therein indicate that a prior packet has been received. Alternatively, hash processor 210 may examine counter field 322 to determine whether the count value indicates that a prior packet has been received.
If there were no prior packets received with the same hash value(s), then processing may return to act 405 to await receipt of the next packet. If hash processor 210 determines that a prior packet has been observed with the same hash value, however, hash processor 210 may determine whether the packet is potentially malicious (act 445). Hash processor 210 may use a set of rules to determine whether to identify a packet as potentially malicious. For example, the rules might specify that more than x (where x>1) packets with the same hash value have to be observed by hash processor 210 before the packets are identified as potentially malicious. The rules might also specify that these packets have to have been observed by hash processor 210 within a specified period of time of one another. The reason for the latter rule is that, in the case of malicious packets, such as polymorphic viruses and worms, multiple packets will likely pass through packet detection logic 200 within a short period of time.
A packet may contain multiple hash blocks that partially match hash blocks associated with prior packets. For example, a packet that includes multiple hash blocks may have somewhere between one and all of its hashed content blocks match hash blocks associated with prior packets. The rules might specify the number of blocks and/or the number and/or length of sequences of blocks that need to match before hash processor 210 identifies the packet as potentially malicious. The rules might differ for different block sizes.
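As a sketch only, a rule of this kind might be expressed as follows; the threshold and time window are illustrative assumptions rather than values required by the invention.

```python
import time


def is_potentially_malicious(match_times, min_matches: int = 3,
                             window_seconds: float = 60.0, now: float = None) -> bool:
    """Return True if at least `min_matches` hash-block matches against prior
    packets were observed within `window_seconds` of the current time.

    `match_times` is a list of timestamps at which matching blocks were seen."""
    now = time.time() if now is None else now
    recent = [t for t in match_times if now - t <= window_seconds]
    return len(recent) >= min_matches
```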
When hash processor 210 determines that the packet is not malicious (e.g., not a polymorphic worm or virus), such as when fewer than x packets with the same hash value, or fewer than the predetermined number of packet blocks with matching hash values, are observed, or when the packets are observed outside the specified period of time, processing may return to act 405 to await receipt of the next packet. When hash processor 210 determines that the packet may be malicious, however, hash processor 210 may take remedial actions (act 450). In some cases, it may not be possible to determine whether the packet is actually malicious because there is some probability that there was a false match or a legitimate replication. As a result, hash processor 210 may determine the probability of the packet actually being malicious based on information gathered by hash processor 210.
The remedial actions may include raising a warning for a human operator, saving the packet for human analysis, dropping the packet, corrupting the packet content in a way likely to render any code contained therein inert (and likely to cause the receiver to drop the packet), delaying transmission of the packet, capturing a copy of the packet for human or automated analysis, dropping other packets originating from the same IP address as the packet, sending a TCP close message to the sender thereby preventing complete transmission of the packet, and/or disconnecting the link on which the packet was received. Some of the remedial actions, such as dropping or corrupting the packet, may be performed probabilistically based, for example, on the count value in counter field 322.
Once a malicious packet, such as a polymorphic virus or worm, has been identified, the path taken by the malicious packet may be traced. To do this, processing similar to that described in U.S. patent application Ser. No. 10/251,403, now U.S. Pat. No. 7,328,349, from which this application claims priority and which has been previously incorporated by reference, may be performed.
Processing may begin with intruder detection system 124 detecting a malicious packet. Intruder detection system 124 may use conventional techniques to detect the malicious packet. For example, intruder detection system 124 may use rule-based algorithms to identify a packet as part of an abnormal network traffic pattern. When a malicious packet is detected, intruder detection system 124 may notify security server 125 that a malicious packet has been detected within autonomous system 120. The notification may include the malicious packet or portions thereof along with other information useful for security server 125 to begin source path identification. Examples of information that intruder detection system 124 may send to security server 125 along with the malicious packet include time-of-arrival information, encapsulation information, link information, and the like.
After receiving the malicious packet, security server 125 may generate a query that includes the malicious packet and any additional information desirable for facilitating communication with participating routers, such as security routers 126-129 (acts 505 and 510). Examples of additional information that may be included in the query include, but are not limited to, destination addresses for participating routers, passwords required for querying a router, encryption keying information, time-to-live (TTL) fields, information for reconfiguring routers, and the like. Security server 125 may then send the query to security router(s) located one hop away (act 515). The security router(s) may analyze the query to determine whether they have seen the malicious packet. To make this determination, the security router(s) may use processing similar to that described below.
After processing the query, the security router(s) may send a response to security server 125. The response may indicate that the security router has seen the malicious packet or, alternatively, that it has not. It is important to observe that the two answers are not equal in their degree of certainty. If a security router does not have a hash matching the malicious packet, the security router has definitively not seen the malicious packet. If the security router has a matching hash, however, then the security router has seen the malicious packet or a packet that has the same hash value as the malicious packet. When two different packets, having different contents, hash to the same value, it is referred to as a hash collision.
The security router(s) may also forward the query to other routers or devices to which they are connected. For example, the security router(s) may forward the query to the security router(s) that are located two hops away from security server 125, which may forward the query to security router(s) located three hops away, and so on. This forwarding may continue to include routers or devices within public network 150 if these routers or devices have been configured to participate in the tracing of the paths taken by malicious packets. This approach may be called an inward-out approach because the query travels a path that extends outward from security server 125. Alternatively, an outward-in approach may be used.
Security server 125 receives the responses from the security routers indicating whether the security routers have seen the malicious packet (acts 520 and 525). If a response indicates that the security router has seen the malicious packet, security server 125 associates the response and identification (ID) information for the respective security router with active path data (act 530). Alternatively, if the response indicates that the security router has not seen the malicious packet, security server 125 associates the response and the ID information for the security router with inactive path data (act 535).
Security server 125 uses the active and inactive path data to build a trace of the potential paths taken by the malicious packet as it traveled, or propagated, across the network (act 540). Security server 125 may continue to build the trace until it receives all the responses from the security routers (acts 540 and 545). Security server 125 may attempt to build a trace with each received response to determine the ingress point for the malicious packet. The ingress point may identify where the malicious packet entered autonomous system 120, public network 150, or another autonomous system.
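A simplified sketch of how the responses might be folded into active and inactive path data, and candidate ingress points identified, is given below; the data structures and the ingress heuristic are assumptions for illustration.

```python
def build_trace(responses: dict, neighbors: dict):
    """Classify router responses and identify candidate ingress points.

    `responses` maps a router ID to True (hash match reported: packet possibly
    seen) or False (no match: packet definitively not seen); `neighbors` maps a
    router ID to its adjacent router IDs. A candidate ingress point is an
    active router with at least one neighbor outside the active set, i.e., an
    apparent edge of the traced path."""
    active = {r for r, saw in responses.items() if saw}
    inactive = set(responses) - active
    candidates = [r for r in active
                  if any(n not in active for n in neighbors.get(r, []))]
    return active, inactive, candidates
```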
As security server 125 attempts to build a trace of the path taken by the malicious packet, several paths may emerge as a result of hash collisions occurring in the participating routers.
When hash collisions occur, they act as false positives in the sense that security server 125 interprets the collision as an indication that the malicious packet has been observed. Fortunately, the occurrences of hash collisions can be mitigated. One mechanism for reducing hash collisions is to compute large hash values over the packets since the chances of collisions rise as the number of bits comprising the hash value decreases. Another mechanism to reduce false positives resulting from collisions is for each security router (e.g., security routers 126-129) to implement its own unique hash function. In this case, the same collision will not occur in other security routers.
A further mechanism for reducing collisions is to control the density of the hash tables in the memories of participating routers. That is, rather than computing a single hash value and setting a single bit for an observed packet, a plurality of hash values may be computed for each observed packet using several unique hash functions. This produces a corresponding number of unique hash values for each observed packet. While this approach fills the hash table at a faster rate, the reduction in the number of hash collisions makes the tradeoff worthwhile in many instances. For example, Bloom Filters may be used to compute multiple hash values over a given packet in order to reduce the number of collisions and, hence, enhance the accuracy of traced paths.
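The multiple-hash (Bloom filter) idea can be sketched as follows. For brevity, the k hash functions are approximated here by salting a single digest, which only approximates the several unique hash functions described above.

```python
import hashlib


def bloom_indices(block: bytes, num_bits: int, k: int = 4):
    """Derive k bit positions for one block from k salted digests."""
    return [int.from_bytes(hashlib.sha256(bytes([salt]) + block).digest()[:8],
                           "big") % num_bits
            for salt in range(k)]


def bloom_record(bits: bytearray, block: bytes, k: int = 4) -> None:
    """Set all k bits for an observed block."""
    for i in bloom_indices(block, len(bits) * 8, k):
        bits[i >> 3] |= 1 << (i & 7)


def bloom_seen(bits: bytearray, block: bytes, k: int = 4) -> bool:
    """Report a block as seen only if every one of its k bits is set, which
    greatly reduces false positives caused by single-bit collisions."""
    return all(bits[i >> 3] & (1 << (i & 7))
               for i in bloom_indices(block, len(bits) * 8, k))
```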
When security server 125 has determined an ingress point for the malicious packet, it may notify intruder detection system 124 that the ingress point for the malicious packet has been determined (act 550). Security server 125 may also take remedial actions (act 555). Often it will be desirable to have the participating router closest to the ingress point close off the ingress path used by the malicious packet. As such, security server 125 may send a message to the respective participating router instructing it to close off the ingress path using known techniques.
Security server 125 may also archive copies of solutions generated, data sent, data received, and the like either locally or remotely. Furthermore, security server 125 may communicate information about source path identification attempts to devices at remote locations coupled to a network. For example, security server 125 may communicate information to a network operations center, a redundant security server, or to a data analysis facility for post processing.
Processing may begin when security router 126 receives a query from security server 125 (act 605). As described above, the query may include a TTL field. A TTL field may be employed because it provides an efficient mechanism for ensuring that a security router responds only to relevant, or timely, queries. In addition, employing TTL fields may reduce the amount of data traversing the network between security server 125 and participating routers because queries with expired TTL fields may be discarded.
If the query includes a TTL field, security router 126 may determine if the TTL field in the query has expired (act 610). If the TTL field has expired, security router 126 may discard the query (act 615). If the TTL field has not expired, security router 126 may hash the malicious packet contained within the query at each possible starting offset within a block (act 620). Security router 126 may generate multiple hash values because the code body of a virus or worm may appear at any arbitrary offset within the packet that carries it (e.g., each copy may have an e-mail header attached that differs in length for each copy).
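The offset-insensitive hashing performed on the queried packet might look roughly like the following sketch; the helper name is hypothetical, and `hash_fn` stands in for whatever hash function the router used when recording observed packets.

```python
def hashes_at_all_offsets(payload: bytes, block_size: int, hash_fn):
    """Hash the payload carried in a query at every possible starting offset
    within one block, since each copy of a virus/worm may sit at a different
    offset (e.g., behind e-mail headers of differing length)."""
    values = []
    for start in range(min(block_size, len(payload))):
        for off in range(start, len(payload) - block_size + 1, block_size):
            values.append(hash_fn(payload[off:off + block_size]))
    return values
```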
Security router 126 may then determine whether any of the generated hash values match one of the recorded hash values in hash memory 220 (act 625). Security router 126 may use each of the generated hash values as an address into hash memory 220. At each of the addresses, security router 126 may determine whether indicator field 312 indicates that a prior packet with the same hash value has been observed. If none of the generated hash values match a hash value in hash memory 220, security router 126 does not forward the query (act 630), but instead may send a negative response to security server 125 (act 635).
If one or more of the generated hash values match a hash value in hash memory 220, however, security router 126 may forward the query to all of its output ports excluding the output port in the direction from which the query was received (act 640). Security router 126 may also send a positive response to security server 125, indicating that the packet has been observed (act 645). The response may include the address of security router 126 and information about observed packets that have passed through security router 126.
A preferred embodiment uses a server and one or more specially configured network components, or devices, such as a router, within an autonomous system (AS) to determine the ingress point, or location, for a malicious packet (MP1).
SS1 may comprise a general-purpose computer, or server, operatively coupled to the network of AS1 and executing machine-readable code enabling it to perform source path isolation in conjunction with SR14-17 and IDS1. While SS1 and IDS1 are shown as separate devices, their functions may be combined into a single device.
To launch an attack, an intruder generates malicious data traffic and places it onto a link for transmission to one or more destination devices having respective destination addresses.
Detection and source path isolation of MP1 may be accomplished as follows. The detection device, here IDS1, identifies MP1 using known methods. After detecting MP1, IDS1 generates a notification packet, or triggering event, and sends it to SS1, thus notifying SS1 that a malicious packet has been detected within AS1. The notification packet may include MP1 or portions thereof along with other information useful for SS1 to begin source path isolation. Examples of information that may be sent from IDS1 to SS1 along with MP1 include time-of-arrival, encapsulation information, link information, and the like. When MP1 (or a fraction thereof) has been identified and forwarded to SS1, it is referred to as a target packet (TP1) because it becomes the target of the source path isolation method further described herein.
SS1 may then generate a query message (QM1) containing TP1, a portion thereof, or a representation of TP1, such as a hash value. After generating QM1 containing identification information about TP1, SS1 sends it to some, or all, participating routers. Accordingly, SS1 may send QM1 to participating routers located one hop away; however, the disclosed invention is not limited to single hops. For example, SR16 is one hop away from SS1, whereas SR14, SR15, and SR17 are two hops away from SS1 and one hop away from SR16. When SR16 receives QM1 from SS1, SR16 determines if TP1 has been seen. This determination is made by comparing TP1 with a database containing signatures or other characteristics representative of packets having passed through SR16. Typically, SR16 is considered to have observed, or encountered, a packet when the packet is passed from one of its input ports to one of its output ports, as would be done when SR16 forwards packets during normal operation within a network.
To determine if a packet has been observed, SR16 first stores a representation of each packet it forwards. Then SR16 compares the stored representation to the information about TP1 contained in QM1. Typically, a representation of a packet passed through SR16 will not be a copy of the entire packet, but rather it will be comprised of a portion of the packet or some unique value representative of the packet. Since modern routers can pass gigabits of data per second, storing complete packets is not practical because memories become prohibitively large. In contrast, storing a value representative of the contents of a packet uses memory in a more efficient manner. By way of example, if incoming packets range in size from 256 bits to 1000 bits, a fixed width number may be computed across the bits making up a packet in a manner that allows the entire packet to be uniquely identified. A hash value, or hash digest, is an example of such a fixed width number. To further illustrate the use of representations, if a 32-bit hash digest is computed across each packet, then the digest may be stored in memory or, alternatively, the digest may be used as an index, or address, into memory. Using the digest, or an index derived therefrom, results in efficient use of memory while still allowing identification of each packet passing through a router. The disclosed invention works with any storage scheme that saves information about each packet in a space efficient fashion, that can definitively determine if a packet has not been observed, and that will respond positively (i.e. in a predictable way) when a packet has been observed. Although the invention works with virtually any technique for deriving representations of packets, for brevity, the remaining discussion will use hash digests as exemplary representations of packets having passed through a participating router.
Further details of the operation of a source path isolation server (SS) and a source path isolation router (SR) are provided hereinbelow.
After receiving TP1, SS1 may generate QM1 comprising TP1 and any additional information desirable for facilitating communication with participating routers (SRs) (step 904).
Examples of additional information that may be included in QM1 include, but are not limited to, destination addresses for participating routers, passwords required for querying a router, encryption keying information, time-to-live (TTL) fields, a hash digest of TP1, information for reconfiguring routers, and the like. SS1 may then send QM1 to SRs located at least one hop away (step 906). SR may then process QM1 by hashing TP1 contained therein and comparing the resulting value to hash values stored in local memory, where the stored hash values identify packets having previously passed through SR.
After processing QM1, an SR may send a reply to SS1 (step 908). The response may indicate that a queried router has seen TP1, or alternatively, that it has not (step 910). It is important to observe that the two answers are not equal in their degree of certainty. If SR does not have a hash matching TP1, SR has definitively not seen TP1. However, if SR has a matching hash, then SR has seen TP1 or a packet that has the same hash as TP1. When two different packets, having different contents, hash to the same value it is referred to as a hash collision.
If a queried SR has seen TP1, its reply and identification (ID) information for the respective SR are associated as active path data (step 914). Alternatively, if an SR has not seen TP1, the reply is associated as inactive path data (step 912). Replies received from queried SRs are used to build a source path trace of possible paths taken by TP1 through the network using known methods (step 916). SS1 may then attempt to identify the ingress point for TP1 (step 918). If SS1 is unable to determine the ingress point of TP1, subsequent responses from participating routers located an additional hop away are processed by executing steps 908-918 again (step 924).
Examples of source path tracing techniques that may be employed with embodiments disclosed herein include, but are not limited to, a breadth-first search and a depth-first search. In a breadth-first search, all SRs in an area are queried to determine which SRs may have observed a target packet. Then, one or more graphs, containing nodes, are generated from the responses received by SS1, where the nodes indicate locations through which TP1 may have passed. Any graph containing a node where TP1 was observed is associated as an active, or candidate, path, i.e., a path that TP1 may have traversed. With a depth-first search, only SRs adjacent to a location where TP1 was observed are queried. SRs issuing a positive reply are treated as starting points for candidate graphs because they have observed TP1. Next, all SRs adjacent to those that responded with a positive reply are queried. The process of moving the query/response process out one hop at a time is referred to as a round. This process is repeated until all participating routers have been queried or all SRs in a round respond with a negative reply indicating that they have not observed TP1. When a negative reply is received, it is associated as inactive path data.
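For illustration, the sketch below shows one possible realization of the round-by-round expansion just described, in which only routers adjacent to positive responders are queried in subsequent rounds. The topology mapping, the query callable, and the trace_rounds name are assumptions made for this example; the disclosure is not limited to this particular localization algorithm.

```python
def trace_rounds(topology, query, start_routers, target_digest):
    """Query outward one hop per round, starting from the given routers.

    topology: dict mapping a router ID to an iterable of neighbor router IDs.
    query: callable(router_id, target_digest) -> bool (True = positive reply).
    """
    active, inactive, queried = set(), set(), set()
    frontier = set(start_routers)
    while frontier:                                   # one query/response round
        responses = {r: query(r, target_digest) for r in frontier}
        queried |= frontier
        positives = {r for r, seen in responses.items() if seen}
        active |= positives                           # candidate path nodes
        inactive |= frontier - positives              # inactive path data
        # Next round: routers adjacent to this round's positive responders.
        frontier = {n for r in positives for n in topology.get(r, ())} - queried
    return active, inactive
```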
When SS1 has determined an ingress point for TP1, it may send a message to IDS1 indicating that a solution has been found (step 920). Often it will be desirable to have the participating router closest to the ingress point close off the ingress path used by TP1. As such, SS1 may send a message to the respective participating router instructing it to close off the ingress path using known techniques (step 922). SS1 may also archive path solutions, data sent, data received, and the like either locally or remotely. Furthermore, SS1 may communicate information about source path isolation attempts to devices at remote locations coupled to a network. For example, SS1 may communicate information to a network operations center (NOC), a redundant source path isolation server, or to a data analysis facility for post processing.
Here it is noted that, as SS1 attempts to build a trace of the path taken by TP1, multiple paths may emerge as a result of hash collisions occurring in participating routers. When collisions occur, they act as false positives in the sense that SS1 interprets the collision as an indication that the desired TP1 has been observed. Fortunately, the occurrence of hash collisions can be mitigated. One mechanism for reducing hash collisions is to compute large hash values over the packets, since the chance of a collision rises as the number of bits comprising the hash value decreases. Another mechanism for reducing collisions is to control the density of the hash tables in the memories of participating routers. That is, rather than computing a single hash value and setting a single bit for an observed packet, a plurality of hash values are computed for each observed packet using several unique hash functions. This produces a corresponding number of unique hash values for each observed packet. While this approach fills the router's hash table at a faster rate, the reduction in the number of hash collisions makes the tradeoff worthwhile in many instances. For example, Bloom filters may be used to compute multiple hash values over a given packet in order to reduce the number of collisions and hence enhance the accuracy of traced paths. Therefore, the disclosed invention is not limited to any particular method of computing hash functions, nor is it limited to a particular type of source path localization algorithm or technique.
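By way of illustration only, the following sketch records each observed packet under several hash values in a single bit array, in the manner of a Bloom filter. The bit-array size, the number of hash functions, and the double-hashing derivation of indices from a SHA-256 digest are assumptions made for this example, not requirements of the disclosure.

```python
import hashlib


class PacketBloomFilter:
    def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _indices(self, packet: bytes):
        digest = hashlib.sha256(packet).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        # Double hashing: derive several index values from two base values.
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def record(self, packet: bytes) -> None:
        for idx in self._indices(packet):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def probably_seen(self, packet: bytes) -> bool:
        # A clear bit is a definitive "no"; all bits set may be a false positive.
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indices(packet))
```

As in the discussion above, a clear bit yields a definitive negative answer, while an all-set result may be a false positive whose probability depends on how densely the array has been filled.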
To participate in source path isolation of target packets, a router is modified so that it can determine a hash value over the immutable portion of each packet received and/or forwarded. A router forwards a packet when it moves a data packet present at an input port to an output port for transmittal toward a desired destination. Modifying a router to record information about observed packets after computing a hash value provides an efficient method for retaining unique information about each packet seen, or observed, by a participating router. Techniques for quickly computing hash values are readily available, and they can be implemented in the processing hardware and software currently used in routers without unduly reducing performance of the forwarding engines within the routers. In order to make use of hash value information, a participating router, SR, may store information in a manner facilitating rapid recall when QM1 is received from SS1. Since modern routers are capable of forwarding large numbers of packets very quickly, attempting to store even a byte per data packet would require very large amounts of high-speed memory. Employing hash values significantly reduces the memory requirements for storing information about packets.
An SR determines a hash value over an immutable portion of a packet observed at an input port. The hash value is determined by taking an input block of data, such as a data packet, and processing it to obtain a numerical value that is effectively unique for the given input data. The hash value, also referred to as a message digest or hash digest, has a fixed length, whereas the input data may vary in size. Since the hash digest is essentially unique for each input block of data, it serves as a signature for the data over which it was computed. For example, incoming packets varying in size from 32 bits to 1000 bits could each have a fixed 32-bit hash value computed over their entire length. Furthermore, the hash value may be computed in such a way that it is a function of all of the bits making up the input data, or alternatively it can be computed over a portion of the input data. When used, a hash value essentially acts as a fingerprint identifying the input block of data over which it was computed. However, unlike fingerprints, there is a chance that two very different pieces of data will hash to the same value, i.e., a hash collision. An acceptable hash function should provide a good distribution of values over a variety of data inputs in order to prevent these collisions. Since collisions occur when different, i.e., unique, input blocks result in the same hash value, an ambiguity arises when attempting to associate a result with a particular input. Suitable hash functions are readily known in the art and will not be discussed in detail herein. For example, hash functions used in the art, which may be used in conjunction with the matter disclosed herein, can be found in Cryptography and Network Security: Principles and Practice, Stallings, Prentice Hall (2000). An example of a useful hash function that can be used with the invention is the Cyclic Redundancy Check (CRC).
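As a brief illustration only, the sketch below computes a 32-bit CRC over an assumed immutable portion of a packet. Treating everything after a fixed 20-byte header as immutable is an assumption made for this example; in practice the immutable fields must be selected field by field.

```python
import zlib

HEADER_LEN = 20  # assumed length of the mutable header for this example


def immutable_digest(packet: bytes) -> int:
    """32-bit CRC computed over the assumed immutable portion of a packet."""
    return zlib.crc32(packet[HEADER_LEN:]) & 0xFFFFFFFF


# The digest is unchanged when only the (mutable) header differs.
p1 = bytes(20) + b"payload that does not change en route"
p2 = bytes([1] * 20) + b"payload that does not change en route"
assert immutable_digest(p1) == immutable_digest(p2)
```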
To further reduce collisions, each SR may implement its own unique hash function. By way of example, consider two adjacent routers, SR15 and SR16, coupled together, each employing the same hash function, and two target packets, TP1 and TP2, on a network. Now assume TP1 passes only through SR15, and TP2 passes through SR16 before arriving at SR15. If TP1 and TP2 have a hash collision at SR15, then the tracing algorithm will include SR16 in the traced path, because SR16 would incorrectly report TP2's hash value as a potential signal that TP1 had passed through SR16. However, if SR16 employs a different hash function, then TP1 and TP2 will have different hash values at SR16, and thus SR16 would not be included in the traced path even though a collision occurred between TP1 and TP2 at SR15.
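One simple way to give each router its own hash function, sketched below purely for illustration, is to salt the digest with a per-router seed before hashing. The seed mechanism and the router_digest name are assumptions; any family of independent hash functions would serve the same purpose.

```python
import zlib


def router_digest(router_seed: int, immutable_bytes: bytes) -> int:
    """Per-router 32-bit digest: the seed makes each router's function unique."""
    salted = router_seed.to_bytes(4, "big") + immutable_bytes
    return zlib.crc32(salted) & 0xFFFFFFFF


# Two packets that collide under SR15's seeded function are unlikely to
# collide under SR16's differently seeded function.
digest_at_sr15 = router_digest(15, b"immutable portion of TP1")
digest_at_sr16 = router_digest(16, b"immutable portion of TP1")
```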
Generally packets have an immutable portion and a mutable portion. These names are used to help distinguish between the portions of the packet that may change as it is routed through the network and the portion, or portions, remaining intact, or unchanged. Immutable is used to describe the portions of a packet that do not change as a function of the packet's path across, or through, a network. In contrast, mutable describes the portions of a packet that change as a function of the packet's path through the network. Typically, the data, or baggage, portion of a packet is thought to be immutable whereas the header portion is considered to be mutable. Although the header portion may be largely comprised of mutable fields, it often contains immutable fields as well. When practicing the invention it is desirable to compute hash values over at least a portion of the immutable fields of a packet to produce hash values that do not change as the packet traverses a network.
Embodiments disclosed herein may store the actual hash values to identify packets traversing the network, or they may use other techniques for minimizing the storage requirements associated with retaining hash values and other information associated therewith. One such technique for minimizing storage requirements uses a bit array for storing hash values. Rather than storing the actual hash value, which can typically be on the order of 32 bits or more in length, the invention uses the hash value as an index for addressing into a bit array. In other words, when a hash value is computed for a forwarded packet, the hash value serves as the address location into the bit array. At the address corresponding to the hash value, a single bit is set at the respective location, thus indicating that a particular hash value, and hence a particular data packet, has been seen by the router. For example, using a 32-bit hash value provides on the order of 4.3 billion possible index values into a bit array. Storing one bit per packet rather than storing the packet itself, which can be 1000 bits long, produces a storage ratio of 1:1000. While bit arrays are described by way of example, it will be obvious to those skilled in the relevant art that other storage techniques may be employed without departing from the spirit of the invention.
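A minimal sketch of this bit-array scheme follows, assuming a 32-bit digest is already available (for example, from the CRC sketch above). The class and method names, the default array size, and the modulo reduction used when the array holds fewer than 2^32 bits are illustrative assumptions.

```python
class SeenBitArray:
    """Bit array indexed by a packet's hash digest ("seen" / "not seen")."""

    def __init__(self, num_bits: int = 1 << 24):
        # A full 2**32-entry array would require 512 MB; smaller arrays reuse
        # indices via a modulo reduction, at the cost of more collisions.
        self.num_bits = num_bits
        self.bits = bytearray(num_bits // 8)

    def set_seen(self, digest32: int) -> None:
        idx = digest32 % self.num_bits            # digest used as an address
        self.bits[idx >> 3] |= 1 << (idx & 7)

    def was_seen(self, digest32: int) -> bool:
        idx = digest32 % self.num_bits
        return bool(self.bits[idx >> 3] & (1 << (idx & 7)))
```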
While using a bit array significantly reduces the memory requirements for participating routers, those requirements are not eliminated. Over time, a memory will fill up, and the possibility of overwriting an existing index value increases. The risk of overwriting an index value may be reduced if the bit array is periodically flushed to other storage media, such as a magnetic disk drive, optical media, a solid state drive, or the like. To facilitate this, a time-table may be established for flushing the bit array, wherein such a time-table may be based on the speed of the router, the number of input data streams, the size of available fast memory, and the like. If desired, the flushing cycle can be reduced by computing hash values only for a subset of the packets passing through a router. While this approach reduces the flushing cycle, it increases the possibility that a target packet may be missed, i.e., a hash value is not computed over a portion of it.
If the TTL field has not expired, SR1 determines if TP1 has been transformed (step 1008). TP1 is transformed when it undergoes a transformation en route through a network such that a hash value computed over the immutable portion of the packet differs from the hash value computed over the non-transformed packet. For example, TP1 may have undergone a transformation of the baggage portion of the packet in an attempt to make identification of TP1 and/or its source more difficult. If TP1 has been transformed, SR1 creates a new query packet (QM2) containing a hash value for the immutable portion of the transformed packet (step 1010). Where no packet transformation has occurred, the method determines if the computed hash value matches an index value in the bit array (step 1012). As previously noted, index values contained in the bit array identify hash values of packets that have been forwarded by a queried router, here SR1. Depending on available memory in SR1, the hash value may be compared to bit array indices retrieved either from disk or from volatile memory.
If the hash value does not match an index value, SR1 does not forward QM1 (step 1016), but instead may send a negative reply to SS1 (step 1018). If a queried SR determines that TP1 has been transformed, the hash value of this variant, referred to as QM2, may be added to the baggage portion of QM1 (step 1014), or alternatively it can be used to create a new message (not shown) for forwarding to other devices. Next, QM1 is preferably forwarded to all interfaces excluding the one on which QM1 was received (step 1020). After forwarding the message, SR1 sends a positive reply to SS1 indicating that the packet has been observed (step 1022). The reply may contain the address of SR1, information about observed packets, and information about transformed packets, such as QM2, that have passed through SR1.
As previously disclosed herein, a hash value is preferably determined over an immutable portion of TP1 when it passes through SR1, and the resulting hash value is used as an index value, or address, into a memory. The index value is used to facilitate the storage of information about a packet so that it can be uniquely identified.
When a hash value is determined for a particular TP1, an indicator bit, or flag, is set at an address corresponding to that hash value. The indicator bit is used to confirm that a particular TP1 has either been “seen” or “not seen”. If a hash value is computed for a TP1, then the indicator bit is set to some state, for example to a “1”. The “1” indicates that the respective TP1 has been “seen” by SR1.
Data structure 1200 comprises a record R(1) containing attributes, or parameters, having data associated therewith.
Within data structure 1200 are exemplary column headings indicating still other attributes that may be used to facilitate source path isolation of TP. For example, a network component identification attribute, shown as Node ID, may be used to identify particular nodes, such as routers, switches, bridges, or the like, within a network that have been queried by SS. A link attribute, shown as Link, may be used to identify the particular link on which TP was observed. A reply packet attribute, shown as Node Response, may be used to indicate if a queried node has observed TP. A node time attribute may indicate the time, preferably using some common reference, at which a respective node observed TP.
Time is useful for assessing how long TP has been in the network and for performing comparisons with fields such as time-to-live (TTL). The attribute Transformed is used to track variants of TP in the event that it has undergone a transformation. If TP has been transformed, it may be useful to have multiple entries associated with the respective TP.
Processor 1302 may be any type of conventional processing device that interprets and executes instructions. Main memory 1304 may be a random access memory (RAM) or a similar dynamic storage device. Main memory 1304 stores information and instructions to be executed by processor 1302. Main memory 1304 may also be used for storing temporary variables or other intermediate information during execution of instructions by processor 1302. ROM 1306 stores static information and instructions for processor 1302. It will be appreciated that ROM 1306 may be replaced with some other type of static storage device. Storage device 1308, also referred to as data storage device, may include any type of magnetic or optical media and their corresponding interfaces and operational hardware. Storage device 1308 stores information and instructions for use by processor 1302. Bus 1310 includes a set of hardware lines (conductors, optical fibers, or the like) that allow for data transfer among the components of system 1320. Display device 1312 may be a cathode ray tube (CRT), liquid crystal display (LCD) or the like, for displaying information in an operator or machine-readable form. Keyboard 1314 and cursor control 1316 allow the operator to interact with system 1320. Cursor control 1316 may be, for example, a mouse. In an alternative configuration, keyboard 1314 and cursor control 1316 can be replaced with a microphone and voice recognition means to enable an operator or machine to interact with system 1320.
Communication interface 1318 enables system 1320 to communicate with other devices/systems via any communications medium. For example, communication interface 1318 may be a modem, an Ethernet interface to a LAN, an interface to the Internet, a printer interface, etc. Alternatively, communication interface 1318 can be any other interface that enables communication between system 1320 and other devices, systems, or networks. Communication interface 1318 can be used in lieu of keyboard 1314 and cursor control 1316 to facilitate operator or machine remote control of, and communication with, system 1320. As will be described in detail below, system 1320 may provide SS1 operating within AS1 with the ability to perform source path isolation for a given TP. SS1 may receive MP1 from IDS1 and generate QM1 in response to processor 1302 executing sequences of instructions contained in, for example, memory 1304. Such instructions may be read into memory 1304 from another computer-readable medium, such as storage device 1308, or from another device coupled to bus 1310 or coupled via communication interface 1318. Execution of sequences of instructions contained in memory 1304 causes processor 1302 to perform the source path isolation methods described herein.
Network 1430 may facilitate communication between mail clients 1410 and mail server 1420. Typically, network 1430 may include a collection of network devices, such as routers or switches, that transfer data between mail clients 1410 and mail server 1420. In an implementation consistent with the present invention, network 1430 may take the form of a wide area network, a local area network, an intranet, the Internet, a public telephone network, a different type of network, or a combination of networks.
Mail clients 1410 may include personal computers, laptops, personal digital assistants, or other types of wired or wireless devices that are capable of interacting with mail server 1420 to receive e-mails. In another implementation, clients 1410 may include software operating upon one of these devices. Client 1410 may present e-mails to a user via a graphical user interface.
Mail server 1420 may include a computer or another device that is capable of providing e-mail services for mail clients 1410. In another implementation, server 1420 may include software operating upon one of these devices.
Processor 1520 may include any type of conventional processor or microprocessor that interprets and executes instructions. Main memory 1530 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 1520. ROM 1540 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 1520. Storage device 1550 may include a magnetic and/or optical recording medium and its corresponding drive.
Input device 1560 may include one or more conventional mechanisms that permit an operator to input information to server 1420, such as a keyboard, a mouse, a pen, voice recognition and/or biometric mechanisms, etc. Output device 1570 may include one or more conventional mechanisms that output information to the operator, such as a display, a printer, a pair of speakers, etc. Communication interface 1580 may include any transceiver-like mechanism that enables server 1420 to communicate with other devices and/or systems. For example, communication interface 1580 may include mechanisms for communicating with another device or system via a network, such as network 1430.
As will be described in detail below, server 1420, consistent with the present invention, provides e-mail services to clients 1410, while detecting unwanted e-mails and/or preventing unwanted e-mails from reaching clients 1410. Server 1420 may perform these tasks in response to processor 1520 executing sequences of instructions contained in, for example, memory 1530. These instructions may be read into memory 1530 from another computer-readable medium, such as storage device 1550 or a carrier wave, or from another device via communication interface 1580.
Execution of the sequences of instructions contained in memory 1530 may cause processor 1520 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the present invention. Thus, processes performed by server 1420 are not limited to any specific combination of hardware circuitry and software.
SMTP block 1610 may permit mail server 1420 to communicate with other mail servers connected to network 1430 or another network. SMTP is designed to efficiently and reliably transfer e-mail across networks. SMTP defines the interaction between mail servers to facilitate the transfer of e-mail even when the mail servers are implemented on different types of computers or running different operating systems.
POP block 1620 may permit mail clients 1410 to retrieve e-mail from mail server 1420. POP block 1620 may be designed to always receive incoming e-mail. POP block 1620 may then hold e-mail for mail clients 1410 until mail clients 1410 connect to download them.
IMAP block 1630 may provide another mechanism by which mail clients 1410 can retrieve e-mail from mail server 1420. IMAP block 1630 may permit mail clients 1410 to access remote e-mail as if the e-mail was local to mail clients 1410.
Hash processing block 1640 may interact with SMTP block 1610, POP block 1620, and/or IMAP block 1630 to detect and prevent transmission of unwanted e-mail, such as e-mails containing viruses or worms and unsolicited commercial e-mail (spam).
An e-mail representation will likely not be a copy of the entire e-mail, but rather it may include a portion of the e-mail or some unique value representative of the e-mail. For example, a fixed width number may be computed across portions of the e-mail in a manner that allows the entire e-mail to be identified.
To further illustrate the use of representations, a 32-bit hash value, or digest, may be computed across portions of each e-mail. Then, the hash value may be stored in hash memory 1720 or may be used as an index, or address, into hash memory 1720. Using the hash value, or an index derived therefrom, results in efficient use of hash memory 1720 while still allowing the content of each e-mail passing through mail server 1420 to be identified.
Systems and methods consistent with the present invention may use any storage scheme that records information about one or more portions of each e-mail in a space-efficient fashion, that can definitively determine if a portion of an e-mail has not been observed, and that can respond positively (i.e., in a predictable way) when a portion of an e-mail has been observed. Although systems and methods consistent with the present invention can use virtually any technique for deriving representations of portions of e-mails, the remaining discussion will use hash values as exemplary representations of portions of e-mails received by mail server 1420.
In implementations consistent with the principles of the invention, hash processor 1710 may hash one or more portions of a received e-mail to produce a hash value used to facilitate hash-based detection. For example, hash processor 1710 may hash one or more of the main text within the message body, any attachments, and one or more header fields, such as sender-related fields (e.g., “From:,” “Sender:,” “Reply-To:,” “Return-Path:,” and “Error-To:”). Hash processor 1710 may perform one or more hashes on each of the e-mail portions using the same or different hash functions.
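By way of illustration only, the sketch below hashes several portions of a message in this manner, assuming Python's standard email.message.EmailMessage as the input representation. The particular fields selected and the use of MD5 (one of the example hash functions mentioned below) are illustrative choices, not requirements of the disclosure.

```python
import hashlib
from email.message import EmailMessage

SENDER_FIELDS = ("From", "Sender", "Reply-To", "Return-Path")


def email_portion_hashes(msg: EmailMessage) -> dict:
    """Hash sender-related header fields and the main text of a message."""
    hashes = {}
    for field in SENDER_FIELDS:
        value = msg.get(field)
        if value:
            hashes[field] = hashlib.md5(value.encode("utf-8")).hexdigest()
    body = msg.get_body(preferencelist=("plain", "html"))
    if body is not None:
        text = body.get_content()
        hashes["main_text"] = hashlib.md5(text.encode("utf-8")).hexdigest()
    return hashes
```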
As described in more detail below, hash processor 1710 may use the results of the hash operation to recognize duplicate occurrences of e-mails and raise a warning if the duplicate occurrences arrive within a short period of time and raise their level of suspicion above some threshold. It may also be possible to use the hash results for tracing the path of an unwanted e-mail through the network.
Each hash value may be determined by taking an input block of data and processing it to obtain a numerical value that represents the given input data. Suitable hash functions are readily known in the art and will not be discussed in detail herein. Examples of hash functions include the Cyclic Redundancy Check (CRC) and Message Digest 5 (MD5). The resulting hash value, also referred to as a message digest or hash digest, may include a fixed length value. The hash value may serve as a signature for the data over which it was computed.
The hash value essentially acts as a fingerprint identifying the input block of data over which it was computed. Unlike fingerprints, however, there is a chance that two very different pieces of data will hash to the same value, resulting in a hash collision. An acceptable hash function should provide a good distribution of values over a variety of data inputs in order to prevent these collisions. Because collisions occur when different input blocks result in the same hash value, an ambiguity may arise when attempting to associate a result with a particular input.
Hash processor 1710 may store a representation of each e-mail it observes in hash memory 1720. Hash processor 1710 may store the actual hash values as the e-mail representations or it may use other techniques for minimizing storage requirements associated with retaining hash values and other information associated therewith. A technique for minimizing storage requirements may use one or more arrays or Bloom filters.
Rather than storing the actual hash value, which can typically be on the order of 32 bits or more in length, hash processor 1710 may use the hash value as an index for addressing an array within hash memory 1720. In other words, when hash processor 1710 generates a hash value for a portion of an e-mail, the hash value serves as the address location into the array. At the address corresponding to the hash value, a count value may be incremented at the respective storage location, thus indicating that a particular hash value, and hence a particular e-mail portion, has been seen by hash processor 1710. In one implementation, the count value is held in an 8-bit counter that saturates, or “sticks,” at a maximum value of 255. While counter arrays are described by way of example, it will be appreciated by those skilled in the relevant art that other storage techniques may be employed without departing from the spirit of the invention.
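A minimal sketch of such a counter array follows, assuming the hash value is reduced modulo the array size; the CounterArray name, the default size, and the modulo step are illustrative assumptions rather than features of the disclosure.

```python
class CounterArray:
    """Array of 8-bit counters addressed by a hash value; counters stick at 255."""

    def __init__(self, size: int = 1 << 20):
        self.size = size
        self.counts = bytearray(size)      # one 8-bit counter per storage location

    def record(self, hash_value: int) -> int:
        idx = hash_value % self.size
        if self.counts[idx] < 255:         # saturate rather than wrap around
            self.counts[idx] += 1
        return self.counts[idx]

    def count(self, hash_value: int) -> int:
        return self.counts[hash_value % self.size]
```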
Hash memory 1720 may store a suspicion count that is used to determine the overall suspiciousness of an e-mail message. For example, the count value (described above) may be compared to a threshold, and the suspicion count for the e-mail may be incremented if the threshold is exceeded. Hence, there may be a direct relationship between the count value and the suspicion count, and it may be possible for the two values to be the same. The larger the suspicion count, the more important the hit should be considered in determining the overall suspiciousness of the e-mail message. Alternatively, the suspicion count can be combined in a “scoring function” with values from this or other hash blocks in the same message in order to determine whether the message should be considered suspicious.
It is not enough, however, for hash memory 1720 to simply identify that an e-mail contains content that has been seen recently. There are many legitimate sources (e.g., e-mail list servers) that produce multiple copies of the same message, addressed to multiple recipients. Similarly, individual users often e-mail messages to a group of people and, thus, multiple copies might be seen if several recipients happen to receive their mail from the same server. Also, people often forward copies of received messages to friends or co-workers.
In addition, virus/worm authors typically try to minimize the replicated content in each copy of the virus/worm, in order not to be detected by existing virus and worm detection technology that depends on detecting fixed sequences of bytes in a known virus or worm. These mutable viruses/worms are usually known as polymorphic, and the attacker's goal is to minimize the recognizability of the virus or worm by scrambling each copy in a different way. For the virus or worm to remain viable, however, a small part of it can be mutable in only a relatively small number of ways, because some of its code must be immediately executable by the victim's computer, and that limits the mutation and obscurement possibilities for the critical initial code part.
In order to accomplish the proper classification of various types of legitimate and unwanted e-mail messages, multiple hash memories 1720 can be employed, with separate hash memories 1720 being used for specific sub-parts of a standard e-mail message. The outputs of different ones of hash memories 1720 can then be combined in an overall “scoring” or classification function to determine whether the message is undesirable or legitimate, and possibly to estimate the probability that it belongs to a particular class of traffic, such as a virus/worm message, spam, an e-mail list message, or a normal user-to-user message.
For e-mail following the Internet mail standard RFC 822 (and its various extensions), hashing of certain individual e-mail header fields into field-specific hash memories 1720 may be useful. Among the header fields for which this may be helpful are: (1) various sender-related fields, such as “From:”, “Sender:”, “Reply-To:”, “Return-Path:” and “Error-To:”; (2) the “To:” field (often a fixed value for a mailing list, frequently missing or idiosyncratic in spam messages); and (3) the last few “Received:” headers (i.e., the earliest ones, since they are normally added at the top of the message), excluding any obvious timestamp data. It may also be useful to hash a combination of the “From:” field and the e-mail address of the recipient (transferred as part of the SMTP mail-transfer protocol, and not necessarily found in the message itself).
Any or all of hash memories 1720 may be pre-loaded with knowledge of known good or bad traffic. For example, known viruses and spam content (e.g., the infamous “Craig Shergold letter” or many pyramid swindle letters) can be pre-hashed into the relevant hash memories 1720, and/or periodically refreshed in the memory as part of a periodic “cleaning” process described below. Also, known legitimate mailing lists, such as mailing lists from legitimate e-mail list servers, can be added to a “From:” hash memory 1720 that passes traffic without further examination.
Over time, hash memories 1720 may fill up and the possibility of overflowing an existing count value increases. The risk of overflowing a count value may be reduced if the counter arrays are periodically flushed to other storage media, such as a magnetic disk drive, optical media, solid state drive, or the like. Alternatively, the counter arrays may be slowly and incrementally erased. To facilitate this, a time-table may be established for flushing/erasing the counter arrays. If desired, the flushing/erasing cycle can be reduced by computing hash values only for a subset of the e-mails received by mail server 1420. While this approach reduces the flushing/erasing cycle, it increases the possibility that a target e-mail may be missed (i.e., a hash value is not computed over a portion of it).
Non-zero storage locations within hash memories 1720 may be decremented periodically rather than being erased. This may ensure that the “random noise” from normal e-mail traffic would not remain in a counter array indefinitely. Replicated traffic (e.g., e-mails containing a virus/worm that are propagating repeatedly across the network), however, would normally cause the relevant storage locations to stay substantially above the “background noise” level.
One way to decrement the count values in the counter array fairly is to keep a total count, for each hash memory 1720, of every time one of the count values is incremented. After this total count reaches some threshold value (probably in the millions), every time a count value is incremented in hash memory 1720, another count value gets decremented. One way to pick the count value to decrement is to keep a counter, as a decrement pointer, that simply iterates through the storage locations sequentially. Every time a decrement operation is performed, the following may be done: (a) examine the candidate count value to be decremented and, if it is non-zero, decrement it and advance the decrement pointer to the next storage location; and (b) if the candidate count value is zero, then examine each sequentially-following storage location until a non-zero count value is found, decrement that count value, and advance the decrement pointer to the following storage location.
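The sketch below illustrates steps (a) and (b) with a self-contained counter array; the names DecayingCounterArray, total_increments, and decrement_threshold, as well as the default sizes, are assumptions made purely for illustration.

```python
class DecayingCounterArray:
    """8-bit counter array with the cyclic decrement scheme described above."""

    def __init__(self, size: int = 1 << 20, decrement_threshold: int = 1_000_000):
        self.size = size
        self.counts = bytearray(size)           # counters stick at 255
        self.total_increments = 0
        self.decrement_threshold = decrement_threshold
        self.decrement_pointer = 0

    def record(self, hash_value: int) -> None:
        idx = hash_value % self.size
        if self.counts[idx] < 255:
            self.counts[idx] += 1
        self.total_increments += 1
        if self.total_increments > self.decrement_threshold:
            self._decrement_one()               # one decrement per increment

    def _decrement_one(self) -> None:
        # Steps (a)/(b): scan forward from the pointer until a non-zero counter
        # is found, decrement it, and leave the pointer at the next location.
        for _ in range(self.size):
            idx = self.decrement_pointer
            self.decrement_pointer = (self.decrement_pointer + 1) % self.size
            if self.counts[idx] > 0:
                self.counts[idx] -= 1
                return
```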
It may be important to avoid decrementing any counters below zero, while not biasing decrements unfairly. Because it may be assumed that the hash is random, this technique should not favor any particular storage location, since it visits each of them before starting over. This technique may be superior to a timer-based decrement because it keeps a fixed total count population across all of the storage locations, representing the most recent history of traffic, and is not subject to changes in behavior as the volume of traffic varies over time.
A variation of this technique may include randomly selecting a count value to decrement, rather than processing them cyclically. In this variation, if the chosen count value is already zero, then another one could be picked randomly, or the count values in the storage locations following the initially-chosen one could be examined in series, until a non-zero count value is found.
Processing may begin when hash processor 1710 receives an e-mail message and hashes one or more blocks of the main text of the message.
It may be desirable to pre-process the main text to remove attempts to fool pattern-matching mail filters. An example of this is HyperText Markup Language (HTML) e-mail, where spammers often insert random text strings in HTML comments between or within words of the text. Such e-mail may be referred to as “polymorphic spam” because it attempts to make each message appear unique. This method for evading detection might otherwise defeat the hash detection technique, or other string-matching techniques. Thus, removing all HTML comments from the message before hashing it may be desirable. It might also be useful to delete HTML tags from the message, or apply other specialized, but simple, pre-processing techniques to remove content not actually presented to the user. In general, this may be done in parallel with the hashing of the message text, since viruses and worms may be hidden in the non-visible content of the message text.
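As an illustrative sketch only, the following pre-processing step removes HTML comments (and, optionally, tags) before hashing, so that inserted random comment strings cannot make each copy of a message hash differently. The regular expressions shown are a simplistic stand-in for real HTML handling and are assumptions made for this example.

```python
import re

HTML_COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)
HTML_TAG = re.compile(r"<[^>]+>")


def preprocess_html_text(text: str, strip_tags: bool = False) -> str:
    """Remove content not actually presented to the user before hashing."""
    cleaned = HTML_COMMENT.sub("", text)
    if strip_tags:
        cleaned = HTML_TAG.sub("", cleaned)
    return cleaned


# Both variants reduce to the same visible text, so their hash blocks match.
assert preprocess_html_text("Via<!--x9q-->gra") == preprocess_html_text("Via<!--zzz-->gra")
```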
Hash processor 1710 may also hash any attachments, after first attempting to expand them if they appear to be known types of compressed files (e.g., “zip” files) (act 1806). When hashing an attachment, hash processor 1710 may perform one or more conventional hashes covering one or more portions, or all, of the attachment. For example, hash processor 1710 may perform hash functions on fixed or variable sized blocks of the attachment. It may be beneficial for hash processor 1710 to perform multiple hashes on each of the blocks using the same or different hash functions.
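The sketch below illustrates hashing an attachment in blocks after attempting to expand a “zip” file, with two hashes computed per block. The block size, the choice of CRC and MD5 as the two hash functions, and the zipfile-based expansion are assumptions made only for illustration.

```python
import hashlib
import io
import zipfile
import zlib


def attachment_block_hashes(data: bytes, filename: str, block_size: int = 1024):
    """Return (crc32, md5) hash pairs for each block of an attachment."""
    if filename.lower().endswith(".zip"):
        try:
            with zipfile.ZipFile(io.BytesIO(data)) as zf:
                data = b"".join(zf.read(name) for name in zf.namelist())
        except zipfile.BadZipFile:
            pass                                  # hash the raw bytes instead
    hashes = []
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        hashes.append((zlib.crc32(block) & 0xFFFFFFFF,
                       hashlib.md5(block).hexdigest()))
    return hashes
```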
Hash processor 1710 may compare the main text and attachment hashes with known viruses, worms, or spam content in a hash memory 1720 that is pre-loaded with information from known viruses, worms, and spam content (acts 1808 and 1810). If there are any hits in this hash memory 1720, there is a probability that the e-mail message contains a virus or worm or is spam. A known polymorphic virus may have only a small number of hashes that match in this hash memory 1720, out of the total number of hash blocks in the message. A non-polymorphic virus may have a very high fraction of the hash blocks hit in hash memory 1720. For this reason, storage locations within hash memory 1720 that contain entries from polymorphic viruses or worms may be given more weight during the pre-loading process, such as by giving them a high initial suspicion count value.
A high fraction of hits in this hash memory 1720 may cause the message to be marked as a probable known virus/worm or spam. In this case, the e-mail message can be sidetracked for remedial action, as described below.
A message with a significant “score” from polymorphic virus/worm hash value hits may or may not be a virus/worm instance, and may be sidetracked for further investigation, or marked as suspicious before forwarding to the recipient. An additional check may also be made to determine the level of suspicion.
For example, hash processor 1710 may hash a concatenation of the From and To header fields of the e-mail message (act 1812) and determine the suspicion count for that concatenation in hash memory 1720. The message may be considered suspicious when, for example, the suspicion count for the main text or an attachment is significantly higher than the From/To suspicion count (act 1816).
When this occurs, hash processor 1710 may take remedial action (act 1818). The remedial action might take different forms, which may be programmable or determined by an operator of mail server 1420. For example, hash processor 1710 may discard the e-mail. This is not recommended for anything but virtually certain virus/worm/spam identification, such as a perfect match to a known virus.
As an alternate technique, hash processor 1710 may mark the e-mail with a warning in the message body, in an additional header, or other user-visible annotation, and allow the user to deal with it when it is downloaded. For data that appears to be from an unknown mailing list, a variant of this option is to request the user to send back a reply message to the server, classifying the suspect message as either spam or a mailing list. In the latter case, the mailing list source address can be added to the “known legitimate mailing lists” hash memory 1720.
As another technique, hash processor 1710 may subject the e-mail to more sophisticated (and possibly more resource-consuming) detection algorithms to make a more certain determination. This is recommended for potential unknown viruses/worms or possible detection of a polymorphic virus/worm.
As yet another technique, hash processor 1710 may hold the e-mail message in a special area and create a special e-mail message to notify the user of the held message (probably including From and Subject fields). Hash processor 1710 may also give instructions on how to retrieve the message.
As a further technique, hash processor 1710 may mark the e-mail message with its suspicion score result, but leave it queued for the user's retrieval. If the user's quota would overflow when a new message arrives, the score of the incoming message and the highest score of the queued messages are compared. If the highest queued message has a score above a settable threshold, and the new message's score is lower than the threshold, the queued message with the highest score may be deleted from the queue to make room for the new message. Otherwise, if the new message has a score above the threshold, it may be discarded or “bounced” (e.g., the sending e-mail server is told to hold the message and retry it later). Alternatively, if it is desired to never bounce incoming messages, mail server 1420 may accept the incoming message into the user's queue and repeatedly delete messages with the highest suspicion score from the queue until the total is below the user's quota again.
As another technique, hash processor 1710 may apply hash-based functions as the e-mail message starts arriving from the sending server and determine the message's suspicion score incrementally as the message is read in. If the message has a high-enough suspicion score (above a threshold) during the early part of the message, mail server 1420 may reject the message, optionally with either a “retry later” or a “permanent refusal” result to the sending server (which one is used may be determined by settable thresholds applied to the total suspicion score, and possibly other factors, such as server load). This results in the unwanted e-mail using up less network bandwidth and receiving server resources, and penalizes servers sending unwanted mail, relative to those that do not.
If the suspicion count for the main text or any attachment is not significantly higher than the From/To suspicion count (act 1816), hash processor 1710 may determine whether the main text or any attachment has significant replicated content (non-zero or high suspicion count values for many hash blocks in the text/attachment content in all storage locations of hash memories 1720) (act 1820).
If the message text is substantially replicated (e.g., greater than 90%), hash processor 1710 may check one or more portions of the e-mail message against known legitimate mailing lists within hash memory 1720 (act 1822).
If there is a match with a legitimate mailing list (act 1824), then the message is probably a legitimate mailing list duplicate and may be passed with no further examination. This assumes that the mailing list server employs some kind of filtering to exclude unwanted e-mail (e.g., refusing to forward e-mail that does not originate with a known list recipient or refusing e-mail with attachments).
If there is no match with any legitimate mailing lists within hash memory 1720, hash processor 1710 may hash the sender-related fields (e.g., From, Sender, Reply-To) (act 1826). Hash processor 1710 may then determine the suspicion count for the sender-related hashes in hash memories 1720 (act 1828).
Hash processor 1710 may determine whether the suspicion counts for the sender-related hashes are similar to the suspicion count(s) for the main text hash(es) (act 1830).
As an additional check, hash processor 1710 may hash the concatenation of the sender-related field with the highest suspicion count value and the e-mail recipient's address (act 1832). Hash processor 1710 may then check the suspicion count for the concatenation in a hash memory 1720 used just for this check (act 1834). If it matches with a significant suspicion count value (act 1836), this suggests that the sender regularly sends e-mail to that recipient, which supports classifying the message as a legitimate mailing list message rather than spam.
If the message text or attachments are mostly replicated (e.g., greater than 90% of the hash blocks), but with mostly low suspicion count values in hash memory 1720 (act 1838), then the message is probably a case of a small-scale replication of a single message to multiple recipients. In this case, the e-mail message may then be passed without further examination.
If the message text or attachments contain some significant degree of content replication (say, greater than 50% of the hash blocks) and at least some of the hash values have high suspicion count values in hash memory 1720 (act 1840), then the message is fairly likely to be a virus/worm or spam. A virus or worm should be considered more likely if the high-count matches are in an attachment. If the highly-replicated content is in the message text, then the message is more likely to be spam, though it is possible that e-mail text employing a scripting language (e.g., JavaScript) might also contain a virus.
If the replication is in the message text, and the suspicion count is substantially higher for the message text than for the From field, the message is likely to be spam (because spammers generally vary the From field to evade simpler spam filters). A similar check can be made for the concatenation of the From and To header fields, except that in this case it is most suspicious if the From/To hash misses (finds a zero suspicion count), indicating that the sender does not ordinarily send e-mail to that recipient, making the message unlikely to be a mailing list message and very likely to have come from a spammer (spammers normally employ random or fictitious From addresses).
In the above cases, hash processor 1710 may take remedial action (act 1842). The particular type of action taken by hash processor 1710 may vary as described above.
Systems and methods consistent with the present invention provide mechanisms to detect and/or prevent transmission of malicious packets, such as polymorphic viruses and worms.
Systems and methods consistent with the principles of the invention detect polymorphic viruses and worms with some finite probability, which may depend on the size of the decoder bootstrap code segment and the techniques used to obscure it (such as code rearrangement and the insertion of gibberish bytes). Also, the number of virus and worm examples that must be seen before detection becomes probable depends on the threshold settings, the degree to which different copies of the virus/worm resemble each other, the minimum hash block size used, and the rate at which copies arrive. Essentially, what happens is that short code sequences of the virus/worm decoder bootstrap will occasionally be in a single hash block, without any of the obscuring “cover” of gibberish bytes.
If the bootstrap is only obscured by inserted no-ops or irrelevant code sequences, packet detection logic 200 may eventually see samples of all variants of these in various lengths, and also in conjunction with the active code, and will actually recognize the virus/worm more easily, though usually after seeing many samples.
In either case, some set of byte sequences commonly found in the virus/worm, and found much less commonly in other network traffic, may be detected often enough that these sequences will rise above the “noise” level of the data stored in hash memory 220 and, thus, be detectable. Not every packet containing the virus/worm decoder bootstrap, however, will be detected this way, since it may be that none of the hash blocks in the particular packet isolated the fixed, active code elements. Thus, systems and methods consistent with the principles of the invention may be used to provide a warning that a virus/worm is potentially propagating and capture suspicious packets for human analysis.
Non-polymorphic viruses and worms may also be detected somewhat more quickly by these techniques because block alignment is not the same in every packet and partial matches will be more common early in the appearance of the virus/worm in the network, at least for longer packets. The certainty of detection will be correspondingly lower. So, it may take somewhat more examples of the virus/worm to reach the same degree of certainty of detection of the virus/worm, as with the fixed-length hash blocks, due to the randomness introduced into the hash-sampling process.
The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention.
For example, systems and methods have been described with regard to network-level devices. In other implementations, the systems and methods described herein may be used with a stand-alone device at the input or output of a network link or at other protocol levels, such as in mail relay hosts (e.g., Simple Mail Transfer Protocol (SMTP) servers).
In this regard, the variable-sized block hashing technique described previously can be used in conjunction with traditional host-based virus scanning software. For example, training data may be obtained from a network application, and the hash memory contents may then be transmitted to one or more hosts to aid in looking for the suspected virus or worm on the host. In other words, the host may receive hash values associated with the suspected virus or worm from the network application. The host may hash one or more variable-sized portions of the files stored in its memory to generate hash values associated with these files. The host may compare the generated hash values to the hash values associated with the suspected virus or worm and identify one or more files that may contain the suspected virus or worm when the hash values match. The technique may be used as a prioritization stage to determine which files most likely contain a virus or worm. The virus scanning software could then use other, more expensive, techniques to scan these files.
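A hedged sketch of this prioritization stage appears below: the host hashes blocks of its local files and flags any file sharing a block hash with the suspected virus or worm for deeper scanning. The directory traversal and CRC-based hash are assumptions made for illustration, and a fixed block size is used here for simplicity even though the text describes variable-sized blocks; the actual scanning software and hash memory format are not specified here.

```python
import os
import zlib


def block_hashes(path: str, block_size: int = 1024) -> set:
    """Hash a file in blocks and return the set of 32-bit block hashes."""
    hashes = set()
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.add(zlib.crc32(block) & 0xFFFFFFFF)
    return hashes


def prioritize_files(root_dir: str, suspect_hashes: set) -> list:
    """Return files sharing at least one block hash with the suspected worm."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root_dir):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if block_hashes(path) & suspect_hashes:
                    flagged.append(path)
            except OSError:
                continue                    # skip unreadable files
    return flagged
```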
The variable-sized block hashing technique may also be used in conjunction with network-based applications, where suspicious messages are delivered to a reassembly process and the resulting messages scanned by a more conventional (e.g., execution simulating) virus detector.
While a series of acts has been described with regard to the flowcharts above, the order of the acts may differ in other implementations consistent with the present invention.
Further, certain portions of the invention have been described as “logic” that performs one or more functions. This logic may include hardware, such as an ASIC or an FPGA, software, or a combination of hardware and software.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the claims and their equivalents.
This application is a continuation of U.S. patent application Ser. No. 12/249,823, filed Oct. 10, 2008, which, in turn, is a continuation of U.S. patent application Ser. No. 10/654,771, filed Sep. 4, 2003, which, in turn, claims priority under 35 U.S.C. §119 based on U.S. Provisional Application No. 60/407,975, filed Sep. 5, 2002, all of which are incorporated herein by reference. U.S. patent application Ser. No. 10/654,771 is also a continuation-in-part of U.S. patent application Ser. No. 10/251,403, filed Sep. 20, 2002, now U.S. Pat. No. 7,328,349, which claims priority under 35 U.S.C. §119 based on U.S. Provisional Application No. 60/341,462, filed Dec. 14, 2001, both of which are incorporated herein by reference. U.S. patent application Ser. No. 10/654,771 is also a continuation-in-part of U.S. patent application Ser. No. 09/881,145, and U.S. patent application Ser. No. 09/881,074, now U.S. Pat. No. 6,981,158, both of which were filed on Jun. 14, 2001, and both of which claim priority under 35 U.S.C. §119 based on U.S. Provisional Application No. 60/212,425, filed Jun. 19, 2000, all of which are incorporated herein by reference.
5796942 | Esbensen | Aug 1998 | A |
5796948 | Cohen | Aug 1998 | A |
5798706 | Kraemer et al. | Aug 1998 | A |
5799083 | Brothers et al. | Aug 1998 | A |
5801700 | Ferguson | Sep 1998 | A |
5802178 | Holden et al. | Sep 1998 | A |
5802277 | Cowlard | Sep 1998 | A |
5802371 | Meier | Sep 1998 | A |
5805719 | Pare, Jr. et al. | Sep 1998 | A |
5805801 | Holloway et al. | Sep 1998 | A |
5812398 | Nielsen | Sep 1998 | A |
5812763 | Teng | Sep 1998 | A |
5812776 | Gifford | Sep 1998 | A |
5812844 | Jones et al. | Sep 1998 | A |
5815573 | Johnson et al. | Sep 1998 | A |
5815657 | Williams et al. | Sep 1998 | A |
5821398 | Speirs et al. | Oct 1998 | A |
5822526 | Waskiewicz | Oct 1998 | A |
5822527 | Post | Oct 1998 | A |
5826013 | Nachenberg | Oct 1998 | A |
5826014 | Coley et al. | Oct 1998 | A |
5826022 | Nielsen | Oct 1998 | A |
5826029 | Gore, Jr. et al. | Oct 1998 | A |
5828832 | Holden et al. | Oct 1998 | A |
5828893 | Wied et al. | Oct 1998 | A |
5832208 | Chen et al. | Nov 1998 | A |
5835087 | Herz et al. | Nov 1998 | A |
5835090 | Clark et al. | Nov 1998 | A |
5835600 | Rivest | Nov 1998 | A |
5835758 | Nochur et al. | Nov 1998 | A |
5842216 | Anderson et al. | Nov 1998 | A |
5845084 | Cordell et al. | Dec 1998 | A |
5850442 | Muftic | Dec 1998 | A |
5852665 | Gressel et al. | Dec 1998 | A |
5855020 | Kirsch | Dec 1998 | A |
5857022 | Sudia | Jan 1999 | A |
5859966 | Hayman et al. | Jan 1999 | A |
5860068 | Cook | Jan 1999 | A |
5862325 | Reed et al. | Jan 1999 | A |
5864667 | Barkan | Jan 1999 | A |
5864683 | Boebert et al. | Jan 1999 | A |
5864852 | Luotonen | Jan 1999 | A |
5872844 | Yacobi | Feb 1999 | A |
5872849 | Sudia | Feb 1999 | A |
5872931 | Chivaluri | Feb 1999 | A |
5878230 | Weber et al. | Mar 1999 | A |
5884033 | Duvall et al. | Mar 1999 | A |
5889943 | Ji et al. | Mar 1999 | A |
5892825 | Mages et al. | Apr 1999 | A |
5892903 | Klaus | Apr 1999 | A |
5892904 | Atkinson et al. | Apr 1999 | A |
5893114 | Hashimoto et al. | Apr 1999 | A |
5896499 | McKelvey | Apr 1999 | A |
5898830 | Wesinger, Jr. et al. | Apr 1999 | A |
5898836 | Freivald et al. | Apr 1999 | A |
5901227 | Perlman | May 1999 | A |
5903651 | Kocher | May 1999 | A |
5903723 | Beck et al. | May 1999 | A |
5903882 | Asay et al. | May 1999 | A |
5905859 | Holloway et al. | May 1999 | A |
5907618 | Gennaro et al. | May 1999 | A |
5907620 | Klemba et al. | May 1999 | A |
5911776 | Guck | Jun 1999 | A |
5912972 | Barton | Jun 1999 | A |
5919257 | Trostle | Jul 1999 | A |
5919258 | Kayashima et al. | Jul 1999 | A |
5920630 | Wertheimer et al. | Jul 1999 | A |
5922074 | Richard et al. | Jul 1999 | A |
5923846 | Gage et al. | Jul 1999 | A |
5923885 | Johnson et al. | Jul 1999 | A |
5928329 | Clark et al. | Jul 1999 | A |
5930479 | Hall | Jul 1999 | A |
5933478 | Ozaki et al. | Aug 1999 | A |
5933498 | Schneck et al. | Aug 1999 | A |
5933647 | Aronberg et al. | Aug 1999 | A |
5937066 | Gennaro et al. | Aug 1999 | A |
5937164 | Mages et al. | Aug 1999 | A |
5940591 | Boyle et al. | Aug 1999 | A |
5941998 | Tillson | Aug 1999 | A |
5946679 | Ahuja et al. | Aug 1999 | A |
5948062 | Tzelnic et al. | Sep 1999 | A |
5948104 | Gluck et al. | Sep 1999 | A |
5950195 | Stockwell et al. | Sep 1999 | A |
5951644 | Creemer | Sep 1999 | A |
5951698 | Chen et al. | Sep 1999 | A |
5956403 | Lipner et al. | Sep 1999 | A |
5956481 | Walsh et al. | Sep 1999 | A |
5958005 | Thorne et al. | Sep 1999 | A |
5958010 | Agarwal et al. | Sep 1999 | A |
5959976 | Kuo | Sep 1999 | A |
5960170 | Chen et al. | Sep 1999 | A |
5963915 | Kirsch | Oct 1999 | A |
5964889 | Nachenberg | Oct 1999 | A |
5970248 | Meier | Oct 1999 | A |
5974141 | Saito | Oct 1999 | A |
5978799 | Hirsch | Nov 1999 | A |
5983012 | Bianchi et al. | Nov 1999 | A |
5983228 | Kobayashi et al. | Nov 1999 | A |
5987606 | Cirasole et al. | Nov 1999 | A |
5987609 | Hasebe | Nov 1999 | A |
5991406 | Lipner et al. | Nov 1999 | A |
5991807 | Schmidt et al. | Nov 1999 | A |
5991879 | Still | Nov 1999 | A |
5991881 | Conklin et al. | Nov 1999 | A |
5996011 | Humes | Nov 1999 | A |
5996077 | Williams | Nov 1999 | A |
5999723 | Nachenberg | Dec 1999 | A |
5999932 | Paul | Dec 1999 | A |
5999967 | Sundsted | Dec 1999 | A |
6000041 | Baker et al. | Dec 1999 | A |
6003027 | Prager | Dec 1999 | A |
6006329 | Chi | Dec 1999 | A |
6009103 | Woundy | Dec 1999 | A |
6009274 | Fletcher et al. | Dec 1999 | A |
6009462 | Birrell et al. | Dec 1999 | A |
6012144 | Pickett | Jan 2000 | A |
6014651 | Crawford | Jan 2000 | A |
6021510 | Nachenberg | Feb 2000 | A |
6023723 | McCormick et al. | Feb 2000 | A |
6026414 | Anglin | Feb 2000 | A |
6029256 | Kouznetsov | Feb 2000 | A |
6035423 | Hodges et al. | Mar 2000 | A |
6038233 | Hamamoto et al. | Mar 2000 | A |
6049789 | Frison et al. | Apr 2000 | A |
6052531 | Waldin, Jr. et al. | Apr 2000 | A |
6052709 | Paul | Apr 2000 | A |
6052788 | Wesinger, Jr. et al. | Apr 2000 | A |
6055519 | Kennedy et al. | Apr 2000 | A |
6058381 | Nelson | May 2000 | A |
6058482 | Liu | May 2000 | A |
6061448 | Smith et al. | May 2000 | A |
6061722 | Lipa et al. | May 2000 | A |
6067410 | Nachenberg | May 2000 | A |
6070243 | See et al. | May 2000 | A |
6072942 | Stockwell et al. | Jun 2000 | A |
6073140 | Morgan et al. | Jun 2000 | A |
6075863 | Krishnan et al. | Jun 2000 | A |
6078929 | Rao | Jun 2000 | A |
6085320 | Kaliski, Jr. | Jul 2000 | A |
6088803 | Tso et al. | Jul 2000 | A |
6088804 | Hill et al. | Jul 2000 | A |
6092067 | Girling et al. | Jul 2000 | A |
6092102 | Wagner | Jul 2000 | A |
6092114 | Shaffer et al. | Jul 2000 | A |
6092191 | Shimbo et al. | Jul 2000 | A |
6092194 | Touboul | Jul 2000 | A |
6092201 | Turnbull et al. | Jul 2000 | A |
6094277 | Toyoda | Jul 2000 | A |
6094731 | Waldin et al. | Jul 2000 | A |
6097811 | Micali | Aug 2000 | A |
6104500 | Alam et al. | Aug 2000 | A |
6108683 | Kamada et al. | Aug 2000 | A |
6108688 | Nielsen | Aug 2000 | A |
6108691 | Lee et al. | Aug 2000 | A |
6108786 | Knowlson | Aug 2000 | A |
6112181 | Shear et al. | Aug 2000 | A |
6118856 | Paarsmarkt et al. | Sep 2000 | A |
6119137 | Smith et al. | Sep 2000 | A |
6119142 | Kosaka | Sep 2000 | A |
6119157 | Traversat et al. | Sep 2000 | A |
6119165 | Li et al. | Sep 2000 | A |
6119230 | Carter | Sep 2000 | A |
6119231 | Foss et al. | Sep 2000 | A |
6119236 | Shipley | Sep 2000 | A |
6122661 | Stedman et al. | Sep 2000 | A |
6123737 | Sadowsky | Sep 2000 | A |
6134550 | Van Oorschot et al. | Oct 2000 | A |
6134551 | Aucsmith | Oct 2000 | A |
6138254 | Voshell | Oct 2000 | A |
6141695 | Sekiguchi et al. | Oct 2000 | A |
6141778 | Kane et al. | Oct 2000 | A |
6144744 | Smith, Sr. et al. | Nov 2000 | A |
6145083 | Shaffer et al. | Nov 2000 | A |
6151643 | Cheng et al. | Nov 2000 | A |
6151675 | Smith | Nov 2000 | A |
6154769 | Cherkasova et al. | Nov 2000 | A |
6154844 | Touboul et al. | Nov 2000 | A |
6154879 | Pare et al. | Nov 2000 | A |
6161130 | Horvitz et al. | Dec 2000 | A |
6161137 | Ogdon et al. | Dec 2000 | A |
6167407 | Nachenberg et al. | Dec 2000 | A |
6167438 | Yates et al. | Dec 2000 | A |
6169969 | Cohen | Jan 2001 | B1 |
6178242 | Tsuria | Jan 2001 | B1 |
6178509 | Nardone et al. | Jan 2001 | B1 |
6182142 | Win et al. | Jan 2001 | B1 |
6182226 | Reid et al. | Jan 2001 | B1 |
6185678 | Arbaugh et al. | Feb 2001 | B1 |
6185682 | Tang | Feb 2001 | B1 |
6185689 | Todd, Sr. et al. | Feb 2001 | B1 |
6192360 | Dumais et al. | Feb 2001 | B1 |
6192407 | Smith et al. | Feb 2001 | B1 |
6199102 | Cobb | Mar 2001 | B1 |
6202157 | Brownlie et al. | Mar 2001 | B1 |
6215763 | Doshi et al. | Apr 2001 | B1 |
6216265 | Roop et al. | Apr 2001 | B1 |
6219706 | Fan et al. | Apr 2001 | B1 |
6219714 | Inhwan et al. | Apr 2001 | B1 |
6223094 | Muehleck et al. | Apr 2001 | B1 |
6223172 | Hunter et al. | Apr 2001 | B1 |
6223213 | Cleron et al. | Apr 2001 | B1 |
6226666 | Chang et al. | May 2001 | B1 |
6230190 | Edmonds et al. | May 2001 | B1 |
6230194 | Frailong et al. | May 2001 | B1 |
6230266 | Perlman et al. | May 2001 | B1 |
6233577 | Ramasubramani et al. | May 2001 | B1 |
6240401 | Oren et al. | May 2001 | B1 |
6243815 | Antur et al. | Jun 2001 | B1 |
6249575 | Heilmann et al. | Jun 2001 | B1 |
6249585 | McGrew et al. | Jun 2001 | B1 |
6249807 | Shaw et al. | Jun 2001 | B1 |
6253337 | Maloney et al. | Jun 2001 | B1 |
6260043 | Puri et al. | Jul 2001 | B1 |
6260142 | Thakkar et al. | Jul 2001 | B1 |
6266337 | Marco | Jul 2001 | B1 |
6266668 | Vanderveldt et al. | Jul 2001 | B1 |
6266692 | Greenstein | Jul 2001 | B1 |
6266700 | Baker et al. | Jul 2001 | B1 |
6266774 | Sampath et al. | Jul 2001 | B1 |
6269380 | Terry et al. | Jul 2001 | B1 |
6269447 | Maloney et al. | Jul 2001 | B1 |
6269456 | Hodges et al. | Jul 2001 | B1 |
6272532 | Feinleib | Aug 2001 | B1 |
6272632 | Carman et al. | Aug 2001 | B1 |
6275937 | Hailpern et al. | Aug 2001 | B1 |
6275942 | Bernhard et al. | Aug 2001 | B1 |
6275977 | Nagai et al. | Aug 2001 | B1 |
6279113 | Vaidya | Aug 2001 | B1 |
6279133 | Vafai et al. | Aug 2001 | B1 |
6282565 | Shaw et al. | Aug 2001 | B1 |
6285991 | Powar | Sep 2001 | B1 |
6289214 | Backstrom | Sep 2001 | B1 |
6292833 | Liao et al. | Sep 2001 | B1 |
6298445 | Shostack et al. | Oct 2001 | B1 |
6301668 | Gleichauf et al. | Oct 2001 | B1 |
6301699 | Hollander et al. | Oct 2001 | B1 |
6304898 | Shiigi | Oct 2001 | B1 |
6304904 | Sathyanarayan et al. | Oct 2001 | B1 |
6304973 | Williams | Oct 2001 | B1 |
6311207 | Mighdoll et al. | Oct 2001 | B1 |
6311273 | Helbig et al. | Oct 2001 | B1 |
6314190 | Zimmermann | Nov 2001 | B1 |
6317829 | Van Oorschot | Nov 2001 | B1 |
6320948 | Heilmann et al. | Nov 2001 | B1 |
6321267 | Donaldson | Nov 2001 | B1 |
6324569 | Ogilvie et al. | Nov 2001 | B1 |
6324647 | Bowman-Amuah | Nov 2001 | B1 |
6324656 | Gleichauf et al. | Nov 2001 | B1 |
6327579 | Crawford | Dec 2001 | B1 |
6327594 | Van Huben et al. | Dec 2001 | B1 |
6327620 | Tams et al. | Dec 2001 | B1 |
6327652 | England et al. | Dec 2001 | B1 |
6330551 | Burchetta et al. | Dec 2001 | B1 |
6330589 | Kennedy | Dec 2001 | B1 |
6330670 | England et al. | Dec 2001 | B1 |
6332163 | Bowman-Amuah | Dec 2001 | B1 |
6338141 | Wells | Jan 2002 | B1 |
6341369 | Degenaro et al. | Jan 2002 | B1 |
6347374 | Drake et al. | Feb 2002 | B1 |
6347375 | Reinert et al. | Feb 2002 | B1 |
6353886 | Howard et al. | Mar 2002 | B1 |
6356859 | Talbot et al. | Mar 2002 | B1 |
6356935 | Gibbs | Mar 2002 | B1 |
6357008 | Nachenberg | Mar 2002 | B1 |
6362836 | Shaw et al. | Mar 2002 | B1 |
6363489 | Comay et al. | Mar 2002 | B1 |
6367009 | Davis et al. | Apr 2002 | B1 |
6367012 | Atkinson et al. | Apr 2002 | B1 |
6370648 | Diep | Apr 2002 | B1 |
6373950 | Rowney | Apr 2002 | B1 |
6381694 | Yen | Apr 2002 | B1 |
6385596 | Wiser et al. | May 2002 | B1 |
6385655 | Smith et al. | May 2002 | B1 |
6389419 | Wong et al. | May 2002 | B1 |
6393465 | Leeds | May 2002 | B2 |
6393568 | Ranger et al. | May 2002 | B1 |
6397259 | Lincke et al. | May 2002 | B1 |
6397335 | Franczek et al. | May 2002 | B1 |
6400804 | Bilder | Jun 2002 | B1 |
6401210 | Templeton | Jun 2002 | B1 |
6405318 | Rowland | Jun 2002 | B1 |
6411716 | Brickell | Jun 2002 | B1 |
6424650 | Yang et al. | Jul 2002 | B1 |
6430184 | Robins et al. | Aug 2002 | B1 |
6430688 | Kohl et al. | Aug 2002 | B1 |
6434536 | Geiger | Aug 2002 | B1 |
6438549 | Aldred et al. | Aug 2002 | B1 |
6438576 | Huang et al. | Aug 2002 | B1 |
6438612 | Ylonen et al. | Aug 2002 | B1 |
6442588 | Clark et al. | Aug 2002 | B1 |
6442686 | McArdle et al. | Aug 2002 | B1 |
6442688 | Moses et al. | Aug 2002 | B1 |
6442689 | Kocher | Aug 2002 | B1 |
6446109 | Gupta | Sep 2002 | B2 |
6449367 | Van Wie et al. | Sep 2002 | B2 |
6449640 | Haverstock et al. | Sep 2002 | B1 |
6452613 | Lefebvre et al. | Sep 2002 | B1 |
6453345 | Trcka et al. | Sep 2002 | B2 |
6453352 | Wagner et al. | Sep 2002 | B1 |
6453419 | Flint et al. | Sep 2002 | B1 |
6460050 | Pace et al. | Oct 2002 | B1 |
6460141 | Olden | Oct 2002 | B1 |
6469969 | Carson et al. | Oct 2002 | B2 |
6470086 | Smith | Oct 2002 | B1 |
6477651 | Teal | Nov 2002 | B1 |
6484203 | Porras et al. | Nov 2002 | B1 |
6487599 | Smith et al. | Nov 2002 | B1 |
6487658 | Micali | Nov 2002 | B1 |
6487666 | Shanklin et al. | Nov 2002 | B1 |
6496974 | Sliger et al. | Dec 2002 | B1 |
6496979 | Chen et al. | Dec 2002 | B1 |
6499107 | Gleichauf et al. | Dec 2002 | B1 |
6502191 | Smith et al. | Dec 2002 | B1 |
6507851 | Fujiwara et al. | Jan 2003 | B1 |
6510431 | Eichstaedt et al. | Jan 2003 | B1 |
6510464 | Grantges, Jr. et al. | Jan 2003 | B1 |
6510466 | Cox et al. | Jan 2003 | B1 |
6516316 | Ramasubramani et al. | Feb 2003 | B1 |
6516411 | Smith | Feb 2003 | B2 |
6519264 | Carr et al. | Feb 2003 | B1 |
6519703 | Joyce | Feb 2003 | B1 |
6526171 | Furukawa | Feb 2003 | B1 |
6529498 | Cheng | Mar 2003 | B1 |
6539430 | Humes | Mar 2003 | B1 |
6546416 | Kirsch | Apr 2003 | B1 |
6546493 | Magdych et al. | Apr 2003 | B1 |
6550012 | Villa et al. | Apr 2003 | B1 |
6560632 | Chess et al. | May 2003 | B1 |
6574611 | Matsuyama et al. | Jun 2003 | B1 |
6574737 | Kingsford et al. | Jun 2003 | B1 |
6577920 | Hypponen et al. | Jun 2003 | B1 |
6578025 | Pollack et al. | Jun 2003 | B1 |
6578147 | Shanklin et al. | Jun 2003 | B1 |
6584488 | Brenner et al. | Jun 2003 | B1 |
6584564 | Olkin et al. | Jun 2003 | B2 |
6587949 | Steinberg | Jul 2003 | B1 |
6606708 | Devine et al. | Aug 2003 | B1 |
6609196 | Dickinson, III et al. | Aug 2003 | B1 |
6609205 | Bernhard et al. | Aug 2003 | B1 |
6611869 | Eschelbeck et al. | Aug 2003 | B1 |
6611925 | Spear | Aug 2003 | B1 |
6615242 | Riemers | Sep 2003 | B1 |
6622150 | Kouznetsov et al. | Sep 2003 | B1 |
6647400 | Moran | Nov 2003 | B1 |
6650890 | Irlam et al. | Nov 2003 | B1 |
6654787 | Aronson et al. | Nov 2003 | B1 |
6658568 | Ginter et al. | Dec 2003 | B1 |
6662230 | Eichstaedt et al. | Dec 2003 | B1 |
6668269 | Kamada et al. | Dec 2003 | B1 |
6675153 | Cook et al. | Jan 2004 | B1 |
6675209 | Britt | Jan 2004 | B1 |
6678270 | Garfinkel | Jan 2004 | B1 |
6681331 | Munson et al. | Jan 2004 | B1 |
6684335 | Epstein, III et al. | Jan 2004 | B1 |
6687687 | Smadja | Feb 2004 | B1 |
6687732 | Bector et al. | Feb 2004 | B1 |
6691156 | Drummond et al. | Feb 2004 | B1 |
6694023 | Kim | Feb 2004 | B1 |
6697950 | Ko | Feb 2004 | B1 |
6701440 | Kim et al. | Mar 2004 | B1 |
6704874 | Porras et al. | Mar 2004 | B1 |
6707915 | Jobst et al. | Mar 2004 | B1 |
6711127 | Gorman et al. | Mar 2004 | B1 |
6711679 | Guski et al. | Mar 2004 | B1 |
6715082 | Chang et al. | Mar 2004 | B1 |
6721721 | Bates et al. | Apr 2004 | B1 |
6725223 | Abdo et al. | Apr 2004 | B2 |
6725377 | Kouznetsov | Apr 2004 | B1 |
6728886 | Ji et al. | Apr 2004 | B1 |
6731756 | Pizano et al. | May 2004 | B1 |
6732101 | Cook | May 2004 | B1 |
6732149 | Kephart | May 2004 | B1 |
6732157 | Gordon et al. | May 2004 | B1 |
6735700 | Flint et al. | May 2004 | B1 |
6735703 | Kilpatrick et al. | May 2004 | B1 |
6738462 | Brunson | May 2004 | B1 |
6738814 | Cox et al. | May 2004 | B1 |
6738932 | Price | May 2004 | B1 |
6741595 | Maher, III et al. | May 2004 | B2 |
6742015 | Bowman-Amuah | May 2004 | B1 |
6742124 | Kilpatrick et al. | May 2004 | B1 |
6742128 | Joiner | May 2004 | B1 |
6745192 | Libenzi | Jun 2004 | B1 |
6748531 | Epstein | Jun 2004 | B1 |
6754705 | Joiner et al. | Jun 2004 | B2 |
6757830 | Tarbotton et al. | Jun 2004 | B1 |
6760765 | Asai et al. | Jul 2004 | B1 |
6760845 | Cafarelli et al. | Jul 2004 | B1 |
6766450 | Micali | Jul 2004 | B2 |
6768991 | Hearnden | Jul 2004 | B2 |
6769016 | Rothwell et al. | Jul 2004 | B2 |
6772334 | Glawitsch | Aug 2004 | B1 |
6772346 | Chess et al. | Aug 2004 | B1 |
6775657 | Baker | Aug 2004 | B1 |
6775704 | Watson et al. | Aug 2004 | B1 |
6779033 | Watson et al. | Aug 2004 | B1 |
6782503 | Dawson | Aug 2004 | B1 |
6785728 | Schneider et al. | Aug 2004 | B1 |
6785732 | Bates et al. | Aug 2004 | B1 |
6785818 | Sobel et al. | Aug 2004 | B1 |
6789202 | Ko et al. | Sep 2004 | B1 |
6792546 | Shanklin et al. | Sep 2004 | B1 |
6799197 | Shetty et al. | Sep 2004 | B1 |
6802002 | Corella | Oct 2004 | B1 |
6804237 | Luo et al. | Oct 2004 | B1 |
6804778 | Levi et al. | Oct 2004 | B1 |
6804783 | Wesinger, Jr. et al. | Oct 2004 | B1 |
6826698 | Minkin et al. | Nov 2004 | B1 |
6842860 | Branstad et al. | Jan 2005 | B1 |
6842861 | Cox et al. | Jan 2005 | B1 |
6845449 | Carman et al. | Jan 2005 | B1 |
6847888 | Fox et al. | Jan 2005 | B2 |
6851057 | Nachenberg | Feb 2005 | B1 |
6859793 | Lambiase | Feb 2005 | B1 |
6862581 | Lambiase | Mar 2005 | B1 |
6870849 | Callon et al. | Mar 2005 | B1 |
6883101 | Fox et al. | Apr 2005 | B1 |
6892178 | Zacharia | May 2005 | B1 |
6892179 | Zacharia | May 2005 | B1 |
6892237 | Gai et al. | May 2005 | B1 |
6892241 | Kouznetsov et al. | May 2005 | B2 |
6895385 | Zacharia et al. | May 2005 | B1 |
6895436 | Caillau et al. | May 2005 | B1 |
6907430 | Chong et al. | Jun 2005 | B2 |
6909205 | Corcoran et al. | Jun 2005 | B2 |
6910134 | Maher, III et al. | Jun 2005 | B1 |
6910135 | Grainger | Jun 2005 | B1 |
6915426 | Carman et al. | Jul 2005 | B1 |
6922776 | Cook et al. | Jul 2005 | B2 |
6928550 | Le Pennec et al. | Aug 2005 | B1 |
6928556 | Black et al. | Aug 2005 | B2 |
6934857 | Bartleson et al. | Aug 2005 | B1 |
6941348 | Petry et al. | Sep 2005 | B2 |
6941467 | Judge et al. | Sep 2005 | B2 |
6944673 | Malan et al. | Sep 2005 | B2 |
6947442 | Sato et al. | Sep 2005 | B1 |
6947936 | Suermondt et al. | Sep 2005 | B1 |
6950933 | Cook et al. | Sep 2005 | B1 |
6952776 | Chess | Oct 2005 | B1 |
6954775 | Shanklin et al. | Oct 2005 | B1 |
6968336 | Gupta | Nov 2005 | B1 |
6968461 | Lucas et al. | Nov 2005 | B1 |
6971019 | Nachenberg | Nov 2005 | B1 |
6976168 | Branstad et al. | Dec 2005 | B1 |
6976271 | Le Pennec et al. | Dec 2005 | B1 |
6978223 | Milliken | Dec 2005 | B2 |
6981146 | Sheymov | Dec 2005 | B1 |
6981158 | Sanchez et al. | Dec 2005 | B1 |
6985923 | Bates et al. | Jan 2006 | B1 |
6993660 | Libenzi et al. | Jan 2006 | B1 |
7010696 | Cambridge et al. | Mar 2006 | B1 |
7055173 | Chaganty et al. | May 2006 | B1 |
7058974 | Maher, III et al. | Jun 2006 | B1 |
7080000 | Cambridge | Jul 2006 | B1 |
7085934 | Edwards | Aug 2006 | B1 |
7093002 | Wolff et al. | Aug 2006 | B2 |
7107618 | Gordon et al. | Sep 2006 | B1 |
7117358 | Bandini et al. | Oct 2006 | B2 |
7117533 | Libenzi | Oct 2006 | B1 |
7120252 | Jones et al. | Oct 2006 | B1 |
7127743 | Khanolkar et al. | Oct 2006 | B1 |
7134141 | Crosbie et al. | Nov 2006 | B2 |
7136487 | Schon et al. | Nov 2006 | B1 |
7150042 | Wolff et al. | Dec 2006 | B2 |
7159237 | Schneier et al. | Jan 2007 | B2 |
7181015 | Matt | Feb 2007 | B2 |
7213260 | Judge | May 2007 | B2 |
7222157 | Sutton et al. | May 2007 | B1 |
7225255 | Favier et al. | May 2007 | B2 |
7225466 | Judge | May 2007 | B2 |
7234168 | Gupta et al. | Jun 2007 | B2 |
7308715 | Gupta et al. | Dec 2007 | B2 |
7310818 | Parish et al. | Dec 2007 | B1 |
7328349 | Milliken | Feb 2008 | B2 |
7366764 | Vollebregt | Apr 2008 | B1 |
7409714 | Gupta et al. | Aug 2008 | B2 |
7458098 | Judge et al. | Nov 2008 | B2 |
7519994 | Judge et al. | Apr 2009 | B2 |
7533272 | Gordon et al. | May 2009 | B1 |
7624274 | Alspector et al. | Nov 2009 | B1 |
7693945 | Dulitz et al. | Apr 2010 | B1 |
20010005889 | Albrecht | Jun 2001 | A1 |
20010009580 | Ikeda | Jul 2001 | A1 |
20010011308 | Clark et al. | Aug 2001 | A1 |
20010034839 | Karjoth et al. | Oct 2001 | A1 |
20010039579 | Trcka et al. | Nov 2001 | A1 |
20010049793 | Sugimoto | Dec 2001 | A1 |
20020001384 | Buer et al. | Jan 2002 | A1 |
20020004902 | Toh et al. | Jan 2002 | A1 |
20020016826 | Johansson et al. | Feb 2002 | A1 |
20020016910 | Wright et al. | Feb 2002 | A1 |
20020019945 | Houston et al. | Feb 2002 | A1 |
20020023140 | Hile et al. | Feb 2002 | A1 |
20020026591 | Hartley et al. | Feb 2002 | A1 |
20020032860 | Wheeler et al. | Mar 2002 | A1 |
20020032871 | Malan et al. | Mar 2002 | A1 |
20020035683 | Kaashoek et al. | Mar 2002 | A1 |
20020038339 | Xu | Mar 2002 | A1 |
20020042876 | Smith | Apr 2002 | A1 |
20020042877 | Wheeler et al. | Apr 2002 | A1 |
20020046041 | Lang | Apr 2002 | A1 |
20020049853 | Chu et al. | Apr 2002 | A1 |
20020069263 | Sears et al. | Jun 2002 | A1 |
20020071438 | Singh | Jun 2002 | A1 |
20020078381 | Farley et al. | Jun 2002 | A1 |
20020078382 | Sheikh et al. | Jun 2002 | A1 |
20020080888 | Shu et al. | Jun 2002 | A1 |
20020083033 | Abdo et al. | Jun 2002 | A1 |
20020083342 | Webb et al. | Jun 2002 | A1 |
20020083343 | Crosbie et al. | Jun 2002 | A1 |
20020087882 | Schneier et al. | Jul 2002 | A1 |
20020091697 | Huang et al. | Jul 2002 | A1 |
20020091757 | Cuomo et al. | Jul 2002 | A1 |
20020095492 | Kaashoek et al. | Jul 2002 | A1 |
20020107853 | Hofmann et al. | Aug 2002 | A1 |
20020112008 | Christenson et al. | Aug 2002 | A1 |
20020112168 | Filipi-Martin et al. | Aug 2002 | A1 |
20020112185 | Hodges | Aug 2002 | A1 |
20020116463 | Hart | Aug 2002 | A1 |
20020116627 | Tarbotton et al. | Aug 2002 | A1 |
20020120705 | Schiavone et al. | Aug 2002 | A1 |
20020120853 | Tyree | Aug 2002 | A1 |
20020120874 | Shu et al. | Aug 2002 | A1 |
20020129002 | Alberts et al. | Sep 2002 | A1 |
20020129277 | Caccavale | Sep 2002 | A1 |
20020133365 | Grey et al. | Sep 2002 | A1 |
20020133586 | Shanklin et al. | Sep 2002 | A1 |
20020138416 | Lovejoy et al. | Sep 2002 | A1 |
20020138755 | Ko | Sep 2002 | A1 |
20020138759 | Dutta | Sep 2002 | A1 |
20020138762 | Horne | Sep 2002 | A1 |
20020143963 | Converse et al. | Oct 2002 | A1 |
20020147734 | Shoup et al. | Oct 2002 | A1 |
20020147780 | Liu et al. | Oct 2002 | A1 |
20020147915 | Chefalas et al. | Oct 2002 | A1 |
20020147925 | Lingafelt et al. | Oct 2002 | A1 |
20020152399 | Smith | Oct 2002 | A1 |
20020161718 | Coley et al. | Oct 2002 | A1 |
20020165971 | Baron | Nov 2002 | A1 |
20020169954 | Bandini et al. | Nov 2002 | A1 |
20020172367 | Mulder et al. | Nov 2002 | A1 |
20020174358 | Wolff et al. | Nov 2002 | A1 |
20020178227 | Matsa et al. | Nov 2002 | A1 |
20020178383 | Hrabik et al. | Nov 2002 | A1 |
20020181703 | Logan et al. | Dec 2002 | A1 |
20020186698 | Ceniza | Dec 2002 | A1 |
20020188864 | Jackson | Dec 2002 | A1 |
20020194161 | McNamee et al. | Dec 2002 | A1 |
20020194469 | Dominique et al. | Dec 2002 | A1 |
20020194490 | Halperin et al. | Dec 2002 | A1 |
20020199095 | Bandini et al. | Dec 2002 | A1 |
20030004688 | Gupta et al. | Jan 2003 | A1 |
20030004689 | Gupta et al. | Jan 2003 | A1 |
20030005326 | Flemming | Jan 2003 | A1 |
20030009554 | Burch et al. | Jan 2003 | A1 |
20030009693 | Brock et al. | Jan 2003 | A1 |
20030009696 | Bunker et al. | Jan 2003 | A1 |
20030009698 | Lindeman et al. | Jan 2003 | A1 |
20030009699 | Gupta et al. | Jan 2003 | A1 |
20030014662 | Gupta et al. | Jan 2003 | A1 |
20030014664 | Hentunen | Jan 2003 | A1 |
20030021280 | Makinson et al. | Jan 2003 | A1 |
20030023692 | Moroo | Jan 2003 | A1 |
20030023695 | Kobata et al. | Jan 2003 | A1 |
20030023873 | Ben-Itzhak | Jan 2003 | A1 |
20030023874 | Prokupets et al. | Jan 2003 | A1 |
20030023875 | Hursey et al. | Jan 2003 | A1 |
20030028803 | Bunker et al. | Feb 2003 | A1 |
20030033516 | Howard et al. | Feb 2003 | A1 |
20030033542 | Goseva-Popstojanova et al. | Feb 2003 | A1 |
20030037141 | Milo et al. | Feb 2003 | A1 |
20030041263 | Devine et al. | Feb 2003 | A1 |
20030041264 | Black et al. | Feb 2003 | A1 |
20030046421 | Horvitz et al. | Mar 2003 | A1 |
20030051026 | Carter et al. | Mar 2003 | A1 |
20030051163 | Bidaud | Mar 2003 | A1 |
20030051168 | King et al. | Mar 2003 | A1 |
20030055931 | Cravo De Almeida et al. | Mar 2003 | A1 |
20030061502 | Teblyashkin et al. | Mar 2003 | A1 |
20030061506 | Cooper et al. | Mar 2003 | A1 |
20030065791 | Garg et al. | Apr 2003 | A1 |
20030065943 | Geis et al. | Apr 2003 | A1 |
20030084020 | Shu | May 2003 | A1 |
20030084280 | Bryan et al. | May 2003 | A1 |
20030084320 | Tarquini et al. | May 2003 | A1 |
20030084323 | Gales | May 2003 | A1 |
20030084347 | Luzzatto | May 2003 | A1 |
20030088680 | Nachenberg et al. | May 2003 | A1 |
20030088792 | Card et al. | May 2003 | A1 |
20030093667 | Dutta et al. | May 2003 | A1 |
20030093695 | Dutta | May 2003 | A1 |
20030093696 | Sugimoto | May 2003 | A1 |
20030095555 | McNamara et al. | May 2003 | A1 |
20030097439 | Strayer et al. | May 2003 | A1 |
20030097564 | Tewari et al. | May 2003 | A1 |
20030101381 | Mateev et al. | May 2003 | A1 |
20030105827 | Tan et al. | Jun 2003 | A1 |
20030105859 | Garnett et al. | Jun 2003 | A1 |
20030105976 | Copeland, III | Jun 2003 | A1 |
20030110392 | Aucsmith et al. | Jun 2003 | A1 |
20030110393 | Brock et al. | Jun 2003 | A1 |
20030110396 | Lewis et al. | Jun 2003 | A1 |
20030115485 | Milliken | Jun 2003 | A1 |
20030115486 | Choi et al. | Jun 2003 | A1 |
20030120604 | Yokota et al. | Jun 2003 | A1 |
20030120647 | Aiken et al. | Jun 2003 | A1 |
20030123665 | Dunstan et al. | Jul 2003 | A1 |
20030126464 | McDaniel et al. | Jul 2003 | A1 |
20030126472 | Banzhof | Jul 2003 | A1 |
20030135749 | Gales et al. | Jul 2003 | A1 |
20030140137 | Joiner et al. | Jul 2003 | A1 |
20030140250 | Taninaka et al. | Jul 2003 | A1 |
20030145212 | Crumly | Jul 2003 | A1 |
20030145225 | Bruton, III et al. | Jul 2003 | A1 |
20030145226 | Bruton, III et al. | Jul 2003 | A1 |
20030145232 | Poletto et al. | Jul 2003 | A1 |
20030149887 | Yadav | Aug 2003 | A1 |
20030149888 | Yadav | Aug 2003 | A1 |
20030154393 | Young | Aug 2003 | A1 |
20030154399 | Zuk et al. | Aug 2003 | A1 |
20030154402 | Pandit et al. | Aug 2003 | A1 |
20030158905 | Petry et al. | Aug 2003 | A1 |
20030159069 | Choi et al. | Aug 2003 | A1 |
20030159070 | Mayer et al. | Aug 2003 | A1 |
20030167402 | Stolfo et al. | Sep 2003 | A1 |
20030172120 | Tomkow et al. | Sep 2003 | A1 |
20030172166 | Judge et al. | Sep 2003 | A1 |
20030172167 | Judge et al. | Sep 2003 | A1 |
20030172289 | Soppera | Sep 2003 | A1 |
20030172291 | Judge et al. | Sep 2003 | A1 |
20030172292 | Judge | Sep 2003 | A1 |
20030172294 | Judge | Sep 2003 | A1 |
20030172301 | Judge et al. | Sep 2003 | A1 |
20030172302 | Judge et al. | Sep 2003 | A1 |
20030187996 | Cardina et al. | Oct 2003 | A1 |
20030212791 | Pickup | Nov 2003 | A1 |
20030233328 | Scott et al. | Dec 2003 | A1 |
20030236845 | Pitsos | Dec 2003 | A1 |
20040015554 | Wilson | Jan 2004 | A1 |
20040025044 | Day | Feb 2004 | A1 |
20040054886 | Dickinson, III et al. | Mar 2004 | A1 |
20040058673 | Irlam et al. | Mar 2004 | A1 |
20040059811 | Sugauchi et al. | Mar 2004 | A1 |
20040083384 | Hypponen | Apr 2004 | A1 |
20040088570 | Roberts et al. | May 2004 | A1 |
20040103315 | Cooper et al. | May 2004 | A1 |
20040111531 | Staniford et al. | Jun 2004 | A1 |
20040139160 | Wallace et al. | Jul 2004 | A1 |
20040139334 | Wiseman | Jul 2004 | A1 |
20040143763 | Radatti | Jul 2004 | A1 |
20040167968 | Wilson et al. | Aug 2004 | A1 |
20040177120 | Kirsch | Sep 2004 | A1 |
20040181462 | Bauer et al. | Sep 2004 | A1 |
20040193482 | Hoffman et al. | Sep 2004 | A1 |
20040203589 | Wang et al. | Oct 2004 | A1 |
20040205135 | Hallam-Baker | Oct 2004 | A1 |
20040221062 | Starbuck et al. | Nov 2004 | A1 |
20040236884 | Beetz | Nov 2004 | A1 |
20040267893 | Lin | Dec 2004 | A1 |
20050014749 | Chen et al. | Jan 2005 | A1 |
20050021738 | Goeller et al. | Jan 2005 | A1 |
20050043936 | Corston-Oliver et al. | Feb 2005 | A1 |
20050052998 | Oliver et al. | Mar 2005 | A1 |
20050058129 | Jones et al. | Mar 2005 | A1 |
20050065810 | Bouron | Mar 2005 | A1 |
20050081059 | Bandini et al. | Apr 2005 | A1 |
20050086526 | Aguirre | Apr 2005 | A1 |
20050102366 | Kirsch | May 2005 | A1 |
20050188045 | Katsikas | Aug 2005 | A1 |
20050204159 | Davis et al. | Sep 2005 | A1 |
20050235360 | Pearson | Oct 2005 | A1 |
20050262209 | Yu | Nov 2005 | A1 |
20050262210 | Yu | Nov 2005 | A1 |
20060036693 | Hulten et al. | Feb 2006 | A1 |
20060036727 | Kurapati et al. | Feb 2006 | A1 |
20060042483 | Work et al. | Mar 2006 | A1 |
20060047794 | Jezierski | Mar 2006 | A1 |
20060095404 | Adelman et al. | May 2006 | A1 |
20060095966 | Park | May 2006 | A1 |
20060123083 | Goutte et al. | Jun 2006 | A1 |
20060168006 | Shannon et al. | Jul 2006 | A1 |
20060168017 | Stern et al. | Jul 2006 | A1 |
20060212925 | Shull et al. | Sep 2006 | A1 |
20060212930 | Shull et al. | Sep 2006 | A1 |
20060212931 | Shull et al. | Sep 2006 | A1 |
20060230039 | Shull et al. | Oct 2006 | A1 |
20060253458 | Dixon et al. | Nov 2006 | A1 |
20060259551 | Caldwell, Jr. | Nov 2006 | A1 |
20080060075 | Cox et al. | Mar 2008 | A1 |
20090064329 | Okumura et al. | Mar 2009 | A1 |
20090083413 | Levow et al. | Mar 2009 | A1 |
20100017487 | Patinkin | Jan 2010 | A1 |
20100049848 | Levow et al. | Feb 2010 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---
WO9605673 | Feb 1996 | WO |
WO0028420 | May 2000 | WO |
WO0155927 | Aug 2001 | WO |
WO0173523 | Oct 2001 | WO |
WO02101516 | Dec 2002 | WO |
Related Publications
Number | Date | Country
---|---|---
20100205671 A1 | Aug 2010 | US |
Provisional Applications
Number | Date | Country
---|---|---
60407975 | Sep 2002 | US | |
60341462 | Dec 2001 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | 12249823 | Oct 2008 | US
Child | 12762367 | | US
Parent | 10654771 | Sep 2003 | US
Child | 12249823 | | US
Relation | Number | Date | Country
---|---|---|---
Parent | 10251403 | Sep 2002 | US
Child | 10654771 | | US
Parent | 09881074 | Jun 2001 | US
Child | 10251403 | | US
Parent | 09881145 | Jun 2001 | US
Child | 09881074 | | US