System and method for threat detection and identification

Information

  • Patent Grant
  • Patent Number
    11,082,435
  • Date Filed
    Monday, May 6, 2019
  • Date Issued
    Tuesday, August 3, 2021
Abstract
Exemplary systems and methods for malware attack detection and identification are provided. A malware detection and identification system comprises a controller that features an analysis environment including a virtual machine. The analysis environment is configured to (1) receive data by the virtual machine of the analysis environment and identify a portion of the data that has been received from one or more untrusted sources, (2) monitor state information associated with the identified portion of the data during execution by the virtual machine, (3) identify an outcome of the state information by tracking the state information during execution of the identified portion of the data by the virtual machine, and (4) determine whether the identified outcome comprises a redirection in control flow during execution by the virtual machine of the portion of the data, the redirection in the control flow constituting an unauthorized activity.
Description
BACKGROUND
Field of the Invention

The present invention relates generally to computer networks, and more particularly to preventing the spread of malware.


Background Art

Detecting and distinguishing computer worms from ordinary communications traffic within a computer network is a challenging problem. Moreover, modern computer worms operate at an ever increasing level of sophistication and complexity. Consequently, it has become increasingly difficult to detect computer worms.


A computer worm can propagate through a computer network by using active propagation techniques. One such active propagation technique is to select target systems to infect by scanning network address space (e.g., a scan-directed computer worm). Another active propagation technique is to use topological information from an infected system to actively propagate the computer worm in the system (e.g., a topologically directed computer worm). Still another active propagation technique is to select target systems to infect based on some combination of previously generated lists of target systems (e.g., a hit-list directed computer worm).


In addition to the active propagation techniques, a computer worm may propagate through a computer network by using passive propagation techniques. One passive propagation technique is for the worm to attach itself to a normal network communication not initiated by the computer worm itself (e.g., a stealthy or passive contagion computer worm). The computer worm then propagates through the computer network in the context of normal communication patterns not directed by the computer worm.


It is anticipated that next-generation computer worms will have multiple transport vectors, use multiple target selection techniques, have no previously known signatures, and will target previously unknown vulnerabilities. It is also anticipated that next generation computer worms will use a combination of active and passive propagation techniques and may emit chaff traffic (i.e., spurious traffic generated by the computer worm) to cloak the communication traffic that carries the actual exploit sequences of the computer worms. This chaff traffic will be emitted in order to confuse computer worm detection systems and to potentially trigger a broad denial-of-service by an automated response system.


Approaches for detecting computer worms in a computer system include misuse detection and anomaly detection. In misuse detection, known attack patterns of computer worms are used to detect the presence of the computer worm. Misuse detection works reliably for known attack patterns but is not particularly useful for detecting novel attacks. In contrast to misuse detection, anomaly detection has the ability to detect novel attacks. In anomaly detection, a baseline of normal behavior in a computer network is created so that deviations from this behavior can be flagged as anomalous. The difficulty inherent in this approach is that universal definitions of normal behavior are difficult to obtain. Given this limitation, anomaly detection approaches strive to minimize false positive rates of computer worm detection.


In one suggested computer worm containment system, detection devices are deployed in a computer network to monitor outbound network traffic and detect active scan directed computer worms within the computer network. To achieve effective containment of these active computer worms, as measured by the total infection rate over the entire population of systems, the detection devices are widely deployed in the computer network in an attempt to detect computer worm traffic close to a source of the computer worm traffic. Once detected, these computer worms are contained by using an address blacklisting technique. This computer worm containment system, however, does not have a mechanism for repair and recovery of infected computer networks.


In another suggested computer worm containment system, the protocols (e.g., network protocols) of network packets are checked for standards compliance under an assumption that a computer worm will violate the protocol standards (e.g., exploit the protocol standards) in order to successfully infect a computer network. While this approach may be successful in some circumstances, this approach is limited in other circumstances. Firstly, it is possible for a network packet to be fully compatible with published protocol standard specifications and still trigger a buffer overflow type of software error due to the presence of a software bug. Secondly, not all protocols of interest can be checked for standards compliance because proprietary or undocumented protocols may be used in a computer network. Moreover, evolutions of existing protocols and the introduction of new protocols may lead to high false positive rates of computer worm detection when “good” behavior cannot be properly and completely distinguished from “bad” behavior. Encrypted communications channels further complicate protocol checking because protocol compliance cannot be easily validated at the network level for encrypted traffic.


In another approach to computer worm containment, “honey farms” have been proposed. A honey farm includes “honeypots” that are sensitive to probe attempts in a computer network. One problem with this approach is that probe attempts do not necessarily indicate the presence of a computer worm because there may be legitimate reasons for probing a computer network. For example, a computer network can be legitimately probed by scanning an Internet Protocol (IP) address range to identify poorly configured or rogue devices in the computer network. Another problem with this approach is that a conventional honey farm does not detect passive computer worms and does not extract signatures or transport vectors in the face of chaff emitting computer worms.


Another approach to computer worm containment assumes that computer worm probes are identifiable at a given worm sensor in a computer network because the computer worm probes will target well known vulnerabilities and thus have well known signatures which can be detected using a signature-based intrusion detection system. Although this approach may work for well known computer worms that periodically recur, such as the CodeRed computer worm, this approach does not work for novel computer worm attacks exploiting a zero-day vulnerability (e.g., a vulnerability that is not widely known).


One suggested computer worm containment system attempts to detect computer worms by observing communication patterns between computer systems in a computer network. In this system, connection histories between computer systems are analyzed to discover patterns that may represent a propagation trail of the computer worm. In addition to false positive related problems, the computer worm containment system does not distinguish between the actual transport vector of a computer worm and a transport vector including a spuriously emitted chaff trail. As a result, simply examining malicious traffic to determine the transport vector can lead to a broad denial of service (DOS) attack on the computer network. Further, the computer worm containment system does not determine a signature of the computer worm that can be used to implement content filtering of the computer worm. In addition, the computer worm containment system does not have the ability to detect stealthy passive computer worms, which by their very nature cause no anomalous communication patterns.


One problem with creating signatures to block or eliminate worms, viruses, Trojan horses, spyware, hacker attacks, or other malware, is that creating signatures can take considerable time. In one example, a virus is often received several times and studied before it can be identified and the attack recognized. During the time to manually create virus signatures, the virus may infect thousands of computers that will have to be subsequently disinfected. Unfortunately, the damage caused by the virus may never be corrected.


Further, even when a signature has been created, it may be narrowly defined and only block or identify one type of malware. Signatures are often created that only recognize a pattern of code and not the vulnerability attacked. In one example, a virus may cause a buffer overflow in a particular application to gain control. The virus may cause the buffer overflow with 64 bytes within a select region of data transmitted from an attacker. A signature may be created to recognize the 64 bytes within the select region of data to block the virus. However, the vulnerability may be that the buffer overflow occurs whenever 60 bytes or more are input. As a result, the virus may be modified (e.g., a polymorphic virus) to either move the 64 bytes to another region of the transmitted data, thereby sidestepping recognition by the signature, or attacking the same buffer overflow with 63 bytes rather than 64 bytes. Again, the signature may not recognize the pattern of the virus and the attack is conducted until yet another signature is created and more damage is done.


SUMMARY OF THE INVENTION

Exemplary systems and methods for malware attack detection and identification are provided. A malware detection and identification system can comprise a controller. The controller can comprise an analysis environment configured to transmit network data to a virtual machine, flag input values associated with the network data from untrusted sources, monitor the flagged input values within the virtual machine, identify an outcome of one or more instructions that manipulate the flagged input values, and determine if the outcome of the one or more instructions comprises an unauthorized activity.
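
As a rough, hypothetical illustration of the flagging and monitoring just described, the Python sketch below tags values received from untrusted sources, propagates the flag through values derived from them, and reports when a flagged value influences a control-flow decision. The class and function names (TaintedValue, analyze) are invented for this example and are not drawn from the patent.

```python
# Minimal sketch of taint-style flagging of untrusted input values.
# All names here are illustrative, not from the patented implementation.

class TaintedValue:
    """Wraps a value received from an untrusted source and records its origin."""
    def __init__(self, value, source):
        self.value = value
        self.source = source

    def derive(self, new_value):
        # Any value computed from a flagged value stays flagged.
        return TaintedValue(new_value, self.source)

def analyze(network_data, untrusted_sources):
    flagged = [TaintedValue(value, src) for src, value in network_data
               if src in untrusted_sources]
    alerts = []
    for tv in flagged:
        doubled = tv.derive(tv.value * 2)       # instruction manipulating flagged data
        if doubled.value > 200:                 # outcome used in a control-flow decision
            alerts.append(f"flagged value from {doubled.source} influenced branching")
    return alerts

if __name__ == "__main__":
    data = [("10.0.0.5", 150), ("intranet", 10)]
    print(analyze(data, untrusted_sources={"10.0.0.5"}))
```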


The analysis environment may be further configured to analyze the network data with a heuristic to determine if the network data is suspicious. The analysis environment may be configured to monitor at least one byte of the flagged input values at the processor instruction level. The input values associated with the network data can comprise input values within the network data as well as input values derived from the network data.


The analysis environment may be further configured to generate one or more unauthorized activity signatures based on the determination. The unauthorized activity signature may be utilized to block a malware attack vector, a malware attack payload, or a class of malware attacks. Further, the controller may comprise a signature module configured to transmit the unauthorized activity signature to a digital device configured to enforce the unauthorized activity signature.


A malware detection and identification method may comprise transmitting network data to a virtual machine, flagging input values associated with the network data from untrusted sources, monitoring the flagged input values within the virtual machine, identifying an outcome of one or more instructions that manipulate the flagged input values, and determining if the outcome of the one or more instructions comprises an unauthorized activity.


A machine readable medium may have embodied thereon executable code, the executable code being executable by a processor for performing a malware determination and identification method, the method comprising transmitting network data to a virtual machine, flagging input values associated with the network data from untrusted sources, monitoring the flagged input values within the virtual machine, identifying an outcome of one or more instructions that manipulate the flagged input values, and determining if the outcome of the one or more instructions comprises an unauthorized activity.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a computing environment in which a worm sensor can be implemented, in accordance with one embodiment of the present invention.



FIG. 2 depicts a controller of a computer worm sensor, in accordance with one embodiment of the present invention.



FIG. 3 depicts a computer worm detection system, in accordance with one embodiment of the present invention.



FIG. 4 depicts a flow chart for a method of detecting computer worms, in accordance with one embodiment of the present invention.



FIG. 5 depicts a computer worm containment system, in accordance with one embodiment of the present invention.



FIG. 6 depicts a computer worm defense system, in accordance with one embodiment of the present invention.



FIG. 7 depicts an unauthorized activity detection system, in accordance with one embodiment of the present invention.



FIG. 8 depicts an analysis environment, in accordance with one embodiment of the present invention.



FIG. 9 depicts a flow chart for a method of detecting unauthorized activity, in accordance with one embodiment of the present invention.



FIG. 10 depicts a flow chart for a method for orchestrating a response to network data, in accordance with one embodiment of the present invention.



FIG. 11 depicts a controller of an unauthorized activity detection system, in accordance with one embodiment of the present invention.



FIG. 12 depicts an analysis environment, in accordance with one embodiment of the present invention.



FIG. 13 depicts a flow chart for a method for concurrently orchestrating a response to network data, in accordance with one embodiment of the present invention.



FIG. 14 depicts a flow chart for a method for concurrently identifying unauthorized activity, in accordance with one embodiment of the present invention.



FIG. 15 depicts a network environment, in accordance with one embodiment of the present invention.



FIG. 16 depicts a controller of a dynamic signature and enforcement system, in accordance with one embodiment of the present invention.



FIG. 17 depicts a flow chart for a method for a dynamic signature creation and enforcement system, in accordance with one embodiment of the present invention.



FIG. 18 depicts a flow chart for a method for receiving an unauthorized activity signature and barring network data based on the unauthorized activity signature, in accordance with one embodiment of the present invention.



FIG. 19 depicts a flow chart for a method to detect and identify a malware attack in accordance with one embodiment of the present invention.



FIG. 20 depicts another flow chart of a method to detect and identify a malware attack in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION

An unauthorized activity containment system in accordance with one embodiment of the present invention detects suspicious computer activity, models the suspicious activity to identify unauthorized activity, and blocks the unauthorized activity. The unauthorized activity containment system can flag suspicious activity and then model the effects of the suspicious activity to identify malware and/or unauthorized activity associated with a computer user. The threshold for detecting the suspicious activity may be set low, whereby a single command may be flagged as suspicious. In other embodiments, the threshold may be higher, flagging as suspicious only a combination of commands or repetitive commands.


Unauthorized activity can include any unauthorized and/or illegal computer activity. Unauthorized activity can also include activity associated with malware or illegitimate computer use. Malware is software created and distributed for malicious purposes and can take the form of viruses, worms, or trojan horses, for example. A virus is an intrusive program that infects a computer file by inserting a copy of itself in the file. The copy is usually executed when the file is loaded into memory, allowing the virus to infect still other files. A worm is a program that propagates itself across computers, usually by creating copies of itself in each computer's memory. A worm might duplicate itself in one computer so often that it causes the computer to crash. A trojan horse is a destructive program disguised as a game, utility, or application. When run, a trojan horse can harm the computer system while appearing to do something useful.


Illegitimate computer use can comprise intentional or unintentional unauthorized access to data. A hacker may intentionally seek to damage a computer system. A hacker, or computer cracker, is an individual that seeks unauthorized access to data. One example of a common attack is a denial-of-service attack where the hacker configures one or more computers to constantly request access to a target computer. The target computer may become overwhelmed by the requests and either crash or become too busy to conduct normal operations. While some hackers seek to intentionally damage computer systems, other computer users may seek to gain rights or privileges of a computer system in order to copy data or access other computers on a network. Such computer use can unintentionally damage computer systems or corrupt data.


Detection of worms can be accomplished through the use of a computer worm detection system that employs a decoy computer network having orchestrated network activities. The computer worm detection system is configured to permit computer worms to infect the decoy computer network. Alternately, rather than infect the decoy network, communications that are characteristic of a computer worm can be filtered from communication traffic and replayed in the decoy network. Detection is then based on the monitored behavior of the decoy computer network. Once a computer worm has been detected, an identifier of the computer worm is determined and provided to a computer worm blocking system that is configured to protect one or more computer systems of a real computer network. In some embodiments, the computer worm detection system can generate a recovery script to disable the computer worm and repair damage caused to the one or more computer systems, and in some instances, the computer worm blocking system initiates the repair and recovery of the infected systems.



FIG. 1 depicts an exemplary computing environment 100 in which a computer worm sensor 105 is implemented, in accordance with one embodiment of the present invention. In various embodiments, the computer worm sensor 105 functions as a computer worm detection system, as is described more fully herein. The computer worm sensor 105 includes a controller 115, a computer network 110 (e.g., a hidden or decoy network), and a gateway 125 (e.g., a wormhole system). The computer network 110 includes one or more computing systems 120 (e.g., hidden systems) in communication with each other. The controller 115 and the gateway 125 are in communication with the computer network 110 and the computing systems 120. Additionally, the gateway 125 is in communication with a communication network 130 (e.g., a production network). The communication network 130 can be a public computer network such as the Internet, or a private computer network, such as a wireless telecommunication network.


Optionally, the computer worm sensor 105 may include one or more traffic analysis devices 135 in communication with the communication network 130. A traffic analysis device 135 analyzes network traffic in the communication network 130 to identify network communications characteristic of a computer worm. The traffic analysis device 135 can then selectively duplicate the identified network communications and provide the duplicated network communications to the controller 115. The controller 115 replays the duplicated network communications in the computer network 110 to determine whether the network communications include a computer worm.


The computing systems 120 are computing devices typically found in a computer network. For example, the computing systems 120 can include computing clients or servers. As a further example, the computing systems 120 can include gateways and subnets in the computer network 110. Each of the computing systems 120 and the gateway 125 can have different hardware or software profiles.


The gateway 125 allows computer worms to pass from the communication network 130 to the computer network 110. The computer worm sensor 105 can include multiple gateways 125 in communication with multiple communication networks 130. These communication networks 130 may also be in communication with each other. For example, the communication network 130 can be part of the Internet or in communication with the Internet. In one embodiment, each of the gateways 125 can be in communication with multiple communication networks 130.


The controller 115 controls the operation of the computing systems 120 and the gateway 125 to orchestrate network activities in the computer worm sensor 105. In one embodiment, the orchestrated network activities are a predetermined sequence of network activities in the computer network 110, which represents an orchestrated behavior of the computer network 110. In this embodiment, the controller 115 monitors the computer network 110 to determine a monitored behavior of the computer network 110 in response to the orchestrated network activities. The controller 115 then compares the monitored behavior of the computer network 110 with a predetermined orchestrated behavior to identify an anomalous behavior.


Anomalous behavior may include a communication anomaly, like an unexpected network communication, or an execution anomaly, for example, an unexpected execution of computer program code. If the controller 115 identifies an anomalous behavior, the computer network 110 is deemed to be infected with a computer worm. In this way, the controller 115 can detect the presence of a computer worm in the computer network 110 based on an anomalous behavior of the computer worm in the computer network 110. The controller 115 then creates an identifier (i.e., a “definition” of the anomalous behavior), which can be used for detecting the computer worm in another computer network, such as the communication network 130.


The identifier determined by the controller 115 for a computer worm in the computer network 110 can be a signature that characterizes the anomalous behavior of the computer worm. The signature can then be used to detect the computer worm in another computer network. In one embodiment, the signature indicates a sequence of ports in the computer network 110 along with data used to exploit each of the ports. For instance, the signature can be a set of tuples {(p1, c1), (p2, c2), . . . }, where pn represents a Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP) port number, and cn is signature data contained in a TCP or UDP packet used to exploit a port associated with the port number. For example, the signature data can be 16-32 bytes of data in a data portion of a data packet.
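
The tuple form of the signature described above could, purely for illustration, be represented and matched as in the following sketch; the packet representation and matching routine are assumptions, not the claimed implementation.

```python
# Sketch: a signature as a set of (port, signature_bytes) tuples, matched
# against observed packets. The packet format here is an assumption.

signature = {(80, b"\x90\x90\x4d\x5a"), (445, b"\xde\xad\xbe\xef")}

def packet_matches(signature, dst_port, payload):
    """Return True if the payload carries the signature data for its port."""
    return any(port == dst_port and data in payload
               for port, data in signature)

if __name__ == "__main__":
    print(packet_matches(signature, 80, b"GET /\x90\x90\x4d\x5a HTTP/1.0"))  # True
    print(packet_matches(signature, 80, b"GET / HTTP/1.0"))                  # False
```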


The controller 115 can determine a signature of a computer worm based on a uniform resource locator (URL), and can generate the signature by using a URL filtering device, which represents a specific case of content filtering. For example, the controller 115 can identify a uniform resource locator (URL) in data packets of Hyper Text Transfer Protocol (HTTP) traffic and can extract a signature from the URL. Further, the controller 115 can create a regular expression for the URL and include the regular expression in the signature such that each tuple of the signature includes a destination port and the regular expression. In this way, a URL filtering device can use the signature to filter out network traffic associated with the URL. The controller 115, in some embodiments, can also filter data packet traffic for a sequence of tokens and dynamically produce a signature having a regular expression that includes the token sequence.
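
The following sketch, again only illustrative, shows one way a regular-expression signature might be derived from a URL and used to filter later HTTP requests; the helper names and the exact regex policy are assumptions.

```python
# Sketch: derive a regular-expression signature from a URL seen in HTTP
# traffic and use it to filter later requests. Details are illustrative.
import re

def url_to_signature(url, dst_port=80):
    # Escape the URL so metacharacters are treated literally, then allow
    # arbitrary query parameters after it.
    return (dst_port, re.compile(re.escape(url) + r"(\?.*)?$"))

def filter_request(signature, dst_port, request_url):
    port, pattern = signature
    return dst_port == port and pattern.search(request_url) is not None

if __name__ == "__main__":
    sig = url_to_signature("/scripts/root.exe")
    print(filter_request(sig, 80, "/scripts/root.exe?arg=1"))  # True  -> drop
    print(filter_request(sig, 80, "/index.html"))              # False -> allow
```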


Alternatively, the identifier may be a vector (e.g., a propagation vector, an attack vector, or a payload vector) that characterizes an anomalous behavior of the computer worm in the computer network 110. For example, the vector can be a propagation vector (i.e., a transport vector) that characterizes a sequence of paths traveled by the computer worm in the computer network 110. The propagation vector may include a set {p1, p2, p3, . . . }, where pn represents a port number (e.g., a TCP or UDP port number) in the computer network 110 and identifies a transport protocol (e.g., TCP or UDP) used by the computer worm to access the port. Further, the identifier may be a multi-vector that characterizes multiple propagation vectors for the computer worm. In this way, the vector can characterize a computer worm that uses a variety of techniques to propagate in the computer network 110. These techniques may include dynamic assignment of probe addresses to the computing systems 120, network address translation (NAT) of probe addresses to the computing systems 120, obtaining topological service information from the computer network 110, or propagating through multiple gateways 125 of the computer worm sensor 105.


The controller 115 can be configured to orchestrate network activities (e.g., network communications or computing services) in the computer network 110 based on one or more orchestration patterns. In one embodiment, the controller 115 generates a series of network communications based on an orchestration pattern to exercise one or more computing services (e.g., Telnet, FTP, or SMTP) in the computer network 110. In this embodiment, the orchestration pattern produces an orchestrated behavior (e.g., an expected behavior) of the computer network 110 in the absence of computer worm infection. The controller 115 then monitors network activities in the computer network 110 (e.g., the network communications and computing services accessed by the network communications) to determine a monitored behavior of the computer network 110, and compares the monitored behavior with the orchestrated behavior. If the monitored behavior does not match the orchestrated behavior, the computer network 110 is deemed to be infected with a computer worm. The controller 115 then identifies an anomalous behavior in the monitored behavior (e.g., a network activity in the monitored behavior that does not match the orchestration pattern) and determines an identifier for the computer worm based on the anomalous behavior. In other embodiments, the controller 115 is configured to detect unexpected network activities in the computer network 110.
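
A minimal sketch of the comparison step described above appears below: monitored network events that do not belong to the orchestrated (expected) behavior are reported as anomalous. The event representation is an assumption made for this example.

```python
# Sketch: compare monitored network activity against the orchestrated
# (expected) behavior; anything outside the orchestration is anomalous.

def find_anomalies(orchestrated_events, monitored_events):
    expected = set(orchestrated_events)
    return [event for event in monitored_events if event not in expected]

if __name__ == "__main__":
    orchestrated = [("host-1", 21, "FTP RETR"), ("host-2", 25, "SMTP HELO")]
    monitored = orchestrated + [("host-3", 445, "SMB exploit attempt")]
    print(find_anomalies(orchestrated, monitored))
```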


In another embodiment, an orchestration pattern is associated with a type of network communication. In this embodiment, the gateway 125 identifies the type of a network communication received by the gateway 125 from the communication network 130 before propagating the network communication to the computer network 110. The controller 115 then selects an orchestration pattern based on the type of network communication identified by the gateway 125 and orchestrates network activities in the computer network 110 based on the selected orchestration pattern. In the computer network 110, the network communication accesses one or more computing systems 120 via one or more ports to access one or more computing services (e.g., network services) provided by the computing systems 120.


For example, the network communication may access an FTP server on one of the computing systems 120 via a well-known or registered FTP port number using an appropriate network protocol (e.g., TCP or UDP). In this example, the orchestration pattern includes the identity of the computing system 120, the FTP port number, and the appropriate network protocol for the FTP server. If the monitored behavior of the computer network 110 does not match the orchestrated behavior expected from the orchestration pattern, the network communication is deemed to be infected with a computer worm. The controller 115 then determines an identifier for the computer worm based on the monitored behavior, as is described in more detail herein.


The controller 115 orchestrates network activities in the computer network 110 such that the detection of anomalous behavior in the computer network 110 is simple and highly reliable. All behavior (e.g., network activities) of the computer network 110 that is not part of an orchestrated behavior represents an anomalous behavior. In alternative embodiments, the monitored behavior of the computer network 110 that is not part of the orchestrated behavior is analyzed to determine whether any of the monitored behavior is an anomalous behavior.


In another embodiment, the controller 115 periodically orchestrates network activities in the computer network 110 to access various computing services (e.g., web servers or file servers) in the communication network 130. In this way, a computer worm that has infected one of these computing services may propagate from the communication network 130 to the computer network 110 via the orchestrated network activities. The controller 115 then orchestrates network activities to access the same computing services in the computer network 110 and monitors a behavior of the computer network 110 in response to the orchestrated network activities. If the computer worm has infected the computer network 110, the controller 115 detects the computer worm based on an anomalous behavior of the computer worm in the monitored behavior, as is described more fully herein.


In one embodiment, a single orchestration pattern exercises all available computing services in the computer network 110. In other embodiments, each orchestration pattern exercises selected computing services in the computer network 110, or the orchestration patterns for the computer network 110 are dynamic (e.g., vary over time). For example, a user of the computer worm sensor 105 may add, delete, or modify the orchestration patterns to change the orchestrated behavior of the computer network 110.


In one embodiment, the controller 115 orchestrates network activities in the computer network 110 to prevent a computer worm in the communication network 130 from recognizing the computer network 110 as a decoy. For example, a computer worm may identify and avoid inactive computer networks, as such networks may be decoy computer networks deployed for detecting the computer worm (e.g., the computer network 110). In this embodiment, therefore, the controller 115 orchestrates network activities in the computer network 110 to prevent the computer worm from avoiding the computer network 110.


In another embodiment, the controller 115 analyzes both the packet header and the data portion of data packets in network communications in the computer network 110 to detect anomalous behavior in the computer network 110. For example, the controller 115 can compare the packet header and the data portion of the data packets with those of data packets propagated pursuant to an orchestration pattern to determine whether the data packets of the network communications constitute anomalous behavior in the computer network 110. Because the network communication propagated pursuant to the orchestration pattern is an orchestrated behavior of the computer network 110, the controller 115 avoids false positive detection of anomalous behavior in the computer network 110, which can occur in anomaly detection systems operating on unconstrained computer networks. In this way, the controller 115 reliably detects computer worms in the computer network 110 based on the anomalous behavior.


To further illustrate what is meant by reliable detection of anomalous behavior, for example, an orchestration pattern can be used that is expected to cause emission of a sequence of data packets (a, b, c, d) in the computer network 110. The controller 115 orchestrates network activities in the computer network 110 based on the orchestration pattern and monitors the behavior (e.g., measures the network traffic) of the computer network 110. If the monitored behavior of the computer network 110 includes a sequence of data packets (a, b, c, d, e, f), then the extra data packets (e, f) represent an anomalous behavior (e.g., anomalous traffic). This anomalous behavior may be caused by an active computer worm propagating inside the computer network 110.


As another example, if an orchestration pattern is expected to cause emission of a sequence of data packets (a, b, c, d) in the computer network 110, but the monitored behavior includes a sequence of data packets (a, b′, c′, d), the modified data packets (b′, c′) represent an anomalous behavior in the computer network 110. This anomalous behavior may be caused by a passive computer worm propagating inside the computer network 110.
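
The two examples above can be illustrated with a short, hypothetical sketch that diffs an expected packet sequence against a monitored one, reporting extra packets (suggestive of an active worm) and modified packets (suggestive of a passive worm).

```python
# Sketch: compare an expected packet sequence with a monitored one to find
# extra packets (active worm traffic) or modified packets (passive worm).
import difflib

def sequence_anomalies(expected, monitored):
    extra, modified = [], []
    matcher = difflib.SequenceMatcher(a=expected, b=monitored)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "insert":
            extra.extend(monitored[j1:j2])
        elif op == "replace":
            modified.extend(monitored[j1:j2])
    return extra, modified

if __name__ == "__main__":
    print(sequence_anomalies(["a", "b", "c", "d"], ["a", "b", "c", "d", "e", "f"]))
    print(sequence_anomalies(["a", "b", "c", "d"], ["a", "b'", "c'", "d"]))
```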


In various further embodiments, the controller 115 generates a recovery script for the computer worm, as is described more fully herein. The controller 115 can then execute the recovery script to disable (e.g., destroy) the computer worm in the computer worm sensor 105 (e.g., remove the computer worm from the computing systems 120 and the gateway 125). Moreover, the controller 115 can output the recovery script for use in disabling the computer worm in other infected computer networks and systems.


In another embodiment, the controller 115 identifies the source of a computer worm based on a network communication containing the computer worm. For example, the controller 115 may identify an infected host (e.g., a computing system) in the communication network 130 that generated the network communication containing the computer worm. In this example, the controller 115 transmits the recovery script via the gateway 125 to the host in the communication network 130. In turn, the host executes the recovery script to disable the computer worm in the host. In various further embodiments, the recovery script is also capable of repairing damage to the host caused by the computer worm.


The computer worm sensor 105 can export the recovery script, in some embodiments, to a bootable compact disc (CD) or floppy disk that can be loaded into infected hosts to repair the infected hosts. For example, the recovery script can include an operating system for the infected host and repair scripts that are invoked as part of the booting process of the operating system to repair an infected host. Alternatively, the computer worm sensor 105 may provide the recovery script to an infected computer network (e.g., the communication network 130) so that the communication network 130 can direct infected hosts in the communication network 130 to reboot and load the operating system in the recovery script.


In another embodiment, the computer worm sensor 105 uses a per-host detection and recovery mechanism to recover hosts (e.g., computing systems) in a computer network (e.g., the communication network 130). The computer worm sensor 105 generates a recovery script including a detection process for detecting the computer worm and a recovery process for disabling the computer worm and repairing damage caused by the computer worm. The computer worm sensor 105 provides the recovery script to hosts in a computer network and each host executes the detection process. If the host detects the computer worm, the host then executes the recovery process. In this way, a computer worm that performs random corruptive acts on the different hosts (e.g., computing systems) in the computer network can be disabled in the computer network and damage to the computer network caused by the computer worm can be repaired.
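
A hypothetical skeleton of such a per-host recovery script is sketched below: a detection step runs first, and the recovery step runs only on hosts where the worm was detected. The specific checks and actions are placeholders.

```python
# Sketch of the per-host recovery-script structure: a detection process
# followed, only on detection, by a recovery process.
import os

def detect_worm(host_paths, known_bad_names):
    """Detection process: look for artifacts characteristic of the worm."""
    return [p for p in host_paths
            if os.path.basename(p) in known_bad_names]

def recover(infected_paths):
    """Recovery process: disable the worm and repair what it changed."""
    for path in infected_paths:
        print(f"would remove {path} and restore the original file")

def run_recovery_script(host_paths, known_bad_names):
    infected = detect_worm(host_paths, known_bad_names)
    if infected:            # only recover hosts where the worm was detected
        recover(infected)

if __name__ == "__main__":
    run_recovery_script(["/tmp/worm.exe", "/bin/ls"], {"worm.exe"})
```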


The computer worm sensor 105 can be a single integrated system, such as a network device or a network appliance, which is deployed in the communication network 130 (e.g., a commercial or military computer network). Alternatively, the computer worm sensor 105 may include integrated software for controlling operation of the computer worm sensor 105, such that per-host software (e.g., individual software for each computing system 120 and gateway 125) is not required.


The computer worm sensor 105 can also be a hardware module, such as a combinational logic circuit, a sequential logic circuit, a programmable logic device, or a computing device, among others. Alternatively, the computer worm sensor 105 may include one or more software modules containing computer program code, such as a computer program, a software routine, binary code, or firmware, among others. The software code can be contained in a permanent memory storage device such as a compact disc read-only memory (CD-ROM), a hard disk, or other memory storage device. In various embodiments, the computer worm sensor 105 includes both hardware and software modules.


In some embodiments, the computer worm sensor 105 is substantially transparent to the communication network 130 and does not substantially affect the performance or availability of the communication network 130. In another embodiment, the software in the computer worm sensor 105 may be hidden such that a computer worm cannot detect the computer worm sensor 105 by checking for the existence of files (e.g., software programs) in the computer worm sensor 105 or by performing a simple signature check of the files. In one example, the software configuration of the computer worm sensor 105 is hidden by employing one or more well-known polymorphic techniques used by viruses to evade signature-based detection.


In another embodiment, the gateway 125 facilitates propagation of computer worms from the communication network 130 to the computer network 110, with the controller 115 orchestrating network activities in the computer network 110 to actively propagate the computer worms from the communication network 130 to the computer network 110. For example, the controller 115 can originate one or more network communications between the computer network 110 and the communication network 130. In this way, a passive computer worm in the communication network 130 can attach to one of the network communications and propagate along with the network communication from the communication network 130 to the computer network 110. Once the computer worm is in the computer network 110, the controller 115 can detect the computer worm based on an anomalous behavior of the computer worm, as is described more fully herein.


In another embodiment, the gateway 125 selectively prevents normal network traffic (e.g., network traffic not generated by a computer worm) from propagating from the communication network 130 to the computer network 110 to prevent various anomalies or perturbations in the computer network 110. In this way, the orchestrated behavior of the computer network 110 can be simplified to increase the reliability of the computer worm sensor 105.


For example, the gateway 125 can prevent Internet Protocol (IP) data packets from being routed from the communication network 130 to the computer network 110. Alternatively, the gateway 125 can prevent broadcast and multicast network communications from being transmitted from the communication network 130 to the computer network 110, prevent communications generated by remote shell applications (e.g., Telnet) in the communication network 130 from propagating to the computer network 110, or exclude various application level gateways including proxy services that are typically present in a computer network for application programs in the computer network. Such application programs can include a Web browser, an FTP server, and a mail server, and the proxy services can include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), or the Simple Mail Transfer Protocol (SMTP).


In another embodiment, the computing systems 120 and the gateway 125 are virtual computing systems. For example, the computing systems 120 may be implemented as virtual systems using machine virtualization technologies such as VMware™ sold by VMware, Inc. In another example, the VM can be based on instrumented virtual CPU technology (e.g., Bochs, Qemu, and Valgrind). In another embodiment, the virtual systems include VM software profiles and the controller 115 automatically updates the VM software profiles to be representative of the communication network 130. The gateway 125 and the computer network 110 may also be implemented as a combination of virtual and real systems.


In another embodiment, the computer network 110 is a virtual computer network. The computer network 110 includes network device drivers (e.g., special purpose network device drivers) that do not access a physical network, but instead use software message passing between the different virtual computing systems 120 in the computer network 110. The network device drivers may log data packets of network communications in the computer network 110, which represent the monitored behavior of the computer network 110.


In various embodiments, the computer worm sensor 105 establishes a software environment of the computer network 110 (e.g., computer programs in the computing systems 120) to reflect a software environment of a selected computer network (e.g., the communication network 130). For example, the computer worm sensor 105 can select a software environment of a computer network typically attacked by computer worms (e.g., a software environment of a commercial communication network) and can configure the computer network 110 to reflect that software environment. In a further embodiment, the computer worm sensor 105 updates the software environment of the computer network 110 to reflect changes in the software environment of the selected computer network. In this way, the computer worm sensor 105 can effectively detect a computer worm that targets a recently deployed software program or software profile in the software environment (e.g., a widely deployed software profile).


The computer worm sensor 105 can also monitor the software environment of the selected computer network and automatically update the software environment of the computer network 110 to reflect the software environment of the selected computer network. For example, the computer worm sensor 105 can modify the software environment of the computer network 110 in response to receiving an update for a software program (e.g., a widely used software program) in the software environment of the selected computer network.


In another embodiment, the computer worm sensor 105 has a probe mechanism to automatically check the version, the release number, and the patch-level of major operating systems and application software components installed in the communication network 130. Additionally, the computer worm sensor 105 has access to a central repository of up-to-date versions of the system and application software components. In this embodiment, the computer worm sensor 105 detects a widely used software component (e.g., software program) operating in the communication network 130, downloads the software component from the central repository, and automatically deploys the software component in the computer network 110 (e.g., installs the software component in the computing systems 120). The computer worm sensor 105 may coordinate with other computer worm sensors 105 to deploy the software component in the computer networks 110 of the computer worm sensors 105. In this way, the software environment of each computer worm sensor 105 is modified to contain the software component.


In another embodiment, the computer worm sensors 105 are automatically updated from a central computing system (e.g., a computing server) by using a push model. In this embodiment, the central computing system obtains updated software components and sends the updated software components to the computer worm sensors 105. Moreover, the software environments of the computer worm sensors 105 can represent widely deployed software that computer worms are likely to target. Examples of available commercial technologies that can aid in the automated update of software and software patches in a networked environment include N1 products sold by SUN Microsystems, Inc.™ and Adaptive Infrastructure products sold by the Hewlett Packard Company™. In some embodiments, the computer worm sensors 105 are automatically updated by connecting to an independent software vendor (ISV) supplied update mechanism (e.g., the Microsoft Windows™ update service).


The computer worm sensor 105, in some embodiments, can maintain an original image of the computer network 110 (e.g., a copy of the original file system for each computing system 120) in a virtual machine that is isolated from both the computer network 110 and the communication network 130 (e.g., not connected to the computer network 110 or the communication network 130). The computer worm sensor 105 obtains a current image of an infected computing system 120 (e.g., a copy of the current file system of the computing system 120) and compares the current image with the original image of the computer network 110 to identify any discrepancies between these images, which represent an anomalous behavior of a computer worm in the infected computing system 120.


The computer worm sensor 105 generates a recovery script based on the discrepancies between the current image and the original image of the computing system 120. The recovery script can be used to disable the computer worm in the infected computing system 120 and repair damage to the infected computing system 120 caused by the computer worm. For example, the recovery script may include computer program code for identifying infected software programs or memory locations based on the discrepancies, and for removing the discrepancies from the infected software programs or memory locations. The infected computing system 120 can then execute the recovery script to disable (e.g., destroy) the computer worm and repair any damage to the infected computing system 120 caused by the computer worm.


The recovery script may include computer program code for replacing the current file system of the computing system 120 with the original file system of the computing system 120 in the original image of the computer network 110. Alternatively, the recovery script may include computer program code for replacing infected files with the corresponding original files of the computing system 120 in the original image of the computer network 110. In still another embodiment, the computer worm sensor 105 includes a file integrity checking mechanism (e.g., a tripwire) for identifying infected files in the current file system of the computing system 120. The recovery script can also include computer program code for identifying and restoring files modified by a computer worm to reactivate the computer worm during reboot of the computing system 120 (e.g., reactivate the computer worm after the computer worm is disabled).
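
For illustration, the image-comparison step described above might look like the following sketch, which hashes both images, identifies discrepancies, and emits recovery actions; hash-based comparison and the action names are assumptions rather than the patented mechanism.

```python
# Sketch: compare a current file-system image against the preserved original
# image and emit recovery actions for every discrepancy.
import hashlib

def hash_image(image):
    """image: mapping of file path -> file contents (bytes)."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in image.items()}

def build_recovery_actions(original_image, current_image):
    original, current = hash_image(original_image), hash_image(current_image)
    actions = []
    for path, digest in current.items():
        if path not in original:
            actions.append(("remove", path))        # file added by the worm
        elif digest != original[path]:
            actions.append(("restore", path))       # file modified by the worm
    for path in original:
        if path not in current:
            actions.append(("restore", path))       # file deleted by the worm
    return actions

if __name__ == "__main__":
    original = {"/bin/login": b"clean", "/etc/hosts": b"127.0.0.1 localhost"}
    current = {"/bin/login": b"patched-by-worm", "/tmp/payload": b"worm"}
    print(build_recovery_actions(original, current))
```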


In one embodiment, the computer worm sensor 105 occupies a predetermined address space (e.g., an unused address space) in the communication network 130. The communication network 130 redirects those network communications directed to the predetermined address space to the computer worm sensor 105. For example, the communication network 130 can redirect network communications to the computer worm sensor 105 by using various IP layer redirection techniques. In this way, an active computer worm using a random IP address scanning technique (e.g., a scan directed computer worm) can randomly select an address in the predetermined address space and can infect the computer worm sensor 105 based on the selected address (e.g., transmitting a network communication containing the computer worm to the selected address).


An active computer worm can select an address in the predetermined address space based on a previously generated list of target addresses (e.g., a hit-list directed computer worm) and can infect a computing system 120 located at the selected address. Alternatively, an active computer worm can identify a target computing system 120 located at the selected address in the predetermined address space based on a previously generated list of target systems, and then infect the target computing system 120 based on the selected address.


In various embodiments, the computer worm sensor 105 identifies data packets directed to the predetermined address space and redirects the data packets to the computer worm sensor 105 by performing network address translation (NAT) on the data packets. For example, the computer network 110 may perform dynamic NAT on the data packets based on one or more NAT tables to redirect data packets to one or more computing systems 120 in the computer network 110. In the case of a hit-list directed computer worm having a hit-list that does not have a network address of a computing system 120 in the computer network 110, the computer network 110 can perform NAT to redirect the hit-list directed computer worm to one of the computing systems 120. Further, if the computer worm sensor 105 initiates a network communication that is not defined by the orchestrated behavior of the computer network 110, the computer network 110 can dynamically redirect the data packets of the network communication to a computing system 120 in the computer network 110.
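
The dynamic NAT redirection described above can be illustrated with the following hypothetical sketch, which maps destinations in the sensor's unused address space onto decoy computing systems; the address ranges and table layout are invented for the example.

```python
# Sketch: dynamic NAT that redirects packets addressed to the sensor's
# unused address space onto decoy systems in the decoy network.
import ipaddress
import itertools

DECOY_SYSTEMS = ["10.99.0.10", "10.99.0.11"]
SENSOR_SPACE = ipaddress.ip_network("192.0.2.0/24")   # predetermined unused space
_decoys = itertools.cycle(DECOY_SYSTEMS)
nat_table = {}

def redirect(dst_ip):
    """Map a destination in the sensor's address space to a decoy system."""
    if ipaddress.ip_address(dst_ip) not in SENSOR_SPACE:
        return dst_ip                       # not ours: leave the packet alone
    if dst_ip not in nat_table:
        nat_table[dst_ip] = next(_decoys)   # dynamic NAT entry
    return nat_table[dst_ip]

if __name__ == "__main__":
    print(redirect("192.0.2.57"))   # redirected to a decoy
    print(redirect("8.8.8.8"))      # untouched
```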


In another embodiment, the computer worm sensor 105 operates in conjunction with dynamic host configuration protocol (DHCP) servers in the communication network 130 to occupy an address space in the communication network 130. In this embodiment, the computer worm sensor 105 communicates with each DHCP server to determine which IP addresses are unassigned to a particular subnet associated with the DHCP server in the communication network 130. The computer worm sensor 105 then dynamically responds to network communications directed to those unassigned IP addresses. For example, the computer worm sensor 105 can dynamically generate an address resolution protocol (ARP) response to an ARP request.


In another embodiment, a traffic analysis device 135 analyzes communication traffic in the communication network 130 to identify a sequence of network communications characteristic of a computer worm. The traffic analysis device 135 may use one or more well-known worm traffic analysis techniques to identify a sequence of network communications in the communication network 130 characteristic of a computer worm. For example, the traffic analysis device 135 may identify a repeating pattern of network communications based on the destination ports of data packets in the communication network 130. The traffic analysis device 135 duplicates one or more network communications in the sequence of network communications and provides the duplicated network communications to the controller 115, which emulates the duplicated network communications in the computer network 110.


The traffic analysis device 135 may identify a sequence of network communications in the communication network 130 characteristic of a computer worm by using heuristic analysis techniques (i.e., heuristics) known to those skilled in the art. For example, the traffic analysis device 135 may detect a number of IP address scans, or a number of network communications to an invalid IP address, occurring within a predetermined period. The traffic analysis device 135 determines whether the sequence of network communications is characteristic of a computer worm by comparing the number of IP address scans or the number of network communications in the sequence to a heuristics threshold (e.g., one thousand IP address scans per second).
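
As a simple illustration of this heuristic, the sketch below counts IP-address scans within a time window and flags the traffic when the rate crosses a threshold; the window and threshold values are examples only.

```python
# Sketch: flag traffic as worm-like when the rate of IP-address scans in a
# time window exceeds a heuristic threshold.

SCANS_PER_SECOND_THRESHOLD = 1000

def is_worm_like(scan_timestamps, window_seconds=1.0):
    if not scan_timestamps:
        return False
    span = max(scan_timestamps) - min(scan_timestamps)
    rate = len(scan_timestamps) / max(span, window_seconds)
    return rate >= SCANS_PER_SECOND_THRESHOLD

if __name__ == "__main__":
    fast_scan = [i / 5000.0 for i in range(2500)]   # 2500 scans in 0.5 s
    print(is_worm_like(fast_scan))                  # True
    print(is_worm_like([0.0, 10.0, 20.0]))          # False
```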


The traffic analysis device 135 may lower typical heuristics thresholds of these heuristic techniques to increase the rate of computer worm detection, which can also increase the rate of false positive computer worm detection by the traffic analysis device 135. Because the computer worm sensor 105 emulates the duplicated network communications in the computer network 110 to determine whether the network communications include an anomalous behavior of a computer worm, the computer worm sensor 105 may increase the rate of computer worm detection without increasing the rate of false positive worm detection.


In another embodiment, the traffic analysis device 135 filters network communications characteristic of a computer worm in the communication network 130 before providing duplicate network communications to the controller 115. For example, a host A in the communication network 130 can send a network communication including an unusual data byte sequence (e.g., worm code) to a TCP/UDP port of a host B in the communication network 130. In turn, the host B can send a network communication including a similar unusual data byte sequence to the same TCP/UDP port of a host C in the communication network 130. In this example, the network communications from host A to host B and from host B to host C represent a repeating pattern of network communication. The unusual data byte sequences may be identical data byte sequences or highly correlated data byte sequences. The traffic analysis device 135 filters the repeating pattern of network communications by using a correlation threshold to determine whether to duplicate the network communication and provide the duplicated network communication to the controller 115.
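
The correlation-threshold filtering described above might, as a rough sketch, be implemented along these lines; the similarity measure and threshold are assumptions chosen for illustration.

```python
# Sketch: decide whether two payloads sent to the same port on successive
# hops are similar enough (per a correlation threshold) to be treated as a
# repeating worm pattern worth replaying in the decoy network.
import difflib

CORRELATION_THRESHOLD = 0.9

def should_replay(payload_a_to_b, payload_b_to_c):
    similarity = difflib.SequenceMatcher(None, payload_a_to_b,
                                         payload_b_to_c).ratio()
    return similarity >= CORRELATION_THRESHOLD

if __name__ == "__main__":
    worm = b"\x90" * 60 + b"EXPLOIT"
    variant = b"\x90" * 58 + b"EXPLOIT!"
    print(should_replay(worm, variant))        # True: highly correlated
    print(should_replay(worm, b"GET / HTTP"))  # False: unrelated traffic
```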


The traffic analysis device 135 may analyze communication traffic in the communication network 130 for a predetermined period. For example, the predetermined period can be a number of seconds, minutes, hours, or days. In this way, the traffic analysis device 135 can detect slow propagating computer worms as well as fast propagating computer worms in the communication network 130.


The computer worm sensor 105 may contain a computer worm (e.g., a scanning computer worm) within the computer network 110 by performing dynamic NAT on an unexpected network communication originating in the computer network 110 (e.g., an unexpected communication generated by a computing system 120). For example, the computer worm sensor 105 can perform dynamic NAT on data packets of an IP address range scan originating in the computer network 110 to redirect the data packets to a computing system 120 in the computer network 110. In this way, the network communication is contained in the computer network 110.


In another embodiment, the computer worm sensor 105 is topologically knit into the communication network 130 to facilitate detection of a topologically directed computer worm. The controller 115 may use various network services in the communication network 130 to topologically knit the computer worm sensor 105 into the communication network 130. For example, the controller 115 may generate a gratuitous ARP response including the IP address of a computing system 120 to the communication network 130 such that a host in the communication network 130 stores the IP address in an ARP cache. In this way, the controller 115 plants the IP address of the computing system 120 into the communication network 130 to topologically knit the computing system 120 into the communication network 130.


The ARP response generated by the computer worm sensor 105 may include a media access control (MAC) address and a corresponding IP address for one or more of the computing systems 120. A host (e.g., a computing system) in the communication network 130 can then store the MAC and IP addresses in one or more local ARP caches. A topologically directed computer worm can then access the MAC and IP addresses in the ARP caches and can target the computing systems 120 based on the MAC or IP addresses.


In various embodiments, the computer worm sensor 105 can accelerate network activities in the computer network 110. In this way, the computer worm sensor 105 can reduce the time for detecting a time-delayed computer worm (e.g., the CodeRed-II computer worm) in the computer network 110. Further, accelerating the network activities in the computer network 110 may allow the computer worm sensor 105 to detect the time-delayed computer worm before the time-delayed computer worm causes damage in the communication network 130. The computer worm sensor 105 can then generate a recovery script for the computer worm and provide the recovery script to the communication network 130 for disabling the computer worm in the communication network 130.


The computing system 120 in the computer network 110 can accelerate network activities by intercepting time-sensitive system calls (e.g., "time-of-day" or "sleep" system calls) generated by a software program executing in the computing system 120, or responses to such system calls, and then modifying the system calls or responses to accelerate execution of the software program. For example, the computing system 120 can modify a parameter of a "sleep" system call to reduce the execution time of this system call, or modify the time or date in a response to a "time-of-day" system call to a future time or date. Alternatively, the computing system 120 can identify a time-consuming program loop (e.g., a long, central processing unit intensive while loop) executing in the computing system 120 and increase the priority of the software program containing the program loop to accelerate execution of the loop.
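
As a rough, in-process analogue of this acceleration (not the system-call interception itself), the sketch below scales down sleep intervals and speeds up the reported clock; the acceleration factor and the monkey-patching approach are assumptions made only for illustration.

```python
# Sketch: accelerate a program's notion of time by wrapping the Python
# time module's sleep() and time() calls. A real sensor would intercept
# the corresponding OS system calls; this is only an in-process analogy.
import time

ACCELERATION = 60.0          # assumed factor: one real second = one virtual minute
_real_sleep = time.sleep
_real_time = time.time
_start = _real_time()

def fast_sleep(seconds):
    # Shorten the actual delay so a time-delayed payload triggers sooner.
    _real_sleep(seconds / ACCELERATION)

def fast_time():
    # Report a "time of day" that advances ACCELERATION times faster.
    elapsed = _real_time() - _start
    return _start + elapsed * ACCELERATION

time.sleep = fast_sleep
time.time = fast_time

if __name__ == "__main__":
    time.sleep(30)           # returns in ~0.5 real seconds
    print("virtual clock advanced by", round(time.time() - _start, 1), "seconds")
```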


In various embodiments, the computer worm sensor 105 includes one or more computer programs for identifying execution anomalies in the computing systems 120 (e.g., anomalous behavior in the computer network 110) and distinguishing a propagation vector of a computer worm from spurious traffic (e.g., chaff traffic) generated by the computer worm. In one embodiment, the computing systems 120 execute the computer programs to identify execution anomalies occurring in the computer network 110. The computer worm sensor 105 correlates these execution anomalies with the monitored behavior of the computer worm to distinguish computing processes (e.g., network services) that the computer worm exploits for propagation purposes from computing processes that only receive benign network traffic from the computer worm. The computer worm sensor 105 then determines a propagation vector of the computer worm based on the computing processes that the computer worm exploits for propagation purposes. In a further embodiment, each computing system 120 executing one of the computer programs functions as an intrusion detection system (IDS) by generating a computer worm intrusion indicator in response to detecting an execution anomaly.


In one embodiment, the computer worm sensor 105 tracks system call sequences to identify an execution anomaly in the computing system 120. For example, the computer worm sensor 105 can use finite state automata techniques to identify an execution anomaly. Additionally, the computer worm sensor 105 may identify an execution anomaly based on call-stack information for system calls executed in a computing system 120. For example, a call-stack execution anomaly may occur when a computer worm executes system calls from the stack or the heap of the computing system 120. The computer worm sensor 105 may also identify an execution anomaly based on virtual path identifiers in the call-stack information.
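
A minimal sketch of the finite state automaton idea is shown below; the allowed transition table is invented for illustration and would, in practice, be derived from the clean, orchestrated behavior of the computing system 120.

```python
# Sketch: flag an execution anomaly when an observed system call sequence
# leaves the set of transitions seen during clean, orchestrated runs.
ALLOWED_TRANSITIONS = {          # hypothetical model learned offline
    ("socket", "bind"),
    ("bind", "listen"),
    ("listen", "accept"),
    ("accept", "recvfrom"),
    ("recvfrom", "sendto"),
}

def is_anomalous(call_sequence):
    """Return True if any adjacent pair of calls is not in the model."""
    for prev_call, next_call in zip(call_sequence, call_sequence[1:]):
        if (prev_call, next_call) not in ALLOWED_TRANSITIONS:
            return True
    return False

print(is_anomalous(["socket", "bind", "listen", "accept", "recvfrom"]))  # False
print(is_anomalous(["recvfrom", "execve"]))                              # True -> anomaly
```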


The computer worm sensor 105 may monitor transport level ports of a computing system 120. For example, the computer worm sensor 105 can monitor system calls (e.g., "bind" or "recvfrom" system calls) associated with one or more transport level ports of a computing process in the computing system 120 to identify an execution anomaly. If the computer worm sensor 105 identifies an execution anomaly for one of the transport level ports, the computer worm sensor 105 includes the transport level port in the identifier (e.g., a signature or a vector) of the computer worm, as is described more fully herein.


In another embodiment, the computer worm sensor 105 analyzes binary code (e.g., object code) of a computing process in the computing system 120 to identify an execution anomaly. The computer worm sensor 105 may also analyze the call stack and the execution stack of the computing system 120 to identify the execution anomaly. For example, the computer worm sensor 105 may perform a static analysis on the binary code of the computing process to identify possible call stacks and virtual path identifiers for the computing process. The computer worm sensor 105 then compares an actual call stack with the identified call stacks to identify a call stack execution anomaly in the computing system 120. In this way, the computer worm sensor 105 can reduce the number of false positive computer worm detections and false negative computer worm detections. Moreover, if the computer worm sensor 105 can identify all possible call stacks and virtual path identifiers for the computing process, the computer worm sensor 105 can achieve a zero false positive rate of computer worm detection.


In another embodiment, the computer worm sensor 105 identifies one or more anomalous program counters in the call stack. For example, an anomalous program counter can be the program counter of a system call generated by worm code of a computer worm. The computer worm sensor 105 tracks the anomalous program counters and determines an identifier for detecting the computer worm based on the anomalous program counters. Additionally, the computer worm sensor 105 can determine whether a memory location (e.g., a memory address or a memory page) referenced by the program counter is a writable memory location. The computer worm sensor 105 then determines whether the computer worm has exploited the memory location. For example, a computer worm can store worm code into a memory location by exploiting a vulnerability of the computing system 120 (e.g., a buffer overflow mechanism).
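
On a Linux computing system, one plausible way to test whether the memory referenced by an anomalous program counter is writable is to consult the /proc/&lt;pid&gt;/maps interface; the sketch below assumes that environment and is not taken from the embodiment itself.

```python
# Sketch: decide whether an anomalous program counter points into a
# writable mapping of the monitored process (Linux /proc interface).
def pc_in_writable_region(pid, program_counter):
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            addr_range, perms = line.split()[:2]
            start, end = (int(x, 16) for x in addr_range.split("-"))
            if start <= program_counter < end:
                return "w" in perms    # e.g. "rwxp" -> writable (and executable)
    return False

# Example (hypothetical PID and address):
# if pc_in_writable_region(1234, 0x7ffd4a2b1000):
#     print("program counter references writable memory -> possible injected code")
```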


The computer worm sensor 105 may take a snapshot of data in the memory around the memory location referenced by the anomalous program counter. The computer worm sensor 105 then searches the snapshot for data in recent data packets received by the computing process (e.g., computing thread) associated with the anomalous program counter. The computer worm sensor 105 searches the snapshot by using a searching algorithm to compare data in the recent data packets with a sliding window of data (e.g., 16 bytes of data) in the snapshot. If the computer worm sensor 105 finds a match between the data in a recent data packet and the data in the sliding window, the matching data is deemed to be a signature candidate for the computer worm.
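
A naive form of this sliding-window comparison might look like the following sketch; the 16-byte window matches the example above, while the snapshot and payload values are fabricated for illustration.

```python
# Sketch: search a memory snapshot for any 16-byte window that also
# appears in recently received packet payloads; matches become
# signature candidates for the suspected worm.
WINDOW = 16

def signature_candidates(snapshot, recent_payloads):
    candidates = set()
    for offset in range(len(snapshot) - WINDOW + 1):
        window = bytes(snapshot[offset:offset + WINDOW])
        if any(window in payload for payload in recent_payloads):
            candidates.add(window)
    return candidates

# Fabricated example data: bytes around the anomalous program counter and
# payloads of recent packets received by the same computing thread.
snapshot = b"..." + b"\x90\x90\x31\xc0\x50\x68//sh\x68/bin\x89\xe3" + b"..."
payloads = [b"GET / HTTP/1.0\r\n" + b"\x90\x90\x31\xc0\x50\x68//sh\x68/bin\x89\xe3" * 2]
for sig in signature_candidates(snapshot, payloads):
    print("candidate:", sig.hex())
```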


In another embodiment, the computing system 120 tracks the integrity of computing code in a computing system 120 to identify an execution anomaly in the computing system 120. The computing system 120 associates an integrity value with data stored in the computing system 120 to identify the source of the data. If the data is from a known source (e.g., a computing program) in the computing system 120, the integrity value is set to one, otherwise the integrity value is set to zero. For example, data received by the computing system 120 in a network communication is associated with an integrity value of zero. The computing system 120 stores the integrity value along with the data in the computing system 120, and monitors a program counter in the computing system 120 to identify an execution anomaly based on the integrity value. A program counter having an integrity value of zero indicates that data from a network communication is stored in the program counter, which represents an execution anomaly in the computing system 120.
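
The integrity-tagging scheme can be illustrated with a toy model in which every stored value carries a one-bit tag; the classes and example values below are assumptions for the sketch, not the sensor's actual implementation.

```python
# Sketch: tag data with an integrity bit (1 = trusted local source,
# 0 = received from the network) and raise an alarm if the program
# counter is ever loaded from zero-integrity data.
class TaggedValue:
    def __init__(self, value, integrity):
        self.value = value
        self.integrity = integrity   # 1 = trusted, 0 = from a network communication

def load_program_counter(tagged):
    if tagged.integrity == 0:
        raise RuntimeError("execution anomaly: control flow derived from network data")
    return tagged.value

return_address = TaggedValue(0x08048ABC, integrity=1)   # pushed by trusted code
overwritten = TaggedValue(0x41414141, integrity=0)      # copied in from a packet

load_program_counter(return_address)                    # fine
try:
    load_program_counter(overwritten)                   # triggers the anomaly
except RuntimeError as err:
    print(err)
```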


The computing system 120 may use the signature extraction algorithm to identify a decryption routine in the worm code of a polymorphic worm, such that the decryption routine is deemed to be a signature candidate of the computer worm. Additionally, the computer worm sensor 105 may compare signature candidates identified by the computing systems 120 in the computer worm sensor 105 to determine an identifier for detecting the computer worm. For example, the computer worm sensor 105 can identify common code portions in the signature candidates to determine an identifier for detecting the computer worm. In this way, the computer worm sensor 105 can determine an identifier of a polymorphic worm containing a mutating decryption routine (e.g., polymorphic code).


In another embodiment, the computer worm sensor 105 monitors network traffic in the computer network 110 and compares the monitored network traffic with typical network traffic patterns occurring in a computer network to identify anomalous network traffic in the computer network 110. The computer worm sensor 105 determines signature candidates based on data packets of the anomalous network traffic (e.g., extracts signature candidates from the data packets) and determines identifiers for detecting computer worms based on the signature candidates.


In another embodiment, the computer worm sensor 105 evaluates characteristics of a signature candidate to determine the quality of the signature candidate, which indicates an expected level of false positive computer worm detection in a computer network (e.g., the communication network 130). For example, a signature candidate having a high quality is not contained in data packets of typical network traffic occurring in the computer network. Characteristics of a signature candidate include a minimum length of the signature candidate (e.g., 16 bytes of data) and an unusual data byte sequence. In one embodiment, the computer worm sensor 105 performs statistical analysis on the signature candidate to determine whether the signature candidate includes an unusual byte sequence. For example, the computer worm sensor 105 can determine a correlation between the signature candidate and data contained in typical network traffic. In this example, a low correlation (e.g., zero correlation) indicates a high quality signature candidate.
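
One simple way to approximate the quality measure described here is to count how often a candidate occurs in a corpus of known-benign traffic; the scoring function below is an assumption standing in for the statistical analysis actually performed.

```python
# Sketch: score a signature candidate by how rarely it occurs in a
# baseline of typical (benign) traffic; rarer candidates are higher
# quality because they should produce fewer false positives.
MIN_LENGTH = 16

def candidate_quality(candidate, benign_payloads):
    if len(candidate) < MIN_LENGTH:
        return 0.0                               # too short to be distinctive
    hits = sum(payload.count(candidate) for payload in benign_payloads)
    return 1.0 / (1.0 + hits)                    # 1.0 = never seen in benign traffic

benign = [b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"] * 100
print(candidate_quality(b"\x90" * 16, benign))        # 1.0 -> high quality
print(candidate_quality(b"GET /index.html ", benign)) # low -> appears in normal traffic
```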


In another embodiment, the computer worm sensor 105 identifies execution anomalies by detecting unexpected computing processes in the computer network 110 (i.e., computing processes that are not part of the orchestrated behavior of the computer network 110). The operating systems in the computing systems 120 may be configured to detect computing processes that are not in a predetermined collection of computing processes. In another embodiment, a computing system 120 is configured as a network server that permits a host in the communication network 130 to remotely execute commands on the computing system 120. For example, the original Morris computer worm exploited a debug mode of sendmail that allowed remote command execution in a mail server.


In some cases, the intrusion detection system of the computer worm sensor 105 detects an active computer worm based on anomalous network traffic in the computer network 110, but the computer worm sensor 105 does not detect an execution anomaly caused by a computing process in the computer network 110. In these cases, the computer worm sensor 105 determines whether the computer worm has multiple possible transport vectors based on the ports being accessed by the anomalous network traffic in the computer network 110. If the computer network 110 includes a small number of ports (e.g., one or two), the computer worm sensor 105 can use these ports to determine a vector for the computer worm. Conversely, if the computer network 110 includes many ports (e.g., three or more ports), the computer worm sensor 105 partitions the computing services in the computer network 110 at appropriate control points to determine those ports exploited by the computer worm.


The computer worm sensor 105 may randomly block ports of the computing systems 120 to suppress traffic to these blocked ports. Consequently, a computer worm having a transport vector that requires one or more of the blocked ports will not be able to infect a computing system 120 in which those ports are blocked. The computer worm sensor 105 then correlates the anomalous behavior of the computer worm across the computing systems 120 to determine which ports the computer worm has used for diversionary purposes (e.g., emitting chaff) and which ports the computer worm has used for exploitive purposes. The computer worm sensor 105 then determines a transport vector of the computer worm based on the ports that the computer worm has used for exploitive purposes.



FIG. 2 depicts an exemplary embodiment of the controller 115. The controller 115 includes an extraction unit 200, an orchestration engine 205, a database 210, and a software configuration unit 215. The extraction unit 200, the orchestration engine 205, the database 210, and the software configuration unit 215 are in communication with each other and with the computer network 110 (FIG. 1). Optionally, the controller 115 includes a protocol sequence replayer 220 in communication with the computer network 110 and the traffic analysis device 135 (FIG. 1).


In various embodiments, the orchestration engine 205 controls the state and operation of the computer worm sensor 105 (FIG. 1). In one embodiment, the orchestration engine 205 configures the computing systems 120 (FIG. 1) and the gateway 125 (FIG. 1) to operate in a predetermined manner in response to network activities occurring in the computer network 110, and generates network activities in the computer network 110 and the communication network 130 (FIG. 1). In this way, the orchestration engine 205 orchestrates network activities in the computer network 110. For example, the orchestration engine 205 may orchestrate network activities in the computer network 110 by generating an orchestration sequence (e.g., a predetermined sequence of network activities) among various computing systems 120 in the computer network 110, including network traffic that typically occurs in the communication network 130.


In one embodiment, the orchestration engine 205 sends orchestration requests (e.g., orchestration patterns) to various orchestration agents (e.g., computing processes) in the computing systems 120. The orchestration agent of a computing system 120 performs a periodic sweep of computing services (e.g., network services) in the computing system 120 that are potential targets of a computer worm attack. The computing services in the computing system 120 may include typical network services (e.g., web service, FTP service, mail service, instant messaging, or Kazaa) that are also in the communication network 130.


The orchestration engine 205 may generate a wide variety of orchestration sequences to exercise a variety of computing services in the computer network 110, or may select orchestration patterns to avoid loading the computer network 110 with orchestrated network traffic. Additionally, the orchestration engine 205 may select the orchestration patterns to vary the orchestration sequences. In this way, a computer worm is prevented from scanning the computer network 110 to predict the behavior of the computer network 110.


In various embodiments, the software configuration unit 215 dynamically creates or destroys virtual machines (VMs) or VM software profiles in the computer network 110, and may initialize or update the software state of the VMs or VM software profiles. In this way, the software configuration unit 215 configures the computer network 110 such that the controller 115 can orchestrate network activities in the computer network 110 based on one or more orchestration patterns. It is to be appreciated that the software configuration unit 215 is optional in various embodiments of the computer worm sensor 105.


In various embodiments, the extraction unit 200 determines an identifier for detecting the computer worm. In these embodiments, the extraction unit 200 can extract a signature or a vector of the computer worm based on network activities (e.g., an anomalous behavior) occurring in the computer network 110, for example from data (e.g., data packets) in a network communication.


The database 210 stores data for the computer worm sensor 105, which may include a configuration state of the computer worm sensor 105. For example, the configuration state may include orchestration patterns or "golden" software images of computer programs (i.e., original software images uncorrupted by a computer worm exploit). The data stored in the database 210 may also include identifiers or recovery scripts for computer worms, or identifiers for the sources of computer worms in the communication network 130. The identifier for the source of each computer worm may be associated with the identifier and the recovery script of the computer worm.


The protocol sequence replayer 220 receives a network communication from the traffic analysis device 135 (FIG. 1) representing a network communication in the communication network 130 and replays (i.e., duplicates) the network communication in the computer network 110. The protocol sequence replayer 220 may receive the network communication from the traffic analysis device 135 via a private encrypted network (e.g., a virtual private network) within the communication network 130 or via another communication network. The controller 115 monitors the behavior of the computer network 110 in response to the network communication to determine a monitored behavior of the computer network 110 and determine whether the monitored behavior includes an anomalous behavior, as is described more fully herein.


In one embodiment, the protocol sequence replayer 220 includes a queue 225 for storing network communications. The queue 225 receives a network communication from the traffic analysis device 135 and temporarily stores the network communication until the protocol sequence replayer 220 is available to replay the network communication. In another embodiment, the protocol sequence replayer 220 is a computing system 120 in the computer network 110. For example, the protocol sequence replayer 220 may be a computer server including computer program code for replaying network communications in the computer network 110.


In another embodiment, the protocol sequence replayer 220 is in communication with a port (e.g., connected to a network port) of a network device in the communication network 130 and receives duplicated network communications occurring in the communication network 130 from the port. For example, the port can be a Switched Port Analyzer (SPAN) port of a network switch or a network router in the communication network 130, which duplicates network traffic in the communication network 130. In this way, various types of active and passive computer worms (e.g., hit-list directed, topologically-directed, server-directed, and scan-directed computer worms) may propagate from the communication network 130 to the computer network 110 via the duplicated network traffic.


The protocol sequence replayer 220 replays the data packets in the computer network 110 by sending the data packets to a computing system 120 having the same class (e.g., Linux or Windows platform) as the original target system of the data packets. In various embodiments, the protocol sequence replayer 220 synchronizes any return network traffic generated by the computing system 120 in response to the data packets. The protocol sequence replayer 220 may suppress (e.g., discard) the return network traffic such that the return network traffic is not transmitted to a host in the communication network 130. In one embodiment, the protocol sequence replayer 220 replays the data packets by sending the data packets to the computing system 120 via a TCP connection or UDP session. In this embodiment, the protocol sequence replayer 220 synchronizes return network traffic by terminating the TCP connection or UDP session.


The protocol sequence replayer 220 may modify destination IP addresses of data packets in the network communication to one or more IP addresses of the computing systems 120 and replay (i.e., generate) the modified data packets in the computer network 110. The controller 115 monitors the behavior of the computer network 110 in response to the modified data packets, and may detect an anomalous behavior in the monitored behavior, as is described more fully herein. If the controller 115 identifies an anomalous behavior, the computer network 110 is deemed to be infected with a computer worm and the controller 115 determines an identifier for the computer worm, as is described more fully herein.


The protocol sequence replayer 220 may analyze data packets in a sequence of network communications in the communication network 130 to identify a session identifier. The session identifier identifies a communication session for the sequence of network communications and can distinguish the network communications in the sequence from other network communications in the communication network 130. For example, each communication session in the communication network 130 can have a unique session identifier. The protocol sequence replayer 220 may identify the session identifier based on the communication protocol of the network communications in the sequence. For instance, the session identifier may be in a field of a data packet header as specified by the communication protocol. Alternatively, the protocol sequence replayer 220 may infer the session identifier from repeating network communications in the sequence. For example, the session identifier is typically one of the first fields in an application level communication between a client and a server (e.g., computing system 120) and is repeatedly used in subsequent communications between the client and the server.


The protocol sequence replayer 220 may modify the session identifier in the data packets of the sequence of network communications. The protocol sequence replayer 220 generates an initial network communication in the computer network 110 based on a selected network communication in the sequence, and the computer network 110 (e.g., a computing system 120) generates a response including a session identifier. The protocol sequence replayer 220 then substitutes the session identifier in the remaining data packets of the network communication with the session identifier of the response. In a further embodiment, the protocol sequence replayer 220 dynamically modifies session variables in the data packets, as is appropriate, to emulate the sequence of network communications in the computer network 110.
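
A simplified sketch of session identifier substitution during replay follows; the packet representation, the fixed position of the identifier in the response, and the stand-in send/recv callables are assumptions made for illustration.

```python
# Sketch: replay a recorded protocol sequence, replacing the recorded
# session identifier with the one issued by the live virtual machine
# so the remaining packets are accepted as part of the new session.
def replay_with_session_substitution(recorded_packets, recorded_session_id, send, recv):
    # Replay the first packet unchanged and learn the fresh session id
    # from the response (its position and length are protocol-specific).
    send(recorded_packets[0])
    response = recv()
    live_session_id = response[:len(recorded_session_id)]   # assumed position

    for packet in recorded_packets[1:]:
        send(packet.replace(recorded_session_id, live_session_id))

# Example with stand-in send/recv callables:
outbox = []
replay_with_session_substitution(
    recorded_packets=[b"LOGIN user", b"SESSION=abcd1234 GET data"],
    recorded_session_id=b"abcd1234",
    send=outbox.append,
    recv=lambda: b"9f3e77c1 OK",
)
print(outbox)   # the second packet now carries the live id 9f3e77c1
```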


The protocol sequence replayer 220 may determine the software or hardware profile of a host (e.g., a computing system) in the communication network 130 to which the data packets of the network communication are directed. The protocol sequence replayer 220 then selects a computing system 120 in the computer network 110 that has the same software or hardware profile as the host and performs dynamic NAT on the data packets to redirect the data packets to the selected computing system 120. Alternatively, the protocol sequence replayer 220 randomly selects a computing system 120 and performs dynamic NAT on the data packets to redirect the data packets to the randomly selected computing system 120.


In one embodiment, the traffic analysis device 135 can identify a request (i.e., a network communication) from a web browser to a web server in the communication network 130, and a response (i.e., a network communication) from the web server to the web browser. In this case, the response may include a passive computer worm. The traffic analysis device 135 may inspect web traffic on a selected network link in the communication network 130 to identify the request and response. For example, the traffic analysis device 135 may select the network link or identify the request based on a policy. The protocol sequence replayer 220 orchestrates the request in the computer network 110 such that a web browser in a computing system 120 initiates a substantially similar request. In response to this request, the protocol sequence replayer 220 generates a response to the web browser in the computing system 120, which is substantially similar to the response generated by the web server in the communication network 130. The controller 115 then monitors the behavior of the web browser in the computing system 120 and may identify an anomalous behavior in the monitored behavior. If the controller 115 identifies an anomalous behavior, the computer network 110 is deemed to be infected with a passive computer worm.



FIG. 3 depicts an exemplary computer worm detection system 300. The computer worm detection system 300 includes multiple computer worm sensors 105 and a sensor manager 305. Each of the computer worm sensors 105 is in communication with the sensor manager 305 and the communication network 130. The sensor manager 305 coordinates communications or operations between the computer worm sensors 105.


In one embodiment, each computer worm sensor 105 randomly blocks one or more ports of the computing systems 120. Accordingly, some of the worm sensors 105 may detect an anomalous behavior of a computer worm, as described more fully herein. The worm sensors 105 that detect an anomalous behavior communicate the anomalous behavior (e.g., a signature candidate) to the sensor manager 305. In turn, the sensor manager 305 correlates the anomalous behaviors and determines an identifier (e.g., a transport vector) for detecting the computer worm.


In some cases, a human intruder (e.g., a computer hacker) may attempt to exploit vulnerabilities that a computer worm would exploit in a computer worm sensor 105. The sensor manager 305 may distinguish an anomalous behavior of a human intruder from an anomalous behavior of a computer worm by tracking the number of computing systems 120 in the computer worm sensors 105 that detect a computer worm within a given period. If the number of computing systems 120 detecting a computer worm within the given period exceeds a predetermined threshold, the sensor manager 305 determines that a computer worm caused the anomalous behavior. Conversely, if the number of computing systems 120 detecting a computer worm within the given period is equal to or less than the predetermined threshold, the sensor manager 305 determines that a human intruder caused the anomalous behavior. In this way, false positive detections of the computer worm may be decreased.
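
The thresholding decision can be expressed compactly, as in the sketch below; the threshold value is arbitrary and stands in for whatever the sensor manager 305 is configured with.

```python
# Sketch: attribute an anomaly to a worm only if enough computing
# systems report it within the observation window; otherwise treat it
# as a likely human intruder probing a single vulnerability.
WORM_THRESHOLD = 5   # assumed number of reporting systems

def classify_anomaly(reporting_system_count):
    return "computer worm" if reporting_system_count > WORM_THRESHOLD else "human intruder"

print(classify_anomaly(reporting_system_count=12))  # computer worm
print(classify_anomaly(reporting_system_count=2))   # human intruder
```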


In one embodiment, each computer worm sensor 105 maintains a list of infected hosts (e.g., computing systems infected by a computer worm) in the communication network 130 and communicates the list to the sensor manager 305. In this way, the computer worm detection system 300 maintains a list of infected hosts detected by the computer worm sensors 105.



FIG. 4 depicts a flow chart for an exemplary method of detecting computer worms, in accordance with one embodiment of the present invention. In step 400, the computer worm sensor 105 (FIG. 1) orchestrates a sequence of network activities in the computer network 110 (FIG. 1). For example, the orchestration engine 205 (FIG. 2) of the computer worm sensor 105 can orchestrate the sequence of network activity in the computer network 110 based on one or more orchestration patterns, as is described more fully herein.


In step 405, the controller 115 (FIG. 1) of the computer worm sensor 105 monitors the behavior of the computer network 110 in response to the predetermined sequence of network activity. For example, the orchestration engine 205 (FIG. 2) of the computer worm sensor 105 can monitor the behavior of the computer network 110. The monitored behavior of the computer network 110 may include one or more network activities in addition to the predetermined sequence of network activities or network activities that differ from the predetermined sequence of network activities.


In step 410, the computer worm sensor 105 identifies an anomalous behavior in the monitored behavior to detect a computer worm. In one embodiment, the controller 115 identifies the anomalous behavior by comparing the predetermined sequence of network activities with network activities in the monitored behavior. For example, the orchestration engine 205 of the controller 115 can identify the anomalous behavior by comparing network activities in the monitored behavior with one or more orchestrated behaviors defining the predetermined sequence of network activities. The computer worm sensor 105 evaluates the anomalous behavior to determine whether the anomalous behavior is caused by a computer worm, as is described more fully herein.


In step 415, the computer worm sensor 105 determines an identifier for detecting the computer worm based on the anomalous behavior. The identifier may include a signature or a vector of the computer worm, or both. For example, the vector can be a transport vector, an attack vector, or a payload vector. In one embodiment, the extraction unit 200 of the computer worm sensor 105 determines the signature of the computer worm based on one or more signature candidates, as is described more fully herein. It is to be appreciated that step 415 is optional in accordance with various embodiments of the computer worm sensor 105.


In step 420, the computer worm sensor 105 spreads the identifier to network enforce points. A network enforce point can be a router, a server, a network switch, or another computer worm sensor 105. Once a network enforce point receives the identifier, the identifier may then be used to identify the same computer worm or other variants at different points in the network. Advantageously, once the computer worm is identified, the identifier can be used by other devices throughout the network to quickly and easily stop attacks from spreading, without other computer worm sensors 105 re-orchestrating the predetermined sequence of network activities. The process of dynamic signature creation and enforcement is further discussed in FIGS. 15-18.


In step 425, the computer worm sensor 105 generates a recovery script for the computer worm. An infected host (e.g., an infected computing system or network) can then execute the recovery script to disable (e.g., destroy) the computer worm in the infected host or repair damage to the host caused by the computer worm. The computer worm sensor 105 may also identify a host in the communication network 130 that is the source of the computer worm and provide the recovery script to the host such that the host can disable the computer worm and repair damage to the host caused by the computer worm.


In one embodiment, the controller 115 determines a current image of the file system in the computing system 120, and compares the current image with an original image of the file system in the computing system 120 to identify any discrepancies between the current image and the original image. The controller 115 then generates the recovery script based on these discrepancies. The recovery script includes computer program code for identifying infected software programs or memory locations based on the discrepancies, and for removing the discrepancies from the infected software programs or memory locations. In various embodiments, the computer worm sensor 105 spreads the recovery script to one or more network enforce points.
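
A crude analogue of this image comparison is to hash every file in the golden and current images and emit a script that removes or restores anything that differs; the directory paths and generated shell commands below are illustrative assumptions.

```python
# Sketch: compare a "golden" file system image with the current one and
# emit a recovery script that restores or removes any file that differs.
import hashlib
import os

def file_hashes(root):
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def build_recovery_script(golden_root, current_root):
    golden, current = file_hashes(golden_root), file_hashes(current_root)
    lines = ["#!/bin/sh", "# auto-generated recovery script (illustrative)"]
    for rel, digest in current.items():
        if rel not in golden:
            lines.append(f"rm -f '/{rel}'  # file not present in golden image")
        elif golden[rel] != digest:
            lines.append(f"cp '{os.path.join(golden_root, rel)}' '/{rel}'  # restore modified file")
    for rel in golden:
        if rel not in current:
            lines.append(f"cp '{os.path.join(golden_root, rel)}' '/{rel}'  # restore deleted file")
    return "\n".join(lines)

# Example usage with hypothetical image locations:
# print(build_recovery_script("/images/golden", "/images/infected"))
```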



FIG. 5 depicts an exemplary embodiment of a computer worm containment system 500 comprising a worm sensor 105 in communication with a computer worm blocking system, shown here as a single blocking device 510, over a communication network 130. The blocking device 510 is configured to protect one or more computing services 520. Although the blocking device 510 is shown in FIG. 5 as integrated within the computing service 520, the blocking device 510 can also be implemented as a network appliance between the computing service 520 and the communication network 130. It will be appreciated that the blocking device 510 can also be in communication with more than one worm sensor 105 across the communication network 130. Further, although the communication network 130 is illustrated as being distinct from the computing service 520, the computing service 520 can also be a component of the communication network 130.


Additionally, the computer worm blocking system can comprise multiple blocking devices 510 in communication with one or more computer worm blocking managers (not shown) across the communication network 130 in analogous fashion to the computer worm detection system 300 of FIG. 3. The computer worm blocking managers coordinate communications and operations between the blocking devices 510. In general, worm sensors 105 and blocking devices 510 may be collocated, or they may be implemented on separate devices, depending on the network environment. In one embodiment, communications between the worm sensors 105, the sensor manager 305, the blocking devices 510, and the computer worm blocking managers are cryptographically authenticated.


In one embodiment, the blocking device 510 loads a computer worm signature into a content filter operating at the network level to block the computer worm from entering the computing service 520 from the communication network 130. In another embodiment, the blocking device 510 blocks a computer worm transport vector in the computing service 520 by using transport level access control lists (ACLs) in the computing service 520.


More specifically, the blocking device 510 can function as a network interface between the communication network 130 and the corresponding computing service 520. For example, a blocking device 510 can be an inline signature-based Intrusion Detection and Prevention (IDP) system, as would be recognized by one skilled in the art. As another example, the blocking device 510 can be a firewall, network switch, or network router that includes content filtering or ACL management capabilities.


An effective computer worm quarantine may require a proper network architecture to ensure that blocking measures are effective in containing the computer worm. For example, if there are content filtering devices or transport level ACL devices protecting a set of subnets on the computing service 520, then there should not be another path from the computing service 520 on that subnet that does not pass through the filtering device.


Assuming that the communication network 130 is correctly partitioned, the function of the blocking device 510 is to receive a computer worm identifier, such as a signature list or transport vector, from the worm sensor 105 and configure the appropriate filtering devices. These filtering devices can be commercially available switches, routers, or firewalls obtainable from any of a number of network equipment vendors, or host-based solutions that provide similar functionality. In some embodiments, ACLs are used to perform universal blocking of those transport ports for the computing services 520 under protection. For example, traffic originating from a given source IP and intended for a given destination IP with the destination port matching a transport port in the transport vector can be blocked.
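
As a schematic example of such universal blocking, a filtering device could be driven by rules generated from the transport vector; the rule format below is invented for illustration and does not correspond to any vendor's ACL syntax.

```python
# Sketch: derive simple drop rules from a computer worm's transport
# vector and apply them to packets flowing toward a protected service.
def build_block_rules(transport_vector_ports, protected_subnet):
    return [{"dst_subnet": protected_subnet, "dst_port": port, "action": "drop"}
            for port in transport_vector_ports]

def filter_packet(packet, rules):
    for rule in rules:
        if (packet["dst_ip"].startswith(rule["dst_subnet"])
                and packet["dst_port"] == rule["dst_port"]):
            return rule["action"]          # block the worm's transport port
    return "forward"

rules = build_block_rules(transport_vector_ports=[135, 445], protected_subnet="10.1.")
print(filter_packet({"dst_ip": "10.1.2.3", "dst_port": 445}, rules))   # drop
print(filter_packet({"dst_ip": "10.1.2.3", "dst_port": 80}, rules))    # forward
```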


Another class of filtering is content based filtering, in which the filtering devices inspect the contents of the data past the TCP or UDP header of a data packet to check for particular data sequences. Examples of content filtering devices are routers in the class of the Cisco™ routers that use Network Based Application Recognition (NBAR) to classify and apply a policy to packets (e.g., reduce the priority of the packets or discard the packets). These types of filtering devices can be useful to implement content filtering at appropriate network points.


In one embodiment, host-based software is deployed on an enterprise scale to perform content filtering in the context of host-based software. In this embodiment, ACL specifications (e.g., vendor independent ACL specifications) and content filtering formats (e.g., eXtensible Markup Language or XML format) are communicated to the blocking devices 510, which in turn dynamically configure transport ACLs or content filters for network equipment and host software of different vendors.



FIG. 6 depicts a computer worm defense system of the present invention that comprises a plurality of separate computer worm containment systems 500 coupled to a management system 600. Each of the plurality of computer worm containment systems 500 includes a worm sensor 105 in communication over a communication network 130 with a computer worm blocking system, again represented by a single blocking device 510 configured to protect a computer system 520. The management system 600 communicates with both the worm sensors 105 and the blocking systems of the various computer worm containment systems 500.


Each computer worm containment system 500 is associated with a subscriber having a subscriber account that is maintained and managed by the management system 600. The management system 600 provides various computer worm defense services that allow the subscribers to obtain different levels of protection from computer worms, computer viruses, and other malicious code, based on levels of payment, for example.


The management system 600 interacts with the worm sensors 105 of the various computer worm containment systems in several ways. For example, the management system 600 can activate and deactivate worm sensors 105 based on payment or the lack thereof by the associated subscriber. The management system 600 also obtains identifiers of computer worms and repair scripts from the various worm sensors 105 and distributes these identifiers to other computer worm containment systems 500. The management system 600 can also distribute system updates as needed to controllers 115 (not shown) of the worm sensors 105. It will be appreciated that the computer worm defense system of the invention benefits from having a distributed set of worm sensors 105 in a widely distributed set of environments, compared to a centralized detection system, because computer worms are more likely to be detected sooner by the distributed set of worm sensors 105. Accordingly, in some embodiments it is advantageous not to deactivate a worm sensor 105 upon non-payment by a subscriber.


The management system 600 also interacts with the computer worm blocking systems of the various computer worm containment systems. Primarily, the management system 600 distributes computer worm identifiers found by worm sensors 105 of other computer worm containment systems 500 to the remaining computer worm blocking systems. In some embodiments the distribution is performed automatically as soon as the identifiers become known to the management system 600. However, in other embodiments, perhaps based on lower subscription rates paid by subscribers, newly found computer worm identifiers are distributed on a periodic basis such as daily or weekly. Similarly, the distribution of repair scripts to the various computer worm containment systems can also be controlled by the management system 600. In some embodiments, identifiers and/or repair scripts are distributed to subscribers by CD-ROM or similar media rather than automatically over a network such as the Internet.


In one embodiment, payment for the computer worm defense service is based on a periodic (e.g., monthly or annual) subscription fee. Such a fee can be based on the size of the enterprise being protected by the subscriber's computer worm containment system 500, where the size can be measured, for example, by the number of computer systems 520 therein. In another embodiment, a subscriber pays a fee for each computer worm identifier that is distributed to a computer worm containment system associated with the subscriber. In still another embodiment, payment for the computer worm defense service is based on a combination of a periodic subscription fee and a fee for each computer worm identifier received from the computer worm defense service. In yet another embodiment, subscribers receive a credit for each computer worm identifier that originates from a worm sensor 105 of their computer worm containment system 500.



FIG. 7 depicts an unauthorized activity detection system 700, in accordance with one embodiment of the present invention. The unauthorized activity detection system 700 comprises a source device 705, a destination device 710, and a tap 715 each coupled to a communication network 720. The tap 715 is further coupled to a controller 725.


The source device 705 and the destination device 710 are digital devices. Some examples of digital devices include computers, servers, laptops, personal digital assistants, and cellular telephones. The source device 705 is configured to transmit network data over the communication network 720 to the destination device 710. The destination device 710 is configured to receive the network data from the source device 705.


The tap 715 is a digital data tap configured to monitor network data and provide a copy of the network data to the controller 725. Network data comprises signals and data that are transmitted over the communication network 720 including data flows from the source device 705 to the destination device 710. In one example, the tap 715 intercepts and copies the network data without an appreciable decline in performance of the source device 705, the destination device 710, or the communication network 720. The tap 715 can copy any portion of the network data. For example, the tap 715 can receive and copy any number of data packets from the network data.


In some embodiments, the network data can be organized into one or more data flows and provided to the controller 725. In various embodiments, the tap 715 can sample the network data based on a sampling scheme. Data flows can then be reconstructed based on the network data samples.


The tap 715 can also capture metadata from the network data. The metadata can be associated with the source device 705 and the destination device 710. The metadata can identify the source device 705 and/or the destination device 710. In some embodiments, the source device 705 transmits metadata which is captured by the tap 715. In other embodiments, the heuristic module 730 (described herein) can determine the source device 705 and the destination device 710 by analyzing data packets within the network data in order to generate the metadata.


The communication network 720 can be similar to the communication network 130 (FIG. 1). The communication network 720 can be a public computer network such as the Internet, or a private computer network such as a wireless telecommunication network, wide area network, or local area network.


The controller 725 can be any digital device or software that receives network data from the tap 715. In some embodiments, the controller 725 is contained within the computer worm sensor 105 (FIG. 1). In other embodiments, the controller 725 may be contained within a separate traffic analysis device 135 (FIG. 1) or a stand-alone digital device. The controller 725 can comprise a heuristic module 730, a scheduler 735, a fingerprint module 740, a virtual machine pool 745, an analysis environment 750, and a policy engine 755. In some embodiments, the tap 715 can be contained within the controller 725.


The heuristic module 730 receives the copy of the network data from the tap 715. The heuristic module 730 applies heuristics and/or probability analysis to determine if the network data might contain suspicious activity. In one example, the heuristic module 730 flags network data as suspicious. The network data can then be buffered and organized into a data flow. The data flow is then provided to the scheduler 735. In some embodiments, the network data is provided directly to the scheduler 735 without buffering or organizing the data flow.


The heuristic module 730 can perform any heuristic and/or probability analysis. In one example, the heuristic module 730 performs a dark internet protocol (IP) heuristic. A dark IP heuristic can flag network data coming from a source device 705 that has not previously been identified by the heuristic module 730. The dark IP heuristic can also flag network data going to an unassigned IP address. In an example, an attacker scans random IP addresses of a network to identify an active server or workstation. The dark IP heuristic can flag network data directed to an unassigned IP address.


The heuristic module 730 can also perform a dark port heuristic. A dark port heuristic can flag network data transmitted to an unassigned or unusual port address. Such network data transmitted to an unusual port can be indicative of a port scan by a worm or hacker. Further, the heuristic module 730 can flag network data from the source device 705 that are significantly different than traditional data traffic transmitted by the source device 705. For example, the heuristic module 730 can flag network data from a source device 705 such as a laptop that begins to transmit network data that is common to a server.
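
The dark IP and dark port heuristics can be reduced to a simple membership check against the addresses and ports known to be in use; the inventories shown below are placeholders for whatever the heuristic module 730 actually maintains.

```python
# Sketch: flag traffic aimed at unassigned addresses or unusual ports.
ASSIGNED_IPS = {"10.0.0.5", "10.0.0.6", "10.0.0.7"}     # hypothetical inventory
EXPECTED_PORTS = {22, 25, 80, 443}                      # ports normally in use

def is_suspicious(dst_ip, dst_port):
    if dst_ip not in ASSIGNED_IPS:
        return True          # dark IP: nothing legitimate lives at this address
    if dst_port not in EXPECTED_PORTS:
        return True          # dark port: possible scan of an unused service
    return False

print(is_suspicious("10.0.0.99", 80))    # True  (dark IP)
print(is_suspicious("10.0.0.5", 31337))  # True  (dark port)
print(is_suspicious("10.0.0.5", 443))    # False
```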


The heuristic module 730 can retain data packets belonging to a particular data flow previously copied by the tap 715. In one example, the heuristic module 730 receives data packets from the tap 715 and stores the data packets within a buffer or other memory. Once the heuristic module 730 receives a predetermined number of data packets from a particular data flow, the heuristic module 730 performs the heuristics and/or probability analysis.


In some embodiments, the heuristic module 730 performs heuristic and/or probability analysis on a set of data packets belonging to a data flow and then stores the data packets within a buffer or other memory. The heuristic module 730 can then continue to receive new data packets belonging to the same data flow. Once a predetermined number of new data packets belonging to the same data flow are received, the heuristic and/or probability analysis can be performed upon the combination of buffered and new data packets to determine a likelihood of suspicious activity.


In some embodiments, an optional buffer receives the flagged network data from the heuristic module 730. The buffer can buffer and organize the flagged network data into one or more data flows before providing the one or more data flows to the scheduler 735. In various embodiments, the buffer can buffer network data and stall before providing the network data to the scheduler 735. In one example, the buffer stalls the network data to allow other components of the controller 725 time to complete functions or otherwise clear data congestion.


The scheduler 735 identifies the destination device 710 and retrieves a virtual machine associated with the destination device 710. A virtual machine is software that is configured to mimic the performance of a device (e.g., the destination device 710). The virtual machine can be retrieved from the virtual machine pool 745.


In some embodiments, the heuristic module 730 transmits the metadata identifying the destination device 710 to the scheduler 735. In other embodiments, the scheduler 735 receives one or more data packets of the network data from the heuristic module 730 and analyzes the one or more data packets to identify the destination device 710. In yet other embodiments, the metadata can be received from the tap 715.


The scheduler 735 can retrieve and configure the virtual machine to mimic the pertinent performance characteristics of the destination device 710. In one example, the scheduler 735 configures the characteristics of the virtual machine to mimic only those features of the destination device 710 that are affected by the network data copied by the tap 715. The scheduler 735 can determine the features of the destination device 710 that are affected by the network data by receiving and analyzing the network data from the tap 715. Such features of the destination device 710 can include ports that are to receive the network data, select device drivers that are to respond to the network data and any other devices coupled to or contained within the destination device 710 that can respond to the network data. In other embodiments, the heuristic module 730 can determine the features of the destination device 710 that are affected by the network data by receiving and analyzing the network data from the tap 715. The heuristic module 730 can then transmit the features of the destination device to the scheduler 735.


The optional fingerprint module 740 is configured to determine the packet format of the network data to assist the scheduler 735 in the retrieval and/or configuration of the virtual machine. In one example, the fingerprint module 740 determines that the network data is based on a transmission control protocol/internet protocol (TCP/IP). Thereafter, the scheduler 735 will configure a virtual machine with the appropriate ports to receive TCP/IP packets. In another example, the fingerprint module 740 can configure a virtual machine with the appropriate ports to receive user datagram protocol/internet protocol (UDP/IP) packets. The fingerprint module 740 can determine any type of packet format of network data.


In other embodiments, the optional fingerprint module 740 passively determines a software profile of the network data to assist the scheduler 735 in the retrieval and/or configuration of the virtual machine. The software profile may comprise the operating system (e.g., Linux RH6.2) of the source device 705 that generated the network data. The determination can be based on analysis of the protocol information of the network data. In an example, the optional fingerprint module 740 determines that the software profile of network data is Windows XP, SP1. The optional fingerprint module 740 can then configure a virtual machine with the appropriate ports and capabilities to receive the network data based on the software profile. In other examples, the optional fingerprint module 740 passes the software profile of the network data to the scheduler 735 which either selects or configures the virtual machine based on the profile.
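
A schematic of how the scheduler 735 might key virtual machines off the fingerprinted software profile is shown below; the pool contents, profile strings, and returned configuration fields are placeholders, not the actual virtual machine pool 745 interface.

```python
# Sketch: pick a virtual machine from the pool whose software profile
# matches the fingerprinted profile of the network data's target.
VM_POOL = {                              # hypothetical pool contents
    "Windows XP SP1": "vm-image-winxp-sp1",
    "Linux RH6.2": "vm-image-redhat-6.2",
}
DEFAULT_VM = "vm-image-generic"

def select_virtual_machine(software_profile, open_ports):
    image = VM_POOL.get(software_profile, DEFAULT_VM)
    # The returned configuration tells the analysis environment which
    # image to boot and which ports must be listening to accept the replay.
    return {"image": image, "listen_ports": sorted(open_ports)}

print(select_virtual_machine("Windows XP SP1", {80, 445}))
```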


The virtual machine pool 745 is configured to store virtual machines. The virtual machine pool 745 can be any storage capable of storing software. In one example, the virtual machine pool 745 stores a single virtual machine that can be configured by the scheduler 735 to mimic the performance of any destination device 710 on the communication network 720. The virtual machine pool 745 can store any number of distinct virtual machines that can be configured to simulate the performance of any destination devices 710.


The analysis environment 750 simulates transmission of the network data between the source device 705 and the destination device 710 to analyze the effects of the network data upon the destination device 710. The analysis environment 750 can identify the effects of malware or illegitimate computer users (e.g., a hacker, computer cracker, or other computer user) by analyzing the simulation of the effects of the network data upon the destination device 710 that is carried out on the virtual machine. There can be multiple analysis environments 750 to simulate multiple network data. The analysis environment 750 is further discussed with respect to FIG. 8.


The optional policy engine 755 is coupled to the heuristic module 730 and can identify network data as suspicious based upon policies contained within the policy engine 755. In one example, a destination device 710 can be a computer designed to attract hackers and/or worms (e.g., a “honey pot”). The policy engine 755 can contain a policy to flag any network data directed to the “honey pot” as suspicious since the “honey pot” should not be receiving any legitimate network data. In another example, the policy engine 755 can contain a policy to flag network data directed to any destination device 710 that contains highly sensitive or “mission critical” information.


The policy engine 755 can also dynamically apply a rule to copy all network data related to network data already flagged by the heuristic module 730. In one example, the heuristic module 730 flags a single packet of network data as suspicious. The policy engine 755 then applies a rule to flag all data related to the single packet (e.g., data flows) as suspicious. In some embodiments, the policy engine 755 flags network data related to suspicious network data until the analysis environment 750 determines that the network data flagged as suspicious is related to unauthorized activity.


Although FIG. 7 depicts data transmitted from the source device 705 to the destination device 710, either device can transmit and receive data from the other. Similarly, although only two devices are depicted, any number of devices can send and/or receive data across the communication network 720. Moreover, the tap 715 can monitor and copy data transmitted from multiple devices without appreciably affecting the performance of the communication network 720 or the devices coupled to the communication network 720.



FIG. 8 depicts an analysis environment 750, in accordance with one embodiment of the present invention. The analysis environment 750 comprises a replayer 805, a virtual switch 810, and a virtual machine 815. The replayer 805 receives network data that has been flagged by the heuristic module 730 and replays the network data in the analysis environment 750. The replayer 805 is similar to the protocol sequence replayer 220 (FIG. 2). In some embodiments, the replayer 805 mimics the behavior of the source device 705 in transmitting the flagged network data. There can be any number of replayers 805 simulating network data between the source device 705 and the destination device 710. In a further embodiment, the replayer dynamically modifies session variables, as is appropriate, to emulate a “live” client or server of the protocol sequence being replayed. In one example, dynamic variables that may be dynamically substituted include dynamically assigned ports, transaction IDs, and any other variable that is dynamic to each protocol session.


The virtual switch 810 is software that is capable of forwarding packets of flagged network data to the virtual machine 815. In one example, the replayer 805 simulates the transmission of the data flow by the source device 705. The virtual switch 810 simulates the communication network 720 and the virtual machine 815 simulates the destination device 710. The virtual switch 810 can route the data packets of the data flow to the correct ports of the virtual machine 815.


The virtual machine 815 is a representation of the destination device that can be provided to the analysis environment 750 by the scheduler 735. In one example, the scheduler 735 retrieves a virtual machine 815 from the virtual machine pool 745 and configures the virtual machine 815 to mimic a destination device 710. The configured virtual machine 815 is then provided to the analysis environment 750 where it can receive flagged network data from the virtual switch 810.


As the analysis environment 750 simulates the transmission of the network data, behavior of the virtual machine 815 can be closely monitored for unauthorized activity. If the virtual machine 815 crashes, performs illegal operations, performs abnormally, or allows access of data to an unauthorized computer user, the analysis environment 750 can react. In some embodiments, the analysis environment 750 performs dynamic taint analysis to identify unauthorized activity (dynamic taint analysis is further described in FIG. 12.) In one example, the analysis environment 750 can transmit a command to the destination device 710 to stop accepting the network data or data flows from the source device 705.


In some embodiments, the analysis environment 750 monitors and analyzes the behavior of the virtual machine 815 in order to determine a specific type of malware or the presence of an illicit computer user. The analysis environment 750 can also generate computer code configured to eliminate new viruses, worms, or other malware. In various embodiments, the analysis environment 750 can generate computer code configured to repair damage performed by malware or the illicit computer user. By simulating the transmission of suspicious network data and analyzing the response of the virtual machine, the analysis environment 750 can identify known and previously unidentified malware and the activities of illicit computer users before a computer system is damaged or compromised.



FIG. 9 depicts a flow chart for a method of detecting unauthorized activity, in accordance with one embodiment of the present invention. In step 900, network data is copied. For example, the network data can be copied by a tap, such as the tap 715. In some embodiments, the tap 715 can be coupled directly to the source device 705, the destination device 710, or the communication network 720.


In step 905, the network data is analyzed to determine whether the network data is suspicious. For example, a heuristic module, such as the heuristic module 730, can analyze the network data. The heuristic module can base the determination on heuristic and/or probabilistic analyses. In various embodiments, the heuristic module has a very low threshold to determine whether the network data is suspicious. For example, a single command within the network data directed to an unusual port of the destination device can cause the network data to be flagged as suspicious.


Step 905 can alternatively include flagging network data as suspicious based on policies such as the identity of a source device, a destination device, or the activity of the network data. In one example, even if the heuristic module does not flag the network data, the network data can be flagged as suspicious based on a policy if the network data was transmitted from a device that does not normally transmit network data. Similarly, based on another policy, if the destination device contains trade secrets or other critical data, then any network data transmitted to the destination device can be flagged as suspicious. Likewise, if the network data is directed to a particularly important database or is attempting to gain rights or privileges within the communication network or the destination device, then the network data can be flagged as suspicious. In various embodiments, the policy engine 755 flags network data based on these and/or other policies.
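
The following Python sketch, offered only as an illustration of the kinds of policy checks described above, flags a flow based on source identity, destination criticality, and privilege-seeking activity. The predicate names, address lists, and flow attributes are assumptions, not part of the embodiment.

```python
# Illustrative policy checks of the kind a policy engine might apply.
CRITICAL_DESTINATIONS = {"10.0.0.5"}       # e.g. hosts with trade secrets
QUIET_SOURCES = {"10.0.0.9"}               # devices that normally never transmit

def policy_flags(flow):
    """flow: dict with 'src', 'dst', and 'requests_privileges' keys."""
    reasons = []
    if flow["src"] in QUIET_SOURCES:
        reasons.append("source does not normally transmit network data")
    if flow["dst"] in CRITICAL_DESTINATIONS:
        reasons.append("destination holds critical data")
    if flow.get("requests_privileges"):
        reasons.append("attempts to gain rights or privileges")
    return reasons        # non-empty list => flag the flow as suspicious

print(policy_flags({"src": "10.0.0.9", "dst": "10.0.0.5", "requests_privileges": True}))
```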


In step 910, the transmission of the network data is orchestrated to analyze unauthorized activity. In one example, the transmission of the network data over a network is simulated to analyze the resulting action of the destination device. The simulation can be monitored and analyzed to identify the effects of malware or illegitimate computer use.



FIG. 10 depicts a flow chart for a method for orchestrating the transmission of network data, in accordance with one embodiment of the present invention. In step 1000, the replayer 805 within the analysis environment 750 is configured to perform as the source device 705. In one example, the replayer 805 simply transmits the flagged network data to simulate network data transmission. There can be multiple replayers 805 transmitting different network data from a single source device 705. Alternately, there can be multiple replayers 805 that mimic different source devices 705 that transmit different network data.


In step 1005, a virtual machine 815 is retrieved and configured to mimic the destination device 710. The scheduler 735 identifies the destination device 710 and retrieves a virtual machine 815 from the virtual machine pool 745. In some embodiments, the scheduler 735 further configures the virtual machine 815 to mimic the performance characteristics of the destination device 710. The scheduler 735 then transmits the virtual machine 815 to the analysis environment 750.


In step 1010, the analysis environment 750 replays transmission of the network data between the configured replayer 805 and the virtual machine 815 to detect unauthorized activity. The replayer 805 is configured to simulate the source device 705 transmitting the network data and the virtual machine 815 is configured to mimic the features of the destination device 710 that is affected by the network data. The virtual switch 810 can simulate the communication network 720 in delivering the network data to the destination device 710.


As the transmission of the network data to the model destination device 710 is simulated, the results are monitored to determine whether the network data was generated by malware or by the activity of an illegitimate computer user. In one example, if the network data attempts to replicate programs within the virtual machine 815, then a virus can be identified. In another example, if the network data constantly attempts to access different ports of the virtual machine 815, then a worm or hacker can be identified.


Since the effects of network data transmission are simulated and the results analyzed, the controller 725 need not wait for repetitive behavior of malware or computer hackers before detecting their presence. In some examples of the prior art, new viruses and hackers are detected only upon multiple events that cause similar damage. By contrast, in some embodiments, a single data flow can be flagged and identified as harmful within a simulation, thereby identifying malware, hackers, and unwitting computer users before damage is done.



FIG. 11 depicts a controller 725 of an unauthorized activity detection system 1100, in accordance with one embodiment of the present invention. The controller 725 can receive a variety of network data from the tap 715 (FIG. 7). In exemplary embodiments, the controller 725 concurrently processes different network data. In some embodiments, the controller 725 processes the different network data nearly simultaneously (e.g., in parallel). In other embodiments, the controller 725 processes the different network data serially but the processing of the different network data may be interwoven as resources allow. As a result, the controller 725 may receive and analyze different network data from the communication network 720 without appreciable interference on the data traffic in the communication network 720.


In some embodiments, the controller 725 can concurrently receive first and second network data where the second network data is different from the first. In one example, the tap 715 concurrently sends the first and second network data to the controller 725. In another example, the tap 715 sends first network data and the second network data to the controller 725 at approximately the same time. The tap 715 may comprise multiple taps capable of sending different network data in parallel or serial to the controller 725.


The controller 725 can concurrently process the first network data and the second network data. In some embodiments, the controller 725 can process the first network data and the second network data simultaneously. In one example, the controller 725 is software on a computer with two or more processors capable of independent processing. In other embodiments, the controller 725 can process the first network data and the second network data serially. For example, the controller 725 is software on a computer with a single processor capable of interleaving commands associated with the first network data and the other commands associated with the second network data. As a result, the processing of the first network data and the second network data may appear to be simultaneous. Although the processing of two different forms of network data (i.e., first network data and second network data) is discussed, there can be any number of different network data processed by the controller 725 during any time.


The controller 725 can comprise a heuristic module 730, a scheduler 735, a virtual machine pool 745, and a plurality of analysis environments 750. The heuristic module 730 concurrently receives the copy of the first network data and a copy of the second network data from the tap 715. In some embodiments, the heuristic module 730 can receive different network data in parallel from the tap 715. The heuristic module 730 applies heuristics and/or probability analysis to determine if the first network data and/or the second network data might contain suspicious activity.


In other embodiments, the heuristic module 730 serially applies heuristics and/or probabilistic analysis to both the first network data and the second network data. In one example, the heuristic module 730 may apply various heuristics to determine if the first network data contains suspicious activity while a variety of other network data is determined not to be suspicious.


The heuristic module 730 can independently apply heuristics and probability analysis to different network data. In one example, the heuristic module 730 flags network data as suspicious and proceeds to receive new network data as the other network data continues to be analyzed. The network data flagged as suspicious can then be buffered and organized into a data flow. The data flow is then provided to the scheduler 735.


The heuristic module 730 may comprise an optional buffer to buffer the network data flagged as suspicious. The buffer may be controlled so as to hold network data while resources are otherwise occupied. In one example, the buffer may hold network data if the capacity of the scheduler 735 has already been reached. Once the scheduler 735 frees capacity, the buffer may release some or all of the buffered network data to the scheduler 735 as needed.
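
A minimal sketch of such buffering behavior follows, assuming a scheduler that exposes a count of in-flight analyses; the class names and the `in_flight`/`schedule` interface are hypothetical.

```python
# Sketch: hold flagged data flows while the scheduler is at capacity and
# release them as capacity frees up.
from collections import deque

class Scheduler:
    def __init__(self):
        self.active = []
    def in_flight(self):
        return len(self.active)
    def schedule(self, flow):
        self.active.append(flow)

class FlaggedFlowBuffer:
    def __init__(self, scheduler, max_in_flight):
        self.scheduler = scheduler
        self.max_in_flight = max_in_flight
        self.pending = deque()

    def submit(self, data_flow):
        self.pending.append(data_flow)
        self.drain()

    def drain(self):
        # Release buffered flows only while the scheduler has free capacity.
        while self.pending and self.scheduler.in_flight() < self.max_in_flight:
            self.scheduler.schedule(self.pending.popleft())

sched = Scheduler()
buf = FlaggedFlowBuffer(sched, max_in_flight=2)
for flow in ("flow-a", "flow-b", "flow-c"):
    buf.submit(flow)
print(sched.active, list(buf.pending))     # ['flow-a', 'flow-b'] ['flow-c']
```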


The heuristic module 730 can retain data packets belonging to a variety of different network data previously copied by the tap 715. The data packets may be a part of data flows from the first network data, the second network data, or any other network data copied by the tap 715. In some embodiments, the heuristic module 730 performs heuristic and/or probability analysis on a set of data packets belonging to a data flow and then stores the data packets within a buffer or other memory.


The scheduler 735 identifies the destination devices 710 to receive the first network data and/or the second network data and retrieves a plurality of virtual machines associated with the destination devices 710. In one example, a source device sends network data flagged as suspicious to two or more destination devices. The scheduler 735 configures a plurality of virtual machines to mimic the performance of each destination device, respectively.


Concurrently with configuring the first plurality of virtual machines for the first network data, the scheduler 735 can configure a second plurality of virtual machines to mimic the performance of other destination devices receiving the second network data. In one example, the scheduler 735 can perform these tasks simultaneously. The first plurality of virtual machines and the second plurality of virtual machines can be retrieved from the virtual machine pool 745.


The virtual machine pool 745 is configured to store virtual machines. The virtual machine pool 745 can be any type of storage capable of storing software. In one example, the virtual machine pool 745 stores a plurality of virtual machines that can be configured by the scheduler 735 to mimic the performance of a plurality of destination devices 710 that receive network data on the communication network 720. The virtual machine pool 745 can store any number of distinct virtual machines that can be configured to simulate the performance of any destination devices 710.


The analysis environments 750 simulate transmission of the network data between the source device 705 and the destination device 710 to analyze the effects of the network data upon the destination device 710. The analysis environment 750 can identify the effects of malware or illegitimate computer users (e.g., a hacker, computer cracker, or other computer user) by analyzing the simulation of the effects of the network data upon the destination device 710 that is carried out on the virtual machine. There can be multiple analysis environments 750 to simulate different network data. Although FIG. 11 depicts only two analysis environments 750, there may be any number of analysis environments 750 within the controller 725. In one example, there may be as many analysis environments 750 as there are different network data to analyze. The analysis environments 750 can operate concurrently and independently of each other. The analysis environments 750 are further discussed with respect to FIG. 12.



FIG. 12 depicts an analysis environment 750, in accordance with one embodiment of the present invention. In exemplary embodiments, each different analysis environment 750 can analyze different network data concurrently. While one analysis environment 750 analyzes network data for suspicious activity, another analysis environment 750 may independently analyze other network data. A single analysis environment 750 can analyze network data broadcast to multiple destination devices.


The analysis environment 750 comprises a replayer 805, a virtual switch 810, and a plurality of virtual machines 815. The replayer 805 receives network data that has been flagged by the heuristic module 730 and replays the network data in the analysis environment 750. In some embodiments, the replayer 805 mimics the behavior of the source device 705 in transmitting the flagged network data to a plurality of destination devices. There can be any number of replayers 805 simulating network data between the source device 705 and the destination device 710.


The virtual switch 810 is software that is capable of forwarding packets of flagged network data to the plurality of virtual machines 815. In one example, the replayer 805 simulates the transmission of the data flow by the source device 705. The virtual switch 810 simulates the communication network 720 and the plurality of virtual machines 815 mimic the plurality of destination devices 710. The virtual switch 810 can route the data packets of the data flow to the correct ports of any of the plurality of virtual machines 815. There may be any number of virtual switches 810.


In some embodiments, the virtual switch 810 concurrently routes data packets to any number of the plurality of virtual machines 815. In one example, the virtual switch 810 independently routes data packets belonging to network data to two or more of the plurality of virtual machines 815. In another example, the virtual switch 810 serially routes the data packets to two or more of the plurality of virtual machines 815.


The plurality of virtual machines 815 is a representation of the plurality of destination devices, each of which is to receive the same network data. The scheduler 735 provides the plurality of virtual machines 815 to the analysis environment 750. In one example, the scheduler 735 retrieves the plurality of virtual machines 815 from the virtual machine pool 745 and configures each of the plurality of virtual machines 815 to mimic a separate destination device 710 that is to receive the network data. Although only two virtual machines 815 are depicted in FIG. 12, there can be any number of virtual machines 815.


As the analysis environment 750 simulates the transmission of the network data, behavior of the plurality of virtual machines 815 can be monitored for unauthorized activity. If any of the plurality of virtual machines 815 crashes, performs illegal operations, performs abnormally, or allows access of data to an unauthorized computer user, the analysis environment 750 can react. In one example, the analysis environment 750 can transmit a command to any destination device 710 to stop accepting the network data or data flows from any source device 705.


In some embodiments, the analysis environment 750 performs dynamic taint analysis to identify unauthorized activity. For an unauthorized computer user to change the execution of an otherwise legitimate program, the unauthorized computer user must cause a value that is normally derived from a trusted source to be derived from the user's own input. Program values (e.g., jump addresses and format strings) are traditionally supplied by a trusted program and not from external untrusted inputs. An unauthorized computer user, however, may attempt to exploit the program by overwriting these values.


In one example of dynamic taint analysis, all input data from untrusted or otherwise unknown sources are flagged. Program execution of programs with flagged input data is then monitored to track how the flagged data propagates (i.e., what other data becomes tainted) and to check when the flagged data is used in dangerous ways. For example, use of tainted data as jump addresses or format strings often indicates an exploit of a vulnerability such as a buffer overrun or format string vulnerability.
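
The following toy Python model, provided purely as a sketch of the dynamic taint analysis described above, tags values from an untrusted source, propagates the tag through derived values, and raises an alert when a tainted value is used as a jump target. The miniature "instruction set" (`add`, `jump`) is invented for illustration and does not correspond to any real instruction set.

```python
# Toy dynamic taint analysis: untrusted inputs are tagged, the tag
# propagates through derived values, and dangerous uses (here, a jump
# through a tainted address) are reported as unauthorized activity.
class Tainted:
    def __init__(self, value):
        self.value = value

def taint(value):
    return Tainted(value)

def is_tainted(v):
    return isinstance(v, Tainted)

def add(a, b):
    # Propagation rule: any value derived from tainted data is tainted.
    raw = (a.value if is_tainted(a) else a) + (b.value if is_tainted(b) else b)
    return taint(raw) if is_tainted(a) or is_tainted(b) else raw

def jump(target):
    if is_tainted(target):
        raise RuntimeError("unauthorized activity: tainted jump address")
    return target

base = 0x1000                     # trusted value supplied by the program
offset = taint(0x41414141)        # input value taken from network data

try:
    jump(add(base, offset))       # the derived target is tainted
except RuntimeError as alert:
    print(alert)
```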



FIG. 13 depicts a flow chart for a method for concurrently orchestrating a response to network data by a plurality of virtual machines, in accordance with one embodiment of the present invention. Network data, received by a tap 715, can be transmitted from a single source device 705 to a plurality of destination devices 710.


In step 1300, the replayer 805 within the analysis environment 750 is configured to perform as the source device 705. In one example, the replayer 805 simply transmits the flagged network data to two or more virtual machines to simulate network data transmission. In some embodiments, the replayer 805 can transmit flagged network data in parallel to the two or more virtual machines. In other embodiments, the replayer 805 can also transmit flagged network data in serial or interleave the transmission of flagged network data to one virtual machine with the transmission of the same flagged network data to another virtual machine. In some embodiments, there can be multiple replayers 805 transmitting different network data.


In step 1305, a plurality of virtual machines 815 is retrieved and configured to mimic a plurality of destination devices 710. The scheduler 735 identifies the destination devices 710 and retrieves the plurality of virtual machines 815 from the virtual machine pool 745. The scheduler 735 then transmits the plurality of virtual machines 815 to the analysis environment 750.


In step 1310, the analysis environment 750 replays transmission of the network data between the configured replayer 805 and at least one virtual machine 815 of the plurality of virtual machines 815 to detect unauthorized activity. In step 1315, the analysis environment 750 analyzes a first response of the at least one virtual machine 815 to identify unauthorized activity.


In step 1320, the analysis environment 750 replays transmission of the network data between the configured replayer 805 and at least one other virtual machine to detect unauthorized activity. In step 1325, the analysis environment 750 analyzes a second response of the at least one other virtual machine 815 to identify unauthorized activity.


Steps 1310 and 1315 can be performed concurrently with steps 1320 and 1325. In some embodiments, steps 1310-1315 and steps 1320-1325 are performed by software as resources allow. In some embodiments, steps 1310-1315 and steps 1320-1325 are performed in parallel.



FIG. 14 depicts a flow chart for a method for concurrently identifying unauthorized activity, in accordance with one embodiment of the present invention. In step 1400, the tap 715 copies network data directed to a plurality of destination devices 710 on the communication network 720. In step 1405, the network data is analyzed with a heuristic to detect suspicious activity. If suspicious activity is not detected, then FIG. 14 ends. If suspicious activity is detected, a first replayer 805 is configured to perform as the source device 705 to transmit the network data in step 1410. In step 1415, a plurality of virtual machines 815 is retrieved to mimic the plurality of destination devices 710. Transmission of the network data is replayed between the first replayer 805 and the plurality of virtual machines 815 to detect unauthorized activity in step 1420. In step 1425, a response by any of the plurality of virtual machines 815 to the network data is analyzed to identify unauthorized activity.


Similarly, in step 1430, the tap 715 copies other network data directed to an other plurality of destination devices 710 on the communication network 720. In some embodiments, the other plurality of destination devices 710 are the same plurality of destination devices referred to within step 1400. In one example, in step 1400 the tap 715 copies network data transmitted to a plurality of destination devices 710, while in step 1430 the tap 715 copies different network data transmitted to the same plurality of destination devices 710. In another example, the other network data referred to within step 1430 is transmitted to some but not all of the plurality of destination devices 710 identified in step 1400.


In step 1435, the other network data is analyzed with a heuristic to detect suspicious activity. If suspicious activity is not detected, then FIG. 14 ends. If suspicious activity is detected, a second replayer 805 is configured to perform as the source device 705 to transmit the other network data in step 1440. In step 1445, an other plurality of virtual machines 815 is retrieved to mimic the plurality of destination devices 710. The other plurality of virtual machines 815 may comprise some, all, or none of the virtual machines 815 within the plurality of virtual machines 815 discussed in step 1415.


Transmission of the other network data is replayed between the second replayer 805 and the other plurality of virtual machines 815 to detect unauthorized activity in step 1450. In step 1455, a second response by any of the other plurality of virtual machines 815 to the other network data is analyzed to identify unauthorized activity.


Steps 1430 through 1455 can occur concurrently with steps 1400 through 1425. In some embodiments, steps 1430 through 1455 are performed in parallel with steps 1400 through 1425. In other embodiments, the performance of steps 1430 through 1455 is interwoven with the performance of steps 1400 through 1425 as resources allow. In one example, step 1410 is performed during and/or in between any of steps 1405 through 1455.



FIG. 15 depicts a network environment 1500, in accordance with one embodiment of the present invention. The network environment 1500 comprises a controller 1505, a computing system 1510, and a computing system 1515, each coupled to a router 1520. The router 1520 is further coupled to a communications network 1525. A controller server 1550 and a gateway 1530 are also coupled to the communications network 1525. Further, a controller 1535, a computing system 1540, and a computing system 1545 are coupled to the gateway 1530.


The controller 1505 and the controller 1535 can be the controller 725 (FIG. 7) or a computer worm sensor 105 (FIG. 1). The controller 1505 can be configured to scan network data transmitted to or from the computing system 1510 and/or computing system 1515 for unauthorized activity. Similarly, the controller 1535 can be configured to scan network data transmitted to or from the computing system 1540 and/or computing system 1545 for unauthorized activity.


If the controller 1505 finds unauthorized activity, the controller 1505 can generate an unauthorized activity signature based on the unauthorized activity. An unauthorized activity signature is a string of bits or a binary code pattern contained within the unauthorized activity. In some embodiments, the unauthorized activity signature contains a portion of the unauthorized activity code. The portion of the unauthorized activity code can be used to identify network data containing similar code to identify unauthorized activity.
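
As a simple, non-authoritative illustration of generating and matching such a signature as a byte pattern excerpted from the offending code, consider the Python sketch below; the fixed excerpt length and example payload are assumptions made for the sketch.

```python
# Sketch: take a code pattern from the unauthorized activity and use it to
# identify other network data containing similar code.
def generate_signature(payload: bytes, offset: int, length: int = 16) -> bytes:
    """Return the byte pattern at `offset` for later matching."""
    return payload[offset:offset + length]

def matches(network_data: bytes, signature: bytes) -> bool:
    return signature in network_data

attack = b"\x90" * 8 + b"\xcc\xcc\xeb\xfe" + b"payload"
sig = generate_signature(attack, 8, 4)
print(matches(b"header" + attack, sig))    # True
```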


The controller 1505 can transmit the unauthorized activity signature to the controller server 1550 and/or the controller 1535. In some embodiments, the unauthorized activity signature may be uploaded to the router 1520 and/or the gateway 1530. The process of detecting unauthorized activity is discussed in FIG. 9 herein. In exemplary embodiments, any controller may find unauthorized activity, generate an unauthorized activity signature based on the unauthorized activity, and transmit the unauthorized activity signature to one or more other controllers.


In exemplary embodiments, the controller 1505 may transmit the unauthorized activity signature to the router 1520. In one example, the router 1520 receives the unauthorized activity signature and blocks, quarantines, or deletes network data containing unauthorized activity based on the unauthorized activity signature. In some embodiments, the controller 1505 may be contained within the router 1520. The router 1520 can receive the unauthorized activity signature from the controller 1535, the controller 1505, or the controller server 1550.


The gateway 1530 may also be configured to receive the unauthorized activity signature from the controller 1535, the controller 1505, or the controller server 1550. The gateway 1530 can be configured to scan packets of network data based on the unauthorized activity signature and take action when unauthorized activity is found. Although the router 1520 and the gateway 1530 are depicted in FIG. 15, any network device (e.g., switch), digital device, or server may be configured to receive the unauthorized activity signature and take action when unauthorized activity is found.


The controller server 1550 can be any server configured to receive one or more unauthorized activity signatures from the controller 1505 and/or the controller 1535. In one example, the controller 1505 detects unauthorized activity and generates an unauthorized activity signature. The controller 1505 transmits the unauthorized activity signature to the controller server 1550. The controller server 1550 may store the unauthorized activity signature received from the controller 1505 and transmit the unauthorized activity signature to any other controller (e.g., controller 1535), router (e.g., router 1520), and/or gateway (e.g., 1530). In some embodiments, the controller server 1550 also collects statistics from the controllers 1505 and 1535 regarding the unauthorized activity found.


As discussed in FIG. 9 and FIG. 10, unauthorized activity may be discovered and identified within network data. Advantageously, once the unauthorized activity signature is generated, the unauthorized activity signature may be used to detect unauthorized activity within other network data. As such, probability and statistical analysis may be rendered unnecessary thereby increasing the speed of interception of unauthorized activity and reducing computing costs associated with further analysis.


The unauthorized activity signature may be transmitted to other controllers thereby reducing the need for two or more controllers to generate their own unauthorized activity signatures. As a result, a single packet containing unauthorized activity may be identified as containing unauthorized activity, an unauthorized activity signature generated, and the other controllers automatically updated with the unauthorized activity signature to identify any other unauthorized network data transmitted to or from a protected system.



FIG. 16 depicts a controller 1600, in accordance with one embodiment of the present invention. The controller 1600 can be any digital device or software that receives network data from a communication network (not depicted.) In some embodiments, the controller 1600 is contained within the computer worm sensor 105 (FIG. 1). In other embodiments, the controller 1600 may be contained within a separate traffic analysis device 135 (FIG. 1) or a stand-alone digital device. The controller 1600 can comprise a heuristic module 1610, a policy engine 1620, a signature module 1630, a scheduler 1640, a virtual machine pool 1650, a fingerprint module 1660, and an analysis environment 1670.


The heuristic module 1610 receives the copy of the network data from the communication network. In some embodiments, the heuristic module 1610 receives the copy of the network data over a tap. The heuristic module 1610 can apply heuristics and/or probability analysis to determine if the network data might contain suspicious activity.


The policy engine 1620 is coupled to the heuristic module 1610 and can identify network data as unauthorized activity. The policy engine 1620 may scan network data to detect unauthorized activity based upon an unauthorized activity signature. In some embodiments, the policy engine 1620 retrieves the unauthorized activity signature from the signature module 1630 (discussed herein). The network data is then scanned for unauthorized activity based on the unauthorized activity signature. The policy engine 1620 can also flag network data as suspicious based on policies (further discussed in FIG. 7.)


The policy engine 1620 can scan the header of a packet of network data as well as the packet contents for unauthorized activity. In some embodiments, the policy engine 1620 scans only the header of the packet for unauthorized activity based on the unauthorized activity signature. If unauthorized activity is found, then no further scanning may be performed. In other embodiments, the policy engine 1620 scans the packet contents for unauthorized activity.
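
A minimal sketch of this header-first scanning order follows, assuming the packet is already split into header and contents and the signature is a byte pattern; the layout is purely illustrative.

```python
# Sketch: scan the packet header first; scan the contents only when the
# header does not already reveal unauthorized activity.
def scan_packet(header: bytes, contents: bytes, signature: bytes) -> str:
    if signature in header:
        return "unauthorized (header match, contents not scanned)"
    if signature in contents:
        return "unauthorized (contents match)"
    return "clean"

print(scan_packet(b"\x00\x01", b"...\xcc\xcc\xeb\xfe...", b"\xcc\xcc\xeb\xfe"))
```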


Advantageously, unauthorized activity may be found by scanning only the header of a packet, the contents of the packet, or both the header and the contents of the packet. As a result, unauthorized activity that might otherwise evade discovery can be detected. In one example, evidence of unauthorized activity may be located within the contents of the packet. By scanning only the contents of the packet, unauthorized activity may be detected.


If the packet contents or the packet header indicate that the network data contains unauthorized activity, then the policy engine 1620, the heuristic module 1610, or the signature module 1630 may take action. In one example, the policy engine 1620 may quarantine, delete, or bar the packet from the communications network. The policy engine 1620 may also quarantine, delete, or bar other packets belonging to the same data flow as the unauthorized activity packet.


The signature module 1630 receives, authenticates, and stores unauthorized activity signatures. The unauthorized activity signatures may be generated by the analysis environment 1670 or another controller 1600. The unauthorized activity signatures may then be transmitted to the signature module 1630 of one or more controllers 1600.


The scheduler 1640 identifies the destination device 710 (FIG. 7) and retrieves a virtual machine associated with the destination device 710. The virtual machine can be retrieved from the virtual machine pool 1650. The scheduler 1640 can retrieve and configure the virtual machine to mimic the pertinent performance characteristics of the destination device 710.


The fingerprint module 1660 is configured to determine the packet format of the network data to assist the scheduler 1640 in the retrieval and/or configuration of the virtual machine. The virtual machine pool 1650 is configured to store virtual machines.


The analysis environment 1670 simulates transmission of the network data between the source device 705 (FIG. 7) and the destination device 710 to analyze the effects of the network data upon the destination device 710 to detect unauthorized activity. As the analysis environment 1670 simulates the transmission of the network data, behavior of the virtual machine can be closely monitored for unauthorized activity. If the virtual machine crashes, performs illegal operations, performs abnormally, or allows access of data to an unauthorized computer user, the analysis environment 1670 can react. In some embodiments, the analysis environment 1670 performs dynamic taint analysis to identify unauthorized activity (dynamic taint analysis is further described in FIG. 12.)


Once unauthorized activity is detected, the analysis environment 1670 can generate the unauthorized activity signature configured to identify network data containing unauthorized activity. Since the unauthorized activity signature does not necessarily require probabilistic analysis to detect unauthorized activity within network data, unauthorized activity detection based on the unauthorized activity signature may be very fast and save computing time.


The analysis environment 1670 may store the unauthorized activity signature within the signature module 1630. The analysis environment 1670 may also transmit or command the transmission of the unauthorized activity signature to one or more other controllers 1600, controller servers 1550 (FIG. 15), and/or network devices (e.g., routers, gateways, bridges, switches). By automatically storing and transmitting the unauthorized activity signature, known and previously unidentified malware and the activities of illicit computer users can be quickly controlled and reduced before a computer system is damaged or compromised.



FIG. 17 depicts a flow chart for a method for a dynamic signature creation and enforcement system, in accordance with one embodiment of the present invention. In step 1700, network data is copied. For example, the network data can be copied by a tap, such as the tap 715 (FIG. 7).


In step 1705, the network data is analyzed to determine whether the network data is suspicious. For example, a heuristic module, such as the heuristic module 1610 (FIG. 16), can analyze the network data. The heuristic module can base the determination on heuristic and/or probabilistic analyses. In various embodiments, the heuristic module has a very low threshold to determine whether the network data is suspicious. Alternatively, the policy engine 1620 (FIG. 16) may flag network data as suspicious. If the network data is not determined to be suspicious, then FIG. 17 ends.


If the network data is flagged as suspicious, then the transmission of the network data is orchestrated to determine unauthorized activity in step 1710. The orchestration of the transmission of the network data to determine unauthorized activity may be performed by the analysis environment 1670 (FIG. 16). An example of the orchestration of the transmission of network data is in FIG. 11. In step 1715, the result of the orchestration of the transmission of the network data is analyzed. If the network data does not contain unauthorized activity, then FIG. 17 ends. If the network data contains unauthorized activity, then an unauthorized activity signature is generated based on the unauthorized activity in step 1720. The unauthorized activity signature may be generated by the analysis environment 1670 or the signature module 1630 (FIG. 16).


In step 1725, the unauthorized activity signature is transmitted to one or more other controllers 1600 (FIG. 16). In exemplary embodiments, the unauthorized activity signature is transmitted to a computer worm sensor 105 (FIG. 1) or any digital device that comprises a controller 1600. The receiving controller 1600 can store the unauthorized activity signature within the receiving controller's signature module 1630. Receiving the unauthorized activity signature is further discussed in FIG. 18.


Optionally, the unauthorized activity signature may be authenticated. In some embodiments, the analysis environment 1670 can generate an authentication code along with the unauthorized activity signature. The authentication code can then be checked to verify that the unauthorized activity signature is genuine. In one example, the analysis environment 1670 generates the unauthorized activity signature and an authentication code. The analysis environment 1670 transmits the unauthorized activity signature and the authentication code to another controller 1600. The controller 1600 verifies the authentication code to ensure that the unauthorized activity signature is genuine. If the unauthorized activity signature is authenticated, then the signature module 1630 stores the unauthorized activity signature.


The unauthorized activity signature can also be encrypted. In one example, the controller 1600 generates, encrypts, and transmits the unauthorized activity signature to another controller 1600. The receiving controller 1600 can decrypt the unauthorized activity signature and store the unauthorized activity signature within the signature module 1630. In some embodiments, the controller 1600 generates an authentication code and proceeds to encrypt the authentication code and the unauthorized activity signature prior to transmitting the authentication code and the unauthorized activity signature to another controller 1600.
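
For illustration of the authentication step described above, the sketch below uses an HMAC over the signature as the authentication code, with a pre-shared key assumed to exist between controllers; the additional encryption of the signature mentioned in the text is omitted here to keep the sketch standard-library only, and the key and function names are hypothetical.

```python
# Sketch: authenticate an unauthorized activity signature before the
# receiving controller's signature module stores it.
import hashlib
import hmac

SHARED_KEY = b"controller-shared-secret"        # assumed pre-provisioned

def make_auth_code(signature: bytes) -> bytes:
    return hmac.new(SHARED_KEY, signature, hashlib.sha256).digest()

def verify_and_store(signature: bytes, auth_code: bytes, store: list) -> bool:
    if hmac.compare_digest(auth_code, make_auth_code(signature)):
        store.append(signature)                 # accepted as genuine
        return True
    return False                                # rejected

signatures = []
sig = b"\xcc\xcc\xeb\xfe"
print(verify_and_store(sig, make_auth_code(sig), signatures))   # True
print(verify_and_store(sig, b"forged", signatures))             # False
```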



FIG. 18 depicts a flow chart for a method for enforcing an unauthorized activity signature and barring network data based on the unauthorized activity signature, in accordance with one embodiment of the present invention. In step 1800, the controller 1600 (FIG. 16) receives the unauthorized activity signature. In some embodiments, the controller 1600 receives the unauthorized activity signature over a tap. In one example, the heuristic module 1610 (FIG. 16) or the policy engine 1620 (FIG. 16) scans network data to determine if the network data contains the unauthorized activity signature. If the unauthorized activity signature is recognized, the unauthorized activity signature may be authenticated in step 1805.


In one example, the signature module 1630 (FIG. 16) stores verification information necessary to authenticate unauthorized activity signatures. The unauthorized activity signature may be compared with the verification information to authenticate the unauthorized activity signature. In another example, an authentication code is sent with or incorporated in the unauthorized activity signature. The signature module 1630 may authenticate the authentication code by comparing the code to the verification information.


In some embodiments, the unauthorized activity signature is encrypted. In one example, the signature module 1630 stores encryption keys to decode the unauthorized activity signature. The process of decoding the unauthorized activity signature may authenticate the unauthorized activity signature.


Once the unauthorized activity signature is authenticated, the unauthorized activity signature may be stored within the controller 1600. In one example, the policy engine 1620 or the signature module 1630 stores the authenticated unauthorized activity signature. In other embodiments, authentication is optional.


In step 1810, network data is received from the communication network. In step 1820, the network data is scanned for unauthorized activity based on the unauthorized activity signature. In one example, the policy engine 1620 scans the network data for code resembling the unauthorized activity signature. If the network data does not contain unauthorized activity based on the unauthorized activity signature, then FIG. 18 ends. If the network data contains unauthorized activity based on the unauthorized activity signature, then the network data is blocked in step 1830. In other embodiments, the network data containing unauthorized activity may be quarantined or deleted.


In some embodiments, activity results are collected regarding the unauthorized activity detected, the type of network data containing the unauthorized activity, and the unauthorized activity signature used to detect the unauthorized activity. The activity results may be transmitted to another controller 1600 or a controller server 1550 (FIG. 15).


In one example, network data from a particular IP address is found to contain unauthorized activity based on an unauthorized activity signature. The IP address from the offending source may be stored within the activity results. If more network data with unauthorized activity from the same IP address is found by the controller 1600, another controller 1600, or controller server 1550, further security procedures may be enforced. In an example, the policy engine 1620 of one or more controllers 1600 may receive and enforce a policy that all network data from the IP address will be flagged as suspicious. In other examples, statistics regarding the unauthorized activity detected, the type of network data containing the unauthorized activity, and the unauthorized activity signature used to detect the unauthorized activity may be tracked.
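
A small sketch of collecting such activity results and escalating a source address after repeated detections is shown below; the two-detection threshold and data structure are assumptions made for illustration.

```python
# Sketch: track detections per source IP and, after repeated detections,
# enforce a policy that all traffic from that source is flagged.
from collections import Counter

class ActivityResults:
    def __init__(self, threshold=2):
        self.hits = Counter()
        self.always_flag = set()
        self.threshold = threshold

    def record_detection(self, src_ip):
        self.hits[src_ip] += 1
        if self.hits[src_ip] >= self.threshold:
            self.always_flag.add(src_ip)   # escalate: flag everything from this source

    def is_flagged_source(self, src_ip):
        return src_ip in self.always_flag

results = ActivityResults()
results.record_detection("203.0.113.7")
results.record_detection("203.0.113.7")
print(results.is_flagged_source("203.0.113.7"))   # True
```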



FIG. 19 depicts a method for malware attack detection and identification in an exemplary embodiment. In orchestrating the transmission of network data to the destination device 710 (FIG. 7), a replayer 805 (FIG. 8) can transmit a copy of the network data to the virtual machine 815 (FIG. 8). The analysis environment 1670 (FIG. 16) and/or the signature module 1630 (FIG. 16) can observe the response of the virtual machine 815 to identify a malware attack within the copied network data.


In exemplary embodiments, one of the advantages of observing the response of the virtual machine 815 is the ability to observe bytes at the processor instruction level. Input values from the network data as well as those values derived from instructions within the network data can be observed within the memory of the virtual machine 815. The outcome of instructions that manipulate the input values and derived values can be monitored to find unauthorized activity.


In one example, for a malware attack to change the execution of a program illegitimately (e.g., force the destination device 710 to perform an undesirable action such as copy a virus to memory) the malware attack causes a value that is normally derived from a trusted source to be derived from the malware attack within the network data. Values such as jump addresses and format strings are normally supplied by the executable code present within the destination device 710. When jump addresses or format strings are supplied by external inputs (i.e., input values, derived values, or instructions from network data), then the malware attack may gain control of future processor instruction(s).


In step 1900, the analysis environment 1670 flags input values associated with the network data from untrusted sources. A trusted source is a source of data (i.e., values and instructions) that is considered to be safe and not infected with malware. In some embodiments, a trusted source includes the destination device 710 and the virtual machine 815 before being exposed to the network data. Untrusted sources include any source that is not proven to be a trusted source. Since network data is often the medium of malware attacks, network data is considered to be from an untrusted source.


Input values associated with network data include the input values within the network data as well as any values that may be derived from the network data. In one example, input values include values within the network data that are to be stored in one or more registers of the virtual machine 815. In another example, input values also include values that are arithmetically derived from instructions and/or values within the network data and that are also ultimately to be stored within one or more registers of the virtual machine 815.


In step 1905, the analysis environment 1670 monitors the flagged input values within the virtual machine 815. In various embodiments, the analysis environment 1670 tracks the execution of instructions that affect the flagged input values to observe the response by the virtual machine 815. In one example, the analysis environment 1670 constructs a table of instructions that affect the flagged input values. The table may include entries that comprise snapshots of the data within the memory of the virtual machine 815. There can be a separate snapshot before and after the execution of each instruction that affects the flagged data. If unauthorized activity is found, the table may be used to provide a historical reference to analyze the attack, identify the vulnerability, and create an unauthorized activity signature. In various embodiments, the unauthorized activity signature identifies malware attacks as well as the attacked vulnerability.
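
The sketch below illustrates, in simplified form, such a table of instructions that touch flagged values, with a snapshot of a small memory model before and after each traced instruction; the memory layout, instruction functions, and field names are hypothetical.

```python
# Sketch: record before/after snapshots for each instruction that touches
# flagged input values, so an attack can later be reconstructed.
trace_table = []

def execute_traced(instruction, memory, touches_flagged):
    before = dict(memory) if touches_flagged else None
    instruction(memory)                       # run the instruction
    if touches_flagged:
        trace_table.append({
            "instruction": instruction.__name__,
            "before": before,
            "after": dict(memory),
        })

def store_input(memory):
    memory["buf"] = "AAAA" * 32               # flagged: derived from network data

def overwrite_return(memory):
    memory["ret_addr"] = memory["buf"][:4]    # flagged data reaches a jump target

mem = {"ret_addr": "0x4010"}
execute_traced(store_input, mem, touches_flagged=True)
execute_traced(overwrite_return, mem, touches_flagged=True)
print(len(trace_table), trace_table[-1]["instruction"])   # 2 overwrite_return
```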


In step 1910, the analysis environment 1670 identifies an outcome of one or more instructions that manipulate the flagged input values. The analysis environment 1670 can track each instruction that affects or uses the flagged input values. The result of the execution of the instruction in memory of the virtual machine 815 is also tracked.


In step 1915, the analysis environment 1670 determines if the outcome of the one or more instructions that manipulate the flagged input values comprises an unauthorized activity. In various embodiments, format string attacks and attacks that alter jump targets, including return addresses, function pointers, or function pointer offsets, are identified as unauthorized activity. Many attacks attempt to overwrite a jump target in order to redirect control flow either to the network data, to a standard library function, or to another point in an application to circumvent security checks. Alternately, the malware may provide a malicious format string to trick a program on the virtual machine 815 into leaking data or writing a malware-chosen value to a specific memory address. Malware can also attempt to overwrite data that is later used as an argument to a system call. There may be many ways to identify one or more outcomes of the one or more instructions that manipulate the flagged input values as unauthorized activity.


In various embodiments, outcomes that qualify as unauthorized activity may be loaded or otherwise stored by a user or administrator in the controller 1600. The outcomes that qualify as unauthorized activity may also be updated or altered as necessary by the user or administrator. It will be appreciated by those skilled in the art that there may be many ways to load or store outcomes that qualify as unauthorized activity into the controller 1600.


Once unauthorized activity is found, network data can continue to be transmitted to the virtual machine 815 and the method continued. As a result, multiple attacks can be identified within the same network data. Further, multiple attack vectors and payloads can be observed and identified without stopping the process. An attack vector is the method of attack that malware may use to infect or alter a system. A payload is the damage the malware can cause, either intentionally or unintentionally.


By transmitting associated network data (e.g., data flows) until the attack is completed and the payload is executed, the full malware attack may be observed and multiple unauthorized activity signatures can be generated. In various embodiments, unauthorized activity signatures can be generated to identify and block malware attacks which have not been previously analyzed. As a result, not only can a particular malware attack be identified and blocked by an unauthorized activity signature, but a class of malware attack may also be blocked. A class of malware attacks includes different malware that may attack the same vulnerability (i.e., the same attack vector) or perform the same damage (i.e., the same payload). In one example, a buffer overflow may occur for inputs over 60 bytes. An unauthorized activity signature may be generated to block malware attacks of the buffer overflow. As a result, all malware that attempts to overflow the same buffer with any number of bytes over 60 may be blocked by a single unauthorized activity signature.
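
Continuing the buffer-overflow example in the preceding paragraph, a vulnerability-oriented signature of this kind can be sketched as a simple length check rather than a match on specific payload bytes; the field name and 60-byte limit below merely restate the example.

```python
# Sketch: a class-of-attack signature that blocks any input overflowing the
# same vulnerable buffer, regardless of the specific payload bytes used.
MAX_SAFE_LENGTH = 60

def overflow_signature(field_value: bytes) -> bool:
    """Return True when the input would overflow the vulnerable buffer."""
    return len(field_value) > MAX_SAFE_LENGTH

print(overflow_signature(b"A" * 61))    # True: blocked regardless of contents
print(overflow_signature(b"A" * 10))    # False
```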


Although FIG. 19 is described as the analysis environment 1670 conducting the exemplary method, any module within the controller 1600 or within the analysis environment 1670 may perform the exemplary method. Further, different modules within the controller 1600 and the analysis environment 1670 may perform different portions of the method. Those skilled in the art will appreciate that there may be many ways to perform these methods.


In various embodiments, the controller 1600 does not comprise a heuristic module 1610 and the analysis environment 1670 does not comprise a replayer 805. The copy of the network data received from the tap may be received by the analysis environment 1670 which analyzes the response of the virtual machine 815 to the copy of the network data to identify the malware attack.


In one example, the controller 1600 receives a copy of network data. The copy of network data may be scanned and compared to various policies and/or signatures maintained by the policy engine 1620. If the copy of network data is not identified as containing a malware attack by a policy, the controller 1600 can orchestrate the transmission of the network data by transmitting the copy of the network data to a virtual machine 815 in the analysis environment 1670. The analysis environment 1670 can then monitor the reaction of the virtual machine 815 to the copy of the network data to identify a malware attack. The determination of the reaction of the virtual machine 815 may be performed as discussed in FIG. 19. If the copy of the network data contains a malware attack, an unauthorized activity signature may be generated and transmitted to another controller as discussed in FIG. 20. It will be appreciated by those skilled in the art that the controller 1600 may also not comprise a policy engine 1620 and that the controller 1600 may orchestrate the transmission of all copies of network data to the virtual machine 815 to identify a malware attack.



FIG. 20 depicts another method for malware attack detection and identification, in an exemplary embodiment. In step 2000, the controller 1600 (FIG. 16) receives a copy of network data from the communication network 720 (FIG. 7) over the tap 715 (FIG. 7).


The heuristic module 1610 (FIG. 16) may receive the copy of the network data and apply one or more heuristics to determine if the network data contains suspicious activity in step 2005. If the copy of the network data does not contain suspicious activity, then new network data may then be copied in step 2000. If the copy of the network data is determined to contain suspicious activity, the heuristic module 1610 may flag the copy of the network data as suspicious and provide the flagged copy of network data to the scheduler 1640 (FIG. 16).


The scheduler 1640 can scan the flagged copy of network data to determine the destination device 710 (FIG. 7) and retrieve a suitable virtual machine 815 (FIG. 8) from the virtual machine pool 1650 (FIG. 16). The virtual machine is then provided to the analysis environment 1670 (FIG. 16).


The analysis environment 1670 replays transmission of the flagged copy of network data between a configured replayer 805 (FIG. 8) and the virtual machine 815 retrieved from the virtual machine pool 1650 in step 2010.


In step 2015, the analysis environment 1670 flags the input values associated with the copy of network data from untrusted sources. In one example, input values from the copy of the network data, as well as values derived from instructions within the copy of network data, are considered untrusted and are flagged.


In step 2020, the analysis environment 1670 monitors the flagged input values associated with the network data within the virtual machine 815. In step 2025, the analysis environment 1670 identifies the outcome of one or more instructions that manipulate flagged input values. In one example, the analysis environment 1670 tracks the execution of instructions that affect the associated input values from untrusted sources. The analysis environment 1670 may log or store the tracking of instruction execution within a table.


In step 2030, the analysis environment 1670 determines if the outcome of the execution of the one or more instructions comprises unauthorized activity. In one example, as the outcome or result of an instruction, untrusted network data is used as a jump target, such as a return address, function pointer, or function pointer offset, in order to redirect the control flow to the network data. Since this is evidence of an attack, this outcome is identified as unauthorized activity. If unauthorized activity is not found, new network data may then be copied in step 2000. However, if unauthorized activity is found, the signature module 1630 may generate an unauthorized activity signature based on the determination in step 2035.


In one example, the signature module 1630 identifies the unauthorized activity and accesses the table of instruction execution. By backtracking the chain of instructions, the signature module 1630 can review the program counter and call stack at every point instructions operated on the relevant flagged associated input values and at what point the exploit actually occurred. The signature module 1630 can then determine the nature and location of a vulnerability and identify the exploit used to generate one or more unauthorized activity signatures.
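
As a simplified, hypothetical illustration of walking such a recorded trace to locate the exploit point, the sketch below scans trace entries of the form produced earlier and reports the first instruction at which a jump target was altered; a real implementation would also consult the program counter and call stack, which are omitted here.

```python
# Sketch: scan a recorded instruction trace to find where flagged data
# first reached a jump target, i.e. where the exploit occurred.
def locate_exploit(trace_table):
    for index, entry in enumerate(trace_table):
        before, after = entry["before"], entry["after"]
        if before.get("ret_addr") != after.get("ret_addr"):
            return index, entry["instruction"]   # exploit point and instruction
    return None

trace = [
    {"instruction": "store_input",
     "before": {"ret_addr": "0x4010"}, "after": {"ret_addr": "0x4010"}},
    {"instruction": "overwrite_return",
     "before": {"ret_addr": "0x4010"}, "after": {"ret_addr": "AAAA"}},
]
print(locate_exploit(trace))    # (1, 'overwrite_return')
```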


In various embodiments, the analysis environment 1670 allows the attack on the virtual machine to go forward even after unauthorized activity is identified. As a result, additional samples of the malware attack can be collected. The additional samples of the malware attack can be used by the signature module 1630 to develop a more accurate unauthorized activity signature or develop multiple unauthorized activity signatures. In one example, the signature module 1630 generates an unauthorized activity signature addressing the attack vector as well as an unauthorized activity signature addressing the attack payload.


In step 2040, the signature module 1630 transmits one or more unauthorized activity signatures to another controller 1600. The unauthorized activity signature(s) may then be used to quickly identify and block malware attacks without applying heuristics or replaying transmission of copies of network data.


In exemplary embodiments, this method may be used to identify “zero day” malware attacks. A “zero day” malware attack includes a malware attack that has not yet been identified. In one example, a worm may attack thousands of computer systems before a patch or virus signature is manually developed. However, a “zero day” attack can be received and analyzed by the controller 1600. By analyzing the malware attack at the processor instruction level, the exploit (i.e., vulnerability) and payload can be identified and an unauthorized activity signature generated addressing the exploit. The unauthorized activity signature can then be transmitted to other digital devices configured to enforce the unauthorized activity signature (e.g., other controllers 1600). As a result, the attack can be addressed without waiting for third-party patches to eliminate the vulnerability or waiting for multiple attacks to collect information with which to manually create signatures.


The above-described modules can be comprised of instructions that are stored on storage media. The instructions can be retrieved and executed by a processor (e.g., the processor 700). Some examples of instructions include software, program code, and firmware. Some examples of storage media comprise memory devices and integrated circuits. The instructions are operational when executed by the processor to direct the processor to operate in accordance with embodiments of the present invention. Those skilled in the art are familiar with instructions, processor(s), and storage media.


In the foregoing specification, the invention is described with reference to specific embodiments thereof, but those skilled in the art will recognize that the invention is not limited thereto. Various features and aspects of the above-described invention can be used individually or jointly. Further, the invention can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. It will be recognized that the terms “comprising,” “including,” and “having,” as used herein, are specifically intended to be read as open-ended terms of art.

Claims
  • 1. A digital device for detecting malware, comprising: one or more processors; anda memory communicatively coupled to the one or more processors, the memory comprises software that, when executed by the one or more processors, performs operations including determining whether data received by the digital device is suspicious resulting from an analysis of the received data, the analysis of the received data is based upon a plurality of policies selected to identify whether the received data is suspicious that represents a first likelihood that the received data is associated with malware,responsive to the received data being determined to be suspicious, executing the received data within a virtual machine,monitoring information during the execution of the received data within the virtual machine, the monitored information includes information produced by the virtual machine during the execution of the received data,determining whether the information produced by the virtual machine during the execution of the received data constitutes an unauthorized activity, the unauthorized activity representing a second likelihood that the received data is associated with malware and the second likelihood is greater than the first likelihood, andgenerating data for use in subsequent detection of malware within data provided to the digital device for analysis.
  • 2. The digital device of claim 1, wherein the received data is suspicious based on the digital device conducting a heuristic analysis on the received data.
  • 3. The digital device of claim 2, wherein the heuristic analysis on the received data comprises a detection of a number of Internet Protocol (IP) scans conducted.
  • 4. The digital device of claim 2, wherein the heuristic analysis on the received data comprises a detection of a command within the received data directed to an unusual port of a destination device.
  • 5. The digital device of claim 1, wherein the unauthorized activity includes a crash of the virtual machine.
  • 6. The digital device of claim 1, wherein the unauthorized activity includes an illegal operation being performed by the virtual machine during the execution of the received data.
  • 7. The digital device of claim 1, wherein the unauthorized activity includes an abnormal performance of the virtual machine.
  • 8. The digital device of claim 1, wherein the signature includes a string of bits or a binary code pattern for use in identifying whether the data subsequent to the received data includes malware by a comparison of binary code within the subsequent data and the binary code pattern.
  • 9. The digital device of claim 1 being communicatively coupled to a router for transmission of a signature to the router, the signature corresponding to the data for use in subsequent detection of malware and being used by the router in blocking a propagation of malware through the data received by the digital device subsequent to generation of the signature.
  • 10. The digital device of claim 9, wherein the signature comprises a binary code pattern.
  • 11. A malware detection and identification method, comprising:
    determining whether received data is suspicious based on an analysis of the received data separate from execution of the received data;
    responsive to the received data being determined to be suspicious, executing the received data within a virtual machine;
    monitoring information produced during the execution of the received data by the virtual machine;
    determining whether the monitored information constitutes an unauthorized activity; and
    generating data for use in detection of malware within incoming data received by a digital device for analysis subsequent to generation of the generated data.
  • 12. The method of claim 11, wherein the received data is suspicious based on the digital device conducting a heuristic analysis on the received data.
  • 13. The method of claim 12, wherein the heuristic analysis on the received data comprises a detection of a number of Internet Protocol (IP) scans conducted.
  • 14. The method of claim 12, wherein the heuristic analysis on the received data comprises a detection of a command within the received data directed to an unusual port of a destination device.
  • 15. The method of claim 11, wherein the unauthorized activity includes a crash of the virtual machine.
  • 16. The method of claim 11, wherein the unauthorized activity includes an illegal operation being performed by the virtual machine during the execution of the received data.
  • 17. The method of claim 11, wherein the unauthorized activity includes an abnormal performance of the virtual machine.
  • 18. The method of claim 11, wherein the generated data comprises a signature that corresponds to a string of bits or a binary code pattern for use in identifying whether the data subsequent to the received data includes malware by a comparison of binary code within the subsequent data and the binary code pattern.
  • 19. The method of claim 18, wherein a device that generated the data for use in detection of malware is communicatively coupled to a router for transmission of a signature to the router, the signature corresponding to the generated data for use in subsequent detection of malware and being used by the router in blocking a propagation of malware through the incoming data received by the digital device subsequent to generation of the signature.
  • 20. The method of claim 19, wherein the signature comprises a binary code pattern.
  • 21. The digital device of claim 1, wherein the determining whether the data received by the digital device is suspicious is conducted from the analysis of the data based on the plurality of policies including a policy associated with identifying a source device of the data or a destination device for the data.
  • 22. The digital device of claim 1, wherein the determining whether the data received by the digital device is suspicious is conducted from the analysis of the data based on the plurality of policies including a policy of identifying the data as suspicious when it is abnormal for the data to be transmitted from a source device.
  • 23. The digital device of claim 1, wherein the determining whether the data received by the digital device is suspicious is conducted from the analysis of the data based on the plurality of policies including a policy that identifies whether the data is associated with an attempt to gain rights or privileges within a communication network to which the digital device is coupled.
  • 24. The digital device of claim 1, wherein the determining whether the data received by the digital device is suspicious is conducted from the analysis of the data based on the plurality of policies including a policy that identifies whether the data is associated with an attempt to gain rights or privileges associated with a destination device that is communicatively coupled to the digital device.
  • 25. The digital device of claim 1, wherein the data is determined to be suspicious based on results of an analysis of the data exceeding a first threshold, the first threshold is set to detect at least a single command being flagged as suspicious.
  • 26. The method of claim 11, wherein the received data is determined to be suspicious based on the analysis conducted in accordance with a policy of a plurality of policies, the policy is associated with identifying a source device of the data or a destination device for the data.
  • 27. The method of claim 11, wherein the received data is determined to be suspicious based on the analysis conducted in accordance with a policy of a plurality of policies, the policy is associated with identifying the data as suspicious when it is abnormal for the data to be transmitted from a source device.
  • 28. The method of claim 11, wherein the received data is determined to be suspicious based on the analysis conducted in accordance with a policy of a plurality of policies, the policy is associated with identifying whether the data is associated with an attempt to gain rights or privileges within a communication network to which the digital device is coupled.
  • 29. The method of claim 11, wherein the received data is determined to be suspicious based on the analysis conducted in accordance with a policy of a plurality of policies, the policy is associated with identifying whether the data is associated with an attempt to gain rights or privileges associated with a particular destination device.
  • 30. The method of claim 11, wherein the received data is determined to be suspicious based on results of an analysis of the data exceeding a first threshold, the first threshold is set to detect at least a single command being flagged as suspicious.
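Purely as an illustrative reading aid, and not as a characterization of the claimed subject matter, the sketch below mirrors the two-stage analysis recited in claims 1 and 11 above: policy-based screening of received data (the first likelihood), execution of suspicious data within a virtual machine while monitoring the information it produces (the second likelihood), and generation of data for use in subsequent detection. The names analyze, Policy, run_in_vm, and the example event strings are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# A policy is modeled here as a predicate that flags received data as suspicious.
Policy = Callable[[bytes], bool]


@dataclass
class AnalysisResult:
    suspicious: bool = False              # first likelihood: flagged by the policies
    unauthorized_activity: bool = False   # second likelihood: confirmed during VM execution
    signature: bytes = b""                # data generated for use in subsequent detection


def analyze(data: bytes, policies: List[Policy], run_in_vm: Callable[[bytes], List[str]]) -> AnalysisResult:
    result = AnalysisResult()
    # Stage 1: determine whether the received data is suspicious under any selected policy.
    result.suspicious = any(policy(data) for policy in policies)
    if not result.suspicious:
        return result
    # Stage 2: execute the suspicious data within a virtual machine and monitor
    # the information it produces (events are returned by the VM stub here).
    events = run_in_vm(data)
    result.unauthorized_activity = any(
        event in ("vm_crash", "illegal_operation", "abnormal_performance") for event in events
    )
    # Stage 3: generate data (a trivial byte pattern in this sketch) for later detection.
    if result.unauthorized_activity:
        result.signature = data[:16]
    return result


if __name__ == "__main__":
    unusual_port_policy: Policy = lambda d: b"PORT 31337" in d   # toy policy
    fake_vm = lambda d: ["illegal_operation"]                    # toy VM monitor
    print(analyze(b"GET / HTTP/1.1\r\nPORT 31337\r\n", [unusual_port_policy], fake_vm))
```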
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/530,474, filed Oct. 31, 2014, now U.S. Pat. No. 10,284,574, issued May 7, 2019, which is a continuation of U.S. patent application Ser. No. 11/717,476, filed Mar. 12, 2007, entitled “Systems and Methods for Malware Attack Detection and Identification”, now U.S. Pat. No. 8,881,282, issued Nov. 4, 2014, which is a continuation-in-part of U.S. patent application Ser. No. 11/494,990, filed Jul. 28, 2006, entitled “Dynamic Signature Creation and Enforcement”, which is a continuation-in-part of U.S. patent application Ser. No. 11/471,072, filed Jun. 19, 2006, entitled “Virtual Machine with Dynamic Data Flow Analysis”, which is a continuation-in-part of U.S. patent application Ser. No. 11/409,355, filed Apr. 20, 2006, entitled “Heuristic Based Capture with Replay to Virtual Machine”, which claims benefit of U.S. patent application Ser. No. 11/096,287, filed Mar. 31, 2005, entitled “System and Method of Detecting Computer Worms,” U.S. patent application Ser. No. 11/151,812, filed Jun. 13, 2005, entitled “System and Method of Containing Computer Worms,” and U.S. patent application Ser. No. 11/152,286, filed Jun. 13, 2005, entitled “Computer Worm Defense System and Method,” all of which are incorporated by reference herein. U.S. patent application Ser. No. 11/096,287, filed Mar. 31, 2005, entitled “System and Method of Detecting Computer Worms,” claims benefit of provisional patent application No. 60/559,198, filed Apr. 1, 2004, entitled “System and Method of Detecting Computer Worms.” U.S. patent application Ser. No. 11/151,812, filed Jun. 13, 2005, entitled “System and Method of Containing Computer Worms,” claims benefit of provisional patent application No. 60/579,953, filed Jun. 14, 2004, entitled “System and Method of Containing Computer Worms.” U.S. patent application Ser. No. 11/152,286, filed Jun. 13, 2005, entitled “Computer Worm Defense System and Method,” claims benefit of provisional patent application No. 60/579,910, filed Jun. 14, 2004, entitled “Computer Worm Defense System and Method.” The above-referenced provisional patent applications are also incorporated by reference herein. This application is also related to U.S. patent application Ser. No. 11/717,408, filed Mar. 12, 2007, entitled “Malware Containment and Security Analysis on Connection”, U.S. patent application Ser. No. 11/717,474, filed Mar. 12, 2007, entitled “Systems and Methods for Malware Attack Prevention”, and U.S. patent application Ser. No. 11/717,475, filed Mar. 12, 2007, entitled “Malware Containment on Connection”. The above-referenced related nonprovisional patent applications are also incorporated by reference herein.

Provisional Applications (3)
Number Date Country
60559198 Apr 2004 US
60579953 Jun 2004 US
60579910 Jun 2004 US
Continuations (2)
Number Date Country
Parent 14530474 Oct 2014 US
Child 16404522 US
Parent 11717476 Mar 2007 US
Child 14530474 US
Continuation in Parts (6)
Number Date Country
Parent 11494990 Jul 2006 US
Child 11717476 US
Parent 11471072 Jun 2006 US
Child 11494990 US
Parent 11409355 Apr 2006 US
Child 11471072 US
Parent 11096287 Mar 2005 US
Child 11409355 US
Parent 11151812 Jun 2005 US
Child 11096287 US
Parent 11152286 Jun 2005 US
Child 11151812 US