Correlation and consolidation of analytic data for holistic view of a malware attack

Information

  • Patent Grant
  • Patent Number
    9,311,479
  • Date Filed
    Thursday, March 14, 2013
  • Date Issued
    Tuesday, April 12, 2016
Abstract
According to one embodiment, a method for correlating and consolidating analytic data to provide a holistic view of a malware attack is described. The method comprises receiving analytic data from a plurality of electronic devices. The analytic data from each electronic device of the plurality of electronic devices comprises input attributes and analysis attributes. Thereafter, the analytic data is correlated by determining whether a first analysis attribute provided by a first electronic device of the plurality of electronic devices matches a second analysis attribute provided by a second electronic device of the plurality of electronic devices. In response to determining that the first analysis attribute provided by the first electronic device matches the second analysis attribute provided by the second electronic device, the input attributes associated with the first analysis attribute and the second analysis attribute are consolidated for subsequent display.
Description
FIELD

Embodiments of the disclosure relate to the field of network security. More specifically, one embodiment of the disclosure relates to a system, apparatus and method for correlating analytic data produced by different malware content detection systems, and consolidating portions of this data to provide a holistic view of a malware attack.


GENERAL BACKGROUND

Over the last decade, malicious software (malware) has become a pervasive problem for Internet users. In some situations, malware is a program or file that is embedded within downloadable content and designed to adversely influence (i.e. attack) normal operations of a computer. Examples of different types of malware may include bots, computer viruses, worms, Trojan horses, spyware, adware, or any other programming that operates within the computer without permission.


For instance, content may be embedded with objects associated with a web page hosted by a malicious web site. When this content is downloaded, malware that causes another web page to be requested from a malicious web site may be unknowingly installed on the computer. Similarly, malware may also be installed on a computer upon receipt or opening of an electronic mail (email) message. For example, an email message may contain an attachment, such as a Portable Document Format (PDF) document, with embedded executable malware. Also, malware may exist in files infected through any of a variety of attack vectors, which are uploaded from the infected computer onto a networked storage device such as a file share.


Over the past few years, various types of security appliances have been deployed at different segments of a network. These security appliances are configured to uncover the presence of malware embedded within ingress content propagating over these different segments. However, there is no mechanism that operates, in concert with multiple security appliances, to correlate and consolidate information from these security appliances in order to provide a customer with a holistic view of a malware attack.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1 is an exemplary block diagram of a communication network deploying a plurality of malware content detection (MCD) systems.



FIG. 2 is an exemplary block diagram of logic implemented within the management system of FIG. 1.



FIG. 3 is an exemplary block diagram of an Analytic Data Response message received by the management system from a MCD system.



FIG. 4 is an exemplary diagram of logic within a MCD system.



FIG. 5A is an exemplary embodiment of a flowchart partially illustrating an operation of populating a data store by a MCD system for subsequent access by the management system.



FIGS. 5B and 5C are exemplary general diagrams of the aggregation of analytic data by a MCD system for supply to the management system.



FIG. 6A is an exemplary embodiment of a flowchart of the general operations for correlating and consolidating analytic data from multiple MCD systems as conducted by the management system.



FIG. 6B is an exemplary embodiment of a more detailed flowchart partially illustrating correlation and consolidation of analytic data by the management system.



FIGS. 7A-7D are exemplary embodiments of a detailed illustrative example of aggregation, correlation and consolidation of analytic data by the management system.



FIG. 8 is an exemplary embodiment of a display screen that includes data produced by the correlation logic and consolidation logic to provide a consumer with a holistic view of a malware attack.





DETAILED DESCRIPTION

Various embodiments of the disclosure relate to a management system configured to correlate analytic data received from multiple malware content detection (MCD) systems. In general, the management system controls the uploading of analytic data from each MCD system. This analytic data enables the management system to (i) determine whether the same malware appears to be present at different MCD systems (i.e. evidence of a malware attack) and (ii) consolidate at least a portion of the analytic data in order to provide a holistic view of the malware attack. This “holistic view” may be accomplished by generating one or more screen displays that provide comprehensive details concerning the network entry point and migration of suspicious network content.


More specifically, the management system is configured to receive, from each of the MCD systems, analytic data associated with suspicious network content that has been analyzed by that MCD system for malware. The analytic data comprises (1) information that identifies the suspicious network content (e.g., a time-stamp value, monotonic count value, or another type of identifier); (2) input attributes; and (3) analysis attributes. In general, “input attributes” include information used in the routing of the content, such as source and/or destination information. “Analysis attributes” include information directed to portions of the suspicious network content that are analyzed for malware (hereinafter referred to as “artifacts”) as well as one or more anomalous behaviors observed during malware detection analysis of the artifacts.
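
As a minimal sketch, the analytic data for one piece of analyzed content can be modeled as a small record. The Python representation and field names below are illustrative assumptions; the patent prescribes no particular schema:

```
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AnalyticRecord:
    """One data-store entry for a piece of analyzed network content."""
    content_id: str                   # e.g., time-stamp value or monotonic count
    input_attributes: Dict[str, str]  # routing details: source, destination, sender, etc.
    analysis_attributes: List[str]    # artifacts and observed anomalous behaviors
```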


After receipt of analytic data from different MCD systems, the management system correlates the analytic data by recursively comparing analysis attributes recovered from one MCD system with analysis attributes recovered from one or more other MCD systems. Upon determining that at least certain analysis attributes from different MCD systems match, the input attributes corresponding to these compared analysis attributes may be consolidated to provide greater details as to the infection vector for the suspicious network content (e.g. initial source, number of recipients, time of receipt, etc.).


I. Terminology


In the following description, certain terminology is used to describe features of the invention. For example, in certain situations, the terms “logic” and “engine” are representative of hardware, firmware or software that is configured to perform one or more functions. As hardware, logic may include circuitry such as processing circuitry (e.g., a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, etc.), wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, combinatorial logic, or other types of electronic components.


As software, logic may be in the form of one or more software modules, such as executable code in the form of an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. These software modules may be stored in any type of suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of a non-transitory storage medium may include, but are not limited or restricted to, a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, a hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code is stored in persistent storage.


The term “network content” generally refers to information transmitted over a network as one or more messages, namely a grouping of information that comprises a header and a payload, such as any of the following: a packet; a frame; a stream being a sequence of packets or frames; an Asynchronous Transfer Mode “ATM” cell; or any other series of bits having a prescribed format. The “payload” is generally defined as including the data associated with the message such as text, software, an image, an object, audio, video, a Uniform Resource Locator (URL), or other types of digital data. The “header” is generally defined as including control information. However, the specific types of control information depend on the network content type.


For data traffic, such as data transmitted in accordance with a Hypertext Transfer Protocol (HTTP) or a HyperText Markup Language (HTML) protocol, the header may include source and destination Internet Protocol (IP) addresses (e.g., IPv4 or IPv6 addressing) and/or source and destination port information.


Another example of network content is email, which may be transmitted using an email protocol such as Simple Mail Transfer Protocol (SMTP), Post Office Protocol version 3 (POP3), or Internet Message Access Protocol (IMAP4). A further example of network content is an Instant Message, which may be transmitted using Session Initiation Protocol (SIP) or Extensible Messaging and Presence Protocol (XMPP), for example. Yet another example of network content is one or more files that are transferred using a data transfer protocol such as File Transfer Protocol (FTP) for subsequent storage on a file share. Where the network content is email, an Instant Message or a file, the header may include the sender/recipient address, the sender/recipient phone number, or a targeted network location of the file, respectively.
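
Sketched below is one way input attributes might be extracted from the header of each content type. The dictionary-based header and its key names are assumptions for illustration, not fields the patent defines:

```
def extract_input_attributes(content_type: str, header: dict) -> dict:
    """Map a message header to input attributes, keyed by content type."""
    if content_type == "data_traffic":   # e.g., HTTP
        return {"src_ip": header["src_ip"], "dst_ip": header["dst_ip"],
                "src_port": header.get("src_port"), "dst_port": header.get("dst_port")}
    if content_type == "email":          # e.g., SMTP/POP3/IMAP4
        return {"sender": header["from"], "recipient": header["to"],
                "subject": header.get("subject")}
    if content_type == "file":           # e.g., FTP upload to a file share
        return {"network_location": header["target_location"]}
    raise ValueError(f"unknown content type: {content_type}")
```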


The term “malware” is directed to software that produces an undesirable behavior upon execution, where the behavior is deemed to be “undesirable” based on customer-specific rules, manufacturer-based rules, or any other type of rules formulated by public opinion or a particular governmental or commercial entity. This undesired behavior may include a communication-based anomaly or an execution-based anomaly that (1) alters the functionality of an electronic device executing that application software in a malicious manner; (2) alters the functionality of an electronic device executing that application software without any malicious intent; and/or (3) provides an unwanted functionality which is generally acceptable in other contexts.


The term “transmission medium” refers to a communication path between two or more systems (e.g. any electronic devices with data processing functionality such as, for example, a security appliance, server, mainframe, computer, netbook, tablet, smart phone, router, switch, bridge or brouter). The communication path may include wired and/or wireless segments. Examples of wired and/or wireless segments include electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), or any other wired/wireless signaling mechanism.


Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.


As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.


II. General Architecture


Referring to FIG. 1, an exemplary block diagram of a communication network 100 deploying a plurality of malware content detection (MCD) systems 1101-110N (N>1) communicatively coupled to a management system 120 via a network 130 is shown. In general, management system 120 is adapted to manage MCD systems 1101-110N. For instance, management system 120 may be adapted to cause malware signatures generated by any of MCD systems 1101-110N to be shared with one or more of the other MCD systems 1101-110N, for example, on a subscription basis. Furthermore, management system 120 may be adapted to aggregate, correlate and consolidate analytic data provided by MCD systems 1101-110N for subsequent conveyance to an electronic device 125 with display capabilities, as represented by communication paths 115. This analytic data, when correlated and consolidated, provides a network administrator with more information for defending against and preventing a malware attack.


Each MCD system 1101-110N (N=3 for the embodiment of FIG. 1) is adapted to intercept and analyze network content (e.g., data traffic, email, files, etc.) in real-time so as to determine whether the network content constitutes suspicious network content. The network content is considered to be “suspicious” when a portion of the network content (e.g. payload data) is determined, with a certain level of likelihood, to include malware.


According to this embodiment of the communication network, a first MCD system 1101 may be a web-based security appliance that is configured to inspect ingress data traffic, identify whether any artifacts of the data traffic may include malware, and if so, analyze at least those artifacts. This analysis may be partially conducted in a virtual machine (VM) execution environment to detect anomalous behaviors that would be present if the data traffic was actually processed by an electronic device. The particulars of this analysis are described below.


As shown in FIG. 1, first MCD system 1101 may be deployed as an inline security appliance (not shown) or coupled to network 130 via a network tap 1501 (e.g., a data/packet capturing device), which can be integrated into first MCD system 1101, provided as a standalone component, or integrated into different network components such as a firewall 140, a router, a switch or other type of network relay device. Network tap 1501 may include a digital network tap configured to monitor network content (data traffic) and provide a copy of the data traffic along with its metadata to first MCD system 1101 for analysis. The data traffic may comprise signaling transmitted over network 130, including data from/to a remote server 160.


As further shown in FIG. 1, second MCD system 1102 is a communication-based security appliance that is configured to analyze and report suspicious network content, such as malware within an incoming communication message (e.g., email message, short message service “SMS” message, etc.). As shown, second MCD system 1102 may be positioned within a message transfer agent (MTA) deployed in network 130 as shown, or connected to network 130 via a network tap.


Third MCD system 1103 is a storage-based security appliance that is configured to analyze and report suspicious network content, such as potential malware within a file to be uploaded into one or more file shares 160. As with first MCD system 1101, third MCD system 1103 may be deployed as an inline security appliance (not shown) or coupled to network 130 via a network tap 1502.


It is contemplated that management system 120 may be deployed to provide cloud computing services for correlation and consolidation of the analytic data as described. Furthermore, it is contemplated that the functionality of one or more MCD systems 1101-110N may be incorporated into management system 120 when malware detection is to be conducted at a centralized resource.


Referring now to FIG. 2, an exemplary block diagram of logic that is implemented within management system 120 is shown. Management system 120 comprises one or more processors 200 that are coupled to communication interface logic 210 via a first transmission medium 220. Communication interface logic 210 enables communications with MCD systems 1101-110N of FIG. 1 as well as other electronic devices over private and/or public networks, such as electronic device 125 used to view the correlated and consolidated analytic results from the malware detection analysis. According to one embodiment of the disclosure, communication interface logic 210 may be implemented as a physical interface including one or more ports for wired connectors. Additionally, or in the alternative, communication interface logic 210 may be implemented with one or more radio units for supporting wireless communications with other electronic devices.


Processor 200 is further coupled to persistent storage 230 via transmission medium 225. According to one embodiment of the disclosure, persistent storage 230 may include configuration logic 240, distribution logic 250, aggregation logic 260, correlation logic 270 and/or consolidation logic 280. Of course, when implemented as hardware, logic 240, 250, 260, 270 and/or 280 would be implemented separately from persistent storage 230.


Configuration logic 240 provides centralized control of the functionality of MCD systems 1101-110N. In particular, configuration logic 240 allows an administrator in a customer environment to alter configuration information within MCD systems 1101-110N as well as other networked electronic devices. For instance, as illustrative examples, configuration logic 240 may be used to alter the Internet Protocol (IP) address assigned to one of the security appliances (e.g., MCD system 1101), alter key information stored within any of MCD systems 1101-110N, alter user access/privileges so that different administrators have different access rights, or the like.


Distribution logic 250 allows management system 120 to influence analysis priorities at one MCD system based on suspicious network content detected at another MCD system. For instance, during analysis of the network content, a second MCD system 1102 may receive an email message for malware detection analysis, where the email message includes an artifact (e.g., URL) within its payload. As second MCD system 1102 is not configured to analyze the URL before access by the end-user, the URL is merely provided to management system 120 as an analysis attribute.


The presence of certain artifacts (e.g., URL) as an analysis attribute within the stored analytic data may prompt distribution logic 250 to transmit a priority message to first MCD system 1101 of FIG. 1. The priority message requests malware detection analysis to be conducted on any network content associated with the URL when the URL is selected by the end user. Of course, it is contemplated that management system 120 may be adapted to ignore or lessen the analysis priority of network content, especially where the network content is determined to be provided from a trusted source.
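
A rough sketch of this prioritization decision is shown below. The record shape follows the earlier sketch; trusted_sources, send_priority_message, and the URL test are illustrative assumptions rather than patent-defined interfaces:

```
def maybe_prioritize(record, trusted_sources, send_priority_message):
    # If a stored analysis attribute is a URL and the content did not come
    # from a trusted source, ask the web-based MCD system ("MCD-1" here)
    # to prioritize analysis of traffic tied to that URL.
    source = record.input_attributes.get("sender")
    for attr in record.analysis_attributes:
        if attr.startswith(("http://", "https://")) and source not in trusted_sources:
            send_priority_message(target="MCD-1", url=attr)  # priority message
```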


Aggregation logic 260 is configured to request (i.e. pull) analytic data from each of the MCD systems 1101-110N for storage within an internal data store 290, where at least a portion of the analytic data is used by correlation logic 270. In particular, according to one embodiment of the disclosure, aggregation logic 260 maintains network addresses (e.g., Internet Protocol “IP” address and/or media access control “MAC” address) for each MCD system 1101-110N. In response to a triggering event, where the event may be scheduled based on an elapsed time or may be aperiodic, aggregation logic 260 sends a message to one or more MCD systems 1101-110N requesting analytic data (hereinafter generally referred to as an “Analytic Data Query message”). Within each Analytic Data Query message, aggregation logic 260 may provide information (e.g. last stored time-stamp value and/or sequence value, etc.) to assist a targeted MCD system (e.g., MCD system 110i, where 1≤i≤N) to identify stored analytic data that has not yet been uploaded to management system 120.


In response to an Analytic Data Query message, management system 120 receives one or more Analytic Data Response messages 300 from targeted MCD system 110i as shown in FIG. 3. Analytic Data Response message 300 comprises (1) a header 310 and (2) a payload 350. Header 310 includes at least a source address 320 identifying MCD system 110i. Payload 350 comprises information associated with suspicious network content analyzed by the targeted MCD system. The information includes at least (i) an identifier for the suspicious network content (e.g., assigned sequence number and/or time-stamp value, etc.), (ii) one or more input attributes associated with the suspicious network content, and/or (iii) one or more analysis attributes associated with the suspicious network content.
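
A sketch of one way these messages could be built and parsed follows; the JSON wire format and field names are assumptions, as the patent specifies the message contents but not an encoding:

```
import json

def build_query(last_timestamp: float) -> bytes:
    # Analytic Data Query: ask the targeted MCD system for entries stored
    # after the last time-stamp value this management system recorded.
    return json.dumps({"type": "analytic_data_query",
                       "after": last_timestamp}).encode()

def parse_response(raw: bytes) -> list:
    # Analytic Data Response: header 310 carries the source MCD address;
    # payload 350 carries identifier, input attributes and analysis attributes.
    msg = json.loads(raw)
    source = msg["header"]["source_address"]
    return [(entry["content_id"],
             entry["input_attributes"],
             entry["analysis_attributes"],
             source)
            for entry in msg["payload"]["entries"]]
```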


It is contemplated that multiple messages may be utilized to provide the information to management system 120, such as the analysis attributes being provided in a first message and input attributes provided in a subsequent message. Also, it is contemplated that MCD system 110i may be adapted to “push” the input attributes and/or analysis attributes in lieu of the “pull” operations as described.


Where different MCD systems are operating on common suspicious network content, these MCD systems 1101-110N of FIG. 1 will provide one or more identical analysis attributes. These analysis attributes are identical, in part, because the malware detection analysis conducted by these MCD systems is in accordance with a common mechanism as described below (static and VM-execution environment). The input attributes, however, differ depending on which MCD system analyzed the network content. Examples of analysis and input attributes produced by different types of MCD systems are set forth below in Table A.









TABLE A
Examples of Attributes

MCD SYSTEM TYPE        ATTRIBUTES (INPUT “I” AND/OR ANALYSIS “A”)

Network-based          I: Source IP (and/or MAC) address
                       I: Destination IP (and/or MAC) address
                       A: URL (website accessed)
                       A: Information identifying anomalous behaviors
                          detected within the virtual execution environment
                          (e.g., file changes, registry changes, process
                          changes, etc.)

Communications-based   I: Sender identifier (email address, phone number
                          for text, etc.)
                       I: Recipient identifier (email address, phone number
                          for text, etc.)
                       I: Subject Line information
                       A: URL(s) present in communication message
                       A: Attachment present in communication message
                       A: Information identifying anomalous behaviors
                          detected within the virtual execution environment
                          (e.g., file changes, registry changes, process
                          changes, etc.)

Storage-based          I: Network location of the file
                       I: Source IP (and/or MAC) address of downloading
                          source
                       A: File Share name
                       A: File name/File size/File type
                       A: File checksum
                       A: Information identifying anomalous behaviors
                          detected within the virtual execution environment
                          (e.g., file changes, registry changes, process
                          changes, etc.)

Referring back to FIG. 2, triggered by aggregation logic 260 receiving analytic data from one or more MCD systems, correlation logic 270 attempts to find relationships between analysis attributes provided from different MCD systems. This may be accomplished by comparing, for similarities, artifacts that are part of the analyzed network content (e.g., URLs, PDF attachments, etc.) as well as the anomalous behaviors observed during analysis of the artifacts (e.g., registry changes, process changes, file changes, etc.). Time proximity may further be considered.


As an illustrative example, an anomalous behavior (e.g. particular registry change) for a first suspicious network content is detected by the first MCD system. The data associated with the anomalous behavior, namely the registry change in this example, undergoes a hash operation to produce a first hash value that is stored as a first analysis attribute.


Similarly, the second MCD system detects an anomalous behavior during malware analysis on a second suspicious network content, which is related to the first suspicious network content. The data associated with this anomalous behavior, such as the same registry change for example, undergoes a hash operation to produce a second hash value that is stored as a second analysis attribute. As the hash operation is conducted on the identical information, the second hash value would be equivalent to the first hash value.


Continuing this illustrative example, correlation logic 270 determines a match by comparing the first analysis attribute to analysis attributes supplied by the second MCD system, including the second analysis attribute. By determining that the first hash value matches the second hash value, the management system has effectively determined that the first network content is related to the second network content.
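
A minimal sketch of this hash-based matching follows, using Python's hashlib and an illustrative registry-change string (MD5 is named later in this disclosure as one possible one-way transformation; the one-hour window anticipates the optional temporal check described next):

```
import hashlib

def behavior_hash(behavior_data: str) -> str:
    # One-way hash of the data describing an observed anomalous behavior.
    return hashlib.md5(behavior_data.encode()).hexdigest()

# Both MCD systems observe the identical registry change, so their stored
# analysis attributes (the digests) compare equal. The registry value is
# purely illustrative.
first_attribute = behavior_hash("registry change: HKLM\\Software\\Run += updater.exe")
second_attribute = behavior_hash("registry change: HKLM\\Software\\Run += updater.exe")
assert first_attribute == second_attribute  # the management system declares a match

def correlated(attr_a: str, t_a: float, attr_b: str, t_b: float,
               window: float = 3600.0) -> bool:
    # Match on equal digests, optionally confirmed by temporal proximity;
    # the window value is an assumption.
    return attr_a == attr_b and abs(t_a - t_b) <= window
```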


Optionally, as a secondary determination, correlation logic 270 may confirm that the first analysis attribute occurred within a prescribed time period (e.g., a few minutes, an hour, etc.) from detection of the second analysis attribute. The temporal proximity of the occurrence of these analysis attributes may provide additional information to confirm that the network contents associated with these attributes are related or the same.


Triggered by correlation logic 270, consolidation logic 280 consolidates input attributes associated with these matched analysis attributes. Continuing the above example, consolidation logic 280 provides consolidated input attributes to GUI logic 285. Based on these consolidated input attributes, GUI logic 285 provides one or more screen displays for conveying a more detailed summary of suspicious network content being detected by different MCD systems.


Although the illustrative embodiments are directed to conducting a hash or transformation operation on one or more analysis attributes prior to comparison with other analysis attributes uncovered elsewhere, it is contemplated that information associated with the analysis attributes (or a portion of such information) may be used in lieu of a hash (or transformation) value. For instance, it is possible to use some or all of information from the analysis attribute itself in a complex comparative algorithm to determine if a match is detected.


Referring now to FIG. 4, an exemplary block diagram of logic within a MCD system (e.g., MCD system 1101 of FIG. 1) is shown. Herein, MCD system 1101 comprises (1) static instrumentation engine 400; (2) dynamic run-time test and observation (RTO) engine 420; (3) priority setting logic 470; (4) optional hash (transformation) logic 480; and/or (5) local data store 490. As shown, static instrumentation engine 400 and dynamic RTO engine 420 are deployed within the same device. However, it is contemplated that static instrumentation engine 400 and dynamic RTO engine 420 may be employed within different devices and/or executed by different processors when implemented as software.


Static instrumentation engine 400 receives ingress network content 405 and generates a representation of content 405 that is analyzed with one or more software analysis techniques (e.g., control information analysis or data analysis). Static instrumentation engine 400 then modifies content 405 to include within itself special monitoring functions and/or special stimuli functions operable during processing of content 405 in dynamic run-time test and observation engine 420. The monitoring functions report their results to control logic 425, which also directs the stimuli functions as to what stimuli to generate. Also, a time-stamp value may be applied to content 405 through a time-stamp generation unit 427 and provided as an identifier for content 405. During the malware detection analysis by static instrumentation engine 400, upon detection of potential malware within the network content, an alert message is generated, and at least a portion of information 410 associated with the alert message is routed to data store 490. Some of information 410, namely analysis attributes and/or identification information, may undergo hashing or some other transformation to minimize the amount of data to be stored in data store 490.


It is contemplated that static instrumentation engine 400 may be adapted to receive information from dynamic RTO engine 420 in order to instrument the code to better analyze specific behaviors.


After processing is completed by static instrumentation engine 400, content 405 is then provided to control logic 425 within dynamic RTO engine 420. Control logic 425 operates as a scheduler to dynamically control the malware detection analysis among different applications and/or the same application software among different run-time test and observation environments (“run-time environments”).


In general, dynamic RTO engine 420 acts as an intelligent testing function. According to one approach, dynamic RTO engine 420 recursively collects information describing the current state of network content 405 and selects a subset of rules, perhaps corresponding at least in part to the behaviors set by the user, to be monitored during virtual execution of network content 405. The strategic selection and application of various rules over a number of recursions in view of each new observed operational state permits control logic 425 to resolve a specific conclusion about network content 405, namely if network content 405 constitutes suspicious network content.
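
As a rough sketch of this recursive observe-and-select loop: rules are modeled here as predicates over an observed state, and select_rules and observe_state stand in for logic the patent leaves unspecified:

```
def dynamic_rto_analyze(content, all_rules, select_rules, observe_state, max_rounds=10):
    """Recursively observe the content's current state, apply the rule
    subset relevant to that state, and stop once the state settles."""
    state = observe_state(content)
    suspicious = False
    for _ in range(max_rounds):
        rules = select_rules(all_rules, state)       # subset keyed to current state
        violations = [r for r in rules if r(state)]  # a rule returns True on anomaly
        if violations:
            suspicious = True
        new_state = observe_state(content)
        if new_state == state:                       # no further state changes to test
            break
        state = new_state
    return suspicious
```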


As shown in FIG. 4, dynamic RTO engine 420 comprises a virtual machine repository 430 that is configured to store one or more virtual machines 4401-440P (where P≥1). More specifically, virtual machine repository 430 may be adapted to store a single virtual machine (VM) that can be configured by scheduling functionality within control unit 425 to simulate the performance of multiple types of electronic devices. Virtual machine repository 430 also can store any number of distinct VMs, each configured to simulate performance of a different electronic device and/or different operating systems (or versions) for such electronic devices.


One or more run-time environments 450 simulate operations of network content 405 to detect one or more anomalous behaviors. For instance, run-time environment 4551 can be used to identify the presence of anomalous behavior during analysis of simulated operations of network content 405 performed on a virtual machine 4401. Of course, there can be multiple run-time test environments 4551-455M (M≥2) to simulate multiple types of processing environments for network content 405.


A virtual machine may be considered a representation of a specific electronic device that is provided to a selected run-time environment by control unit 425. In one example, control unit 425 retrieves virtual machine 4401 from virtual machine repository 430 and configures virtual machine 4401 to mimic a particular type of electronic device, such as a computer operating a certain version of Windows® OS. The configured virtual machine 4401 is then provided to one of the run-time environments 4551-455M (e.g., run-time environment 4551).


As run-time environment 4551 simulates the operations of network content 405, virtual machine 4401 can be closely monitored for any behaviors set by the user or for any prioritized content identified by priority setting logic 470. By simulating the processing of network content 405 and analyzing the response of virtual machine 4401, run-time environment 4551 can detect anomalous behaviors and upload analytic data associated with these behaviors to data store 490. This analytic data may include information identifying process changes, file changes and registry changes (or hash values associated with these changes).


Besides VM 4401, run-time environment 4551 is provided with network content 405 (or an instance 460 of network content) along with an instance 465 of the type of operating system on which target content 405 will run if deemed sufficiently safe during the dynamic anomalous behavior detection process. Here, the use of virtual machines (VMs) permits the instantiation of multiple additional run-time environments 4551-455M each handling specific network content and the OS instance, where the various run-time environments 4551-455M are isolated from one another.


As previously described, the simultaneous existence of multiple run-time environments 4551-455M permits different types of observations/tests to be run on particular network content. That is, different instances of the same network content may be provided in different run-time environments so that different types of tests/observances can be concurrently performed on the same content. Alternatively, different network content can be concurrently tested/observed.


For instance, a first packet-based data stream associated with network content may be tested/observed in a first run-time environment (e.g., environment 4551) while a second packet-based data stream is tested/observed in another run-time environment (e.g., environment 455M). Notably, instances of different operating system types and even different versions of the same type of operating system may be located in different run-time environments. For example, a Windows® 8 operating system (OS) instance 465 may be located in first run-time test environment 4551 while another instance of a different version of Windows® OS or Linux® OS (not shown) may be located in a second run-time test environment 455M. Concurrent testing of one or more packet-based data streams (whether different instances of the same packet-based data stream or respective instances of different packet-based data streams or some combination thereof) enhances the overall performance of the communication network.


III. Anomalous Behavior Analysis and Generation/Aggregation of Analytic Data


Referring to FIG. 5A, an exemplary diagram of a flowchart partially illustrating the populating of a data store by a MCD system for subsequent access by the management system is shown. Prior to conducting the malware detection analysis, ingress network content is received by the MCD system. Upon determining that this content constitutes suspicious network content, a first identifier is assigned to the suspicious network content (blocks 500, 502 and 505). Input attributes associated with the ingress network content (e.g., source and/or destination) are extracted for subsequent storage in the data store of the MCD system (block 510). Also, malware detection analysis is conducted on the artifacts associated with the ingress network content (block 515).


Upon completion of the malware detection analysis, the MCD system stores the artifacts and information associated with any detected anomalous behavior as analysis attributes within a data store. Along with these analysis attributes, the MCD system further stores an identifier associated with the content along with the input attributes (blocks 520 and 525). However, if anomalous behavior is not detected, the input attributes along with the identifier associated with the content and the artifacts are collectively stored in the data store (block 530).


Referring now to FIGS. 5B and 5C, exemplary diagrams of the generation and aggregation of analytic data from a MCD system are illustrated. Herein, as shown in FIG. 5B, a plurality of MCD systems 1101-110N are communicatively coupled to management system 120 via transmission mediums 5351-535N. MCD systems 1101-1103 are adapted to intercept and analyze, in real-time, different types of network content (e.g., data traffic, email messages, uploaded files for storage, etc.) so as to determine whether the network content constitutes suspicious network content.


As shown in FIG. 5C, each MCD system 110i (i=1, 2 or 3 in FIG. 1) is configured to receive a first type of network content 540, including a header 542 and a payload 544. Upon receipt of network content 540, MCD system 110i assigns an identifier 550 for network content 540 and extracts at least a portion of information within header 542 as the input attributes 555. Both identifier 550 and input attributes 555 are stored in an entry 580 in data store 490. Data store 490 may be situated as a local data store (as shown) or remotely located from MCD system 110i.


Upon performing malware detection analysis on payload 544, a determination is made whether any artifacts 560 (e.g. text, objects, etc.) within payload 544 are “suspicious,” namely whether that data may constitute malware. If one or more artifacts 560 within payload 544 are “suspicious,” MCD system 110i analyzes artifact(s) 560 using virtual machine (VM) execution logic (as described above) to detect any anomalous behavior(s) 565. Hence, artifacts 560 along with any detected anomalous behavior(s) 565 are stored as analysis attributes 570 and 575, respectively. However, if none of the artifacts within payload 544 is determined to be “suspicious,” these artifact(s) 560 are merely stored as analysis attribute(s) 570.
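
A condensed sketch of this decision flow is given below; is_suspicious and analyze_in_vm are assumed callables standing in for the static checks and VM-based analysis described above:

```
def build_entry(content_id, input_attrs, artifacts, is_suspicious, analyze_in_vm):
    """Populate one data-store entry per FIG. 5C: artifacts are always kept
    as analysis attributes; anomalous behaviors are added only when an
    artifact is suspicious enough to warrant VM analysis."""
    analysis_attrs = list(artifacts)
    for artifact in artifacts:
        if is_suspicious(artifact):
            analysis_attrs.extend(analyze_in_vm(artifact))  # observed anomalous behaviors
    return {"id": content_id, "input": input_attrs, "analysis": analysis_attrs}
```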


Also, it is contemplated that MCD system 110i may conduct a transformation on artifacts and/or recorded anomalous behaviors associated with network content 540 (e.g., one-way hash operation in accordance with a message-digest algorithm such as “MD5”) to produce results having a lesser byte size than the artifact/behavior itself (e.g. hash value or digest). Of course, in lieu of a one-way hash operation, other transformations may be performed on payload artifacts 560 such as a checksum operation, for example. The hash values would be stored as analysis attributes 570 and 575 along with input attributes 555 and identifier 550.
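
A brief sketch of this size-reducing transformation, using an illustrative stand-in artifact; MD5 and a checksum are shown because both are named above as possible transformations:

```
import hashlib
import zlib

artifact = b"%PDF-1.4 " + b"\x00" * 100_000  # stand-in for a large attachment

digest = hashlib.md5(artifact).hexdigest()   # MD5: fixed 32 hex characters
checksum = zlib.crc32(artifact)              # checksum alternative: a 32-bit integer

print(f"{len(artifact)} bytes stored as a {len(digest)}-character digest "
      f"(or checksum {checksum:#010x})")
```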


Hence, content identifier 550 along with input attributes 555 and analysis attributes 570-575 are stored in data store 490, which is accessible by management system 120 on a periodic or aperiodic basis. More specifically, according to one embodiment of the disclosure, after a prescribed time has elapsed, management system 120 sends a query (e.g. Analytic Data Query message) for analytic data within local data store 490 that has been stored since the last query. Upon receipt of the query, and perhaps successful authentication of management system 120 through a challenge/response scheme or another authentication scheme, analytic data from one or more entries within data store 490 is uploaded to management system 120.


IV. Correlation/Consolidation of Analytic Data


Referring to FIG. 6A, an exemplary embodiment of a flowchart of the operations for correlating and consolidating the analytic data from multiple MCD systems is shown. Herein, correlation logic within the management system compares analysis attributes associated with a first MCD system to analysis attributes associated with a second MCD system (block 600). If a match is detected for any of these attributes, the input attributes associated with the compared attributes are consolidated to collectively provide additional information concerning a malware attack associated with the network content (blocks 605 and 610). If a match is not detected, a determination is made whether all comparisons between the incoming analysis attributes have been conducted (block 615). If not, the correlation and consolidation operations continue (block 620). Otherwise, the correlation and consolidation process completes.
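
A minimal sketch of this correlate-then-consolidate loop follows, reusing the AnalyticRecord sketch from earlier; the attribute-indexing strategy is an assumption, since the patent describes the comparison but not a data structure:

```
from collections import defaultdict

def correlate_and_consolidate(records_by_system):
    """Index every analysis attribute by value, then merge the input
    attributes of records from *different* MCD systems that share an
    attribute (FIG. 6A, blocks 600-620, condensed)."""
    seen = defaultdict(list)  # attribute value -> [(system, record)]
    for system, records in records_by_system.items():
        for record in records:
            for attr in record.analysis_attributes:
                seen[attr].append((system, record))

    consolidated = []
    for attr, hits in seen.items():
        if len({system for system, _ in hits}) > 1:     # matched across systems
            merged = {}
            for _, record in hits:
                merged.update(record.input_attributes)  # consolidate for display
            consolidated.append((attr, merged))
    return consolidated
```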


Referring now to FIG. 6B, an exemplary diagram of a flowchart partially illustrating correlation and consolidation of analytic data by the management system is shown. Herein, in response to a triggering event to commence acquisition of analytic data from a targeted MCD system (e.g., elapse of a prescribed time period, signaling of the presence of an alert message, etc.), the management system retrieves stored analytic data from the targeted MCD system (blocks 650 and 655). Thereafter, as an optional feature, the management system may perform a hash operation on each analysis attribute in the analytic data (block 660).


Thereafter, a recursive comparison scheme is conducted as to whether an analysis attribute associated with the targeted MCD system matches an analysis attribute associated with another MCD system (block 665). For example, the comparison may involve determining whether the hash value associated with an analysis attribute uploaded by the targeted MCD system matches a hash value associated with an analysis attribute uploaded by another MCD system.


If a match is detected, the management system consolidates the input attributes associated with the compared analysis attributes (block 670). Otherwise, a determination is made whether all of the newly received analysis attributes have been analyzed (block 675). If not, the correlation and consolidation analysis is recursive and returns to the operations set forth in operation 680. Otherwise, the analysis is completed (operation 685).


V. Illustration of Aggregation/Correlation/Consolidation of Analytic Data


Referring to FIGS. 7A-7D, a detailed illustrative example of aggregation, correlation and consolidation of analytic data to provide a more detailed elaboration of a malware attack is shown. Operating as a communication-based security appliance, a second MCD system is configured to receive a first type of network content such as an email message including a header and a payload (block 700). Upon receipt of the email message, the second MCD system assigns a content identifier to the email message and extracts at least a portion of information within the header as the input attributes (blocks 702 and 704). Both the content identifier and the input attributes are stored within an entry of a data store associated with the second MCD system (block 706).


Thereafter, a determination is made as to whether the payload of the email message includes a first artifact such as an attachment (block 708). If so, the second MCD system conducts a malware detection analysis on the first artifact (attachment) by conducting static and dynamic malware analysis as described in FIG. 4 to detect any anomalous behaviors (block 712). Prior to performing the malware detection analysis, however, the second MCD system may conduct a one-way hash operation on the attachment to produce a hash value for storage as the analysis attribute or store the artifact as an analysis attribute (block 710).


Thereafter, any anomalous behaviors uncovered during the virtual processing of the artifact (e.g., detachment and opening of the attachment) within the VM-based run-time environment are recorded. The anomalous behaviors, if any, are stored as analysis attributes within the corresponding entry (block 714).


Besides determining whether the payload of the email message includes a first type of artifact, another determination is made as to whether the payload includes a second type of artifact such as a URL (block 716). If so, the URL is not analyzed in the VM-based run-time environment. Rather, the URL (or a hash value of the URL) is added as an analysis attribute within the entry (block 718).


Operating as a web-based security appliance contemporaneously with the second MCD system, a first MCD system is configured to receive a second type of network content such as network data traffic including a header and a payload (block 720). Upon receipt of the data traffic, the first MCD system assigns a content identifier and extracts at least a portion of information within the header as the input attributes (blocks 722 and 724). Both the content identifier and the input attributes are stored within an entry within a data store associated with the first MCD system (block 726).


Thereafter, a malware detection analysis is performed on the data traffic by at least analyzing artifacts of the payload by conducting static and dynamic malware analysis as described in FIG. 4 to detect any anomalous behaviors (block 730). These artifacts may include a single frame or series of video frames, audio, text, images, etc. The first MCD system also stores the one or more artifacts as analysis attributes, where such artifacts may be stored as hash values (block 728).


Thereafter, any anomalous behaviors uncovered during analysis of the artifact(s) in a VM-based run-time environment are also stored as analysis attributes within the corresponding entry (block 732).


Lastly, operating as a storage-based security appliance, the third MCD system is configured to receive a third type of network content, such as a file being part of the data payload (block 740). Upon receipt of the file, the third MCD system assigns a content identifier and extracts at least a portion of information within the header as the input attributes (blocks 742 and 744). This information may include a network location for storage of the file. Both the content identifier and the input attributes are stored as an entry within a local store associated with the third MCD system (block 746).


Thereafter, a malware detection analysis is performed on the file by at least analyzing artifacts in the file by conducting static and dynamic malware analysis as described in FIG. 4 to detect any anomalous behaviors (block 750). The third MCD system also stores the one or more artifacts as analysis attributes, where such artifacts may be transformed as hash values (block 748).


Any anomalous behaviors uncovered during analysis of the file artifact(s) in a VM-based run-time environment are also stored as analysis attributes within the corresponding entry (block 752).


Periodically, the management system queries each of the MCD systems for recently stored analytic data (block 760). The entries within the data store for a corresponding MCD system that include analytic data (e.g. at least input and analysis attributes) recently stored since the last query are uploaded to the management system (block 762). According to one embodiment, the analytic data from each MCD system remains segregated within the local store of the management system.


The management system compares the analysis attributes associated with the first MCD system, the second MCD system and the third MCD system to determine if any of these analysis attributes match to denote that the network content was detected by multiple MCD systems (blocks 764 and 766).


Presume for this illustrative example that the URL within the email message was selected, which caused a file (FILE-1) to be downloaded from a malicious server and subsequently stored on the file share. For this example, the management system correlates the analytic data and determines that the URL associated with the email message matches the URL associated with the network data traffic (block 768). Hence, the input attributes associated with these analysis attributes are consolidated so that the management system may now convey that the URL associated with FILE-1 was received via an email message at time t1 from a sender (SENDER-1) to multiple recipients, including RECIPIENT-1, who selected the URL (and received FILE-1 at time t2) as a download, while RECIPIENTS-2 . . . 5 have not yet activated the URL (block 770).


Furthermore, upon further correlation of analysis attributes associated with the URLs, a determination is made that FILE-1, detected by the first MCD system as being downloaded upon selecting the URL, also was detected by the third MCD system as being uploaded into a file share (block 772). Hence, the input attributes associated with these analysis attributes are consolidated so that the management system may convey that the URL associated with FILE-1 was received via an email message at time t1 from SENDER-1 to RECIPIENTS-1 . . . 5, where RECIPIENT-1 activated the URL while RECIPIENTS-2 . . . 5 have not yet activated the URL, and FILE-1 was downloaded to RECIPIENT-1 at time t2 and uploaded by RECIPIENT-1 to the file share at network location 0011xx at time t3 (block 774). Such an analysis continues until no further matches are determined for the associated analysis attributes for this particular network content thread.
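
Using the earlier sketches, this walkthrough might be reproduced with hypothetical records; the names, times, and attribute encodings below are illustrative only:

```
# Reuses AnalyticRecord and correlate_and_consolidate from the sketches above.
email_rec = AnalyticRecord("msg-1", {"sender": "SENDER-1",
                                     "recipients": "RECIPIENT-1..5",
                                     "received": "t1"},
                           ["URL-1"])
web_rec = AnalyticRecord("web-1", {"destination": "RECIPIENT-1",
                                   "downloaded": "t2"},
                         ["URL-1", "md5(FILE-1)"])
share_rec = AnalyticRecord("file-1", {"network_location": "0011xx",
                                      "uploader": "RECIPIENT-1",
                                      "stored": "t3"},
                           ["md5(FILE-1)"])

matches = correlate_and_consolidate({"email": [email_rec],
                                     "web": [web_rec],
                                     "storage": [share_rec]})
# "URL-1" links the email (t1) to the download (t2); "md5(FILE-1)" links the
# download to the file-share upload (t3), yielding the t1 -> t2 -> t3 chain.
```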


This information enables the network administrator to further monitor the migration of FILE-1 (e.g., whether it was downloaded by any electronic devices from the file share, etc.) and enables the network administrator to not only remove the malicious file from the file share, but also send advisories to RECIPIENTS-2 . . . 5 regarding the presence of malware and the need to avoid activating the URL in the particular email message.


Referring now to FIG. 8, an exemplary embodiment of a display screen 800 that includes data produced by the correlation logic and consolidation logic to provide a consumer with a holistic view of a malware attack is shown. In particular, display screen 800 illustrates a first display portion 810 that identifies alerts from first MCD system 1101 of FIG. 1 and a second display portion 820 that identifies alerts from second MCD system 1102 of FIG. 1.


As shown, second display portion 820 provides one or more entries that identify recipients of analyzed email messages. For instance, a first entry 825 comprises a first field 830 identifying a recipient (XYZ@fireeye.com) to which email messages have been sent. The recipient may correspond to any type of system such as an employee's computer, a server accessible to multiple employees, etc. First entry 825 of second display portion 820 further comprises a second field 831 identifying the total number of email messages (e.g. forty email messages) received by the recipient; a third field 832 identifying a number of attachments in the email messages (e.g., 92 attachments) as well as the number of attachments that are deemed either “malicious” or at least “suspicious” (e.g. no attachments); a fourth field 833 identifying a number of URLs detected in the email messages (e.g. 615 URLs) and the number of suspicious (or malicious) URLs (e.g., 9 suspicious URLs); a fifth field 834 identifying the last malware detected for the suspicious (or malicious) URLs; and a sixth field 835 identifying a time of last detection of the email messages.


An image 840, which is represented by a globe for this illustrative example, is produced by the correlation logic and/or the consolidation logic and displayed within display screen 800 in one of the fields of second display portion 820 (e.g., fourth field 833). Image 840 identifies that at least some of these URLs have been selected by users of downstream electronic devices based on the correlation and consolidation of input attributes for matching analysis attributes detected by both first and second MCD systems 1101 and 1102 of FIG. 1.


First display portion 810 provides one or more entries that identify electronic devices that have received ingress traffic with suspicious network content. For instance, as shown, a first entry 850 comprises a first field 860 identifying an IP address of a first electronic device (10.10.101.93) from which suspicious (or malicious) network content has been detected. First entry 850 in first display portion 810 further comprises a second field 861 identifying a severity rating of suspicious (or malicious) activity detected for the first electronic device. The severity rating may be based, at least in part, on a total number of suspicious (or malicious) activities detected and the type of activities (e.g. infections of malware, callbacks, blocks, etc.) set forth in fields 862-865.


As further shown in FIG. 8, field 866 identifies the last malware detected for the suspicious (or malicious) network content (e.g., malicious code such as Trojan Generic, Exploit.Browser, etc.). Additional malware detected for network content may be displayed by selecting an element within field 866. A final field 867 identifies a time of last detection of the network content.


An image 870, which is represented by an envelope for this illustrative example, is produced by the correlation logic and/or the consolidation logic and displayed within display screen 800 in one of the fields (e.g., field 862) of first display portion 810. Image 870 identifies that the suspicious network content resulted from an email message received by the host electronic devices, where such generation is based on the correlation and consolidation of input attributes for matching analysis attributes detected by both first and second MCD systems 1101 and 1102 of FIG. 1.


The same general layout is provided for second entry 852 and other entries within first display portion 810. It is contemplated that the layout may be provided through other viewpoints besides alerts and e-alerts, such as by specific MCD systems where the granularity of the correlation and consolidation information may represent which MCD system detected which suspicious activity.


In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. For instance, in lieu of or in addition to the MCD systems 1101-1103 of FIG. 1, a malware analysis system (MAS) may be communicatively coupled to management system 120 of FIG. 1. The MAS operates as a forensic workbench by receiving, based on user interaction, suspicious network content from at least one of MCD systems 1101-1103. The MAS can be adapted with capabilities for a user to conduct a more in-depth analysis of suspicious network content, where such analysis may be uploaded to management system 120 as well.

Claims
  • 1. A method for detecting a malware attack and displaying information associated with suspicious network content pertaining to the malware attack, the method comprising: receiving analytic data including information associated with network content being monitored for malware, the analytic data being stored at least in a payload of a message from each of a plurality of electronic devices and including one or more input attributes and one or more analysis attributes;correlating the analytic data that comprises determining whether a first analysis attribute provided by a first electronic device of the plurality of electronic devices matches a second analysis attribute provided by a second electronic device of the plurality of electronic devices;responsive to determining that the first analysis attribute matches the second analysis attribute, consolidating input attributes associated with the first analysis attribute and the second analysis attribute for subsequent display; andoutputting, for display, information representing the consolidated input attributes to identify that a first network content associated with the first analysis attribute received from the first electronic device is the same as or related to a second network content associated with the second analysis attribute received from the second electronic device.
  • 2. The method of claim 1, wherein the first analysis attribute corresponds to a Uniform Resource Locator (URL) within the first network content being a first type of network content and the second analysis attribute corresponds to the URL within the second network content being a second type of network content.
  • 3. The method of claim 2, wherein the first type of network content corresponds to an electronic mail (email) message being different from the second type of network content.
  • 4. The method of claim 3, wherein the second type of network content comprises a file type.
  • 5. The method of claim 1, wherein the first type of network content includes an electronic mail (email) message that is analyzed for malware by the first electronic device and the second type of network content includes network traffic that is analyzed for malware by the second electronic device.
  • 6. The method of claim 1, wherein the first analysis attribute comprises at least one of (i) information related to a portion of the network content that is analyzed for malware within the first electronic device and (ii) one or more anomalous behaviors observed during malware detection analysis of the information.
  • 7. The method of claim 6, wherein the one or more input attributes associated with the first analysis attribute comprises at least one of (i) information identifying a destination of the first network content and (ii) information identifying a source of the first network content.
  • 8. The method of claim 1, wherein analytic data received from the first electronic device further comprises an identifier that identifies the first network content.
  • 9. The method of claim 1, wherein the first electronic device comprises a web-based security appliance to inspect ingress data traffic and to generate the message including at least the first analysis attribute based on an analysis of the ingress data traffic.
  • 10. The method of claim 1, wherein the first analysis attribute includes data associated with an anomalous behavior detected by the first electronic device.
  • 11. The method of claim 10, wherein the data associated with the anomalous behavior includes a first hash value, the first hash value being a result produced by conducting a hash operation on the data associated with the anomalous behavior.
  • 12. The method of claim 1, wherein the first analysis attribute includes a hash value of data associated with an anomalous behavior detected by the first electronic device and the second analysis attribute includes a hash value of data associated with an anomalous behavior detected by the second electronic device.
  • 13. The method of claim 1, wherein the first analysis attribute matches the second analysis attribute based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 14. The method of claim 1, wherein the first analysis attribute matches the second analysis attribute based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.
  • 15. A method for detecting a malware attack and displaying information associated with suspicious network content pertaining to the malware attack, the method comprising: receiving analytic data that comprises input attributes and analysis attributes, the analysis attributes including (a) information related to portions of the suspicious network content that are analyzed for malware and (b) one or more anomalous behaviors observed during analysis of the information related to the portions of the suspicious network content, and the input attributes providing information related to (i) an entry point of network content into a network as detected by a first electronic device of a plurality of electronic devices and (ii) a migration of the network content as monitored by a second electronic device of the plurality of electronic devices; correlating the analytic data that comprises determining whether a first analysis attribute provided by the first electronic device matches a second analysis attribute provided by the second electronic device, the first analysis attribute comprising at least one of (i) information related to a portion of the network content that is analyzed for malware and (ii) one or more anomalous behaviors observed during malware detection analysis of the information; and responsive to determining that the first analysis attribute matches the second analysis attribute, consolidating input attributes associated with the first analysis attribute and the second analysis attribute for subsequent display.
  • 16. The method of claim 15 further comprising: transmitting information associated with the input attributes for display on an electronic device with display capabilities.
  • 17. The method of claim 15, wherein the first analysis attribute matches the second analysis attribute based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 18. The method of claim 15, wherein the first analysis attribute matches the second analysis attribute based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.
  • 19. The method of claim 15, wherein one or more input attributes from the first electronic device identifies an email message with a Uniform Resource Locator (URL) that caused the network content to be downloaded and stored on a file share and one or more input attributes from the second electronic device identifies the migration of the network content from the file share to one or more electronic devices.
  • 20. The method of claim 15, wherein the input attributes from the first electronic device identify a type of communication from which the suspicious network content originated, the type of communication including an email message.
  • 21. A system for detecting a malware attack on at least an electronic device that is part of a network, comprising: one or more hardware processors; and a memory coupled to the one or more hardware processors, the memory comprises aggregation logic that, when executed by the one or more hardware processors, receives analytic data from a plurality of electronic devices that are part of the network, the analytic data comprises information associated with network content being monitored for malware, the information associated with the network content includes one or more input attributes and one or more analysis attributes; correlation logic that, when executed by the one or more hardware processors, correlates the analytic data by at least determining whether a first analysis attribute provided by a first electronic device of the plurality of electronic devices matches a second analysis attribute provided by a second electronic device of the plurality of electronic devices; consolidation logic that, when executed by the one or more hardware processors and responsive to a determination that the first analysis attribute matches the second analysis attribute, consolidates input attributes associated with the first analysis attribute and the second analysis attribute; and display logic that, when executed by the one or more hardware processors, generates and provides information associated with the consolidated input attributes for display on an electronic device with display capabilities, the information further includes one or more images representing that the first analysis attribute detected by the first electronic device originated from network content analyzed by the second electronic device.
  • 22. The system of claim 21, wherein the correlation logic determines that the first analysis attribute matches the second analysis attribute based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 23. The system of claim 21, wherein the correlation logic determines that the first analysis attribute matches the second analysis attribute based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.
  • 24. A system for detecting a malware attack comprising: one or more hardware processors; and a memory coupled to the one or more hardware processors, the memory comprises aggregation logic that, when executed by the one or more hardware processors, receives analytic data from a plurality of electronic devices, the analytic data comprises information associated with network content being monitored for malware, the information includes one or more input attributes and one or more analysis attributes; correlation logic that, when executed by the one or more hardware processors, correlates the analytic data that comprises determining whether a first analysis attribute provided by a first electronic device of the plurality of electronic devices matches a second analysis attribute provided by a second electronic device of the plurality of electronic devices; consolidation logic that, when executed by the one or more hardware processors and responsive to determining that the first analysis attribute matches the second analysis attribute, consolidates input attributes associated with the first analysis attribute and the second analysis attribute for subsequent display; and display logic that, when executed by the one or more hardware processors, generates and provides display information based on the input attributes that identifies (i) an entry point of the network content into a network as detected by the first electronic device and (ii) information related to a migration of the network content as monitored by the second electronic device.
  • 25. The system of claim 24, wherein the aggregation logic, when executed by the one or more hardware processors and in response to a triggering event, sends a message requesting the analytic data to at least one of the plurality of electronic devices.
  • 26. The system of claim 24, wherein the correlation logic determines that the first analysis attribute matches the second analysis attribute based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 27. The system of claim 24, wherein the correlation logic determines that the first analysis attribute matches the second analysis attribute based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.
  • 28. The system of claim 24, wherein the display logic is configured to display the entry point as an image representing an origin of the network content.
  • 29. A non-transitory storage medium including software that, when executed by one or more hardware processors, performs a plurality of operations, comprising: correlating analytic data received from a plurality of electronic devices that are part of a network by determining whether a first analysis attribute in analytic data provided from a first electronic device of the plurality of electronic devices matches a second analysis attribute in analytic data provided from a second electronic device of the plurality of electronic devices, wherein the analytic data from the first electronic device and the second electronic device comprises one or more input attributes and one or more analysis attributes that are based on network content being monitored for malware; responsive to determining that the first analysis attribute provided from the first electronic device matches the second analysis attribute provided from the second electronic device, consolidating input attributes associated with the first analysis attribute and the second analysis attribute for subsequent display; and transmitting information associated with the consolidated input attributes for display on an electronic device with display capabilities, the information including (i) one or more images that represent a migration of network content associated with both the first analysis attribute and the second analysis attribute through a network and (ii) an indication that a first network content associated with the first analysis attribute is the same as or related to a second network content associated with the second analysis attribute.
  • 30. The non-transitory storage medium of claim 29, wherein the first analysis attribute corresponds to a Uniform Resource Locator (URL) within the first network content being a first type of network content and the second analysis attribute corresponds to the URL within the second network content being a second type of network content.
  • 31. The non-transitory storage medium of claim 29, wherein the analytic data includes information associated with network content being monitored for malware.
  • 32. The non-transitory storage medium of claim 29, wherein the first analysis attribute comprises at least one of (i) information related to portions of network content that are analyzed for malware and (ii) one or more anomalous behaviors observed during malware detection analysis of the information.
  • 33. The non-transitory storage medium of claim 32, wherein the input attributes comprise at least one of (i) information identifying a destination of the network content and (ii) information identifying a source of the network content.
  • 34. The non-transitory storage medium of claim 29, wherein the software, when executed by one or more hardware processors, determines that the first analysis attribute in analytic data provided from the first electronic device matches the second analysis attribute in analytic data provided from the second electronic device based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 35. The non-transitory storage medium of claim 29, wherein the software, when executed by one or more hardware processors, determines that the first analysis attribute in analytic data provided from the first electronic device matches the second analysis attribute in analytic data provided from the second electronic device based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.
  • 36. A non-transitory storage medium including software that, when executed by one or more hardware processors, performs operations, comprising: correlating analytic data received from a plurality of electronic devices by determining whether a first analysis attribute in analytic data provided from a first electronic device of the plurality of electronic devices matches a second analysis attribute in analytic data provided from a second electronic device of the plurality of electronic devices, wherein the analytic data from the first electronic device and the second electronic device comprises information associated with network content being monitored for malware, wherein the information includes one or more input attributes and one or more analysis attributes; responsive to determining that the first analysis attribute provided from the first electronic device matches the second analysis attribute provided from the second electronic device, consolidating input attributes associated with the first analysis attribute and the second analysis attribute for subsequent display; and outputting, for display, information representing the consolidated input attributes to identify that the second analysis attribute detected by the second electronic device is based on network content that is also received by the first electronic device.
  • 37. The non-transitory storage medium of claim 36, wherein the network content is a Uniform Resource Locator (URL).
  • 38. The non-transitory storage medium of claim 36, wherein the software, when executed by one or more hardware processors, determines that the first analysis attribute matches the second analysis attribute based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 39. The non-transitory storage medium of claim 36, wherein the software, when executed by one or more hardware processors, determines that the first analysis attribute matches the second analysis attribute based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.
  • 40. A system comprising: one or more hardware processors; and a memory including one or more software modules that, when executed by the one or more hardware processors, correlate analytic data received from a plurality of sources to (1) determine if a first analysis attribute, which is associated with a first network content analyzed by a first source external to the system and provided from the first source as a portion of the analytic data, matches a second analysis attribute associated with a second network content analyzed by a second source external to the system and provided from the second source as a portion of the analytic data, and, if a match is detected, (2) consolidate input attributes associated with the first analysis attribute and the second analysis attribute, and (3) output, for display, information representing the consolidated input attributes to identify that the first network content associated with the first analysis attribute analyzed by the first source is the same as or related to the second network content associated with the second analysis attribute analyzed by the second source.
  • 41. The method of claim 8, wherein the identifier is a time-stamp value.
  • 42. The system of claim 40, wherein the first analysis attribute corresponds to a Uniform Resource Locator (URL) within the first network content being a first type of network content and the second analysis attribute corresponds to the URL within the second network content being a second type of network content.
  • 43. The system of claim 42, wherein the first type of network content includes an electronic mail (email) message that is analyzed for malware by the first source being a first electronic device and the second type of network content includes network traffic that is analyzed for malware by the second source being a second electronic device.
  • 44. The system of claim 40, wherein the first analysis attribute comprises at least one of (i) information related to a portion of the network content that is analyzed for malware by the first source being a first electronic device and (ii) one or more anomalous behaviors observed during malware detection analysis of the information.
  • 45. The system of claim 44, wherein the one or more input attributes associated with the first analysis attribute comprises at least one of (i) information identifying a destination of the first network content and (ii) information identifying a source of the first network content.
  • 46. The system of claim 40, wherein analytic data received from the first source being a first electronic device further comprises an identifier that identifies the first network content.
  • 47. The system of claim 46, wherein the identifier is a time-stamp value.
  • 48. The system of claim 40, wherein the one or more software modules stored in the memory, when executed by the one or more hardware processors, determine that the first analysis attribute matches the second analysis attribute based on a finding of similarities between the first analysis attribute and the second analysis attribute.
  • 49. The system of claim 40, wherein the one or more software modules stored in the memory, when executed by the one or more hardware processors, determine that the first analysis attribute matches the second analysis attribute based on a finding that a hash value of the first analysis attribute is identical to a hash value of the second analysis attribute.