System and method for bootkit detection

Information

  • Patent Grant
  • 11763004
  • Patent Number
    11,763,004
  • Date Filed
    Thursday, September 27, 2018
  • Date Issued
    Tuesday, September 19, 2023
Abstract
An embodiment of a computerized method for detecting bootkits is described. Herein, a lowest level software component within a software stack, such as a lowest software driver within a disk driver stack, is determined. The lowest level software component is in communication with a hardware abstraction layer of a storage device. Thereafter, stored information is extracted from the storage device via the lowest level software component, and representative data based on the stored information, such as execution hashes, is generated. The generated data is analyzed to determine whether the stored information includes a bootkit.
Description
FIELD

Embodiments of the disclosure relate to the field of cyber security. More specifically, embodiments of the disclosure relate to a system and computerized method for scalable bootkit detection.


GENERAL BACKGROUND

While the cyber threat landscape continues to evolve at an ever-increasing pace, the exploitation of basic input/output system (BIOS) boot processes remains a threat to enterprises around the world. BIOS exploitation may be accomplished by a threat actor using a “bootkit,” namely an advanced and specialized form of malware that misappropriates execution early in the boot process, making it difficult to identify within a network device. As a bootkit is designed to tamper with the boot process before operating system (OS) execution, this type of malware is often insidious within a network device, and in some cases, persists despite remediation attempts made by security administrators. Therefore, early detection of bootkit malware is essential in protecting a network device from harm.


Reliable and timely detection of bootkit malware for thousands of network devices operating as part of an enterprise network has been difficult for a variety of reasons, especially surrounding the unreliability and impracticality of reading boot records from computers and other network devices of the enterprise network. There are two types of boot records: a Master Boot Record (MBR) and multiple Volume Boot Records (VBRs). The MBR is the first boot sector located at a starting address of a partitioned storage device such as a hard disk drive, solid-state component array, or a removable drive. The MBR tends to store (i) information associated with logical partitions of the storage device and (ii) executable boot code that functions as a first stage boot loader for the installed operating system. A VBR is the first boot sector stored at a particular partition on the storage device, which contains the necessary computer code to start the boot process. For example, the VBR may include executable boot code that is initialized by the MBR to begin the actual loading of the operating system.
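For illustration only (not part of the claimed embodiments), the MBR layout described above, namely 446 bytes of first-stage boot code, four 16-byte partition entries, and a two-byte boot signature, can be parsed with a short Python sketch:

```python
import struct

def parse_mbr(sector: bytes):
    """Parse a 512-byte Master Boot Record into boot code,
    partition entries, and the trailing boot signature."""
    assert len(sector) == 512
    boot_code = sector[:446]                  # first-stage boot loader code
    entries = []
    for i in range(4):                        # four 16-byte partition entries
        off = 446 + 16 * i
        status, p_type, lba_start, num_sectors = struct.unpack_from(
            "<B3xB3xII", sector, off)         # skip the two 3-byte CHS fields
        entries.append({"bootable": status == 0x80,
                        "type": p_type,
                        "lba_start": lba_start,
                        "sectors": num_sectors})
    signature = struct.unpack_from("<H", sector, 510)[0]
    return boot_code, entries, signature      # signature should be 0xAA55
```

A bootkit that overwrites the 446-byte boot-code region leaves the partition table and signature intact, which is one reason byte-level inspection of this region matters.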


With respect to the unreliability of reading boot records for malware detection, by their nature, bootkits are notorious for hooking legitimate Application Programming Interface (API) calls in an attempt to hide bytes overwritten in the boot code. As a result, collecting the bytes by reading a disk from user space is unreliable, as a bootkit may be intercepting the reads and returning code that appears to be (but is not) legitimate.


With respect to the impracticality of reading boot records from all network devices of the enterprise network for malware detection, given that compromised enterprise networks may support thousands of network devices and each network device includes multiple boot records, a determination as to whether each network device is infected with a bootkit is quite challenging. Currently, a malware analyst could acquire a disk image and then reverse engineer the boot bytes to determine if any malicious code is present in the boot chain. Performed manually, this analysis would require a large team of skilled analysts, which is not easily scalable and greatly increases the cost of protecting an enterprise network from a bootkit attack.


Ultimately, the problems associated with the conventional review of the boot records for bootkit malware are the following: (1) collection of boot records from the network devices is unreliable; (2) analysis of the boot records is manual only, and does not take into account any behavioral analyses; and (3) the inability to analyze thousands or even tens of thousands of boot records in a timely manner without significant costs and resources.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1A is a first exemplary block diagram of a cyberattack detection system deploying a centralized bootkit analysis system adapted to receive data extracted from targeted boot records.



FIG. 1B is a second exemplary block diagram of a cyberattack detection system with the bootkit analysis system deployed locally to the network device being monitored.



FIG. 2 is an exemplary block diagram of a network device including the software agent and data recovery module of FIG. 1A.



FIG. 3 is an exemplary block diagram of a network device deployed as part of a cloud service and including the bootkit analysis system of FIG. 1A.



FIG. 4 is an exemplary block diagram of a logical representation of the operability of the boot data collection driver operating with the software agent of FIG. 2.



FIG. 5 is an exemplary embodiment of a logical representation of operations conducted by emulator logic of the bootkit analysis system of FIG. 3 in generating an execution hash for analysis by de-duplicator logic and classifier logic of the bootkit analysis system of FIG. 2.



FIG. 6 is an illustrative embodiment of the operations conducted by the bootkit analysis system of FIG. 3.





DETAILED DESCRIPTION

Various embodiments of the disclosure relate to a software module installed to operate with (or as part of) a software agent to assist in the detection of malware and/or attempted cyberattacks on a network device (e.g., endpoint). According to one embodiment of the disclosure, the software module (referred to as a “data recovery module”) features a driver that is configured to extract raw data stored in a storage device (e.g., hard disk drive, solid-state component array, removable drive, etc.). Thereafter, the extracted raw data is evaluated, such as through simulated processing by emulator logic, and a determination is subsequently made as to whether a portion of the extracted raw data corresponds to malicious bootstrapping code operating as a bootkit. Herein, the data recovery module may be implemented as code integrated as part of the software agent or may be implemented as a software plug-in for the software agent, where the plug-in controls the data extraction from the storage device.


As described below, the data recovery module is configured to obtain information associated with a storage driver stack pertaining to an endpoint under analysis. As an illustrative example, the storage driver stack may correspond to a disk driver stack provided by an operating system (OS) of the endpoint, such as a Windows® OS. Based on this driver stack information, the data recovery module (i) determines a “lowest level” component within the storage driver stack and (ii) extracts data from the storage device via the lowest level component (referred to as “extracted data”).


According to one embodiment of the disclosure, the “lowest level” component may correspond to the software driver in direct communication with a controller for the storage device (e.g., a memory controller such as a disk controller). As an illustrative example, the “lowest level” component may be a software driver that does not utilize any other software drivers in the storage (disk) driver stack before communications with a hardware abstraction layer for the storage (disk) device, such as an intermediary controller or the storage device itself.
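As a rough illustrative model only (the patent does not specify an implementation, and the driver names below are hypothetical), determining the "lowest level" component amounts to walking the storage driver stack downward until no further driver sits between it and the hardware abstraction layer:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StackEntry:
    """One driver in a (toy) storage driver stack."""
    name: str
    lower: Optional["StackEntry"]  # next driver down, or None at the HAL

def lowest_level_driver(top: StackEntry) -> StackEntry:
    """Walk the stack downward; the lowest level component is the driver
    with nothing beneath it, i.e., the one in direct communication with
    the disk controller / hardware abstraction layer."""
    d = top
    while d.lower is not None:
        d = d.lower
    return d
```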


As described below, the extracted data may include stored information read from at least one boot record maintained by the storage device, such as a Master Boot Record (MBR) and/or a Volume Boot Record (VBR), for example. This read operation may be a single read operation or iterative read operations to extract data from multiple (two or more) or all of the boot records (e.g., the MBR and all of the VBRs). The extracted data associated with each boot record may be referred to as a “boot sample.” For one embodiment of the disclosure, the boot sample may include the data extracted from the entire boot record. As another embodiment, however, the boot sample merely includes a portion of data within a particular boot record, such as one or more bytes of data that correspond to a piece of code accessed from the boot record.


By directly accessing the lowest level component, the data recovery module bypasses the rest of the storage driver stack, as well as various types of user space hooks, which improves the accuracy and trustworthiness of the boot samples provided for analysis. Alternatively, in lieu of the “lowest level” component, the data recovery module may be configured to access a “low-level” component, namely the lowest level component or a near lowest level component, being a software component positioned in close proximity to the hardware to reduce the risk of hijacking and increase the trustworthiness of boot sector data. Hence, a first indicator of compromise (IOC) for detecting a compromised boot system may be based, at least in part, on logic within the software agent or a bootkit analysis system (described below) determining that a boot sample being part of the extracted data is different from data retrieved from the particular boot record via processes running in the user space (i.e., not through direct access via the lowest level component of the storage driver stack). The first IOC may be provided to the bootkit analysis system as metadata or over a separate communication channel (not shown).
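The first IOC can be sketched as a byte-for-byte comparison between the two views of the same boot record (illustrative only; the function and return shape are not from the patent):

```python
def first_ioc(low_level: bytes, user_space: bytes):
    """Compare the trusted low-level read of a boot record against the
    user-space read of the same record. A bootkit hooking user-space
    reads returns 'clean' bytes, so any mismatch is suspicious.
    Returns (triggered, offset_of_first_difference)."""
    for off, (a, b) in enumerate(zip(low_level, user_space)):
        if a != b:
            return True, off
    if len(low_level) != len(user_space):
        return True, min(len(low_level), len(user_space))
    return False, None
```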


Upon receipt of the boot samples from the storage device, the endpoint provides these boot samples to the bootkit analysis system. According to one embodiment, the bootkit analysis system may be implemented locally within the endpoint and is adapted to receive boot samples from one or more remote sources. Alternatively, according to another embodiment of the disclosure and described herein, the bootkit analysis system may be implemented remotely from the endpoint, where the bootkit analysis system may be implemented as (i) a separate, on-premises network device on the enterprise network or (ii) logic within a network device supporting a cloud service provided by a private or public cloud network. For the cloud service deployment, the bootkit analysis system may be adapted to receive the boot samples, and optionally metadata associated with the boot samples (e.g., name of the corresponding boot record, identifier of the software agent, and/or an identifier of the endpoint such as a media access control “MAC” address or an Internet Protocol “IP” address). Herein, for this embodiment, the bootkit analysis system may be further adapted to receive boot samples from multiple software agents installed on different endpoints for use in detecting a potential bootkit being installed in any of these endpoints as well.


Herein, the bootkit analysis system comprises emulator logic that simulates processing of each boot sample, namely data bytes corresponding to boot instructions maintained in the corresponding boot record (e.g., MBR, a particular VBR, etc.), to generate an execution hash associated with these boot instructions. More specifically, as soon as or after the boot samples are collected from the storage device, the software agent (or optionally the data recovery module) provides the boot samples to the emulator logic of the bootkit analysis system. The emulator logic captures the high-level functionality during simulated processing of each of the boot samples, where the high-level functionality includes behaviors such as memory reads, memory writes, and/or other interrupts. Each of these behaviors may be represented by one or more instructions, such as one or more assembly instructions. The assembly instructions may include but are not limited or restricted to mnemonics. A “mnemonic” is an abbreviation (symbol or name) used to specify an operation or function which, according to some embodiments, may be entered in the operation code field of an assembler instruction. Examples of certain mnemonics may include the following: AND (logical “and”), OR (logical “or”), SHL (logical “shift left”), SHR (logical “shift right”), and/or MOV (e.g., logical “move”).


During emulation, the emulator logic may be configured to perform a logical operation on the mnemonic of the instructions to produce a data representation, namely the emulator logic is configured to conduct a one-way hash operation on the mnemonic of the instructions, which produces a resultant hash value representative of the boot sample being executed during a boot cycle. The resultant hash value, referred to as an “execution hash,” is generated from continued hashing of mnemonics associated with the instructions being determined through the simulated processing of a boot sample by the emulator logic. Hence, according to one embodiment of the disclosure, each execution hash corresponds to a particular boot sample. However, as another embodiment, an execution hash may correspond to hash results of multiple (two or more) boot samples.
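A minimal sketch of the execution hash, folding the mnemonic of each emulated instruction into a running one-way hash (SHA-256 is an assumption; the patent does not name a specific hash function):

```python
import hashlib

def execution_hash(mnemonics):
    """Continued hashing of the mnemonics observed during simulated
    processing of one boot sample; the final digest is the sample's
    'execution hash'."""
    h = hashlib.sha256()
    for m in mnemonics:                      # e.g. ["MOV", "SHL", "AND", ...]
        h.update(m.encode("ascii"))
    return h.hexdigest()
```

Because the hash is updated per instruction, two boot samples that execute the same mnemonics in a different order produce different execution hashes.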


Besides the emulator logic, the bootkit analysis system further features de-duplicator logic and classifier logic. The de-duplicator logic receives a set (e.g., two or more) of execution hashes, which are generated by the emulator logic based on the received boot samples, and compares each of these execution hashes to a plurality of execution hashes associated with previously detected boot samples (referred to as “execution hash intelligence”). The execution hash intelligence may include a plurality of known benign execution hashes (referred to as a “white list” of execution hashes) and a plurality of known malicious execution hashes (referred to as a “black list” of execution hashes). Additionally, the execution hash intelligence may include execution hashes that are highly correlated (e.g., identical or substantially similar) to execution hashes associated with boot records being returned by the software agent.


More specifically, besides white list and black list review, the de-duplicator logic may be configured to identify and eliminate repetitive execution hashes associated with the received boot samples corresponding to boot records maintained at the endpoint of a customer network protected by the software agent. It is contemplated that a count may be maintained to monitor the number of repetitive execution hashes. Given the large volume of boot samples that may be analyzed by a centralized bootkit analysis system associated with an entire enterprise network, this deduplication operation is conducted to create a representative (reduced) set of execution hashes and avoid wasted resources in analyzing the number of identical execution hashes.


As a result, each “matching” execution hash (e.g., an execution hash that is identical to or has at least a prescribed level of correlation with another execution hash in the execution hash intelligence) is removed from the set of execution hashes, thereby creating a reduced set of execution hashes. The prescribed level of correlation may be a static value or a programmable value to adjust for false positives/false negatives experienced by the cyberattack detection system. Also, the results of the comparisons performed by the de-duplicator logic may be used to update the execution hash intelligence (e.g., number of detections, type of execution hash, etc.).
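A sketch of the deduplication step, with the execution hash intelligence modeled as a simple set and a repeat counter (an assumption; the patent does not prescribe a data structure or an exact-match policy):

```python
def deduplicate(execution_hashes, intelligence):
    """Drop every execution hash already present in the execution hash
    intelligence, counting repeats, and return the reduced set of
    previously unseen hashes for classification."""
    repeats = {}
    reduced = []
    for h in execution_hashes:
        if h in intelligence:
            repeats[h] = repeats.get(h, 0) + 1   # matching hash: count, skip
        else:
            intelligence.add(h)                  # update the intelligence store
            reduced.append(h)
    return reduced, repeats
```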


Thereafter, each of the reduced set of execution hashes may be analyzed by the classifier logic, and based on such analysis, may be determined to be associated with one or more boot samples classified as malicious, suspicious or benign. For instance, a second IOC for detecting a compromised boot system may be determined by the de-duplicator and classifier logic in response to detecting one or more execution hashes within the enterprise network that are unique or uncommon (e.g., less than 5 prior detected hashes), where these execution hashes denote differences in boot instructions from recognized (and expected) execution hashes that may be due to the presence of a bootkit.


Additionally, during simulated processing of the boot samples by the emulator logic, resultant behaviors associated with such simulated processing are identified and logged. The classifier logic may compare the resultant behaviors to behaviors associated with normal or expected OS bootstrapping generated from prior analyses (human and machine) to identify any behavioral deviations. For example, detection of suspicious behaviors resulting from the simulated processing, such as overwriting critical data structures such as an interrupt vector table (IVT), decoding and executing data from disk, suspicious screen outputs from the boot code, and/or modifying certain files or data on the storage device, may be determined by the classifier as malicious behavior denoting a bootkit. The type and/or number of behavioral deviations may operate as a third IOC utilized by the classifier logic for detecting a compromised boot system.
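The behavioral-deviation check can be sketched as a set intersection against a catalog of suspicious boot-time behaviors (the behavior labels below are hypothetical stand-ins for the logged behaviors):

```python
# Hypothetical labels for the suspicious behaviors named in the text.
SUSPICIOUS_BEHAVIORS = {
    "overwrite_ivt",              # overwriting the interrupt vector table
    "decode_exec_from_disk",      # decoding and executing data from disk
    "suspicious_screen_output",   # unexpected output from boot code
    "modify_protected_files",     # modifying certain files on the device
}

def behavioral_ioc(observed_behaviors):
    """IOC #3: return the behaviors logged during emulation that deviate
    from normal OS bootstrapping."""
    return set(observed_behaviors) & SUSPICIOUS_BEHAVIORS
```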


Based on the IOCs described above, the classifier logic determines whether a boot sample is “malicious,” based on a weighting and scoring mechanism dependent on any combination of the above-described IOCs having been detected, and if so, the classifier logic signals the reporting logic to issue an alert. Similarly, upon determining that the IOCs identify a boot sample under analysis is benign (i.e., non-malicious), the classifier logic discontinues further analyses associated with the boot sample. However, where the classifier logic determines that the IOCs identify the boot sample as neither “malicious” nor “benign” (i.e., “suspicious”), further analyses may be performed on the boot sample by the classifier logic or other logic within or outside of the bootkit analysis system. Such further analyses may be automated and conducted by another analysis system or may be conducted by a security analyst. Additionally, execution hashes associated with malicious and/or benign boot samples may be stored in the black list and/or white list forming the execution hash intelligence described above. These lists may be utilized, at least in part, by the classifier logic as another IOC in detecting a bootkit, especially any execution hashes that represent boot instructions where such tampering of the instructions or the instruction sequence, by itself, identifies the boot sample as malicious.
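A sketch of the weighting and scoring mechanism; the IOC names, weights, and thresholds below are assumptions, since the patent leaves the exact values unspecified:

```python
# Hypothetical weights and thresholds for the three IOCs described above.
WEIGHTS = {"ioc_read_mismatch": 0.5, "ioc_rare_hash": 0.3, "ioc_behavior": 0.4}
MALICIOUS_THRESHOLD = 0.7
BENIGN_THRESHOLD = 0.2

def classify(detected_iocs):
    """Combine the detected IOCs into a weighted score and map the score
    to a verdict for the boot sample."""
    score = sum(WEIGHTS[i] for i in detected_iocs if i in WEIGHTS)
    if score >= MALICIOUS_THRESHOLD:
        return "malicious"       # reporting logic issues an alert
    if score <= BENIGN_THRESHOLD:
        return "benign"          # discontinue further analysis
    return "suspicious"          # route to further automated or manual analysis
```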


Based on the foregoing, embodiments of the disclosure are designed to collect boot records from the network device via a low-level component to increase reliability of the boot record data. Furthermore, the analysis of the boot records takes into account behavioral analyses and, with the emulator logic and de-duplicator logic, provides an ability to analyze thousands or even tens of thousands of boot records in a timely manner without significant costs and resources.


I. Terminology

In the following description, certain terminology is used to describe aspects of the invention. For example, in certain situations, the terms “logic” and “component” are representative of hardware, firmware and/or software that is configured to perform one or more functions. As hardware, logic (or a component) may include circuitry having data processing or storage functionality. Examples of such processing or storage circuitry may include, but are not limited or restricted to, the following: a processor; one or more processor cores; a programmable gate array; a controller (network, memory, etc.); an application specific integrated circuit; receiver, transmitter and/or transceiver circuitry; semiconductor memory; combinatorial logic; or combinations of one or more of the above components.


Alternatively, the logic (or component) may be in the form of one or more software modules, such as executable code in the form of an operating system, an executable application, code representing a hardware I/O component, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a plug-in, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. These software modules may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of a “non-transitory storage medium” may include, but are not limited or restricted to, a programmable circuit; mass storage that includes (a) non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”), or (b) persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or portable memory device; and/or a semiconductor memory. As firmware, the logic (or component) may be executable code stored in persistent storage.


A “network device” may refer to a physical electronic device with network connectivity. Examples of a network device may include, but are not limited or restricted to the following: a server; a router or other signal propagation networking equipment (e.g., a wireless or wired access point); or an endpoint (e.g., a stationary or portable computer including a desktop computer, laptop, electronic reader, netbook or tablet; a smart phone; a video-game console; or wearable technology (e.g., watch phone, etc.)). Alternatively, the network device may refer to a virtual device being a collection of software operating as the network device in cooperation with an operating system (OS).


The “endpoint,” defined above, may be a physical or virtual network device equipped with at least an operating system (OS), one or more applications, and a software agent that, upon execution on the endpoint, may operate to identify malicious (or non-malicious) content for use in determining whether the endpoint has been compromised (e.g., currently subjected to a cybersecurity attack). The software agent may be configured to operate on a continuous basis when deployed as daemon software or operate on a noncontinuous basis (e.g., periodic or activated in response to detection of a triggering event). In particular, the “software agent” includes a software module, such as a plug-in for example, that extracts data from the storage device for bootkit analysis.


A “plug-in” generally refers to a software component designed to enhance (add, modify, tune or otherwise configure) a specific functionality or capability to logic such as, for example, the software agent. In one embodiment, the plug-in may be configured to communicate with the software agent through an application program interface (API). For this illustrative embodiment, the plug-in may be configured to collect and analyze information from one or more sources within the network device. This information may include raw data from a storage device, such as extracted data (e.g., bytes of code) from its MBR and/or one or more VBRs. The plug-in can be readily customized or updated without modifying the software agent.


As briefly described above, the term “malware” may be broadly construed as malicious software that can cause a malicious communication or activity that initiates or furthers an attack (hereinafter, “cyberattack”). Malware may prompt or cause unauthorized, unexpected, anomalous, unintended and/or unwanted behaviors (generally “attack-oriented behaviors”) or operations constituting a security compromise of information infrastructure. For instance, malware may correspond to a type of malicious computer code that, upon execution and as an illustrative example, takes advantage of a vulnerability in a network, network device or software, for example, to gain unauthorized access, harm or co-opt operation of a network device or misappropriate, modify or delete data. Alternatively, as another illustrative example, malware may correspond to information (e.g., executable code, script(s), data, command(s), etc.) that is designed to cause a network device to experience attack-oriented behaviors. The attack-oriented behaviors may include a communication-based anomaly or an execution-based anomaly, which, for example, could (1) alter the functionality of a network device in an atypical and unauthorized manner; and/or (2) provide unwanted functionality which may be generally acceptable in another context. A “bootkit” is a type of malware that initiates the cyberattack early in the boot cycle of an endpoint.


In certain instances, the terms “compare,” “comparing,” “comparison,” or other tenses thereof generally mean determining if a match (e.g., identical or at least having a prescribed level of correlation) is achieved between two items, where one of the items may include a representation of instructions (e.g., a hash value) associated with boot code under analysis.


The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. Also, the term “message” may be one or more packets or frames, a file, a command or series of commands, or any collection of bits having the prescribed format. The term “transmission medium” generally refers to a physical or logical communication link (or path) between two or more network devices. For instance, as a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), may be used.


Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.


As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and is not intended to limit the invention to the specific embodiments shown and described.


II. General Architecture

Referring to FIG. 1A, a first exemplary block diagram of a cyberattack detection system 100 is shown. For this embodiment, the cyberattack detection system 100 includes a network device (e.g., endpoint) 1101, which is implemented with a software agent 120 to detect a cyberattack being attempted on the endpoint 1101. Herein, for bootkit detection, the software agent 120 may be configured to collect data stored within a storage device 130 of the endpoint 1101 for malware analysis in response to a triggering event that may be periodic (e.g., every boot cycle, at prescribed times during or after business hours, etc.) or aperiodic (e.g., as requested by security personnel, responsive to an update to privileged code in the endpoint 1101, etc.). As shown, the storage device 130 may correspond to a hard-disk drive, one or more solid-state devices (SSDs) such as an array of SSDs (e.g., flash devices, etc.), a Universal Serial Bus (USB) mass storage device, or the like.


As further shown in FIG. 1A, a software module 140 (referred to as a “data recovery module”) is provided to enhance operability of the software agent 120. The data recovery module 140 may be implemented as a software component of the software agent 120 or as a separate plug-in that is communicatively coupled to the software agent 120. The data recovery module 140 features a driver 150 that is configured to extract data 155 stored within the storage device 130 via a lowest level component 160 within a storage driver stack maintained by the network device 1101. The extracted data 155 may be obtained through one or more read messages from the driver 150 to a hardware abstraction layer 165 of the storage device 130 (e.g., a type of controller such as a memory (disk) controller), which is configured to access content from one or more boot records 1701-170M (M≥1) stored in the storage device 130.


More specifically, the driver 150 is configured to directly access the low (e.g., lowest) level software driver 160 within the storage driver stack, such as a software driver in direct communication with the memory controller 165. Via the lowest level software driver 160, the driver 150 may be configured to access stored information (content) within one or more of the boot records 1701-170M (M≥1) maintained by the storage device 130. For example, the driver 150 may conduct one or more read queries to extract data from “M” boot records 1701-170M, which may include a Master Boot Record (MBR) 172 and/or one or more Volume Boot Records (VBRs) 174. The extracted data associated with each boot record 1701-170M is referred to as a “boot sample” 1571-157M, respectively. By directly accessing the lowest level software driver 160 within the storage driver stack, the driver 150 is able to bypass a remainder of the software drivers forming the storage driver stack (see FIG. 4) that may have been “hijacked” by malware, or otherwise may be malicious and configured to intercept data requests.


Upon receipt of the extracted data 155 corresponding to the boot samples 1571-157M from the storage device 130, the software agent 120 provides the boot samples 1571-157M to a bootkit analysis system 180. Herein, for this embodiment of the disclosure, the bootkit analysis system 180 may be implemented as a centralized bootkit analysis system (BAS) as shown. In particular, the bootkit analysis system 180 is configured to receive the boot samples 1571-157M from the network device 1101 for analysis as to whether any of the boot samples 1571-157M includes bootkit malware. Additionally, the bootkit analysis system 180 may receive boot samples from other network devices (e.g., network devices 1102-110N, where N≥2) that may be utilized to determine IOCs associated with an incoming boot sample (e.g., boot sample 1571) identifying that the boot sample 1571 potentially includes bootkit malware.


Herein, the bootkit analysis system 180 may be deployed as (i) a separate, on-premises network device on the enterprise network or (ii) logic within a network device supporting a cloud service provided by a cloud network 190, such as a private cloud network or a public cloud network as shown. Software may be deployed in network devices 1101-110N to extract and provide boot samples to the bootkit analysis system 180 for processing, such as the software agent 120 deployed in network device 1101 that, in combination with the data recovery module 140, provides the boot samples 1571-157M to the bootkit analysis system 180. The bootkit analysis system 180 operates to identify IOCs that may signify a presence of bootkit malware within boot records of a monitored network device, such as (1) one or more of the boot samples 1571-157M (e.g., boot sample 1571) being different from the same data contained in the boot record 1701 retrieved from the user space; (2) unique execution hashes or uncommon execution hashes (e.g., execution hashes detected less than 5 times previously) denoting different boot instruction sequences among the network devices 1101-110N; and/or (3) behaviors conducted by a particular boot sample 1571... or 157M that deviate from normal (or expected) OS bootstrapping.


Referring now to FIG. 1B, a second exemplary block diagram of the cyberattack detection system 100 deploying the bootkit analysis system 180 is shown. In lieu of a centralized deployment, as shown in FIG. 1A, the bootkit analysis system 180 may be deployed as part of the software agent 120 installed on the network device 1101. The software agent 120 may communicate with other software agents within the network devices 1101-110N to collect information needed for IOC determination. Alternatively, the bootkit analysis system 180 may be a software module that is implemented separately from the software agent 120, but is deployed within the same network device 1101. The bootkit analysis system 180 operates to identify IOCs that are used to detect a presence of bootkit malware, as described above.


Referring now to FIG. 2, an exemplary embodiment of a logical representation of the network device 1101 including the software agent 120 of FIG. 1A is shown. Herein, for this embodiment, the network device 1101 operates as an endpoint, including a plurality of components 200, including a processor 210, a network interface 220, a memory 230 and the storage device 130, all of which are communicatively coupled together via a transmission medium 240. As shown, when deployed as a physical device, the components 200 may be at least partially encased in a housing 250, which may be made entirely or partially of a rigid material (e.g., hard plastic, metal, glass, composites, or any combination thereof) that protects these components from environmental conditions.


As shown, the software agent 120 and the data recovery module 140 are stored within the memory 230. The data recovery module 140 includes the driver 150, referred to as the "boot data collection driver" 150, which is configured to extract (raw) data from the storage device 130 while bypassing one or more drivers within the storage driver stack 270 made available by the operating system (OS) 260.


The processor 210 is a multi-purpose processing component that is configured to execute logic maintained within the memory 230, namely a non-transitory storage medium. One example of the processor 210 is an Intel® central processing unit (CPU) with an x86 instruction set architecture. Alternatively, the processor 210 may include another type of CPU, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field-programmable gate array, or any other hardware component with data processing capability.


The memory 230 may be implemented as persistent storage, including the software agent 120 with additional functionality provided by the data recovery module 140. The software agent 120, upon execution on the processor 210, operates as a daemon software application by conducting operations of retrieving stored contents within the storage device 130 in response to a triggering event, as described above. More specifically, the data recovery module 140 includes the boot data collection driver 150, which is configured to recover the extracted data 155, namely the boot samples 1571-157M. To do so, the boot data collection driver 150 accesses the OS 260 of the endpoint 1101 to obtain the storage driver stack 270 and determines the lowest level component 160 associated with the stack 270. Thereafter, the boot data collection driver 150 initiates a message to the lowest level component 160 requesting data maintained in one or more storage locations within the storage device 130 that are identified in the message. For example, the boot data collection driver 150 may initiate one or more READ messages for stored information within the MBR 172 and/or one or more VBRs 174. The stored information from each of the boot records 1701-170M, representing at least part of a corresponding boot sample 1571-157M, is subsequently provided from the endpoint 1101 to the bootkit analysis system 180.
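The read of a boot record described above can be sketched as follows. This is a minimal illustration, not the driver's implementation: the 512-byte sector size, the trailing 0x55AA boot signature check, and the in-memory stand-in for the raw device handle are assumptions made for the example.

```python
import io

SECTOR_SIZE = 512
BOOT_SIGNATURE = b"\x55\xaa"  # last two bytes of a well-formed MBR/VBR

def read_boot_record(dev, sector=0):
    """Read one 512-byte boot record from a seekable raw-device handle.

    In a real deployment `dev` might be a handle opened on the raw disk
    (requiring elevated privileges); here any binary stream works.
    """
    dev.seek(sector * SECTOR_SIZE)
    record = dev.read(SECTOR_SIZE)
    if len(record) != SECTOR_SIZE:
        raise IOError("short read on boot sector")
    return record

def has_boot_signature(record):
    """A well-formed boot sector ends with the 0x55AA signature."""
    return record[-2:] == BOOT_SIGNATURE

# Demo against an in-memory stand-in for the storage device.
fake_disk = io.BytesIO(b"\x00" * 510 + BOOT_SIGNATURE + b"\xff" * SECTOR_SIZE)
mbr = read_boot_record(fake_disk, sector=0)
print(has_boot_signature(mbr))  # True for this synthetic MBR
```

A VBR would be read the same way, with `sector` set to the starting sector of the partition identified in the MBR's partition table.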


Referring to FIG. 3, an exemplary embodiment of a logical representation of a network device 300 deploying the bootkit analysis system 180 of FIG. 1A is shown. Herein, for this embodiment, the network device 300 is deployed as part of a cloud network (e.g., public or private cloud network) and supports a cloud service for bootkit detection. For this embodiment, the network device 300 includes a plurality of components 305, including a processor 310, a network interface 320 and a memory 330, which are communicatively coupled together via a transmission medium 340. As shown, the memory 330 stores (i) the bootkit analysis system 180 including emulator logic 350, a boot sample data store 355, de-duplicator logic 360 and classifier logic 370; and (ii) reporting logic 380. Execution hash intelligence 390 may also be stored therein and accessible to the bootkit analysis system 180.


The processor 310 is a multi-purpose processing component that is configured to execute logic maintained within the bootkit analysis system 180. During execution of certain logic, the bootkit analysis system 180 is configured to receive boot samples 1571-157M from the network device 1101, temporarily store the received boot samples 1571-157M in the boot sample data store 355, and modify data within each of the boot samples 1571-157M to produce representative data for analysis. The representative data, referred to as an execution hash (described below), may be used to determine whether a sequence of operations performed in accordance with each boot sample 1571... or 157M differs from "normal" bootstrapping operations. Stated differently, the detection of the presence of a bootkit may be based, at least in part, on detection of differences between the sequence of operations to be performed in accordance with any of the boot samples 1571-157M and the sequence of operations performed in accordance with "normal" bootstrapping operations.


More specifically, the bootkit analysis system 180 includes the emulator logic 350 that simulates processing of each of the boot samples 1571-157M to determine high-level functionality of each of the boot samples 1571-157M. This functionality includes behaviors such as memory accesses, disk reads and writes, and interrupts. Each of these behaviors may be represented by one or more instructions, such as one or more assembly instructions. The assembly instructions may include, but are not limited or restricted to, the following mnemonics: AND (logical "and"), OR (logical "or"), SHL (logical "shift left"), SHR (logical "shift right"), and/or MOV ("move").


During emulation, according to one embodiment of the disclosure, the emulator logic 350 performs a one-way hash operation on the mnemonics of the determined instructions associated with each boot sample (e.g., boot sample 1571). The resultant hash value, referred to as an “execution hash,” is generated from continued hashing of mnemonics associated with the instructions being determined through the simulated processing of the boot sample 1571 by the emulator logic 350. Hence, an execution hash may be generated for each boot sample 1571-157M provided to the bootkit analysis system 180.
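The continued hashing of mnemonics described above can be sketched with an incremental one-way hash. The choice of SHA-256, the delimiter byte, and the example mnemonic traces are illustrative assumptions; the disclosure specifies only that a one-way hash is folded over the ordered mnemonic stream.

```python
import hashlib

def execution_hash(mnemonics):
    """Fold the ordered stream of instruction mnemonics observed during
    emulation into a single one-way hash (the "execution hash")."""
    h = hashlib.sha256()
    for m in mnemonics:
        h.update(m.encode())  # order matters: the hash encodes sequence
        h.update(b"|")        # delimiter so "MOV","AND" != "MO","VAND"
    return h.hexdigest()

# Two boot samples executing the same instructions in different order
# produce different execution hashes, exposing altered boot flow.
trace_a = ["MOV", "AND", "SHL", "MOV"]
trace_b = ["MOV", "SHL", "AND", "MOV"]
print(execution_hash(trace_a) == execution_hash(trace_b))  # False
```

Because the hash is updated instruction by instruction, it can be computed on the fly while the emulator steps through the boot sample, without retaining the full instruction log.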


As further shown in FIG. 3, the bootkit analysis system 180 further features the de-duplicator logic 360. The de-duplicator logic 360 is configured to (i) receive a set of execution hashes, each based on content from one of the boot samples 1571-157M received by the emulator logic 350, and (ii) eliminate execution hashes deemed to be repetitious, namely execution hashes that are not considered unique or uncommon in comparison with previously generated execution hashes 390 (referred to as "execution hash intelligence 390"). The elimination of repetitious execution hashes may involve consideration of execution hashes stored in a black list, a white list, and prior execution hashes analyzed for the boot samples from a particular software agent for evaluation by the bootkit analysis system 180. The elimination of repetitious execution hashes generates a reduced set of execution hashes and groups the execution hashes, which translates into a saving of processing and storage resources. It is noted that any detected matches with "malicious" execution hashes may be reported to the classifier logic 370 (or left as part of the reduced set of execution hashes) or routed to the reporting logic 380 to generate an alert, as described below.
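The de-duplication step can be modeled as a filter over the running execution hash intelligence. The counter-based representation and the threshold of 5 (taken from the "detected less than 5 times" example earlier in the text) are illustrative assumptions:

```python
from collections import Counter

UNCOMMON_THRESHOLD = 5  # per the text: hashes seen fewer than 5 times prior

def deduplicate(execution_hashes, intelligence):
    """Keep only unique/uncommon execution hashes; drop those already
    well represented in the execution-hash intelligence.

    `intelligence` is a Counter mapping hash -> prior sighting count,
    updated in place so later submissions see this batch's sightings.
    """
    reduced = []
    for h in execution_hashes:
        if intelligence[h] < UNCOMMON_THRESHOLD:
            reduced.append(h)   # unique or uncommon: keep for classification
        intelligence[h] += 1    # record this sighting either way
    return reduced

intel = Counter({"hash_common": 100})
reduced = deduplicate(["hash_common", "hash_new"], intel)
print(reduced)  # ['hash_new']
```

Only the reduced set reaches the classifier logic, which is the source of the processing and storage savings noted above.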


Thereafter, each execution hash of the reduced set of execution hashes is analyzed by the classifier logic 370. Based, at least in part, on such analysis, the classifier logic 370 determines whether data associated with the boot samples 1571-157M is malicious, suspicious or benign based on the presence or absence of notable distinctions between each execution hash from the reduced set of execution hashes and certain execution hashes within the execution hash intelligence 390 representative of normal (or expected) bootstrapping operations. The "malicious" or "benign" classification may be based on detected IOCs associated with one or more boot samples, such as matching a certain execution hash and/or sequences of execution hashes within the reduced set of execution hashes to execution hash(es) within the execution hash intelligence 390 to identify the boot sample(s) 1571-157M. When the result is non-determinative, the execution hash is classified as "suspicious."
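A coarse model of this three-way verdict is sketched below. It reduces the classification to set membership against known-malicious and known-benign intelligence; the actual classifier also weighs behavioral IOCs and hash-sequence correlation, so this is illustrative only.

```python
def classify(exec_hash, blacklist, whitelist):
    """Three-way verdict for one execution hash: match against
    known-malicious intelligence, known-benign intelligence, or
    fall through to "suspicious" when non-determinative."""
    if exec_hash in blacklist:
        return "malicious"
    if exec_hash in whitelist:
        return "benign"
    return "suspicious"  # queued for further, more in-depth analysis

# Hypothetical intelligence sets for illustration.
black = {"deadbeef"}
white = {"cafef00d"}
print(classify("deadbeef", black, white))  # malicious
print(classify("0badc0de", black, white))  # suspicious
```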


As described above, one (second) IOC for detecting a compromised boot system may be determined by the de-duplicator logic 360 and the classifier logic 370 in response to detecting that one or more execution hashes are unique or uncommon (e.g., fewer than 5 prior detected hashes), where these execution hashes denote differences in boot instructions from recognized (and expected) execution hashes that may be due to the presence of a bootkit. Additionally, during simulated processing of the boot samples by the emulator logic, resultant behaviors associated with such simulated processing are identified and logged. The classifier logic 370 may compare the resultant behaviors to behaviors associated with normal or expected OS bootstrapping generated from prior analyses (human and machine) to identify any behavioral deviations. For example, overwriting certain data structures such as an interrupt vector table (IVT), decoding and executing data from disk, suspicious screen outputs from the boot code, and/or modifying certain files or data on the storage device may be determined by the classifier logic 370 as malicious behavior denoting a bootkit. The type and/or number of behavioral deviations may operate as another (third) IOC utilized by the classifier logic for detecting a compromised boot system, while deviation between raw boot record data depending on the retrieval path may constitute another (first) IOC that is provided to the classifier as metadata with the boot samples or via a separate communication path.


Where the execution hash is suspicious, namely the level of correlation does not meet the correlation threshold in that there are deviations between the execution hash under analysis and the execution hashes within the execution hash intelligence 390, further (and more in-depth) analyses may be performed on the extracted data, in contrast to the discontinued processing of benign execution hashes. Where the execution hash is determined to be malicious, however, the classifier logic 370 communicates with the reporting logic 380 to generate an alert that is provided to a security administrator. The "alert" may be a displayable image or other communication to advise the security administrator of a potential bootkit attack. Additionally, malicious execution hashes and/or benign execution hashes may be stored in a black list and/or white list, respectively. These lists may be utilized, at least in part, by the classifier logic 370.


III. Exemplary Logical Layout

Referring now to FIG. 4, an exemplary block diagram of a logical representation of the operability of the boot data collection driver 150 operating with the software agent 120 of FIG. 2 is shown. Herein, the boot data collection driver 150 receives one or more instructions from the software agent (not shown) to retrieve raw data from the addressable storage device 130. Upon receipt of the instruction(s) to retrieve data from the storage device 130, the boot data collection driver 150 initiates a request (e.g., an API call) to an OS (e.g., Windows® OS) of the network device for information 415 associated with the storage driver stack 410. The OS of the network device returns the stack information 415, which is visually represented in the figure as an array of drivers extending from the lowest level of the storage driver stack 410 (e.g., lowest storage driver 420) up to the software driver 440. The storage driver stack 410 illustrates an order of communication starting with the software driver 440 and proceeding to the lowest storage driver 420 via an intermediary software driver 430. As shown, the intermediary software driver 430 is malicious, including bootkit malware 450.


Herein, based on the stack information 415, the boot data collection driver 150 determines a lowest level component associated with the storage driver stack 410, such as the lowest storage driver 420 as illustrated. It is contemplated, however, that a stack representation of other software components, besides software drivers per se, may be used in bypassing secondary software components for direct access to the storage device 130. In the Windows® OS architecture, information associated with the storage driver stack 410 is available through access via a published API.


Thereafter, the boot data collection driver 150 initiates a request 460 to the lowest storage driver 420. The request 460 may correspond to one or more READ request messages for data maintained in one or more selected storage locations within the storage device 130. For example, the boot data collection driver 150 may initiate a first READ request 460 for data bytes within the MBR 172 (e.g., boot sample 1571) via the memory controller 165 and/or other READ requests 460 for data bytes within the VBR(s) 174 (e.g., boot sample 1572...) maintained in the storage device 130. These data bytes, namely extracted data 470 including boot samples 1571-157M, are returned to the boot data collection driver 150 via the lowest storage driver 420. Thereafter, by retrieving the boot samples 1571-157M directly via the lowest storage driver 420 in lieu of the high-level software driver 440, the boot data collection driver 150 is able to bypass the remainder of the software drivers, including the malicious intermediary software driver 430 configured to intercept data requests. Hence, this provides improved bootkit detection over conventional techniques.
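The value of addressing the lowest driver directly can be shown with a toy model of a layered driver stack. The class, driver names, and tampering lambda below are all illustrative assumptions; real filter-driver hooking happens inside the OS kernel, not in application code.

```python
RAW_SECTOR = b"\x33\xc0" + b"\x90" * 508 + b"\x55\xaa"  # "true" device bytes

class Driver:
    """Toy model of one layer in a storage driver stack."""
    def __init__(self, name, lower=None, tamper=None):
        self.name = name
        self.lower = lower      # next driver down the stack
        self.tamper = tamper    # optional filter applied on the way back up

    def read(self):
        # The lowest driver reads from "hardware"; every other layer
        # forwards the request downward and may alter the response.
        data = self.lower.read() if self.lower else RAW_SECTOR
        return self.tamper(data) if self.tamper else data

lowest = Driver("disk.sys")  # communicates with the hardware abstraction layer
bootkit = Driver("evil.sys", lower=lowest,
                 tamper=lambda d: b"\x00" * len(d))  # hides the real MBR
top = Driver("volmgr.sys", lower=bootkit)

# A read through the full stack sees the forged (sanitized) sector...
print(top.read() == RAW_SECTOR)      # False
# ...while a read addressed to the lowest driver yields the true bytes.
print(lowest.read() == RAW_SECTOR)   # True
```

This is precisely why the boot data collection driver enumerates the stack first: issuing the READ request at the lowest layer leaves the intermediary filter no opportunity to intercept it.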


Referring to FIG. 5, an exemplary embodiment of a logical representation of the operations conducted by the emulator logic 350 of the bootkit analysis system of FIG. 3 is shown, where the emulator logic 350 is configured to generate an execution hash 500 for each received boot sample 1571-157M (e.g., boot sample 157 of FIGS. 1A-3) based on stored information (e.g., extracted data) retrieved from boot records within a storage device under analysis. Herein, the emulator logic 350 receives each of the boot samples 1571-157M and, for each boot sample (e.g., boot sample 1571), captures high-level functionality during simulated processing of the boot sample 1571, where the high-level functionality includes behaviors 510 such as one or more memory accesses, disk reads and writes, and interrupts. Each of these behaviors 510 may be represented by a series of instructions 520 (see first operation 525). The series of instructions 520 may include, but are not limited or restricted to, assembly instruction(s) such as AND (logical "and"), OR (logical "or"), SHL (logical "shift left"), SHR (logical "shift right"), and/or MOV ("move").


Thereafter, according to one embodiment of the disclosure, the emulator logic 350 performs a one-way hash operation 530 on the mnemonics 540 (e.g., AND, OR, SHL, SHR, MOV, etc.) associated with the series of instructions 520, which is representative of the ordered instructions executed during a boot cycle (see second operation 545). This ordered hashing operation of the mnemonics 540 for the series of instructions 520 being emulated continues for extracted data for the particular boot sample 1571. Upon completion of the emulation and hashing of the mnemonics 540 for the series of instructions 520 pertaining to the boot sample 1571, which may correspond to a particular boot record such as MBR 172 for example, the emulator logic 350 has produced the execution hash 500 for that particular boot record (see third operation 550).


Alternatively, in lieu of performing the one-way hash operation 530 on the mnemonics 540, it is contemplated that the emulator logic 350 may log the behaviors 510 and perform a hash operation on the behaviors 510 themselves to produce the execution hash 500. In particular, the emulator logic 350 may perform hash operations on the series of behaviors 510 chronologically (i.e., in order of occurrence). As another example, some of the behaviors 510 may be excluded (filtered) from the hash operations where such behaviors are normally benign and their limited presence may lead to a greater number of false positive detections.
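The behavior-based alternative, including the benign-behavior filter, can be sketched as follows. The behavior labels and the filter set are hypothetical; the disclosure specifies only that benign behaviors may be excluded to reduce false positives.

```python
import hashlib

# Illustrative filter: behaviors normally seen in clean boots and thus
# excluded from the hash to avoid inflating false positives.
BENIGN_BEHAVIORS = {"read_sector", "print_banner"}

def behavior_hash(behaviors):
    """Hash the chronological behavior log, skipping normally benign
    behaviors so they cannot perturb the execution hash."""
    h = hashlib.sha256()
    for b in behaviors:              # chronological order is preserved
        if b in BENIGN_BEHAVIORS:
            continue                 # filtered out of the hash
        h.update(b.encode() + b"|")
    return h.hexdigest()

clean  = ["read_sector", "load_vbr", "jump_os"]
hooked = ["read_sector", "overwrite_ivt", "load_vbr", "jump_os"]
print(behavior_hash(clean) == behavior_hash(hooked))  # False: IVT hook shows
```

Note that prepending more benign noise (e.g., an extra `print_banner`) leaves the hash unchanged, which is exactly the false-positive resistance the filtering is meant to provide.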


The de-duplicator logic 360 compares the execution hash 500 based on boot sample 1571 and other execution hashes based on boot samples 1572-157M generated by the emulator logic 350, namely a set of execution hashes 555, against a plurality of execution hashes associated with previously detected boot samples (e.g., malicious or benign execution hashes in the execution hash intelligence 390). Based on this comparison, the de-duplicator logic 360 eliminates repetitious execution hashes to formulate a reduced set of execution hashes 560 for analysis by the classifier logic 370. Hence, this reduced set of unique or uncommon execution hashes is more manageable for identifying boot code that is potentially malicious, such as boot code operating as a bootkit.


As suspicious activity executed by bootkits can vary widely, instead of generating detection signatures for individual malware samples, the bootkit analysis system 180 is configured to identify deviations (in code structure and behavior) from normal OS bootstrapping as another IOC. To enable this analysis, the behaviors 510 produced during simulated processing of content within each of the boot samples 1571-157M may also be considered by the classifier logic 370 in classifying any of the reduced set of execution hashes 560 as malicious, benign or suspicious, as described above. Also, as another IOC, information associated with one of the boot samples 1571-157M being different than data retrieved from the particular boot record via the user space (referred to as "extracted data differences" 570) may be considered by the classifier logic 370. The classification result 580 may be provided to reporting logic (not shown) to issue an alert, as described above.


Referring now to FIG. 6, an illustrative embodiment of the operations conducted by the bootkit analysis system 180 of FIG. 2 is shown. An endpoint 1101 deploys the software agent 120 including the data recovery module 140 that is configured to automatically gain access to prescribed storage locations within the storage device 130 of the endpoint 1101 via a lowest driver of the storage driver stack, as described in FIG. 4 (see operation A). These prescribed storage locations may be directed to a plurality of boot records, including the master boot record (MBR) and/or one or more volume boot records (VBRs) within the storage device 130. For each of these boot records, the data recovery module 140 may be configured to extract data from that boot record thereby obtaining boot samples 1571-157M for the boot records.


After receipt of the boot samples 1571-157M, the endpoint 1101 provides the boot samples 1571-157M to a cloud network 600 for bootkit analysis (operation B). As shown, the boot samples 1571-157M may be provided to an intermediary server 610 for record management and subsequent submission to the cloud network 600. Besides the boot samples 1571-157M, the intermediary server 610 may also receive metadata associated with the boot samples (e.g., name of the corresponding boot record, identifier of the software agent, and/or an identifier of the endpoint such as a media access control "MAC" address or an Internet Protocol "IP" address). According to one embodiment of the disclosure, the server 610 tracks such metadata items and sends only the boot samples 1571-157M to the cloud bootkit analysis system 180. According to another embodiment of the disclosure, the cloud bootkit analysis system 180 may receive the metadata of the boot samples 1571-157M to assist in enriching alerts with additional context information regarding a potential cyberattack based on prior analyses.


For boot record submission and analysis, each of the boot samples 1571-157M associated with each boot record maintained in the storage device 130 of the endpoint 1101 is provided to the bootkit analysis system 180 (operation C). As shown, the intermediary server 610 may access the bootkit analysis system 180 via a RESTful API interface 620. According to one embodiment of the disclosure, where the cloud network 600 may be an Amazon Web Service (AWS®), the RESTful API interface 620 is an AWS® API Gateway, a managed service that helps developers create, publish, maintain, monitor and/or secure APIs, which is exposed and accessible to receive and validate the submitted boot samples 1571-157M.
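A submission body for such a REST interface might be assembled as below. The field names and the base64 transport encoding are illustrative assumptions; the patent does not fix a wire format for boot sample submission.

```python
import base64
import json

def build_submission(endpoint_id, record_name, record_bytes):
    """Assemble a JSON body an agent or intermediary server might POST
    to the bootkit-analysis REST endpoint. Field names are hypothetical."""
    return json.dumps({
        "endpoint_id": endpoint_id,   # e.g., the endpoint's MAC or IP address
        "boot_record": record_name,   # e.g., "MBR", "VBR0", ...
        "sample_b64": base64.b64encode(record_bytes).decode(),  # raw sector
    })

sample = b"\x33\xc0" + b"\x00" * 508 + b"\x55\xaa"
body = build_submission("00:11:22:33:44:55", "MBR", sample)
decoded = json.loads(body)
print(decoded["boot_record"])  # MBR
```

Base64 keeps the raw sector bytes intact inside the JSON payload, so the receiving gateway can validate and decode the exact bytes read from the lowest storage driver.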


Herein, the bootkit analysis system 180 is scalable and configured with the emulator logic 350 of FIG. 3, for example, included as part of a compute service 640 that runs code in response to events and automatically manages the compute resources required by that code. An example of the compute service may include “analysis Lambda™” component 640 for the AWS® architecture. While the Amazon® AWS® public cloud network deployment is described, it is contemplated that the bootkit analysis system 180 may be deployed as part of analogous components within other public cloud networks (e.g., Microsoft® Azure®, Google® Cloud, etc.) or as part of software components within a private cloud network.


Herein, the emulator logic is configured to (i) simulate processing of each incoming boot sample received via AWS® API Gateway 620 to determine instructions associated with data forming that boot sample, and (ii) perform hash operations on information associated with the determined instructions, such as the mnemonics for example, to produce an execution hash for each targeted boot record. The analysis Lambda™ component 640 is further configured with the de-duplicator logic to group different boot samples together based on boot instruction sequencing and remove repetitive execution hashes to reduce a total number of execution hashes for classification. Hence, the unique or uncommon execution hashes are maintained for analysis by the classifier logic.


Thereafter, record metadata (e.g., execution hash, etc.) is generated, collected and stored in a database 650 being part of the cloud network 600, such as DynamoDB for the AWS® architecture, for example. The database 650 may be accessed by the classifier logic, deployed within the analysis Lambda™ component 640, in determining whether information within a boot record is malicious. Additionally, the analysis Lambda™ component 640 features the reporting logic, which generates reports for each boot record that is stored in a predetermined data store 660 within the cloud network 600 (represented as "S3" for the AWS® architecture).


The intermediary server 610 may issue a query request message 670 for reports associated with particular endpoints or particular boot samples via another AWS® RESTful API interface, referred to as API Gateway/Report 630. In response, reports 680 associated with such boot samples or endpoints are gathered from the data store (S3) 660 and returned to the intermediary server 610, where the reports are made available to one or more authorized sources that prompted the query request message 670.


In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims
  • 1. A network device for detecting a potential bootkit malware, comprising: a processor; anda non-transitory storage medium communicatively coupled to the processor, the non-transitory storage medium comprises a bootkit analysis system for detecting the bootkit malware based on analysis of a plurality of boot samples, the bootkit analysis system including emulator logic that, upon execution by the processor, simulates processing of each of the plurality of boot samples received to determine high-level functionality of each of the plurality of boot samples and to perform hash operations on the high-level functionality for each of the plurality of boot samples to produce a plurality of execution hashes each generated from a hash operation on mnemonic of instructions for a boot sample of the plurality of boot samples,de-duplicator logic that, upon execution by the processor, receives the plurality of execution hashes each based on content from one of the plurality of boot samples received by the emulator logic and eliminates execution hashes deemed to be repetitious to produce a reduced set of execution hashes, andclassifier logic that, upon execution by the processor, determines whether data associated with the plurality of boot samples is malicious, suspicious or benign based on a presence or absence of notable distinctions between each execution hash of the plurality of execution hashes for the reduced set of execution hashes and a plurality of execution hashes representative of normal or expected bootstrapping operations.
  • 2. The network device of claim 1, wherein the non-transitory storage medium further comprising: a boot sample data store to store the plurality of boot samples for processing by the emulator logic.
  • 3. The network device of claim 1, wherein the non-transitory storage medium further comprising reporting logic that, when executed by the processor, generates an alert that is provided to a security administrator, the alert includes a displayable image to advise the security administrator of a potential bootkit attack.
  • 4. The network device of claim 1, wherein the emulator logic simulates processing of each of the plurality of boot samples received to determine the high-level functionality being mnemonic of instructions corresponding to a plurality of logical instructions, the plurality of logical instructions comprises any combination of two or more instructions from a plurality of instructions including an AND instruction, an OR instruction, a SHR (shift right) instruction, a SHL (shift left) instruction, and a MOV (move) instruction.
  • 5. The network device of claim 1, wherein the plurality of execution hashes representative of normal bootstrapping operations corresponds to an execution hash intelligence gathered from a plurality of network devices including the network device.
  • 6. The network device of claim 1, wherein the de-duplicator logic, upon execution by the processor, is further configured to (i) perform a deduplication operation on an execution hash of the reduced set of execution hashes to determine a level of correlation between the execution hash and prior known execution hashes and (ii) provide the execution hash to the classifier logic to analyze deviations in at least behavior of a first boot sample of the plurality of boot samples from normal OS bootstrapping.
  • 7. A non-transitory storage medium including software that, when executed by one or more processors, performs operations on a plurality of boot samples associated with an electronic device to determine whether the electronic device includes bootkit malware, the non-transitory computer storage medium comprising: emulator logic that, upon execution by the one or more processors, simulates processing of each of the plurality of boot samples to determine high-level functionality of each of the plurality of boot samples and to perform operations on the high-level functionality for each of the plurality of boot samples to produce a set of data representations each associated with one of the plurality of boot samples, wherein each data representation constitutes a hash operation on mnemonic of instructions for each boot sample of the plurality of boot samples;de-duplicator logic that, upon execution by the one or more processors, receives the plurality of data representations each based on content from one of the plurality of boot samples received by the emulator logic and eliminates a data representation of the plurality of data representations deemed to be repetitious to produce a reduced set of data representations; andclassifier logic that, upon execution by the one or more processors, determines whether data associated with the plurality of boot samples is malicious, suspicious or benign based on a presence or absence of notable distinctions between each data representation of the reduced set of data representations and a plurality of data representation associated with normal or expected bootstrapping operations.
  • 8. The non-transitory storage medium of claim 7, wherein each data representation corresponds to an execution hash.
  • 9. The non-transitory storage medium of claim 8 further comprising reporting logic that, when executed by the one or more processors, generates an alert being a message including a displayable image to identify a potential bootkit attack.
  • 10. The non-transitory storage medium of claim 8, wherein the emulator logic to simulate processing of each of the plurality of boot samples received to determine the high-level functionality being a plurality of logical instructions, the plurality of logical instructions comprises any combination of two or more instructions from a plurality of instructions including an AND instruction, an OR instruction, a SHR (shift right) instruction, a SHL (shift left) instruction, and a MOV (move) instruction.
  • 11. The non-transitory storage medium of claim 8, wherein the plurality of execution hashes representative of normal or expected bootstrapping operations corresponds to an execution hash intelligence gathered from a plurality of network devices.
  • 12. A computerized method for detecting a potential bootkit malware, comprising: simulating processing, by emulator logic executed by a processor, of each of the plurality of boot samples received to determine high-level functionality of each of the plurality of boot samples and to perform hash operations on the high-level functionality for each of the plurality of boot samples to produce a plurality of execution hashes, each execution hash of the plurality of execution hashes is generated from a hash operation on mnemonic of instructions for a boot sample of the plurality of boot samples,receiving, by de-duplicator logic executed by the processor, the plurality of execution hashes, each execution hash of the plurality of execution hashes is based on content from one of the plurality of boot samples received by the emulator logic;eliminating, by the de-duplicator logic, one or more execution hashes of the plurality of execution hashes deemed to be repetitious to produce a reduced set of execution hashes; anddetermining, by classifier logic executed by the processor, whether data associated with the plurality of boot samples is malicious, suspicious or benign based on a presence or absence of notable distinctions between each execution hash of the plurality of execution hashes for the reduced set of execution hashes and a plurality of execution hashes representative of normal or expected bootstrapping operations.
  • 13. The computerized method of claim 12 further comprising: storing the plurality of boot samples for processing by the emulator logic.
  • 14. The computerized method of claim 12 further comprising: generating, by reporting logic executed by the processor, an alert that is provided to a security administrator, the alert being a displayable image to advise the security administrator of a potential bootkit attack.
  • 15. The computerized method of claim 12, wherein the mnemonics of instructions correspond to a plurality of logical instructions, the plurality of logical instructions comprising any combination of two or more instructions from a plurality of instructions including an AND instruction, an OR instruction, a SHR (shift right) instruction, a SHL (shift left) instruction, and a MOV (move) instruction.
  • 16. The computerized method of claim 12, wherein the plurality of execution hashes representative of normal bootstrapping operations corresponds to an execution hash intelligence gathered from a plurality of network devices including the network device.
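The pipeline recited in claims 12-16 (emulate boot samples into instruction mnemonics, hash the mnemonic sequences into execution hashes, de-duplicate repetitious hashes, then classify the reduced set against hashes representing normal bootstrapping) can be illustrated with a minimal sketch. The following Python fragment is hypothetical: the function names, sample mnemonic sequences, and baseline contents are illustrative assumptions, not the patent's implementation.

```python
import hashlib

def execution_hash(mnemonics):
    """Hash a boot sample's instruction-mnemonic sequence (e.g. as
    recovered by emulating its boot code) into one execution hash."""
    return hashlib.sha256(" ".join(mnemonics).encode("ascii")).hexdigest()

# Mnemonic sequences as the emulator logic might produce them for three
# boot samples; the sequences themselves are illustrative only.
samples = {
    "host-a": ["MOV", "SHR", "AND", "OR", "MOV"],
    "host-b": ["MOV", "SHR", "AND", "OR", "MOV"],   # same behavior as host-a
    "host-c": ["MOV", "SHL", "MOV", "OR", "AND"],   # distinct behavior
}

# De-duplicator logic: collapse repetitious execution hashes into a
# reduced set (host-a and host-b hash identically).
reduced = {execution_hash(m) for m in samples.values()}

# Classifier logic: an execution hash absent from the known-good
# baseline is treated as suspicious. Here the baseline holds only the
# hash of the host-a/host-b behavior.
baseline = {execution_hash(["MOV", "SHR", "AND", "OR", "MOV"])}
verdicts = {h: ("benign" if h in baseline else "suspicious") for h in reduced}
```

In the described system, the baseline would instead be execution-hash intelligence gathered from a plurality of network devices, as recited in claims 11 and 16.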
US Referenced Citations (714)
Number Name Date Kind
4292580 Ott et al. Sep 1981 A
5175732 Hendel et al. Dec 1992 A
5319776 Hile et al. Jun 1994 A
5440723 Arnold et al. Aug 1995 A
5490249 Miller Feb 1996 A
5657473 Killean et al. Aug 1997 A
5802277 Cowlard Sep 1998 A
5842002 Schnurer et al. Nov 1998 A
5960170 Chen et al. Sep 1999 A
5978917 Chi Nov 1999 A
5983348 Ji Nov 1999 A
6088803 Tso et al. Jul 2000 A
6092194 Touboul Jul 2000 A
6094677 Capek et al. Jul 2000 A
6108799 Boulay et al. Aug 2000 A
6154844 Touboul et al. Nov 2000 A
6269330 Cidon et al. Jul 2001 B1
6272641 Ji Aug 2001 B1
6279113 Vaidya Aug 2001 B1
6298445 Shostack et al. Oct 2001 B1
6357008 Nachenberg Mar 2002 B1
6424627 Sørhaug et al. Jul 2002 B1
6442696 Wray et al. Aug 2002 B1
6484315 Ziese Nov 2002 B1
6487666 Shanklin et al. Nov 2002 B1
6493756 O'Brien et al. Dec 2002 B1
6550012 Villa et al. Apr 2003 B1
6775657 Baker Aug 2004 B1
6831893 Ben Nun et al. Dec 2004 B1
6832367 Choi et al. Dec 2004 B1
6895550 Kanchirayappa et al. May 2005 B2
6898632 Gordy et al. May 2005 B2
6907396 Muttik et al. Jun 2005 B1
6941348 Petry et al. Sep 2005 B2
6971097 Wallman Nov 2005 B1
6981279 Arnold et al. Dec 2005 B1
7007107 Ivchenko et al. Feb 2006 B1
7028179 Anderson et al. Apr 2006 B2
7043757 Hoefelmeyer et al. May 2006 B2
7058822 Edery et al. Jun 2006 B2
7069316 Gryaznov Jun 2006 B1
7080407 Zhao et al. Jul 2006 B1
7080408 Pak et al. Jul 2006 B1
7093002 Wolff et al. Aug 2006 B2
7093239 van der Made Aug 2006 B1
7096498 Judge Aug 2006 B2
7100201 Izatt Aug 2006 B2
7107617 Hursey et al. Sep 2006 B2
7159149 Spiegel et al. Jan 2007 B2
7213260 Judge May 2007 B2
7231667 Jordan Jun 2007 B2
7240364 Branscomb et al. Jul 2007 B1
7240368 Roesch et al. Jul 2007 B1
7243371 Kasper et al. Jul 2007 B1
7249175 Donaldson Jul 2007 B1
7287278 Liang Oct 2007 B2
7308716 Danford et al. Dec 2007 B2
7328453 Merkle, Jr. et al. Feb 2008 B2
7346486 Ivancic et al. Mar 2008 B2
7356736 Natvig Apr 2008 B2
7386888 Liang et al. Jun 2008 B2
7392542 Bucher Jun 2008 B2
7418729 Szor Aug 2008 B2
7428300 Drew et al. Sep 2008 B1
7441272 Durham et al. Oct 2008 B2
7448084 Apap et al. Nov 2008 B1
7458098 Judge et al. Nov 2008 B2
7464404 Carpenter et al. Dec 2008 B2
7464407 Nakae et al. Dec 2008 B2
7467408 O'Toole, Jr. Dec 2008 B1
7478428 Thomlinson Jan 2009 B1
7480773 Reed Jan 2009 B1
7487543 Arnold et al. Feb 2009 B2
7496960 Chen et al. Feb 2009 B1
7496961 Zimmer et al. Feb 2009 B2
7519990 Xie Apr 2009 B1
7523493 Liang et al. Apr 2009 B2
7530104 Thrower et al. May 2009 B1
7540025 Tzadikario May 2009 B2
7546638 Anderson et al. Jun 2009 B2
7565550 Liang et al. Jul 2009 B2
7568233 Szor et al. Jul 2009 B1
7584455 Ball Sep 2009 B2
7603715 Costa et al. Oct 2009 B2
7607171 Marsden et al. Oct 2009 B1
7639714 Stolfo et al. Dec 2009 B2
7644441 Schmid et al. Jan 2010 B2
7657419 van der Made Feb 2010 B2
7676841 Sobchuk et al. Mar 2010 B2
7698548 Shelest et al. Apr 2010 B2
7707633 Danford et al. Apr 2010 B2
7712136 Sprosts et al. May 2010 B2
7730011 Deninger et al. Jun 2010 B1
7739740 Nachenberg et al. Jun 2010 B1
7779463 Stolfo et al. Aug 2010 B2
7784097 Stolfo et al. Aug 2010 B1
7832008 Kraemer Nov 2010 B1
7836502 Zhao et al. Nov 2010 B1
7849506 Dansey et al. Dec 2010 B1
7854007 Sprosts et al. Dec 2010 B2
7869073 Oshima Jan 2011 B2
7877803 Enstone et al. Jan 2011 B2
7904959 Sidiroglou et al. Mar 2011 B2
7908660 Bahl Mar 2011 B2
7930738 Petersen Apr 2011 B1
7937387 Frazier et al. May 2011 B2
7937761 Bennett May 2011 B1
7949849 Lowe et al. May 2011 B2
7996556 Raghavan et al. Aug 2011 B2
7996836 McCorkendale et al. Aug 2011 B1
7996904 Chiueh et al. Aug 2011 B1
7996905 Arnold et al. Aug 2011 B2
8006305 Aziz Aug 2011 B2
8010667 Zhang et al. Aug 2011 B2
8020206 Hubbard et al. Sep 2011 B2
8028338 Schneider et al. Sep 2011 B1
8042184 Batenin Oct 2011 B1
8045094 Teragawa Oct 2011 B2
8045458 Alperovitch et al. Oct 2011 B2
8069484 McMillan et al. Nov 2011 B2
8087086 Lai et al. Dec 2011 B1
8171553 Aziz et al. May 2012 B2
8176049 Deninger et al. May 2012 B2
8176480 Spertus May 2012 B1
8201246 Wu et al. Jun 2012 B1
8204984 Aziz et al. Jun 2012 B1
8214905 Doukhvalov et al. Jul 2012 B1
8220055 Kennedy Jul 2012 B1
8225288 Miller et al. Jul 2012 B2
8225373 Kraemer Jul 2012 B2
8233882 Rogel Jul 2012 B2
8234640 Fitzgerald et al. Jul 2012 B1
8234709 Viljoen et al. Jul 2012 B2
8239944 Nachenberg et al. Aug 2012 B1
8260914 Ranjan Sep 2012 B1
8266091 Gubin et al. Sep 2012 B1
8286251 Eker et al. Oct 2012 B2
8291499 Aziz et al. Oct 2012 B2
8307435 Mann et al. Nov 2012 B1
8307443 Wang et al. Nov 2012 B2
8312545 Tuvell et al. Nov 2012 B2
8321936 Green et al. Nov 2012 B1
8321941 Tuvell et al. Nov 2012 B2
8332571 Edwards, Sr. Dec 2012 B1
8365286 Poston Jan 2013 B2
8365297 Parshin et al. Jan 2013 B1
8370938 Daswani et al. Feb 2013 B1
8370939 Zaitsev et al. Feb 2013 B2
8375444 Aziz et al. Feb 2013 B2
8381299 Stolfo et al. Feb 2013 B2
8402529 Green et al. Mar 2013 B1
8464340 Ahn et al. Jun 2013 B2
8479174 Chiriac Jul 2013 B2
8479276 Vaystikh et al. Jul 2013 B1
8479291 Bodke Jul 2013 B1
8510827 Leake et al. Aug 2013 B1
8510828 Guo et al. Aug 2013 B1
8510842 Amit et al. Aug 2013 B2
8516478 Edwards et al. Aug 2013 B1
8516590 Ranadive et al. Aug 2013 B1
8516593 Aziz Aug 2013 B2
8522348 Chen et al. Aug 2013 B2
8528086 Aziz Sep 2013 B1
8533824 Hutton et al. Sep 2013 B2
8539582 Aziz et al. Sep 2013 B1
8549638 Aziz Oct 2013 B2
8555391 Demir et al. Oct 2013 B1
8561177 Aziz et al. Oct 2013 B1
8566476 Shiffer et al. Oct 2013 B2
8566946 Aziz et al. Oct 2013 B1
8584094 Dadhia et al. Nov 2013 B2
8584234 Sobel et al. Nov 2013 B1
8584239 Aziz et al. Nov 2013 B2
8595834 Xie et al. Nov 2013 B2
8627476 Satish et al. Jan 2014 B1
8635696 Aziz Jan 2014 B1
8682054 Xue et al. Mar 2014 B2
8682812 Ranjan Mar 2014 B1
8689333 Aziz Apr 2014 B2
8695096 Zhang Apr 2014 B1
8713631 Pavlyushchik Apr 2014 B1
8713681 Silberman et al. Apr 2014 B2
8726392 McCorkendale et al. May 2014 B1
8739280 Chess et al. May 2014 B2
8776229 Aziz Jul 2014 B1
8782792 Bodke Jul 2014 B1
8789172 Stolfo et al. Jul 2014 B2
8789178 Kejriwal et al. Jul 2014 B2
8793278 Frazier et al. Jul 2014 B2
8793787 Ismael et al. Jul 2014 B2
8805947 Kuzkin et al. Aug 2014 B1
8806647 Daswani et al. Aug 2014 B1
8832829 Manni et al. Sep 2014 B2
8850570 Ramzan Sep 2014 B1
8850571 Staniford et al. Sep 2014 B2
8881234 Narasimhan et al. Nov 2014 B2
8881271 Butler, II Nov 2014 B2
8881282 Aziz et al. Nov 2014 B1
8898788 Aziz et al. Nov 2014 B1
8935779 Manni et al. Jan 2015 B2
8949257 Shiffer et al. Feb 2015 B2
8984638 Aziz et al. Mar 2015 B1
8990939 Staniford et al. Mar 2015 B2
8990944 Singh et al. Mar 2015 B1
8997219 Staniford et al. Mar 2015 B2
9009822 Ismael et al. Apr 2015 B1
9009823 Ismael et al. Apr 2015 B1
9027135 Aziz May 2015 B1
9071638 Aziz et al. Jun 2015 B1
9104867 Thioux et al. Aug 2015 B1
9106630 Frazier et al. Aug 2015 B2
9106694 Aziz et al. Aug 2015 B2
9118715 Staniford et al. Aug 2015 B2
9159035 Ismael et al. Oct 2015 B1
9171160 Vincent et al. Oct 2015 B2
9176843 Ismael et al. Nov 2015 B1
9189627 Islam Nov 2015 B1
9195829 Goradia et al. Nov 2015 B1
9197664 Aziz et al. Nov 2015 B1
9223972 Vincent et al. Dec 2015 B1
9225740 Ismael et al. Dec 2015 B1
9241010 Bennett et al. Jan 2016 B1
9251343 Vincent et al. Feb 2016 B1
9262635 Paithane et al. Feb 2016 B2
9268936 Butler Feb 2016 B2
9275229 LeMasters Mar 2016 B2
9282109 Aziz et al. Mar 2016 B1
9292686 Ismael et al. Mar 2016 B2
9294501 Mesdaq et al. Mar 2016 B2
9300686 Pidathala et al. Mar 2016 B2
9306960 Aziz Apr 2016 B1
9306974 Aziz et al. Apr 2016 B1
9311479 Manni et al. Apr 2016 B1
9355247 Thioux et al. May 2016 B1
9356944 Aziz May 2016 B1
9363280 Rivlin et al. Jun 2016 B1
9367681 Ismael et al. Jun 2016 B1
9398028 Karandikar et al. Jul 2016 B1
9413781 Cunningham et al. Aug 2016 B2
9426071 Caldejon et al. Aug 2016 B1
9430646 Mushtaq et al. Aug 2016 B1
9432389 Khalid et al. Aug 2016 B1
9438613 Paithane et al. Sep 2016 B1
9438622 Staniford et al. Sep 2016 B1
9438623 Thioux et al. Sep 2016 B1
9459901 Jung et al. Oct 2016 B2
9467460 Otvagin et al. Oct 2016 B1
9483644 Paithane et al. Nov 2016 B1
9495180 Ismael Nov 2016 B2
9497213 Thompson et al. Nov 2016 B2
9507935 Ismael et al. Nov 2016 B2
9516057 Aziz Dec 2016 B2
9519782 Aziz et al. Dec 2016 B2
9536091 Paithane et al. Jan 2017 B2
9537972 Edwards et al. Jan 2017 B1
9560059 Islam Jan 2017 B1
9565202 Kindlund et al. Feb 2017 B1
9591015 Amin et al. Mar 2017 B1
9591020 Aziz Mar 2017 B1
9594904 Jain et al. Mar 2017 B1
9594905 Ismael et al. Mar 2017 B1
9594912 Thioux et al. Mar 2017 B1
9609007 Rivlin et al. Mar 2017 B1
9626509 Khalid et al. Apr 2017 B1
9628498 Aziz et al. Apr 2017 B1
9628507 Haq et al. Apr 2017 B2
9633134 Ross Apr 2017 B2
9635039 Islam et al. Apr 2017 B1
9641546 Manni et al. May 2017 B1
9654485 Neumann May 2017 B1
9661009 Karandikar et al. May 2017 B1
9661018 Aziz May 2017 B1
9674298 Edwards et al. Jun 2017 B1
9680862 Ismael et al. Jun 2017 B2
9690606 Ha et al. Jun 2017 B1
9690933 Singh et al. Jun 2017 B1
9690935 Shiffer et al. Jun 2017 B2
9690936 Malik et al. Jun 2017 B1
9736179 Ismael Aug 2017 B2
9740857 Ismael et al. Aug 2017 B2
9747446 Pidathala et al. Aug 2017 B1
9756074 Aziz et al. Sep 2017 B2
9773112 Rathor et al. Sep 2017 B1
9781144 Otvagin et al. Oct 2017 B1
9787700 Amin et al. Oct 2017 B1
9787706 Otvagin et al. Oct 2017 B1
9792196 Ismael et al. Oct 2017 B1
9824209 Ismael et al. Nov 2017 B1
9824211 Wilson Nov 2017 B2
9824216 Khalid et al. Nov 2017 B1
9825976 Gomez et al. Nov 2017 B1
9825989 Mehra et al. Nov 2017 B1
9830478 Hale Nov 2017 B1
9838408 Karandikar et al. Dec 2017 B1
9838411 Aziz Dec 2017 B1
9838416 Aziz Dec 2017 B1
9838417 Khalid et al. Dec 2017 B1
9846776 Paithane et al. Dec 2017 B1
9876701 Caldejon et al. Jan 2018 B1
9888016 Amin et al. Feb 2018 B1
9888019 Pidathala et al. Feb 2018 B1
9910988 Vincent et al. Mar 2018 B1
9912644 Cunningham Mar 2018 B2
9912681 Ismael et al. Mar 2018 B1
9912684 Aziz et al. Mar 2018 B1
9912691 Mesdaq et al. Mar 2018 B2
9912698 Thioux et al. Mar 2018 B1
9916440 Paithane et al. Mar 2018 B1
9921978 Chan et al. Mar 2018 B1
9934376 Ismael Apr 2018 B1
9934381 Kindlund et al. Apr 2018 B1
9946568 Ismael et al. Apr 2018 B1
9954890 Staniford et al. Apr 2018 B1
9973531 Thioux May 2018 B1
10002252 Ismael et al. Jun 2018 B2
10019338 Goradia et al. Jul 2018 B1
10019573 Silberman et al. Jul 2018 B2
10025691 Ismael et al. Jul 2018 B1
10025927 Khalid et al. Jul 2018 B1
10027689 Rathor et al. Jul 2018 B1
10027690 Aziz et al. Jul 2018 B2
10027696 Rivlin et al. Jul 2018 B1
10033747 Paithane et al. Jul 2018 B1
10033748 Cunningham et al. Jul 2018 B1
10033753 Islam et al. Jul 2018 B1
10033759 Kabra et al. Jul 2018 B1
10050998 Singh Aug 2018 B1
10068091 Aziz et al. Sep 2018 B1
10075455 Zafar et al. Sep 2018 B2
10083302 Paithane et al. Sep 2018 B1
10084813 Eyada Sep 2018 B2
10089461 Ha et al. Oct 2018 B1
10097573 Aziz Oct 2018 B1
10104102 Neumann Oct 2018 B1
10108446 Steinberg et al. Oct 2018 B1
10121000 Rivlin et al. Nov 2018 B1
10122746 Manni et al. Nov 2018 B1
10133863 Bu et al. Nov 2018 B2
10133866 Kumar et al. Nov 2018 B1
10146810 Shiffer et al. Dec 2018 B2
10148693 Singh et al. Dec 2018 B2
10165000 Aziz et al. Dec 2018 B1
10169585 Pilipenko et al. Jan 2019 B1
10176321 Abbasi et al. Jan 2019 B2
10181029 Ismael et al. Jan 2019 B1
10191861 Steinberg et al. Jan 2019 B1
10192052 Singh et al. Jan 2019 B1
10198574 Thioux et al. Feb 2019 B1
10200384 Mushtaq et al. Feb 2019 B1
10210329 Malik et al. Feb 2019 B1
10216927 Steinberg Feb 2019 B1
10218740 Mesdaq et al. Feb 2019 B1
10242185 Goradia Mar 2019 B1
20010005889 Albrecht Jun 2001 A1
20010047326 Broadbent et al. Nov 2001 A1
20020018903 Kokubo et al. Feb 2002 A1
20020038430 Edwards et al. Mar 2002 A1
20020091819 Melchione et al. Jul 2002 A1
20020095607 Lin-Hendel Jul 2002 A1
20020116627 Tarbotton et al. Aug 2002 A1
20020144156 Copeland, III Oct 2002 A1
20020162015 Tang Oct 2002 A1
20020166063 Lachman, III et al. Nov 2002 A1
20020169952 DiSanto et al. Nov 2002 A1
20020184528 Shevenell et al. Dec 2002 A1
20020188887 Largman et al. Dec 2002 A1
20020194490 Halperin et al. Dec 2002 A1
20030021728 Sharpe, Jr. et al. Jan 2003 A1
20030074578 Ford et al. Apr 2003 A1
20030084318 Schertz May 2003 A1
20030101381 Mateev et al. May 2003 A1
20030115483 Liang Jun 2003 A1
20030188190 Aaron et al. Oct 2003 A1
20030191957 Hypponen et al. Oct 2003 A1
20030200460 Morota et al. Oct 2003 A1
20030212902 van der Made Nov 2003 A1
20030229801 Kouznetsov et al. Dec 2003 A1
20030237000 Denton et al. Dec 2003 A1
20040003323 Bennett et al. Jan 2004 A1
20040006473 Mills et al. Jan 2004 A1
20040015712 Szor Jan 2004 A1
20040019832 Arnold et al. Jan 2004 A1
20040047356 Bauer Mar 2004 A1
20040083408 Spiegel et al. Apr 2004 A1
20040088581 Brawn et al. May 2004 A1
20040093513 Cantrell et al. May 2004 A1
20040111531 Staniford et al. Jun 2004 A1
20040117478 Triulzi et al. Jun 2004 A1
20040117624 Brandt et al. Jun 2004 A1
20040128355 Chao et al. Jul 2004 A1
20040165588 Pandya Aug 2004 A1
20040236963 Danford et al. Nov 2004 A1
20040243349 Greifeneder et al. Dec 2004 A1
20040249911 Alkhatib et al. Dec 2004 A1
20040255161 Cavanaugh Dec 2004 A1
20040268147 Wiederin et al. Dec 2004 A1
20050005159 Oliphant Jan 2005 A1
20050021740 Bar et al. Jan 2005 A1
20050033960 Vialen et al. Feb 2005 A1
20050033989 Poletto et al. Feb 2005 A1
20050050148 Mohammadioun et al. Mar 2005 A1
20050086523 Zimmer et al. Apr 2005 A1
20050091513 Mitomo et al. Apr 2005 A1
20050091533 Omote et al. Apr 2005 A1
20050091652 Ross et al. Apr 2005 A1
20050108562 Khazan et al. May 2005 A1
20050114663 Cornell et al. May 2005 A1
20050125195 Brendel Jun 2005 A1
20050149726 Joshi et al. Jul 2005 A1
20050157662 Bingham et al. Jul 2005 A1
20050183143 Anderholm et al. Aug 2005 A1
20050201297 Peikari Sep 2005 A1
20050210533 Copeland et al. Sep 2005 A1
20050238005 Chen et al. Oct 2005 A1
20050240781 Gassoway Oct 2005 A1
20050262562 Gassoway Nov 2005 A1
20050265331 Stolfo Dec 2005 A1
20050283839 Cowburn Dec 2005 A1
20060010495 Cohen et al. Jan 2006 A1
20060015416 Hoffman et al. Jan 2006 A1
20060015715 Anderson Jan 2006 A1
20060015747 Van de Ven Jan 2006 A1
20060021029 Brickell et al. Jan 2006 A1
20060021054 Costa et al. Jan 2006 A1
20060031476 Mathes et al. Feb 2006 A1
20060047665 Neil Mar 2006 A1
20060070130 Costea et al. Mar 2006 A1
20060075496 Carpenter et al. Apr 2006 A1
20060095968 Portolani et al. May 2006 A1
20060101516 Sudaharan et al. May 2006 A1
20060101517 Banzhof et al. May 2006 A1
20060117385 Mester et al. Jun 2006 A1
20060123477 Raghavan et al. Jun 2006 A1
20060143709 Brooks et al. Jun 2006 A1
20060150249 Gassen et al. Jul 2006 A1
20060161983 Cothrell et al. Jul 2006 A1
20060161987 Levy-Yurista Jul 2006 A1
20060161989 Reshef et al. Jul 2006 A1
20060164199 Gilde et al. Jul 2006 A1
20060173992 Weber et al. Aug 2006 A1
20060179147 Tran et al. Aug 2006 A1
20060184632 Marino et al. Aug 2006 A1
20060191010 Benjamin Aug 2006 A1
20060221956 Narayan et al. Oct 2006 A1
20060236393 Kramer et al. Oct 2006 A1
20060242709 Seinfeld et al. Oct 2006 A1
20060248519 Jaeger et al. Nov 2006 A1
20060248582 Panjwani et al. Nov 2006 A1
20060251104 Koga Nov 2006 A1
20060288417 Bookbinder et al. Dec 2006 A1
20070006288 Mayfield et al. Jan 2007 A1
20070006313 Porras et al. Jan 2007 A1
20070011174 Takaragi et al. Jan 2007 A1
20070016951 Piccard et al. Jan 2007 A1
20070019286 Kikuchi Jan 2007 A1
20070033645 Jones Feb 2007 A1
20070038943 FitzGerald et al. Feb 2007 A1
20070064689 Shin et al. Mar 2007 A1
20070074169 Chess et al. Mar 2007 A1
20070094730 Bhikkaji et al. Apr 2007 A1
20070101435 Konanka et al. May 2007 A1
20070128855 Cho et al. Jun 2007 A1
20070142030 Sinha et al. Jun 2007 A1
20070143827 Nicodemus et al. Jun 2007 A1
20070156895 Vuong Jul 2007 A1
20070157180 Tillmann et al. Jul 2007 A1
20070157306 Elrod et al. Jul 2007 A1
20070168988 Eisner et al. Jul 2007 A1
20070171824 Ruello et al. Jul 2007 A1
20070174915 Gribble et al. Jul 2007 A1
20070192500 Lum Aug 2007 A1
20070192858 Lum Aug 2007 A1
20070198275 Malden et al. Aug 2007 A1
20070208822 Wang et al. Sep 2007 A1
20070220607 Sprosts et al. Sep 2007 A1
20070240218 Tuvell et al. Oct 2007 A1
20070240219 Tuvell et al. Oct 2007 A1
20070240220 Tuvell et al. Oct 2007 A1
20070240222 Tuvell et al. Oct 2007 A1
20070250930 Aziz et al. Oct 2007 A1
20070256132 Oliphant Nov 2007 A2
20070271446 Nakamura Nov 2007 A1
20080005782 Aziz Jan 2008 A1
20080018122 Zierler et al. Jan 2008 A1
20080028463 Dagon et al. Jan 2008 A1
20080040710 Chiriac Feb 2008 A1
20080046781 Childs et al. Feb 2008 A1
20080066179 Liu Mar 2008 A1
20080072326 Danford et al. Mar 2008 A1
20080077793 Tan et al. Mar 2008 A1
20080080518 Hoeflin et al. Apr 2008 A1
20080086720 Lekel Apr 2008 A1
20080098476 Syversen Apr 2008 A1
20080120722 Sima et al. May 2008 A1
20080134178 Fitzgerald et al. Jun 2008 A1
20080134334 Kim et al. Jun 2008 A1
20080141376 Clausen et al. Jun 2008 A1
20080184367 McMillan et al. Jul 2008 A1
20080184373 Traut et al. Jul 2008 A1
20080189787 Arnold et al. Aug 2008 A1
20080201778 Guo et al. Aug 2008 A1
20080209557 Herley et al. Aug 2008 A1
20080215742 Goldszmidt et al. Sep 2008 A1
20080222729 Chen et al. Sep 2008 A1
20080263665 Ma et al. Oct 2008 A1
20080295172 Bohacek Nov 2008 A1
20080301810 Lehane et al. Dec 2008 A1
20080307524 Singh et al. Dec 2008 A1
20080313738 Enderby Dec 2008 A1
20080320594 Jiang Dec 2008 A1
20090003317 Kasralikar et al. Jan 2009 A1
20090007100 Field et al. Jan 2009 A1
20090013408 Schipka Jan 2009 A1
20090031423 Liu et al. Jan 2009 A1
20090036111 Danford et al. Feb 2009 A1
20090037835 Goldman Feb 2009 A1
20090044024 Oberheide et al. Feb 2009 A1
20090044274 Budko et al. Feb 2009 A1
20090064332 Porras et al. Mar 2009 A1
20090077666 Chen et al. Mar 2009 A1
20090083369 Marmor Mar 2009 A1
20090083855 Apap et al. Mar 2009 A1
20090089879 Wang et al. Apr 2009 A1
20090094697 Provos et al. Apr 2009 A1
20090113425 Ports et al. Apr 2009 A1
20090125976 Wassermann et al. May 2009 A1
20090126015 Monastyrsky et al. May 2009 A1
20090126016 Sobko et al. May 2009 A1
20090133125 Choi et al. May 2009 A1
20090144823 Lamastra et al. Jun 2009 A1
20090158430 Borders Jun 2009 A1
20090172815 Gu et al. Jul 2009 A1
20090187992 Poston Jul 2009 A1
20090193293 Stolfo et al. Jul 2009 A1
20090198651 Shiffer et al. Aug 2009 A1
20090198670 Shiffer et al. Aug 2009 A1
20090198689 Frazier et al. Aug 2009 A1
20090199274 Frazier et al. Aug 2009 A1
20090199296 Xie et al. Aug 2009 A1
20090228233 Anderson et al. Sep 2009 A1
20090241187 Troyansky Sep 2009 A1
20090241190 Todd et al. Sep 2009 A1
20090265692 Godefroid et al. Oct 2009 A1
20090271867 Zhang Oct 2009 A1
20090300415 Zhang et al. Dec 2009 A1
20090300761 Park et al. Dec 2009 A1
20090328185 Berg et al. Dec 2009 A1
20090328221 Blumfield et al. Dec 2009 A1
20100005146 Drako et al. Jan 2010 A1
20100011205 McKenna Jan 2010 A1
20100017546 Poo et al. Jan 2010 A1
20100030996 Butler, II Feb 2010 A1
20100031353 Thomas et al. Feb 2010 A1
20100037314 Perdisci et al. Feb 2010 A1
20100043073 Kuwamura Feb 2010 A1
20100054278 Stolfo et al. Mar 2010 A1
20100058474 Hicks Mar 2010 A1
20100064044 Nonoyama Mar 2010 A1
20100077481 Polyakov et al. Mar 2010 A1
20100083376 Pereira et al. Apr 2010 A1
20100115621 Staniford et al. May 2010 A1
20100132038 Zaitsev May 2010 A1
20100154056 Smith et al. Jun 2010 A1
20100180344 Malyshev et al. Jul 2010 A1
20100192223 Ismael et al. Jul 2010 A1
20100220863 Dupaquis et al. Sep 2010 A1
20100235831 Dittmer Sep 2010 A1
20100251104 Massand Sep 2010 A1
20100281102 Chinta et al. Nov 2010 A1
20100281541 Stolfo et al. Nov 2010 A1
20100281542 Stolfo et al. Nov 2010 A1
20100287260 Peterson et al. Nov 2010 A1
20100299754 Amit et al. Nov 2010 A1
20100306173 Frank Dec 2010 A1
20110004737 Greenebaum Jan 2011 A1
20110025504 Lyon et al. Feb 2011 A1
20110041179 Ståhlberg Feb 2011 A1
20110047594 Mahaffey et al. Feb 2011 A1
20110047620 Mahaffey et al. Feb 2011 A1
20110055907 Narasimhan et al. Mar 2011 A1
20110078794 Manni et al. Mar 2011 A1
20110093951 Aziz Apr 2011 A1
20110099620 Stavrou et al. Apr 2011 A1
20110099633 Aziz Apr 2011 A1
20110099635 Silberman et al. Apr 2011 A1
20110113231 Kaminsky May 2011 A1
20110145918 Jung et al. Jun 2011 A1
20110145920 Mahaffey et al. Jun 2011 A1
20110145934 Abramovici et al. Jun 2011 A1
20110167493 Song et al. Jul 2011 A1
20110167494 Bowen et al. Jul 2011 A1
20110173213 Frazier et al. Jul 2011 A1
20110173460 Ito et al. Jul 2011 A1
20110219449 St. Neitzel et al. Sep 2011 A1
20110219450 McDougal et al. Sep 2011 A1
20110225624 Sawhney et al. Sep 2011 A1
20110225655 Niemelä et al. Sep 2011 A1
20110247072 Staniford et al. Oct 2011 A1
20110265182 Peinado et al. Oct 2011 A1
20110289582 Kejriwal et al. Nov 2011 A1
20110302587 Nishikawa et al. Dec 2011 A1
20110307954 Melnik et al. Dec 2011 A1
20110307955 Kaplan et al. Dec 2011 A1
20110307956 Yermakov et al. Dec 2011 A1
20110314546 Aziz et al. Dec 2011 A1
20120023593 Puder et al. Jan 2012 A1
20120054869 Yen et al. Mar 2012 A1
20120066698 Yanoo Mar 2012 A1
20120079596 Thomas et al. Mar 2012 A1
20120084859 Radinsky et al. Apr 2012 A1
20120096553 Srivastava et al. Apr 2012 A1
20120110667 Zubrilin et al. May 2012 A1
20120117652 Manni et al. May 2012 A1
20120121154 Xue et al. May 2012 A1
20120124426 Maybee et al. May 2012 A1
20120174186 Aziz et al. Jul 2012 A1
20120174196 Bhogavilli et al. Jul 2012 A1
20120174218 McCoy et al. Jul 2012 A1
20120198279 Schroeder Aug 2012 A1
20120210423 Friedrichs et al. Aug 2012 A1
20120222121 Staniford et al. Aug 2012 A1
20120255015 Sahita et al. Oct 2012 A1
20120255017 Sallam Oct 2012 A1
20120260342 Dube et al. Oct 2012 A1
20120266244 Green et al. Oct 2012 A1
20120278886 Luna Nov 2012 A1
20120291126 Lagar-Cavilla et al. Nov 2012 A1
20120297489 Dequevy Nov 2012 A1
20120330801 McDougal et al. Dec 2012 A1
20120331553 Aziz et al. Dec 2012 A1
20130014259 Gribble et al. Jan 2013 A1
20130036472 Aziz Feb 2013 A1
20130047257 Aziz Feb 2013 A1
20130074185 McDougal et al. Mar 2013 A1
20130086684 Mohler Apr 2013 A1
20130097699 Balupari et al. Apr 2013 A1
20130097706 Titonis et al. Apr 2013 A1
20130111587 Goel et al. May 2013 A1
20130117852 Stute May 2013 A1
20130117855 Kim et al. May 2013 A1
20130139264 Brinkley et al. May 2013 A1
20130160125 Likhachev et al. Jun 2013 A1
20130160127 Jeong et al. Jun 2013 A1
20130160130 Mendelev et al. Jun 2013 A1
20130160131 Madou et al. Jun 2013 A1
20130167236 Sick Jun 2013 A1
20130174214 Duncan Jul 2013 A1
20130185789 Hagiwara et al. Jul 2013 A1
20130185795 Winn et al. Jul 2013 A1
20130185798 Saunders et al. Jul 2013 A1
20130191915 Antonakakis et al. Jul 2013 A1
20130196649 Paddon et al. Aug 2013 A1
20130227691 Aziz et al. Aug 2013 A1
20130246370 Bartram et al. Sep 2013 A1
20130247186 LeMasters Sep 2013 A1
20130263260 Mahaffey et al. Oct 2013 A1
20130291109 Staniford et al. Oct 2013 A1
20130298243 Kumar et al. Nov 2013 A1
20130318038 Shiffer et al. Nov 2013 A1
20130318073 Shiffer et al. Nov 2013 A1
20130325791 Shiffer et al. Dec 2013 A1
20130325792 Shiffer et al. Dec 2013 A1
20130325871 Shiffer et al. Dec 2013 A1
20130325872 Shiffer et al. Dec 2013 A1
20140032875 Butler Jan 2014 A1
20140053260 Gupta et al. Feb 2014 A1
20140053261 Gupta et al. Feb 2014 A1
20140130158 Wang et al. May 2014 A1
20140137180 Lukacs et al. May 2014 A1
20140169762 Ryu Jun 2014 A1
20140179360 Jackson et al. Jun 2014 A1
20140181131 Ross Jun 2014 A1
20140189687 Jung et al. Jul 2014 A1
20140189866 Shiffer et al. Jul 2014 A1
20140189882 Jung et al. Jul 2014 A1
20140237600 Silberman et al. Aug 2014 A1
20140280245 Wilson Sep 2014 A1
20140283037 Sikorski et al. Sep 2014 A1
20140283063 Thompson et al. Sep 2014 A1
20140328204 Klotsche et al. Nov 2014 A1
20140337836 Ismael Nov 2014 A1
20140344926 Cunningham et al. Nov 2014 A1
20140351935 Shao et al. Nov 2014 A1
20140380473 Bu et al. Dec 2014 A1
20140380474 Paithane et al. Dec 2014 A1
20150007312 Pidathala et al. Jan 2015 A1
20150096022 Vincent et al. Apr 2015 A1
20150096023 Mesdaq et al. Apr 2015 A1
20150096024 Haq et al. Apr 2015 A1
20150096025 Ismael Apr 2015 A1
20150180886 Staniford et al. Jun 2015 A1
20150186645 Aziz et al. Jul 2015 A1
20150199513 Ismael et al. Jul 2015 A1
20150199531 Ismael et al. Jul 2015 A1
20150199532 Ismael et al. Jul 2015 A1
20150220735 Paithane et al. Aug 2015 A1
20150372980 Eyada Dec 2015 A1
20160004869 Ismael et al. Jan 2016 A1
20160006756 Ismael et al. Jan 2016 A1
20160044000 Cunningham Feb 2016 A1
20160127393 Aziz et al. May 2016 A1
20160191547 Zafar et al. Jun 2016 A1
20160191550 Ismael et al. Jun 2016 A1
20160196425 Davidov et al. Jul 2016 A1
20160261612 Mesdaq et al. Sep 2016 A1
20160285914 Singh et al. Sep 2016 A1
20160301703 Aziz Oct 2016 A1
20160335110 Paithane et al. Nov 2016 A1
20170083703 Abbasi et al. Mar 2017 A1
20180013770 Ismael Jan 2018 A1
20180048660 Paithane et al. Feb 2018 A1
20180121316 Ismael et al. May 2018 A1
20180288077 Siddiqui et al. Oct 2018 A1
20190104147 Rouatbi et al. Apr 2019 A1
Foreign Referenced Citations (11)
Number Date Country
2439806 Jan 2008 GB
2490431 Oct 2012 GB
0206928 Jan 2002 WO
0223805 Mar 2002 WO
2007117636 Oct 2007 WO
2008041950 Apr 2008 WO
2011084431 Jul 2011 WO
2011112348 Sep 2011 WO
2012075336 Jun 2012 WO
2012145066 Oct 2012 WO
2013067505 May 2013 WO
Non-Patent Literature Citations (57)
Entry
“Network Security: NetDetector-Network Intrusion Forensic System (NIFS) Whitepaper”, (“NetDetector Whitepaper”), (2003).
“When Virtual is Better Than Real”, IEEEXplore Digital Library, available at, http://ieeexplore.ieee.org/xpl/articleDetails.sp?reload=true&amumbe-r=990073, (Dec. 7, 2013).
Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108.
Adetoye, Adedayo, et al., “Network Intrusion Detection & Response System”, (“Adetoye”), (Sep. 2003).
Apostolopoulos, George; Hassapis, Constantinos; “V-eM: A cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation”, 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126.
Aura, Tuomas, “Scanning electronic documents for personally identifiable information”, Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006.
Baecher, “The Nepenthes Platform: An Efficient Approach to collect Malware”, Springer-verlag Berlin Heidelberg, (2006), pp. 165-184.
Bayer, et al., “Dynamic Analysis of Malicious Code”, J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77.
Boubalos, Chris, “extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives”, available at http://seclists.org/honeypots/2003/q2/319 (“Boubalos”), (Jun. 5, 2003).
Chaudet, C., et al., “Optimal Positioning of Active and Passive Monitoring Devices”, International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulouse, France, (Oct. 2005), pp. 71-82.
Chen, P. M. and Noble, B. D., “When Virtual is Better Than Real, Department of Electrical Engineering and Computer Science”, University of Michigan (“Chen”) (2001).
Cisco “Intrusion Prevention for the Cisco ASA 5500-x Series” Data Sheet (2012).
Cohen, M. I., “PyFlag-An advanced network forensic framework”, Digital investigation 5, Elsevier, (2008), pp. S112-S120.
Costa, M., et al., “Vigilante: End-to-End Containment of Internet Worms”, SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005).
Didier Stevens, “Malicious PDF Documents Explained”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 9, No. 1, Jan. 1, 2011, pp. 80-82, XP011329453, ISSN: 1540-7993, DOI: 10.1109/MSP.2011.14.
Distler, “Malware Analysis: An Introduction”, SANS Institute InfoSec Reading Room, SANS Institute, (2007).
Dunlap, George W., et al., “ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay”, Proceeding of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, (“Dunlap”), (Dec. 9, 2002).
FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010.
FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010.
FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011.
Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28.
Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase @ CMU, Carnegie Mellon University, 2007.
Hiroshi Shinotsuka, Malware Authors Using New Techniques to Evade Automated Threat Analysis Systems, Oct. 26, 2012, http://www.symantec.com/connect/blogs/, pp. 1-4.
Idika et al., A-Survey-of-Malware-Detection-Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University.
Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. “Kernel-based behavior analysis for android malware detection.” Computational intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011.
Kaeo, Merike, “Designing Network Security”, (“Kaeo”), (Nov. 2003).
Kevin A Roundy et al: “Hybrid Analysis and Control of Malware”, Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP01950454 ISBN:978-3-642-15511-6.
Khaled Salah et al.: “Using Cloud Computing to Implement a Security Overlay Network”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 11, No. 1, Jan. 1, 2013.
Kim, H., et al., “Autograph: Toward Automated, Distributed Worm Signature Detection”, Proceedings of the 13th Usenix Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286.
King, Samuel T., et al., “Operating System Support for Virtual Machines”, (“King”), (2003).
Kreibich, C., et al., “Honeycomb-Creating Intrusion Detection Signatures Using Honeypots”, 2nd Workshop on Hot Topics in Networks (HotNets-II), Boston, USA, (2003).
Kristoff, J., “Botnets, Detection and Mitigation: DNS-Based Techniques”, NU Security Day, (2005), 23 pages.
Lastline Labs, The Threat of Evasive Malware, Feb. 25, 2013, Lastline Labs, pp. 1-8.
Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711.
Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. “Detecting environment-sensitive malware.” Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011.
Marchette, David J., “Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint”, (“Marchette”), (2001).
Moore, D., et al., “Internet Quarantine: Requirements for Containing Self-Propagating Code”, INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910.
Morales, Jose A., et al., “Analyzing and exploiting network behaviors of malware.”, Security and Privacy in Communication Networks. Springer Berlin Heidelberg, 2010. 20-34.
Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg.
Natvig, Kurt, “SANDBOXII: Internet”, Virus Bulletin Conference, (“Natvig”), (Sep. 2002).
NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987.
Newsome, J., et al., “Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software”, In Proceedings of the 12th Annual Network and Distributed System Security Symposium (NDSS '05), (Feb. 2005).
Nojiri, D., et al., “Cooperation Response Strategies for Large Scale Attack Mitigation”, DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302.
Oberheide et al., “CloudAV: N-Version Antivirus in the Network Cloud”, 17th USENIX Security Symposium (USENIX Security '08), Jul. 28-Aug. 1, 2008, San Jose, CA.
Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Ronald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger, “sHype: Secure Hypervisor Approach to Trusted Virtualized Systems” (Feb. 2, 2005) (“Sailer”).
Silicon Defense, “Worm Containment in the Internal Network”, (Mar. 2003), pp. 1-25.
Singh, S., et al., “Automated Worm Fingerprinting”, Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004).
Thomas H. Ptacek, and Timothy N. Newsham, “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection”, Secure Networks, (“Ptacek”), (Jan. 1998).
Vladimir Getov: “Security as a Service in Smart Clouds - Opportunities and Concerns”, Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, IEEE, Jul. 16, 2012.
Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International Conference on Network and System Security, pp. 344-350.
Whyte, et al., “DNS-Based Detection of Scanning Worms in an Enterprise Network”, Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages.
Williamson, Matthew M., “Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code”, ACSAC Conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9.
Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82.
Venezia, Paul, “NetDetector Captures Intrusions”, InfoWorld Issue 27, (“Venezia”), (Jul. 14, 2003).
Yuhei Kawakoya et al.: “Memory behavior-based automatic malware unpacking in stealth debugging environment”, Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN: 978-1-4244-8-9353-1.
“Mining Specification of Malicious Behavior”, Jha et al., UCSB, Sep. 2007, https://www.cs.ucsb.edu/~chris/research/doc/esec07_mining.pdf.
Gregg Keizer: “Microsoft's HoneyMonkeys Show Patching Windows Works”, Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL: http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-id/1035069? [retrieved on Jun. 1, 2016].